AI Safety field-building projects I’d like to see
People sometimes ask me what types of AIS field-building projects I would like to see.
Here’s a list of 11 projects.
Background points/caveats
But first, a few background points.
These projects require people with specific skills/abilities/context in order for them to go well. Some of them also have downside risks. This is not a “list of projects Akash thinks anyone can do” but rather a “list of projects that Akash thinks could Actually Reduce P(Doom) if they were executed extremely well by an unusually well-qualified person/team.”
I strongly encourage people to reach out to experienced researchers/community-builders before doing big versions of any of these. (You may disagree with their judgment, but I think it’s important to at least have models of what they believe before you do something big.)
This list represents my opinions. As always, you should evaluate these ideas for yourself.
If you are interested in any of these, feel free to reach out to me. If I can’t help you, I might know someone else who can.
Reminder that you can apply for funding from the Long-Term Future Fund. You don’t have to apply to execute a specific project. You can apply for career exploration grants, grants that let you think about what you want to do next, and grants that allow you to test out different hypotheses/uncertainties.
I sometimes use the word “organization”, which might make it seem like I’m talking about 10+ people doing something over the course of several years. But I actually mean “I think a team of 1-3 people could probably test this out in a few weeks and get something ambitious started here within a few months if they had relevant skills/experiences/mentorship.”
These projects are based on several assumptions about AI safety, and I won’t be able to articulate all of them in one post. Some assumptions include “AIS is an extremely important cause area” and “one of the best ways to make progress on AI safety is to get talented people working on technical research.” If I’m wrong, I think I’m wrong because I’m undervaluing non-technical interventions that could buy us more time (e.g., strategies in AI governance/strategy or strategies that involve outreach to leaders of AI companies). I plan to think more about those in the upcoming weeks.
Some projects I am excited about
Global Talent Search for AI Alignment Researchers
Purpose: Raise awareness about AI safety around the world to find highly talented AI safety researchers.
How this reduces P(doom): Maybe there are extremely promising researchers (e.g., people like Paul Christiano and Eliezer Yudkowsky) out in the world who don’t know about AI alignment or don’t know how to get involved. One global talent search program could find them. Alternatively, maybe we need 1000 full-time AI safety researchers who are 1-3 tiers below “alignment geniuses”. A separate global talent search program could find them.
Imaginary example: Crossover between the Atlas Fellowship, old CFAR, and MIRI. I imagine an organization that offers contests, workshops, and research fellowships in order to attract talented people around the world.
Skills needed: Strong models of community-building, strong understanding of AI safety concepts, really good ways of evaluating who is promising, good models of downside risks when conducting broad outreach.
Olivia Jimenez and I are currently considering working on this. Please feel free to reach out if you have interest or advice.
Training Program for AI Alignment Researchers
Purpose: Provide excellent training, support, internships, and mentorship for junior AI alignment researchers.
How this reduces P(doom): Maybe there are people who would become extremely promising researchers if they were provided sufficient support and mentorship. This program mentors them.
Imaginary example: Something like a big version of SERI-Mats with a strong emphasis on workshops/activities that help people develop strong inside views & strong research taste. (My impression is that SERI-Mats could become this one day, but I’d also be excited to see more programs “compete” with SERI-Mats.)
Skills needed: Relationships with AI safety researchers, strong models of mentors, strong ability to attract and assess applicants, insight into how to pair mentors with mentees, good models of AI safety, good models of how to create organizations with epistemically rigorous cultures, good models of downside risks when conducting broad outreach.
Research Infrastructure & Coordination for AI alignment
Purpose: Provide excellent support for AI alignment researchers in major EA Hubs.
Imaginary example: Something like a big version of Lightcone Infrastructure that runs a Bell Labs-style research space, regularly hosts high-quality events/workshops for AI alignment researchers, or accelerates research progress through alignment newsletters, podcasts, and debates (my impression is that Lightcone or Constellation could become this one day, but I’d be excited to see people try parts of this on their own).
Skills needed: Strong relationships with AI safety researchers, strong understanding of the AI safety community and its needs, and strong understanding of AI safety concepts. Very high context would be required to run a space; medium context would be enough for the other activities (events, newsletters, podcasts, debates).
I am currently considering starting an AI alignment podcast or newsletter. Please feel free to reach out if you have interest or advice.
Superconnecting: Active Grantmaking + Project Incubation
Purpose: Identify highly promising people who are already part of the EA community and get them funding/connections/mentorship to do AIS research or launch important/ambitious projects.
How this reduces P(doom): Maybe there are people already in the EA community who would become extremely promising researchers or ambitious generalists but haven’t yet received the support, encouragement, or mentorship required to reach their potential.
Imaginary example: Crossover between the FTX Future Fund’s regranting program, a longtermist incubator, and CEA’s active stewardship vision. I envision a group of “superconnectors” who essentially serve as talent scouts for the EA community. They go to EA Globals and run retreats/workshops for new EAs, as well as for highly skilled EAs who aren’t currently doing highly impactful work. They provide grants for people (or encourage people to apply for funding) to skill up in AI safety or launch ambitious projects.
Skills needed: Strong models of community-building, large network or willingness to develop a large network, strong models of how to identify which people and projects are most promising, strong people skills/people judgment.
Targeted Outreach to Experienced Researchers
Purpose: Identify highly promising researchers in academia and industry, engage them with high-quality AI safety content, and support those who decide to shift their careers/research toward technical AIS.
How this reduces P(doom): Maybe there are extremely talented researchers who can already be identified based on their contributions in fields related to AI alignment (e.g., math, decision theory, probability theory, CS, philosophy) and/or their contributions to messy and pre-paradigmatic fields of research.
Imaginary example: An organization that systematically reads research in relevant fields, identifies promising researchers, and designs targeted outreach strategies to engage these researchers with high-quality sources in AI alignment research. The Center for AI Safety and the AI Safety Field Building Hub may do some of this, though they’re relatively new, and I’d be excited for more people to support them or compete with them.
Skills needed: Strong understanding of how to communicate with researchers, strong models of potential downside risks, strong understanding of AI safety concepts, good models of academia and “the outside world”, good people skills.
Note that people considering this are strongly encouraged to reach out to community-builders and AI safety researchers before conducting outreach to experienced researchers.
People interested in this may also wish to read the Pragmatic AI Safety Sequence and should familiarize themselves with potential risks associated with outreach to established researchers. Note that people disagree about how to weigh upside potential against downside risks, and “thinking for yourself” would be especially important here.
Understanding AI trends and AI safety outreach in China
Purpose: Understand the AI scene in China, conduct research about if/how AIS outreach should be conducted in China, deconfuse EA about AIS in China, and potentially pilot AIS outreach efforts in China.
How this reduces P(doom): Maybe there are ways to reach out to talented people in China that are effective and sufficiently mitigate downside risks. My current impression is that China is one of the leaders in AI, and it seems plausible that China has a lot of highly talented people who could contribute to technical AIS research. However, I’ve heard that AIS outreach in China has been neglected because EA leaders don’t understand China and don’t know how to evaluate different kinds of outreach strategies there (hence the focus on research/deconfusion/careful pilots).
Imaginary example: A think tank-style research group that develops strong models of the AI scene in China and of whether/how AIS outreach should be conducted there.
Skills needed: Strong understanding of China, fluency in Mandarin, strong ability to weigh upside potential and downside risks.
AIS Contests and Subproblems
Purpose: Identify (or develop) subproblems in alignment & turn these into highly advertised contests.
How this reduces P(doom): Maybe there are subproblems in AI alignment that could be solved by researchers outside of the AI x-risk community. Alternatively, maybe contests are an effective way to get smart people interested in AI x-risk.
Imaginary example: An organization that gets really good at creating contests based on problems like ELK and The Shutdown Problem (among other examples) & then advertising these contests heavily.
Skills needed: Ideally a strong understanding of AI safety and the ability to identify/write up subproblems. But I think this could work if someone worked closely with AI safety researchers to select & present subproblems.
Writing that explains AI safety to broader audiences
Purpose: Write extremely clear, engaging, and persuasive explanations of AI safety ideas.
How this reduces P(doom): There are not many introductory resources that clearly explain the importance of AI safety. Maybe there are people who would engage with AI safety if we had better introductory resources.
Imaginary example: A crossover between Nick Bostrom, Will MacAskill, Holden Karnofsky, and Eliezer Yudkowsky. A book or blog that is as rigorous as Bostrom’s writing (Superintelligence), as popular as Will’s writing (NYT bestseller with media attention), as clear as Holden’s writing (Cold Takes), and as explicit about x-risk as Yudkowsky’s writing (e.g., List of Lethalities).
Skills needed: Ideally a strong understanding of AI safety, but I think writing ability is probably the more important skill. In theory, someone with exceptional writing ability could work closely with AI safety researchers to select the most important topics/concepts and ensure that the descriptions/explanations are accurate. Also, strong models of potential downside risks of broad outreach.
Other projects I am excited about (though less so)
Operations org: Something that helps train aligned/competent EAs to be really good at operations. My rough sense is that many projects are bottlenecked by ops capacity. Note that sometimes people think “ops” just means stuff like “cleaning”, “making sure food arrives on time”, and “doing boring stuff.” I think the bigger bottlenecks are in things like “having such a strong understanding of the mission that you know which tasks to prioritize”, “noticing what the major bottlenecks are”, and “having enough context to consistently do ops tasks that amplify the organization.”
EA Academy: Take a bunch of promising young/junior EAs and turn them into awesome ambitious generalists. Something that helps people skill up in AIS, management, community-building, applied rationality, and other useful stuff. Sort of like a crossover between Icecone (the winter-break retreat that Lightcone Infrastructure organized) and CFAR, with more of an emphasis on long-term career plans.
Amplification Org: Figure out how to amplify the Most Impactful People™. Help them find therapists, PAs, nutritionists, friends, etc. Solve problems that come up in their lives. Save them time and make them more productive. Figure out how to give Eliezer Yudkowsky 2 extra productive hours each week or how to make Paul Christiano 1.01-1.5X more effective.
I am grateful to Olivia Jimenez, Thomas Larsen, Miranda Zhang, and Joshua Clymer for feedback.