Funding case: AI Safety Camp 10
Project summary
AI Safety Camp is a program with a 5-year track record of enabling people to find careers in AI Safety.
We support up-and-coming researchers outside the Bay Area and London hubs.
We are out of funding. To make the 10th edition happen, fund our stipends and salaries.
What are this project’s goals and how will you achieve them?
AI Safety Camp is a program for inquiring into how to work on ensuring future AI is safe, and for trying that work concretely in a team.
For the 9th edition of AI Safety Camp, we opened applications for 29 projects.
We are the first program to host a special area supporting “Pause AI” work. With funding, we can scale from 4 projects on restricting corporate AI development to 15 projects next edition.
We are excited about our new research lead format, since it combines:
Hands-on guidance: We guide research leads (RLs) to carefully consider and scope their projects. Research leads in turn onboard teammates and guide them through the process of doing new research.
Streamlined applications: Team applications used to be the most time-intensive part of running AI Safety Camp. Reviewers were often unsure how to evaluate an applicant’s fit for a project that required specific skills and background. RLs usually have a clear sense of who they would want to work with for three months, so we instead guide RLs to prepare project-specific questions and interview their potential teammates.
Resource-efficiency: We are not competing with other programs for scarce mentor time. Instead, we prospect for thoughtful research leads who could at some point become well-recognized researchers. The virtual format also cuts overhead – instead of sinking funds into venues and plane tickets, the money goes directly to funding people to focus on their work in AI Safety.
Flexible hours: Participants can work remotely from their own time zone, alongside their degree or day job, to test their fit for an AI Safety career.
How will this funding be used?
We are fundraising to pay for:
Salaries for the organisers of the current AISC
Future camps (see the budget section below)
Whether we run the tenth edition or put AISC on hold indefinitely depends on your donations.
Last June, we had to freeze a year’s worth of salary for three staff. Our ops coordinator had to leave, and Linda and Remmelt decided to run one more edition as volunteers.
AISC previously received grants paid with FTX money. After the FTX collapse, we froze $255K in funds to cover clawback claims. For the current AISC, we have $99K left from SFF that was earmarked for stipends – but nothing for salaries, and nothing for future AISCs.
If we have enough money, we might also restart the in-person version of AISC. That decision will also depend on an ongoing external evaluation of AISC, which, among other things, is assessing the difference in impact between the virtual and in-person AISCs.
By default, we will decide what to prioritise with the funding we receive. But if you want a say, we can discuss that, and we can earmark your money for whatever you want.
Potential budgets for various versions of AISC
These are example budgets for different possible versions of the virtual AISC. If our funding lands somewhere in between, we’ll do something in between.
Virtual AISC—Budget version
Software etc: $2K
Organiser salaries (2 people, 4 months): $56K
Stipends for participants: $0
Total: $58K
In the Budget version, the organisers do the minimum needed to get the program started, but provide no continuous support to AISC teams during their projects and have no time for evaluation and improvement of future versions of the program.
Salaries are calculated based on $7K per person per month.
Virtual AISC—Normal version
Software etc: $2K
Organiser salaries (3 people, 6 months): $126K
Stipends for participants: $185K
Total: $313K
For the non-budget version, we have one more staff member and more paid hours per person, which means we can provide more support all-round.
Stipends estimate: roughly $185K, based on $1.5K per research lead × 40 research leads plus $1K per team member × 120 team members.
The numbers of research leads (40) and team members (120) are guesses based on how much we expect AISC to grow.
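For anyone who wants to check the arithmetic, the short sketch below recomputes the two example budgets from the figures above. It assumes nothing beyond the stated rates and headcounts ($7K per organiser per month; $1.5K per research lead and $1K per team member in stipends; 40 research leads and 120 team members).

```python
# Sanity check of the two example virtual-AISC budgets above.
# All rates and headcounts are taken from this post.

SALARY_PER_PERSON_MONTH = 7_000  # assumed organiser salary per person per month

def organiser_salaries(people: int, months: int) -> int:
    """Total organiser salary cost for a given headcount and duration."""
    return people * months * SALARY_PER_PERSON_MONTH

budget_version = {
    "software": 2_000,
    "organiser salaries": organiser_salaries(people=2, months=4),  # $56K
    "stipends": 0,
}

normal_version = {
    "software": 2_000,
    "organiser salaries": organiser_salaries(people=3, months=6),  # $126K
    "stipends": 185_000,  # budgeted line item; the per-person estimate
                          # ($1.5K * 40 RLs + $1K * 120 team members) comes to $180K
}

print(sum(budget_version.values()))  # 58000  -> matches the $58K total
print(sum(normal_version.values()))  # 313000 -> matches the $313K total
```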
Who is on your team and what’s your track record on similar projects?
We have run AI Safety Camp over five years, covering 8 editions, 74 teams, and 251 participants.
We iterated a lot based on participant feedback and converged on a research lead format we are excited about. With your support, we will carefully scale this format.
As researchers ourselves, we can meet potential research leads where they are. We can provide useful guidance and feedback in almost every area of AI Safety research.
We are particularly well-positioned to support epistemically diverse bets.
Organisers
Remmelt – coordinator of “do not build uncontrollable AI”
Remmelt collaborates with an ex-Pentagon engineer and Prof. Roman Yampolskiy on fundamental controllability limits. Both researchers are funded by the Survival and Flourishing Fund.
Remmelt works with diverse organisers to restrict harmful AI scaling, including Pause AI, creative professionals, anti-tech-solutionists, product safety experts, and climate change researchers.
At AISC, Remmelt wrote a comprehensive outline of the control problem, presented here.
Remmelt previously co-founded EA Netherlands and ran national conferences.
Linda – coordinator of “everything else”
After completing her physics PhD, Linda interned at MIRI and later joined the Refine fellowship.
Linda has a comprehensive understanding of the technical AI Safety landscape. An autodidact, she studies agent foundations theory, cognitive neuroscience, and mechanistic interpretability.
Several researchers (e.g. at MIRI) have noted that Linda picks up new theoretical arguments surprisingly fast, even where the inferential distance is long.
At AISC, Linda co-published RL in Newcomblike Environments, selected for a NeurIPS spotlight presentation.
Linda initiated and spearheaded AI Safety Camp, AI Safety Support, and Virtual AI Safety Unconference.
Track record
AI Safety Camp is primarily a learning-by-doing training program. People get to try a role and explore directions in AI safety, by collaborating on a concrete project.
Multiple alumni have told us that AI Safety Camp was how they got started in AI Safety.
AISC topped the ‘average usefulness’ list in Daniel Filan’s survey.
Papers that came out of the camp include:
Goal Misgeneralization, AI Governance and the Policymaking Process, Detecting Spiky Corruption in Markov Decision Processes, RL in Newcomblike Environments, Using soft maximin for risk averse multi-objective decision-making, Reflection Mechanisms as an Alignment Target, Representation noising effectively prevents harmful fine-tuning
Projects started at AI Safety Camp went on to receive a total of $613K in grants, including $83K from SFF.
Organizations launched out of camp conversations include:
Arb Research, AI Safety Support, and AI Standards Lab.
Alumni went on to take positions at:
FHI (1 job+4 scholars+2 interns), GovAI (2 jobs), Cooperative AI (1 job), Center on Long-Term Risk (1 job), Future Society (1 job), FLI (1 job), MIRI (1 intern), CHAI (2 interns), DeepMind (1 job+2 interns), OpenAI (1 job), Anthropic (1 contract), Redwood (2 jobs), Conjecture (3 jobs), EleutherAI (1 job), Apart (1 job), Aligned AI (1 job), Leap Labs (1 founder, 1 job), Apollo (2 founders, 4 jobs), Arb (2 founders), AISS (2 founders), AISL (2+ founders), ACS (2 founders), ERO (1 founder), BlueDot (1 founder)
These are just the positions we know about. Many more alumni are engaged in AI Safety in other ways, e.g. as PhD students or independent researchers.
Update: Both of us now consider positions at OpenAI net negative and we are seriously concerned about positions at other AGI labs.
For statistics of previous editions, see here. We also recently commissioned Arb Research to run alumni surveys and interviews to carefully evaluate AI Safety Camp’s impact.
What are the most likely causes and outcomes if this project fails? (premortem)
Not receiving minimum funding.
There are now fewer funders.
The evaluator who assessed us last round at SFF and LTFF was too busy.
He replied that his guess was that he was not currently very interested in most of the projects we found RLs for, nor in the “do not build uncontrollable AI” area.
We look for epistemically diverse bets, and we are known for being honest in our critiques when we think individuals or areas of work are mistakenly overlooked. However, we spent little time on networking and on clarifying our views to funders, which unfortunately led to the current situation.
Receiving funding, but not enough to cover an ops staff member.
Linda and Remmelt are researchers themselves, and a little worn out from running operations. Funding for a third staff member would make the program more sustainable.
Not being selective enough about projects.
We want to spend more time inquiring with potential research leads about their cruxes and evaluating their plans. This round, we were volunteering, so we had to satisfice. We rejected ⅓ of the proposals for “do not build uncontrollable AI” and ⅕ of the proposals for “everything else”.
Receiving fewer applicants overall because of competition with new programs.
Team applications per year have been steady, though: 229 in ’22, 219 in ’23, and 222 in ’24.
Lacking the pipeline to carefully scale up “do not build uncontrollable AI” work.
Given Remmelt’s connections, we are the best-positioned program to do this.
What other funding are you or your project getting?
No other funding sources.