The Short Timelines Strategy for AI Safety University Groups
Advice for AI safety university group organizers.
Acknowledgements
I collected most of these ideas in early January while attending OASIS 4.0, a three-day workshop for AI safety university group organizers in Berkeley, CA. Thank you to everyone who gave their input, and a special thanks to Jeremy Kintana, Neav Topaz, Tzu Kit Chan, and Chris Tardy for your feedback on earlier versions of this writeup. The contents of this post don't necessarily reflect the opinion of anyone but myself. All mistakes are my own.
Summary
Given short timelines, AI safety university groups should:
Prioritize grad students and skilled researchers who could have a meaningful impact within 2-3 years
Run selective upskilling programs and small, high-context gatherings
Protect researcher time for those doing impactful work
Have a succession plan with clear documentation
Invest in AI governance fieldbuilding and community wellbeing
Be cautious and coordinated in advocacy efforts
Context
It seems like everyone's timelines have been shortening.
One of the most respected voices on AI timelines is Ajeya Cotra. In November 2023, her median estimate for when 99% of currently fully remote jobs will be automatable was 13 years. In January, she wrote that AI progress on tasks that take human experts multiple hours is almost twice as fast as she expected. On timelines, she now defers to engineers at places like METR and Redwood who have hands-on experience with frontier AI.
Daniel Kokotajlo, another highly respected voice on timelines, has had a median timeline of 2027 for years. His probability distribution:[1]
19% – 2025
19% – 2026
12% – 2027
6% – 2028
6% – 2029
4% – 2030
2% – 2031
2% – 2032
2% – 2033
2% – 2034
2% – 2035
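For concreteness, the adjustment described in footnote [1] can be thought of as simple conditioning: drop the probability mass assigned to years that have already passed and rescale the rest. Below is a minimal sketch of that calculation; the numbers are placeholders for illustration, not Daniel's actual pre-2025 estimates.

```python
# Hypothetical illustration of renormalizing a timeline distribution
# after a year passes without AGI. The numbers below are placeholders,
# not Daniel Kokotajlo's actual estimates.
original = {2024: 0.07, 2025: 0.18, 2026: 0.18, 2027: 0.11}  # P(AGI arrives in year)

passed_year = 2024

# Condition on AGI not having arrived in the passed year:
# divide each remaining year's probability by P(not in that year).
scale = 1.0 - original[passed_year]
updated = {year: p / scale for year, p in original.items() if year > passed_year}

for year, p in updated.items():
    print(f"{year}: {p:.1%}")
```

Under this conditioning, each remaining year's probability rises proportionally.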
When 99% of currently fully remote jobs are automatable, human AI safety researchers will be largely obsolete: if humans are still in control at that point, it will be AIs doing the alignment research, not humans. Fieldbuilding is therefore on a deadline.
This affects how AI safety university groups should allocate resources. If timelines are indeed likely to be short, here's what I think these groups should do.
Resource Allocation
High-Priority Activities
Grad Student Outreach
Target grad students with relevant skills
Send them relevant papers
Invite them to your programs
Prof Outreach
Focus on those with relevant research areas
Consider visiting their office hours if they don't reply to your emails
Ideas for prof engagement, from least to most involved:
Share relevant funding opportunities
Invite them to present at or attend your group's events
Host moderated faculty panels and follow up individually
If a prof or grad student wants to learn more about AI safety research, you could recommend they request a call with Arkose
Upskilling Programs
Run selective programs for highly skilled participants
Steal from existing curricula (e.g., BlueDot Impact, ARENA)
Technical vs. Governance
Last year, 80,000 Hours replaced technical AI safety research with AI governance and policy as their top-ranked career path. See their reasoning here. Aside from being more neglected than technical AI safety, governance could also be more tractable given short timelines, especially for students.
Compared to established fields like physics, it's faster to skill up in technical AI safety: there's less foundational research to study, so it's easier to catch up to the cutting edge. This is even more true of AI governance, where there's still hardly any meaningful AI regulation anywhere in the world.
Donât neglect governance. Consider:
Directing technically-skilled students who can't contribute to technical safety within 3 years toward governance (or technical AI governance)
Investing in outreach to those with skills relevant to AI governance, like policy grad students
Responding to relevant government Notices of Inquiry or Requests for Comment (RFCs)[2]
These likely go by a different name if you're not in the US
Beyond potentially influencing important policy decisions, responding to RFCs is an excellent way to test your fit for policy work; policy think tanks spend lots of time on RFC responses, and their other outputs draw on similar skills
When writing a response, treat it as a focused research project targeting 1-3 sub-questions
You might find profs willing to supervise this work
Time Management
If you do impactful[3] research, guard your time
Historically, the most impactful careers to come out of AI safety university groups have often been those of the organizers themselves
Therefore, don't trade full-time research to do more group organizing; prioritize your own development
Organize activities that complement your research (e.g., a paper reading group for papers you wanted to read anyway)
If you don't do impactful research:
Seriously evaluate whether you could become an impactful researcher within 2-3 years
If yes, consider using your group to support that transition (e.g., organizing study groups for topics you want to learn)
If no, make sure your work aligns with your strategic model
Succession Planning
Even with short timelines, it might be important to avoid letting your group die. Why?
New capable students arrive each year
Timelines could be longer than you think
It's much easier to find a successor than for a future aspiring fieldbuilder to resurrect your group
You can focus on activities that donât heavily trade off against your other priorities
Your group could be leveraged for future advocacy efforts (see below)
Your group could be important for student wellbeing (see below)
Key elements of succession:
Create and maintain documentation that details:
Role descriptions and responsibilities
Account passwords and assets
Past successes and failures
Have every leader write transition documents
Identify successors well in advance and gradually give them increasing responsibility
This allows you to transfer knowledge that isnât easily written down
Make information discoverable without relying on current leadership
Warning signs of bad succession planning:
Key info is siloed with individual leaders
You prioritize current activities at the expense of documentation
Interested students can't find basic information about applying to leadership by themselves
Community Building
Recommendations
Prioritize small, high-context gatherings over large events
Build tight-knit networks of capable people
This has the benefit of attracting more capable people in the future: your average new group member is likely to be as capable as your average current group member.
Community Support and Wellbeing
Over time, students might become increasingly distressed about AI developments. Juggling academic pressure with the fear that you and everyone you love might die soon is not easy. Feeling a strong sense of urgency and powerlessness is a recipe for burnout. Combined with the sense that the people around you don't understand your concerns, these ideas can be crushing.
As capabilities accelerate while timelines shrink, your group's most important function could become emotional support; a healthy community is more likely to maintain the sustained focus and energy needed to be impactful. Some ideas:
As a leader, model healthy behaviors
Find the balance between urgency and wellbeing
Avoid burnout; it will seriously hurt both you and your productivity
Normalize talking about emotional impacts
Set clear boundaries and take visible breaks
Share your coping strategies
Create opportunities for connection
Plan social events unrelated to AI
Build genuine friendships within the leadership team
Include time for personal check-ins during meetings
Remember that strong social bonds make hard times easier
Advocacy
A Word of Caution
If you're funded by Open Philanthropy, your group is not allowed to engage in political or lobbying activities
Read the terms of your agreement carefully
Reach out to Open Philanthropy if you're unsure whether something you're planning is allowed
AI safety is in danger of being left-coded
This survey found that EAs and alignment researchers predominantly identify as left-leaning and non-conservative
Given that most universities are also left-leaning, successful university group advocacy could unintentionally strengthen this association, hurting the chances of getting meaningful AI safety regulation from conservative policymakers
Yet historically, universities have played an important role in building momentum for social movements (e.g., environmentalism, civil rights)
Advocates should therefore frame AI safety as a broadly shared, nonpartisan issue
Contact Kairos leadership before doing anything that risks damaging the AI safety community's credibility: contact@kairos-project.org
Potential Activities
Opinion Pieces
Focus on well-researched, thoughtful perspectives
Start with university publications
Build toward high-impact venues
This seems difficult as an unknown student; one method would be to ghostwrite for a prominent professor at your school
Open Letters and Statements
Leverage academic credibility
AI safety university groups have helped get professors to sign the CAIS letter
Should You Stage a Protest?
You might consider it if:
A different (non-crazy) campus group is already planning to stage an AI protest
There was a severe warning shot
E.g., a rogue AI is known to have caused a global pandemic but lacked the ability to fully overpower humanity
This could dramatically increase public appetite for AI regulation
There is a coordinated (inter)national protest by a well-respected organization
This might be communicated by, e.g., Kairos or the Center for AI Safety
But you should:
Have a clear (set of) policy objective(s)
Be professional and non-violent
First communicate with the broader AI safety community
[1] As of early 2025 (source). I adjusted the probabilities given that 2024 is over, as Daniel says to do.
[2] Thanks to Aidan Kierans for this idea and the listed details.
[3] Given short timelines, many areas of AI safety research are unlikely to be impactful. There are some good considerations for reevaluating your plans here.
I think you're maybe overstating how much more promising grad students are than undergrads for short-term technical impact. Historically, people without much experience in AI safety have often produced some of the best work. And it sounds like you're mostly optimizing for people who can be in a position to make big contributions within two years; I think that undergrads will often look more promising than grad students given that time window.
Interesting, thanks for the feedback. That's encouraging for AI safety groups: it's easier to involve undergrads than grad students.
Glad to see you posting this, Josh! I imagine it'll be helpful for future organizers who are also thinking about this :)
Yes, agree with this, this is super helpful! Would want to ask more from you, Josh, via our Discord later if we have time!
Thanks Tzu!
One other potential suggestion: Organizers should consider focusing on their own career development rather than field-building if their timelines are shortening and they think they can have a direct impact sooner than they can have an impact through field-building. Personally, I regret much of the time I spent starting an AI safety club in college because it traded off against building skills and experience in direct work. I think my impact through direct work has been significantly greater than my impact through field-building, and I should've spent more time on direct work in college.
Great post, thanks for sharing!
Great post