Building the field of AI safety

Last edit: 28 Oct 2022 15:56 UTC by Lizka

Building the field of AI safety refers to the family of interventions aimed at growing, shaping, or otherwise improving AI safety as an intellectual community.

Related entries

AI risk | AI safety | existential risk | building effective altruism

There should be a public adversarial collaboration on AI x-risk
pradyuprasad · 23 Jan 2023 4:09 UTC · 56 points · 5 comments · 2 min read · EA link

Cost-effectiveness of student programs for AI safety research
Center for AI Safety · 10 Jul 2023 17:23 UTC · 53 points · 7 comments · 15 min read · EA link

Announcing the Cambridge Boston Alignment Initiative [Hiring!]
kuhanj · 2 Dec 2022 1:07 UTC · 83 points · 0 comments · 1 min read · EA link

AI safety needs to scale, and here’s how you can do it
Esben Kran · 2 Feb 2024 7:17 UTC · 33 points · 2 comments · 5 min read · EA link
(apartresearch.com)

Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding
Vael Gates · 28 Jul 2022 21:29 UTC · 126 points · 6 comments · 6 min read · EA link

Announcing the Harvard AI Safety Team
Xander123 · 30 Jun 2022 18:34 UTC · 128 points · 4 comments · 5 min read · EA link

Spreading messages to help with the most important century
Holden Karnofsky · 25 Jan 2023 20:35 UTC · 129 points · 21 comments · 18 min read · EA link
(www.cold-takes.com)

Modeling the impact of AI safety field-building programs
Center for AI Safety · 10 Jul 2023 17:22 UTC · 86 points · 0 comments · 7 min read · EA link

Announcing AI Safety Bulgaria
Aleksandar Angelov · 3 Mar 2024 17:53 UTC · 16 points · 0 comments · 1 min read · EA link

Chilean AIS Hackathon Retrospective
Agustín Covarrubias 🔸 · 9 May 2023 1:34 UTC · 67 points · 0 comments · 5 min read · EA link

Cost-effectiveness of professional field-building programs for AI safety research
Center for AI Safety · 10 Jul 2023 17:26 UTC · 38 points · 2 comments · 18 min read · EA link

Interested in working from a new Boston AI Safety Hub?
Topaz · 17 Mar 2025 13:32 UTC · 25 points · 0 comments · 2 min read · EA link

Vael Gates: Risks from Highly-Capable AI (March 2023)
Vael Gates · 1 Apr 2023 20:54 UTC · 31 points · 4 comments · 1 min read · EA link
(docs.google.com)

Talk: AI safety fieldbuilding at MATS
Ryan Kidd · 23 Jun 2024 23:06 UTC · 20 points · 1 comment · 10 min read · EA link

Announcement: there are now monthly coordination calls for AIS fieldbuilders in Europe
gergo · 22 Nov 2024 10:30 UTC · 32 points · 0 comments · 1 min read · EA link

AISafety.info “How can I help?” FAQ
StevenKaas · 5 Jun 2023 22:09 UTC · 48 points · 1 comment · 2 min read · EA link

[SEE NEW EDITS] No, *You* Need to Write Clearer
Nicholas Kross · 29 Apr 2023 5:04 UTC · 71 points · 8 comments · 5 min read · EA link
(www.thinkingmuchbetter.com)

[Question] AI strategy career pipeline
Zach Stein-Perlman · 22 May 2023 0:00 UTC · 72 points · 23 comments · 1 min read · EA link

Expanding EA’s AI Builder Community—Writing about my job
Alejandro Acelas 🔸 · 21 Jul 2025 8:22 UTC · 26 points · 0 comments · 6 min read · EA link

My Proven AI Safety Explanation (as a computing student)
Mica White · 6 Feb 2024 3:58 UTC · 8 points · 4 comments · 6 min read · EA link

Help us seed AI Safety Brussels
gergo · 7 Aug 2024 6:17 UTC · 50 points · 4 comments · 3 min read · EA link

AI Safety Arguments: An Interactive Guide
Lukas Trötzmüller🔸 · 1 Feb 2023 19:21 UTC · 32 points · 5 comments · 3 min read · EA link

Cambridge AI Safety Hub is looking for full- or part-time organisers
hannah · 15 Jul 2023 14:31 UTC · 12 points · 0 comments · 1 min read · EA link

Thoughts about AI safety field-building in LMIC
Renan Araujo · 23 Jun 2023 23:22 UTC · 57 points · 4 comments · 12 min read · EA link

Impact Assessment of AI Safety Camp (Arb Research)
Sam Holton · 23 Jan 2024 16:32 UTC · 87 points · 23 comments · 11 min read · EA link

The Bar for Contributing to AI Safety is Lower than You Think
Chris Leong · 17 Aug 2024 10:52 UTC · 14 points · 5 comments · 2 min read · EA link

MATS Winter 2023-24 Retrospective
utilistrutil · 11 May 2024 0:09 UTC · 62 points · 2 comments · 49 min read · EA link

Launching applications for AI Safety Careers Course India 2024
varun_agr · 1 May 2024 5:30 UTC · 23 points · 1 comment · 1 min read · EA link

AGI Safety Fundamentals programme is contracting a low-code engineer
Jamie B · 26 Aug 2022 15:43 UTC · 39 points · 4 comments · 5 min read · EA link

An economist’s perspective on AI safety
David Stinson · 7 Jun 2024 7:55 UTC · 7 points · 1 comment · 9 min read · EA link

Onboarding students to EA/AIS in 4 days with an intensive fellowship
gergo · 5 Dec 2023 10:07 UTC · 17 points · 0 comments · 4 min read · EA link

Decomposing alignment to take advantage of paradigms
Christopher King · 4 Jun 2023 14:26 UTC · 2 points · 0 comments · 4 min read · EA link

Survey on the acceleration risks of our new RFPs to study LLM capabilities
Ajeya · 10 Nov 2023 23:59 UTC · 38 points · 1 comment · 8 min read · EA link

Supervised Program for Alignment Research (SPAR) at UC Berkeley: Spring 2023 summary
mic · 19 Aug 2023 2:32 UTC · 18 points · 1 comment · 6 min read · EA link
(www.lesswrong.com)

Introducing Kairos: a new AI safety fieldbuilding organization (the new home for SPAR and FSP)
Agustín Covarrubias 🔸 · 25 Oct 2024 21:59 UTC · 81 points · 2 comments · 2 min read · EA link

Jobs that can help with the most important century
Holden Karnofsky · 12 Feb 2023 18:19 UTC · 57 points · 2 comments · 32 min read · EA link
(www.cold-takes.com)

Keep Chasing AI Safety Press Coverage
Gil · 4 Apr 2023 20:40 UTC · 106 points · 16 comments · 5 min read · EA link

ML Summer Bootcamp Reflection: Aalto EA Finland
Aayush Kucheria · 12 Jan 2023 8:24 UTC · 15 points · 2 comments · 9 min read · EA link

AIS Netherlands is looking for a Founding Executive Director (EOI form)
gergo · 19 Mar 2025 9:24 UTC · 49 points · 4 comments · 4 min read · EA link

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?
MikhailSamin · 27 Dec 2022 11:07 UTC · 39 points · 10 comments · 1 min read · EA link

Update on cause area focus working group
Bastian_Stern · 10 Aug 2023 1:21 UTC · 140 points · 18 comments · 5 min read · EA link

How the AI safety technical landscape has changed in the last year, according to some practitioners
tlevin · 26 Jul 2024 19:06 UTC · 84 points · 1 comment · 2 min read · EA link

AI Safety Field Growth Analysis 2025
Stephen McAleese · 27 Sep 2025 17:02 UTC · 76 points · 13 comments · 3 min read · EA link

AGI safety field building projects I’d like to see
Severin · 24 Jan 2023 23:30 UTC · 25 points · 2 comments · 9 min read · EA link

Update on Harvard AI Safety Team and MIT AI Alignment
Xander123 · 2 Dec 2022 6:09 UTC · 71 points · 3 comments · 8 min read · EA link

An Overview of the AI Safety Funding Situation
Stephen McAleese · 12 Jul 2023 14:54 UTC · 140 points · 15 comments · 15 min read · EA link

We’ve automated x-risk-pilling people
MikhailSamin · 5 Oct 2025 11:41 UTC · 0 points · 9 comments · 1 min read · EA link
(whycare.aisgf.us)

Open Philanthropy is passing AI safety university group funding to Kairos
abergal · 22 Jul 2025 17:11 UTC · 55 points · 0 comments · 1 min read · EA link

AISafety.world is a map of the AIS ecosystem
Hamish McDoodles · 6 Apr 2023 11:47 UTC · 192 points · 8 comments · 1 min read · EA link

Washington Post article about EA university groups
Lizka · 5 Jul 2023 12:58 UTC · 35 points · 5 comments · 1 min read · EA link

AI safety university groups: a promising opportunity to reduce existential risk
mic · 30 Jun 2022 18:37 UTC · 53 points · 1 comment · 11 min read · EA link

Why Every Organisation Needs a Privacy Policy (FBB #10)
gergo · 17 Sep 2025 13:41 UTC · 22 points · 0 comments · 3 min read · EA link

Kairos is hiring a Head of Operations/Founding Generalist
Agustín Covarrubias 🔸 · 12 Mar 2025 20:58 UTC · 59 points · 1 comment · 5 min read · EA link

Branding AI Safety Groups: A Field Guide
Agustín Covarrubias 🔸 · 13 May 2024 17:17 UTC · 44 points · 6 comments · 7 min read · EA link

The Tree of Life: Stanford AI Alignment Theory of Change
GabeM · 2 Jul 2022 18:32 UTC · 69 points · 5 comments · 14 min read · EA link

Thread: Reflections on the AGI Safety Fundamentals course?
Clifford · 18 May 2023 13:11 UTC · 27 points · 7 comments · 1 min read · EA link

Retrospective on the AI Safety Field Building Hub
Vael Gates · 2 Feb 2023 2:06 UTC · 64 points · 2 comments · 9 min read · EA link

[Question] Are alignment researchers devoting enough time to improving their research capacity?
Carson Jones · 4 Nov 2022 0:58 UTC · 11 points · 1 comment · 3 min read · EA link

Personal thoughts on careers in AI policy and strategy
carrickflynn · 27 Sep 2017 16:52 UTC · 56 points · 28 comments · 18 min read · EA link

Introducing the Pathfinder Fellowship: Funding and Mentorship for AI Safety Group Organizers
Agustín Covarrubias 🔸 · 22 Jul 2025 17:11 UTC · 49 points · 0 comments · 2 min read · EA link

Arkose may be closing, but you can help
Arkose · 1 May 2025 11:09 UTC · 58 points · 6 comments · 2 min read · EA link

Announcing “Key Phenomena in AI Risk” (facilitated reading group)
nora · 9 May 2023 16:52 UTC · 28 points · 0 comments · 2 min read · EA link

AI Safety University Organizing: Early Takeaways from Thirteen Groups
Agustín Covarrubias 🔸 · 2 Oct 2024 14:39 UTC · 46 points · 3 comments · 9 min read · EA link

Concrete Steps to Get Started in Transformer Mechanistic Interpretability
Neel Nanda · 26 Dec 2022 13:00 UTC · 18 points · 0 comments · 12 min read · EA link

Observations on the funding landscape of EA and AI safety
Vilhelm Skoglund · 2 Oct 2023 9:45 UTC · 136 points · 12 comments · 15 min read · EA link

Apply to lead a project during the next virtual AI Safety Camp
Linda Linsefors · 13 Sep 2023 13:29 UTC · 16 points · 0 comments · 5 min read · EA link
(aisafety.camp)

Relationship between EA Community and AI safety
Tom Barnes🔸 · 18 Sep 2023 13:49 UTC · 157 points · 15 comments · 1 min read · EA link

Kairos is hiring: Founding Generalist & SPAR Contractor
Agustín Covarrubias 🔸 · 7 Oct 2025 18:15 UTC · 16 points · 0 comments · 4 min read · EA link

Announcing Athena—Women in AI Alignment Research
Claire Short · 7 Nov 2023 22:02 UTC · 180 points · 28 comments · 3 min read · EA link

Announcing aisafety.training
JJ Hepburn · 17 Jan 2023 1:55 UTC · 110 points · 4 comments · 1 min read · EA link

Ideas for improving epistemics in AI safety outreach
mic · 21 Aug 2023 19:56 UTC · 31 points · 0 comments · 3 min read · EA link
(www.lesswrong.com)

Offering AI safety support calls for ML professionals
Vael Gates · 15 Feb 2024 23:48 UTC · 52 points · 1 comment · 1 min read · EA link

Transcripts of interviews with AI researchers
Vael Gates · 9 May 2022 6:03 UTC · 140 points · 14 comments · 2 min read · EA link

List of AI safety newsletters and other resources
Lizka · 1 May 2023 17:24 UTC · 49 points · 5 comments · 4 min read · EA link

How many people are working (directly) on reducing existential risk from AI?
Benjamin Hilton · 17 Jan 2023 14:03 UTC · 118 points · 4 comments · 4 min read · EA link
(80000hours.org)

Shallow review of live agendas in alignment & safety
technicalities · 27 Nov 2023 11:33 UTC · 76 points · 8 comments · 29 min read · EA link

Alignment Grantmaking is Funding-Limited Right Now [crosspost]
johnswentworth · 2 Aug 2023 20:37 UTC · 82 points · 13 comments · 1 min read · EA link
(www.lesswrong.com)

Nobody’s on the ball on AGI alignment
leopold · 29 Mar 2023 14:26 UTC · 328 points · 66 comments · 9 min read · EA link
(www.forourposterity.com)

AI Safety Seems Hard to Measure
Holden Karnofsky · 11 Dec 2022 1:31 UTC · 90 points · 4 comments · 14 min read · EA link
(www.cold-takes.com)

Why EA Community building
Rob Gledhill · 14 Jun 2023 20:48 UTC · 73 points · 7 comments · 5 min read · EA link

The longtermist AI governance landscape: a basic overview
Sam Clarke · 18 Jan 2022 12:58 UTC · 172 points · 13 comments · 9 min read · EA link

Why *not* just send people to Bluedot (FBB#4)
gergo · 25 Mar 2025 10:47 UTC · 27 points · 13 comments · 12 min read · EA link

Results of an informal survey on AI grantmaking
Scott Alexander · 21 Aug 2024 13:19 UTC · 130 points · 28 comments · 1 min read · EA link

Good job opportunities for helping with the most important century
Holden Karnofsky · 18 Jan 2024 19:21 UTC · 46 points · 1 comment · 4 min read · EA link
(www.cold-takes.com)

AI Safety Memes Wiki
plex · 24 Jul 2024 18:53 UTC · 6 points · 0 comments · 1 min read · EA link
(aisafety.info)

Establishing Oxford’s AI Safety Student Group: Lessons Learnt and Our Model
Wilkin1234 · 21 Sep 2022 7:57 UTC · 73 points · 3 comments · 1 min read · EA link

Tacit knowledge: how I *exactly* approach EAG(x) conferences
gergo · 4 Jun 2025 18:14 UTC · 92 points · 5 comments · 4 min read · EA link

Recursive Middle Manager Hell
Raemon · 17 Jan 2023 19:02 UTC · 74 points · 3 comments · 11 min read · EA link

Introducing The Field Building Blog (FBB #0)
gergo · 7 Jan 2025 15:43 UTC · 37 points · 3 comments · 2 min read · EA link

How MATS addresses “mass movement building” concerns
Ryan Kidd · 4 May 2023 0:55 UTC · 79 points · 3 comments · 3 min read · EA link

CEA seeks co-founder for AI safety group support spin-off
Agustín Covarrubias 🔸 · 8 Apr 2024 15:42 UTC · 62 points · 0 comments · 4 min read · EA link

LANAIS (Latin American Network for AI Safety) kick-off
Fernando Avalos · 23 Jun 2025 14:34 UTC · 28 points · 0 comments · 2 min read · EA link

We need non-cybersecurity people [too]
Jarrah · 5 May 2024 0:11 UTC · 32 points · 0 comments · 2 min read · EA link

AI Safety’s Talent Pipeline is Over-optimised for Researchers
Christopher Clay · 30 Aug 2025 11:02 UTC · 113 points · 15 comments · 6 min read · EA link

AI Safety Field Building vs. EA CB
kuhanj · 26 Jun 2023 23:21 UTC · 80 points · 16 comments · 6 min read · EA link

The Benefits of Distillation in Research
Jonas Hallgren 🔸 · 4 Mar 2023 19:19 UTC · 45 points · 2 comments · 5 min read · EA link

Results from a survey on tool use and workflows in alignment research
jacquesthibs · 19 Dec 2022 15:19 UTC · 30 points · 0 comments · 19 min read · EA link

Projects I would like to see (possibly at AI Safety Camp)
Linda Linsefors · 27 Sep 2023 21:27 UTC · 9 points · 0 comments · 4 min read · EA link

AI safety field-building survey: Talent needs, infrastructure needs, and relationship to EA
michel · 27 Oct 2023 21:08 UTC · 67 points · 3 comments · 9 min read · EA link

Apply to the 2025 PIBBSS Summer Research Fellowship
Dušan D. Nešić (Dushan) · 24 Dec 2024 10:28 UTC · 6 points · 0 comments · 2 min read · EA link

Potentially Useful Projects in Wise AI
Chris Leong · 5 Jun 2025 8:13 UTC · 14 points · 3 comments · 5 min read · EA link

Apply to MATS 8.0!
Ryan Kidd · 20 Mar 2025 2:17 UTC · 33 points · 0 comments · 4 min read · EA link

AI Safety Microgrant Round
Chris Leong · 14 Nov 2022 4:25 UTC · 81 points · 3 comments · 3 min read · EA link

MATS Alumni Impact Analysis
utilistrutil · 2 Oct 2024 23:44 UTC · 16 points · 1 comment · 11 min read · EA link

Offer: Team Conflict Counseling for AI Safety Orgs
Severin · 14 Apr 2025 15:17 UTC · 23 points · 1 comment · 1 min read · EA link

Help us find founders for new AI safety projects
lukeprog · 1 Dec 2025 16:57 UTC · 55 points · 1 comment · 1 min read · EA link

AI Safety Camp 11
Robert Kralisch · 7 Nov 2025 14:27 UTC · 7 points · 1 comment · 15 min read · EA link

How Stuart Russels’s IASEAI conference failed to live up to its potential (FBB #8)
gergo · 7 Aug 2025 13:15 UTC · 10 points · 4 comments · 2 min read · EA link

Arkose: Organizational Updates & Ways to Get Involved
Arkose · 1 Aug 2024 13:03 UTC · 28 points · 1 comment · 1 min read · EA link

Blueprints for AI Safety conferences (FBB #9)
gergo · 7 Aug 2025 13:16 UTC · 12 points · 0 comments · 7 min read · EA link

Talent Needs of Technical AI Safety Teams
Ryan Kidd · 24 May 2024 0:46 UTC · 54 points · 11 comments · 14 min read · EA link

What’s going on in video in AI Safety these days? (A list)
ChanaMessinger · 15 Sep 2025 20:30 UTC · 59 points · 11 comments · 4 min read · EA link

Retrospective: PIBBSS Fellowship 2024
Dušan D. Nešić (Dushan) · 20 Dec 2024 15:55 UTC · 7 points · 0 comments · 4 min read · EA link

Rethink Priorities is looking for a (Co-)Founder for a New Project: Field Building in Universities for AI Policy Careers in the US
KevinN · 28 Aug 2023 16:01 UTC · 59 points · 0 comments · 6 min read · EA link
(careers.rethinkpriorities.org)

Stop Applying And Get To Work
Pauliina · 2 Dec 2025 17:57 UTC · 35 points · 1 comment · 2 min read · EA link

Problems of people new to AI safety and my project ideas to mitigate them
Igor Ivanov · 3 Mar 2023 17:35 UTC · 20 points · 0 comments · 7 min read · EA link

“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers
Vael Gates · 2 Feb 2023 1:00 UTC · 46 points · 1 comment · 1 min read · EA link

Cost-effectiveness analysis of ~1260 USD worth of social media ads for fellowship marketing
gergo · 25 Jan 2024 15:18 UTC · 61 points · 5 comments · 2 min read · EA link

Announcing an Empirical AI Safety Program
Joshc · 13 Sep 2022 21:39 UTC · 64 points · 7 comments · 2 min read · EA link

Formalize the Hashiness Model of AGI Uncontainability
Remmelt · 9 Nov 2024 16:10 UTC · 2 points · 0 comments · 5 min read · EA link
(docs.google.com)

New homepage for AI safety resources – AISafety.com redesign
Bryce Robertson · 5 Nov 2025 10:28 UTC · 22 points · 5 comments · 1 min read · EA link

AI Safety Collab 2025 Summer—Local Organizer Sign-ups Open
Evander H. 🔸 · 25 Jun 2025 14:41 UTC · 12 points · 0 comments · 1 min read · EA link

A decentralised volunteer-based scalable lab for AI safety
Ihor Kendiukhov · 17 Oct 2025 12:43 UTC · 4 points · 1 comment · 12 min read · EA link

Predicting researcher interest in AI alignment
Vael Gates · 2 Feb 2023 0:58 UTC · 30 points · 0 comments · 21 min read · EA link
(docs.google.com)

⿻ Symbiogenesis vs. Convergent Consequentialism
plex · 21 Oct 2025 10:40 UTC · 17 points · 1 comment · 20 min read · EA link

Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes
80000_Hours · 31 Oct 2025 12:13 UTC · 70 points · 0 comments · 25 min read · EA link

ENAIS is looking for an Executive Director (apply by 20th October)
gergo · 3 Oct 2025 12:22 UTC · 29 points · 0 comments · 2 min read · EA link

AI-Safety Mexico: A Pilot Survey in Yucatán.
Janeth Valdivia · 28 May 2025 23:19 UTC · 5 points · 1 comment · 5 min read · EA link

Reflections from Ooty retreat 2.0
Aditya Arpitha Prasad · 24 Jul 2025 18:22 UTC · 4 points · 0 comments · 1 min read · EA link
(www.lesswrong.com)

Part 3: A Proposed Approach for AI Safety Movement Building: Projects, Professions, Skills, and Ideas for the Future [long post][bounty for feedback]
PeterSlattery · 22 Mar 2023 0:54 UTC · 22 points · 8 comments · 32 min read · EA link

[Question] Launching Applications for the Global AI Safety Fellowship 2025!
Impact Academy · 27 Nov 2024 15:33 UTC · 9 points · 1 comment · 1 min read · EA link

Invitation to an IRL retreat on AI x-risks & post-rationality at Ooty, India
bhishma · 8 Jun 2025 14:05 UTC · 2 points · 0 comments · 1 min read · EA link

AI safety undervalues founders
Ryan Kidd · 16 Nov 2025 1:59 UTC · 24 points · 0 comments · 5 min read · EA link

ARENA 7.0 - Call for Applicants
James Hindmarch · 30 Sep 2025 15:07 UTC · 6 points · 0 comments · 6 min read · EA link
(www.lesswrong.com)

Start an AIS safety field-building organization at the city or national level—an EOI form
gergo · 9 Jan 2025 8:42 UTC · 38 points · 4 comments · 2 min read · EA link

Map of AI Safety v2
Bryce Robertson · 15 Apr 2025 13:04 UTC · 59 points · 6 comments · 1 min read · EA link

Widening AI Safety’s talent pipeline by meeting people where they are
RubenCastaing · 25 Sep 2025 20:50 UTC · 21 points · 0 comments · 8 min read · EA link

ARENA 2.0 - Impact Report
Callum McDougall · 26 Sep 2023 17:13 UTC · 17 points · 0 comments · 13 min read · EA link

Analysis of AI Safety surveys for field-building insights
Ash Jafari · 5 Dec 2022 17:37 UTC · 30 points · 7 comments · 5 min read · EA link

Join AISafety.info’s Writing & Editing Hackathon (Aug 25-28) (Prizes to be won!)
leillustrations🔸 · 5 Aug 2023 14:06 UTC · 15 points · 0 comments · 1 min read · EA link

Want a single job to serve many AI safety projects? Ashgro is hiring an Operations Associate
Richard Möhn · 24 Nov 2025 6:40 UTC · 6 points · 1 comment · 3 min read · EA link

What AI Safety Materials Do ML Researchers Find Compelling?
Vael Gates · 28 Dec 2022 2:03 UTC · 130 points · 12 comments · 2 min read · EA link

Join the EmpowerAId Community of Practice!
kkarameri · 7 Nov 2025 9:24 UTC · 1 point · 0 comments · 1 min read · EA link

Whistleblowing Twitter Bot
Mckiev 🔸 · 26 Dec 2024 18:18 UTC · 11 points · 1 comment · 2 min read · EA link
(www.lesswrong.com)

Expression of Interest: Mentors & Researchers at AI Safety Global Society
Caroline Shamiso Chitongo 🔸 · 27 Jul 2025 16:03 UTC · 14 points · 0 comments · 2 min read · EA link

Applications Open: AI Safety India Phase 1 – Fundamentals of Safe AI (Global Cohort)
adityaraj@eanita · 28 Apr 2025 12:05 UTC · 4 points · 0 comments · 2 min read · EA link

What We Can Do to Prevent Extinction by AI
Joe Rogero · 24 Feb 2025 17:15 UTC · 23 points · 3 comments · 11 min read · EA link

ENAIS has launched a newsletter for AIS fieldbuilders
gergo · 22 Nov 2024 10:45 UTC · 25 points · 0 comments · 1 min read · EA link

ML4Good Brasil—Applications Open
Nia🔸 · 3 May 2024 10:39 UTC · 28 points · 1 comment · 1 min read · EA link

SERI MATS—Summer 2023 Cohort
a_e_r · 8 Apr 2023 15:32 UTC · 36 points · 2 comments · 4 min read · EA link

AI Security London Hackathon
prince · 29 Aug 2025 13:21 UTC · 2 points · 0 comments · 1 min read · EA link

Reasons for and against working on technical AI safety at a frontier AI lab
bilalchughtai · 7 Jan 2025 13:23 UTC · 16 points · 3 comments · 12 min read · EA link
(www.lesswrong.com)

Executive Director for AIS Brussels—Expression of interest
gergo · 19 Dec 2024 9:15 UTC · 29 points · 0 comments · 4 min read · EA link

Apply to MATS 9.0!
Ryan Kidd · 10 Sep 2025 18:04 UTC · 8 points · 0 comments · 1 min read · EA link

Stampy’s AI Safety Info—New Distillations #1 [March 2023]
markov · 7 Apr 2023 11:35 UTC · 19 points · 0 comments · 2 min read · EA link
(aisafety.info)

Should you start a for-profit AI safety org?
Kat Woods 🔶 ⏸️ · 15 Aug 2025 13:52 UTC · 9 points · 0 comments · 1 min read · EA link

Announcing AISafety.info’s Write-a-thon (June 16-18) and Second Distillation Fellowship (July 3-October 2)
StevenKaas · 3 Jun 2023 2:03 UTC · 12 points · 1 comment · 2 min read · EA link

Scaling AI Safety in Europe: From Local Groups to International Coordination
mariuswenk · 3 Sep 2025 12:46 UTC · 21 points · 0 comments · 11 min read · EA link

Executive Director for AIS France—Expression of interest
gergo · 19 Dec 2024 8:11 UTC · 33 points · 0 comments · 4 min read · EA link

Advice for Entering AI Safety Research
stecas · 2 Jun 2023 20:46 UTC · 14 points · 1 comment · 5 min read · EA link

Introducing 11 New AI Safety Organizations—Catalyze’s Winter 24/25 London Incubation Program Cohort
Alexandra Bos · 10 Mar 2025 19:26 UTC · 94 points · 4 comments · 14 min read · EA link

Ten AI safety projects I’d like people to work on
JulianHazell · 24 Jul 2025 15:32 UTC · 51 points · 7 comments · 10 min read · EA link

[Question] Does China have AI alignment resources/institutions? How can we prioritize creating more?
JakubK · 4 Aug 2022 19:23 UTC · 18 points · 9 comments · 1 min read · EA link

Start an AI safety group with the Pathfinder Fellowship
Topaz · 7 Nov 2025 12:57 UTC · 14 points · 0 comments · 1 min read · EA link

Introducing: Meridian Cambridge’s new online lecture series covering frontier AI and AI safety
Meridian · 5 Jun 2025 13:30 UTC · 26 points · 0 comments · 1 min read · EA link

All AGI Safety questions welcome (especially basic ones) [April 2023]
StevenKaas · 8 Apr 2023 4:21 UTC · 111 points · 173 comments · 2 min read · EA link

College technical AI safety hackathon retrospective—Georgia Tech
yixiong · 14 Nov 2024 13:34 UTC · 18 points · 0 comments · 5 min read · EA link
(yixiong.substack.com)

What new x- or s-risk fieldbuilding organisations would you like to see? An EOI form. (FBB #3)
gergo · 17 Feb 2025 12:37 UTC · 32 points · 3 comments · 2 min read · EA link

Kairos is the new home for the Global Challenges Project, and we’re hiring for a GCP Director
Topaz · 18 Nov 2025 13:53 UTC · 26 points · 0 comments · 1 min read · EA link

Aspiring AI Safety Researchers: Consider “Atypical Jobs” in the Field Instead.
Harrison 🔸 · 6 Oct 2025 6:23 UTC · 73 points · 4 comments · 5 min read · EA link

AI Safety Fundamentals: An Informal Cohort Starting Soon! (cross-posted to lesswrong.com)
Tiago · 4 Jun 2023 18:21 UTC · 6 points · 0 comments · 1 min read · EA link
(www.lesswrong.com)

Dutch AI Safety Coordination Forum: An Experiment
HenningB · 21 Nov 2023 16:18 UTC · 21 points · 0 comments · 4 min read · EA link

Interviews with 97 AI Researchers: Quantitative Analysis
Maheen Shermohammed · 2 Feb 2023 4:50 UTC · 76 points · 4 comments · 7 min read · EA link

A map of work needed to achieve safe AI
Tristan Katz · 11 Sep 2025 11:33 UTC · 16 points · 0 comments · 1 min read · EA link

Announcing the Futurekind Winter Fellowship 2025/6: Building the Future of AI and Animal Protection
Aditya_Karanam · 13 Oct 2025 11:15 UTC · 7 points · 0 comments · 4 min read · EA link

Introducing Deep Dive, a 201 AI policy course
Kambar · 17 Jun 2025 16:50 UTC · 31 points · 2 comments · 2 min read · EA link

[Apply] What I Love About AI Safety Fieldbuilding at Cambridge (& We’re Hiring for a Leadership Role)
Harrison 🔸 · 14 Feb 2025 17:41 UTC · 16 points · 0 comments · 3 min read · EA link

Catalyze is Hiring: AI Safety Incubation Program Lead & Talent Lead
Catalyze Impact · 16 Sep 2025 16:38 UTC · 14 points · 0 comments · 5 min read · EA link

Apply for MATS Winter 2023-24!
utilistrutil · 21 Oct 2023 2:34 UTC · 34 points · 2 comments · 5 min read · EA link
(www.lesswrong.com)

Mistakes I made running an AI safety student group
cb · 26 Feb 2025 15:07 UTC · 26 points · 0 comments · 7 min read · EA link

Where on the continuum of pure EA to pure AIS should you be? (Uni Group Organizers Focus)
jessica_mccurdy🔸 · 26 Jun 2023 23:46 UTC · 44 points · 0 comments · 5 min read · EA link

Writing About My Job: Research Manager
Joseph · 6 Oct 2025 15:16 UTC · 27 points · 0 comments · 6 min read · EA link

Announcing the AI Welfare Discord Server
Tim Duffy · 21 Jul 2025 16:36 UTC · 7 points · 0 comments · 1 min read · EA link

Stress Externalities More in AI Safety Pitches
NickGabs · 26 Sep 2022 20:31 UTC · 32 points · 9 comments · 2 min read · EA link

[Question] Platform for Project Spitballing? (e.g., for AI field building)
Marcel · 23 Apr 2023 15:45 UTC · 7 points · 2 comments · 1 min read · EA link

*New* Canada AI Safety & Governance community
Wyatt Tessari L'Allié · 29 Aug 2022 15:58 UTC · 32 points · 2 comments · 1 min read · EA link

What we learned from running an Australian AI Safety Unconference
Alexander Saeri · 26 Oct 2023 0:46 UTC · 34 points · 0 comments · 5 min read · EA link

The Center for AI Policy Has Shut Down
Tristan W · 16 Sep 2025 17:33 UTC · 121 points · 25 comments · 14 min read · EA link

Printable resources for AI Safety tabling
gergo · 28 Aug 2024 9:39 UTC · 30 points · 0 comments · 1 min read · EA link

MATS Spring 2024 Extension Retrospective
HenningB · 16 Feb 2025 20:29 UTC · 13 points · 0 comments · 15 min read · EA link
(www.lesswrong.com)

[Question] (More) recommendations for non-technical readings on AI?
Joseph · 25 Sep 2025 1:12 UTC · 9 points · 0 comments · 2 min read · EA link

A Brief Overview of AI Safety/Alignment Orgs, Fields, Researchers, and Resources for ML Researchers
Austin Witte · 2 Feb 2023 6:19 UTC · 18 points · 5 comments · 2 min read · EA link

On running a city-wide university group
gergo · 6 Nov 2023 9:43 UTC · 26 points · 3 comments · 9 min read · EA link

I designed an AI safety course (for a philosophy department)
Eleni_A · 23 Sep 2023 21:56 UTC · 28 points · 3 comments · 2 min read · EA link

Set up an AIS newsletter for your group in 10 minutes per month (June edition)
gergo · 18 Jun 2024 6:31 UTC · 34 points · 0 comments · 1 min read · EA link

Lessons learned from starting an AI safety university group
Josh Thorsteinson 🔸 · 19 Sep 2025 16:26 UTC · 30 points · 1 comment · 19 min read · EA link

Resources I send to AI researchers about AI safety
Vael Gates · 11 Jan 2023 1:24 UTC · 43 points · 0 comments · 1 min read · EA link

[Part-time AI Safety Research Program] MARS 3.0 Applications Open for Participants & Recruiting Mentors
Cambridge AI Safety Hub · 7 May 2025 22:52 UTC · 4 points · 0 comments · 2 min read · EA link

You can run more than one fellowship per semester if you want to
gergo · 12 Dec 2023 8:49 UTC · 6 points · 1 comment · 3 min read · EA link

Air-gapping evaluation and support
Ryan Kidd · 26 Dec 2022 22:52 UTC · 22 points · 12 comments · 2 min read · EA link

Goodfire — The Startup Trying to Decode How AI Thinks
Strad Slater · 23 Nov 2025 10:22 UTC · 2 points · 0 comments · 5 min read · EA link
(williamslater2003.medium.com)

We all teach: here’s how to do it better
Michael Noetel 🔸 · 30 Sep 2022 2:06 UTC · 179 points · 12 comments · 24 min read · EA link

Reflections on AI Wisdom, plus announcing Wise AI Wednesdays
Chris Leong · 5 Jun 2025 12:16 UTC · 11 points · 0 comments · 3 min read · EA link

[Question] Best introductory overviews of AGI safety?
JakubK · 13 Dec 2022 19:04 UTC · 21 points · 8 comments · 2 min read · EA link
(www.lesswrong.com)

Introducing the Mox Guest Program
Austin · 1 Oct 2025 18:38 UTC · 34 points · 0 comments · 2 min read · EA link
(moxsf.com)

We are sharing a new website template for AI Safety groups!
AIS Hungary · 13 Mar 2024 16:40 UTC · 11 points · 2 comments · 1 min read · EA link

[Job ad] MATS is hiring!
Ryan Kidd · 9 Oct 2024 20:23 UTC · 18 points · 0 comments · 5 min read · EA link

20+ tips, tricks, lessons and thoughts on hosting hackathons
gergo · 6 Nov 2023 10:59 UTC · 14 points · 0 comments · 11 min read · EA link

Why I think that teaching philosophy is high impact
Eleni_A · 19 Dec 2022 23:00 UTC · 17 points · 2 comments · 2 min read · EA link

[Question] How did the AI Safety talent pipeline come to work so well?
Alejandro Acelas 🔸 · 24 Jul 2025 7:24 UTC · 7 points · 2 comments · 1 min read · EA link

Zurich AI Safety is looking for (Co-)Directors—EOI
mariuswenk · 3 Sep 2025 17:43 UTC · 15 points · 0 comments · 4 min read · EA link

Amplify is hiring! Work with us to support field-building initiatives through digital marketing
gergo · 28 Aug 2024 14:12 UTC · 28 points · 1 comment · 4 min read · EA link

Applications Now Open for Deep Dive: A 201 AI Policy Course by ENAIS
Kambar · 2 Jul 2025 8:32 UTC · 10 points · 5 comments · 1 min read · EA link

The Short Timelines Strategy for AI Safety University Groups
Josh Thorsteinson 🔸 · 7 Mar 2025 4:26 UTC · 52 points · 8 comments · 5 min read · EA link

Attend SPAR’s virtual demo day! (career fair + talks)
Agustín Covarrubias 🔸 · 2 May 2025 23:45 UTC · 17 points · 1 comment · 2 min read · EA link
(demoday.sparai.org)

AI Safety Collab 2025 - Feedback on Plans & Expression of Interest
Evander H. 🔸 · 7 Jan 2025 16:41 UTC · 28 points · 2 comments · 1 min read · EA link

Your group needs all the help it can get (FBB #1)
gergo · 7 Jan 2025 16:42 UTC · 45 points · 6 comments · 4 min read · EA link

ML4Good UK—Applications Open
Nia🔸 · 2 Jan 2024 18:20 UTC · 21 points · 0 comments · 1 min read · EA link

Institutional-themed website template for AIS groups
Kambar · 29 Apr 2025 21:11 UTC · 21 points · 0 comments · 1 min read · EA link

Vael Gates: Risks from Advanced AI (June 2022)
Vael Gates · 14 Jun 2022 0:49 UTC · 45 points · 5 comments · 30 min read · EA link

MATS is hiring!
Ryan Kidd · 8 Apr 2025 20:45 UTC · 14 points · 2 comments · 6 min read · EA link

Recruit the World’s best for AGI Alignment
Greg_Colbourn ⏸️ · 30 Mar 2023 16:41 UTC · 34 points · 8 comments · 22 min read · EA link

How To Become A Mechanistic Interpretability Researcher
Neel Nanda · 2 Sep 2025 23:38 UTC · 31 points · 0 comments · 55 min read · EA link

‘GiveWell for AI Safety’: Lessons learned in a week
Lydia Nottingham · 30 May 2025 16:10 UTC · 45 points · 1 comment · 6 min read · EA link

[Question] Workshop (hackathon, residence program, etc.) about for-profit AI Safety projects?
Roman Leventov · 26 Jan 2024 9:49 UTC · 13 points · 1 comment · 1 min read · EA link

Why people want to work on AI safety (but don’t)
Emily Grundy · 24 Jan 2023 6:41 UTC · 70 points · 10 comments · 7 min read · EA link

Now is the Time for Moonshots
Alejandro Acelas 🔸 · 18 Jul 2025 15:59 UTC · 2 points · 0 comments · 1 min read · EA link
(lukedrago.substack.com)

Can TikToks communicate AI policy and risk?
Caitlin Borke · 7 May 2025 12:27 UTC · 72 points · 1 comment · 1 min read · EA link

15 Levers to Influence Frontier AI Companies
Jan Wehner🔸 · 26 Sep 2025 8:36 UTC · 16 points · 0 comments · 10 min read · EA link

Announcing the Cooperative AI Research Fellowship
leo_hyams · 11 Sep 2025 13:48 UTC · 10 points · 0 comments · 8 min read · EA link

Hacking away at p(doom)
jmuthu · 16 Sep 2025 14:03 UTC · 9 points · 1 comment · 7 min read · EA link

[Job ad] LISA CEO
Ryan Kidd · 9 Feb 2025 0:18 UTC · 5 points · 0 comments · 2 min read · EA link

The Hasty Start of Budapest AI Safety, 6-month update from a non-STEM founder
gergo · 3 Jan 2024 12:56 UTC · 9 points · 1 comment · 7 min read · EA link

MATS AI Safety Strategy Curriculum v2
DanielFilan · 7 Oct 2024 23:01 UTC · 29 points · 1 comment · 13 min read · EA link

AIS Hungary is hiring a part-time Technical Lead! (Deadline: Dec 31st)
gergo · 17 Dec 2024 14:08 UTC · 9 points · 0 comments · 2 min read · EA link

[Presentation] Intro to AI Safety
Eitan · 6 Jan 2025 13:04 UTC · 14 points · 0 comments · 1 min read · EA link

Volunteer Opportunities with the AI Safety Awareness Foundation
NoahCWilson🔸 · 8 Mar 2025 4:41 UTC · 7 points · 0 comments · 2 min read · EA link

Why AI Safety Camp struggles with fundraising (FBB #2)
gergo · 21 Jan 2025 17:25 UTC · 67 points · 10 comments · 7 min read · EA link

Launching Amplify: Receive marketing support for your local groups and other field-building initiatives
gergo · 28 Aug 2024 14:12 UTC · 37 points · 0 comments · 2 min read · EA link

AI Safety Unconference NeurIPS 2022
Orpheus_Lummis · 7 Nov 2022 15:39 UTC · 13 points · 5 comments · 1 min read · EA link
(aisafetyevents.org)

How Apart Research would use marginal funding to scale AI safety talent development
JaimeRV · 23 Nov 2025 16:59 UTC · 31 points · 0 comments · 6 min read · EA link

Apply for ARBOx: an ML safety intensive [deadline 13 Dec ’24]
Nick Marsh · 1 Dec 2024 18:13 UTC · 20 points · 0 comments · 1 min read · EA link

What are some other introductions to AI safety?
Vishakha Agrawal · 17 Feb 2025 11:48 UTC · 9 points · 0 comments · 7 min read · EA link
(aisafety.info)

Why I’m excited about AI safety talent development initiatives
JulianHazell · 28 Aug 2025 18:12 UTC · 61 points · 9 comments · 3 min read · EA link
(thirdthing.ai)

Understanding AI World Models w/ Chris Canal
Jacob-Haimes · 27 Jan 2025 16:37 UTC · 5 points · 0 comments · 1 min read · EA link
(kairos.fm)