
Building the field of AI safety


Building the field of AI safety refers to the family of interventions aimed at growing, shaping, or otherwise improving AI safety as an intellectual community.

Related entries

AI risk | AI safety | existential risk | building effective altruism

There should be a public adversarial collaboration on AI x-risk

pradyuprasad · Jan 23, 2023, 4:09 AM
56 points
5 comments · 2 min read · EA link

Cost-effectiveness of student programs for AI safety research

Center for AI Safety · Jul 10, 2023, 5:23 PM
53 points
7 comments · 15 min read · EA link

Announcing the Cambridge Boston Alignment Initiative [Hiring!]

kuhanj · Dec 2, 2022, 1:07 AM
83 points
0 comments · 1 min read · EA link

Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding

Vael Gates · Jul 28, 2022, 9:29 PM
126 points
6 comments · 6 min read · EA link

Announcing the Harvard AI Safety Team

Xander123 · Jun 30, 2022, 6:34 PM
128 points
4 comments · 5 min read · EA link

Spreading messages to help with the most important century

Holden Karnofsky · Jan 25, 2023, 8:35 PM
128 points
21 comments · 18 min read · EA link
(www.cold-takes.com)

AI safety needs to scale, and here’s how you can do it

Esben Kran · Feb 2, 2024, 7:17 AM
33 points
2 comments · 5 min read · EA link
(apartresearch.com)

Modeling the impact of AI safety field-building programs

Center for AI Safety · Jul 10, 2023, 5:22 PM
83 points
0 comments · 7 min read · EA link

Announcing AI Safety Bulgaria

Aleksandar Angelov · Mar 3, 2024, 5:53 PM
15 points
0 comments · 1 min read · EA link

Cost-effectiveness of professional field-building programs for AI safety research

Center for AI Safety · Jul 10, 2023, 5:26 PM
38 points
2 comments · 18 min read · EA link

Chilean AIS Hackathon Retrospective

Agustín Covarrubias 🔸 · May 9, 2023, 1:34 AM
67 points
0 comments · 5 min read · EA link

Interested in working from a new Boston AI Safety Hub?

Topaz · Mar 17, 2025, 1:32 PM
25 points
0 comments · 2 min read · EA link

Vael Gates: Risks from Highly-Capable AI (March 2023)

Vael Gates · Apr 1, 2023, 8:54 PM
31 points
4 comments · 1 min read · EA link
(docs.google.com)

AISafety.info “How can I help?” FAQ

StevenKaas · Jun 5, 2023, 10:09 PM
48 points
1 comment · 1 min read · EA link

[Question] AI strategy career pipeline

Zach Stein-Perlman · May 22, 2023, 12:00 AM
72 points
23 comments · 1 min read · EA link

Talk: AI safety fieldbuilding at MATS

Ryan Kidd · Jun 23, 2024, 11:06 PM
14 points
1 comment · 1 min read · EA link

Announcement: there are now monthly coordination calls for AIS fieldbuilders in Europe

gergo · Nov 22, 2024, 10:30 AM
31 points
0 comments · 1 min read · EA link

[SEE NEW EDITS] No, *You* Need to Write Clearer

Nicholas / Heather Kross · Apr 29, 2023, 5:04 AM
71 points
8 comments · 1 min read · EA link
(www.thinkingmuchbetter.com)

AI Safety Arguments: An Interactive Guide

Lukas Trötzmüller🔸 · Feb 1, 2023, 7:21 PM
32 points
5 comments · 3 min read · EA link

The Bar for Contributing to AI Safety is Lower than You Think

Chris Leong · Aug 17, 2024, 10:52 AM
14 points
5 comments · 2 min read · EA link

AGI Safety Fundamentals programme is contracting a low-code engineer

Jamie B · Aug 26, 2022, 3:43 PM
39 points
4 comments · 5 min read · EA link

Decomposing alignment to take advantage of paradigms

Christopher King · Jun 4, 2023, 2:26 PM
2 points
0 comments · 4 min read · EA link

Help us seed AI Safety Brussels

gergo · Aug 7, 2024, 6:17 AM
50 points
4 comments · 3 min read · EA link

Onboarding students to EA/AIS in 4 days with an intensive fellowship

gergo · Dec 5, 2023, 10:07 AM
17 points
0 comments · 4 min read · EA link

An economist’s perspective on AI safety

David Stinson · Jun 7, 2024, 7:55 AM
7 points
1 comment · 9 min read · EA link

Cambridge AI Safety Hub is looking for full- or part-time organisers

hannah · Jul 15, 2023, 2:31 PM
12 points
0 comments · 1 min read · EA link

My Proven AI Safety Explanation (as a computing student)

Mica White · Feb 6, 2024, 3:58 AM
8 points
4 comments · 6 min read · EA link

MATS Winter 2023-24 Retrospective

utilistrutil · May 11, 2024, 12:09 AM
62 points
2 comments · 1 min read · EA link

Thoughts about AI safety field-building in LMIC

Renan Araujo · Jun 23, 2023, 11:22 PM
56 points
4 comments · 12 min read · EA link

Launching applications for AI Safety Careers Course India 2024

varun_agr · May 1, 2024, 5:30 AM
23 points
1 comment · 1 min read · EA link

Impact Assessment of AI Safety Camp (Arb Research)

Sam Holton · Jan 23, 2024, 4:32 PM
87 points
23 comments · 11 min read · EA link

The longtermist AI governance landscape: a basic overview

Sam Clarke · Jan 18, 2022, 12:58 PM
169 points
13 comments · 9 min read · EA link

Announcing Athena—Women in AI Alignment Research

Claire Short · Nov 7, 2023, 10:02 PM
180 points
28 comments · 3 min read · EA link

Washington Post article about EA university groups

Lizka · Jul 5, 2023, 12:58 PM
35 points
5 comments · 1 min read · EA link

An Overview of the AI Safety Funding Situation

Stephen McAleese · Jul 12, 2023, 2:54 PM
134 points
15 comments · 15 min read · EA link

Supervised Program for Alignment Research (SPAR) at UC Berkeley: Spring 2023 summary

mic · Aug 19, 2023, 2:32 AM
18 points
1 comment · 6 min read · EA link
(www.lesswrong.com)

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?

MikhailSamin · Dec 27, 2022, 11:07 AM
39 points
10 comments · 1 min read · EA link

Introducing Kairos: a new AI safety fieldbuilding organization (the new home for SPAR and FSP)

Agustín Covarrubias 🔸 · Oct 25, 2024, 9:59 PM
79 points
2 comments · 2 min read · EA link

The Tree of Life: Stanford AI Alignment Theory of Change

GabeM · Jul 2, 2022, 6:32 PM
69 points
5 comments · 14 min read · EA link

AI safety university groups: a promising opportunity to reduce existential risk

mic · Jun 30, 2022, 6:37 PM
53 points
1 comment · 11 min read · EA link

Shallow review of live agendas in alignment & safety

technicalities · Nov 27, 2023, 11:33 AM
76 points
8 comments · 29 min read · EA link

Ideas for improving epistemics in AI safety outreach

mic · Aug 21, 2023, 7:56 PM
31 points
0 comments · 3 min read · EA link
(www.lesswrong.com)

Alignment Grantmaking is Funding-Limited Right Now [crosspost]

johnswentworth · Aug 2, 2023, 8:37 PM
82 points
13 comments · 1 min read · EA link
(www.lesswrong.com)

Announcing aisafety.training

JJ Hepburn · Jan 17, 2023, 1:55 AM
110 points
4 comments · 1 min read · EA link

How many people are working (directly) on reducing existential risk from AI?

Benjamin Hilton · Jan 17, 2023, 2:03 PM
117 points
3 comments · 4 min read · EA link
(80000hours.org)

Transcripts of interviews with AI researchers

Vael Gates · May 9, 2022, 6:03 AM
140 points
14 comments · 2 min read · EA link

Good job opportunities for helping with the most important century

Holden Karnofsky · Jan 18, 2024, 7:21 PM
46 points
1 comment · 4 min read · EA link
(www.cold-takes.com)

Why *not* just send people to Bluedot (FBB#4)

gergo · Mar 25, 2025, 10:47 AM
24 points
13 comments · 12 min read · EA link

Introducing The Field Building Blog (FBB #0)

gergo · Jan 7, 2025, 3:43 PM
37 points
3 comments · 2 min read · EA link

AISafety.world is a map of the AIS ecosystem

Hamish McDoodles · Apr 6, 2023, 11:47 AM
192 points
8 comments · 1 min read · EA link

Keep Chasing AI Safety Press Coverage

Gil · Apr 4, 2023, 8:40 PM
106 points
16 comments · 5 min read · EA link

Resources that (I think) new alignment researchers should know about

Akash · Oct 28, 2022, 10:13 PM
20 points
2 comments · 1 min read · EA link

Recursive Middle Manager Hell

Raemon · Jan 17, 2023, 7:02 PM
73 points
3 comments · 1 min read · EA link

Survey on the acceleration risks of our new RFPs to study LLM capabilities

Ajeya · Nov 10, 2023, 11:59 PM
38 points
1 comment · 8 min read · EA link

AGI safety field building projects I’d like to see

Severin · Jan 24, 2023, 11:30 PM
25 points
2 comments · 1 min read · EA link

Retrospective on the AI Safety Field Building Hub

Vael Gates · Feb 2, 2023, 2:06 AM
64 points
2 comments · 9 min read · EA link

Nobody’s on the ball on AGI alignment

leopold · Mar 29, 2023, 2:26 PM
327 points
65 comments · 9 min read · EA link
(www.forourposterity.com)

Personal thoughts on careers in AI policy and strategy

carrickflynn · Sep 27, 2017, 4:52 PM
56 points
28 comments · 18 min read · EA link

Jobs that can help with the most important century

Holden Karnofsky · Feb 12, 2023, 6:19 PM
57 points
2 comments · 32 min read · EA link
(www.cold-takes.com)

How MATS addresses “mass movement building” concerns

Ryan Kidd · May 4, 2023, 12:55 AM
79 points
4 comments · 1 min read · EA link

Announcing “Key Phenomena in AI Risk” (facilitated reading group)

nora · May 9, 2023, 4:52 PM
28 points
0 comments · 2 min read · EA link

Projects I would like to see (possibly at AI Safety Camp)

Linda Linsefors · Sep 27, 2023, 9:27 PM
9 points
0 comments · 1 min read · EA link

Apply to lead a project during the next virtual AI Safety Camp

Linda Linsefors · Sep 13, 2023, 1:29 PM
16 points
0 comments · 1 min read · EA link
(aisafety.camp)

Relationship between EA Community and AI safety

Tom Barnes🔸 · Sep 18, 2023, 1:49 PM
157 points
15 comments · 1 min read · EA link

AI safety field-building survey: Talent needs, infrastructure needs, and relationship to EA

michel · Oct 27, 2023, 9:08 PM
67 points
3 comments · 9 min read · EA link

Thread: Reflections on the AGI Safety Fundamentals course?

Clifford · May 18, 2023, 1:11 PM
27 points
7 comments · 1 min read · EA link

AIS Netherlands is looking for a Founding Executive Director (EOI form)

gergo · Mar 19, 2025, 9:24 AM
49 points
4 comments · 4 min read · EA link

Results for a survey of tool use and workflows in alignment research

jacquesthibs · Dec 19, 2022, 3:19 PM
30 points
0 comments · 1 min read · EA link

Concrete Steps to Get Started in Transformer Mechanistic Interpretability

Neel Nanda · Dec 26, 2022, 1:00 PM
18 points
0 comments · 12 min read · EA link

AI Safety Seems Hard to Measure

Holden Karnofsky · Dec 11, 2022, 1:31 AM
90 points
4 comments · 14 min read · EA link
(www.cold-takes.com)

Establishing Oxford’s AI Safety Student Group: Lessons Learnt and Our Model

Wilkin1234 · Sep 21, 2022, 7:57 AM
72 points
3 comments · 1 min read · EA link

The Benefits of Distillation in Research

Jonas Hallgren 🔸 · Mar 4, 2023, 7:19 PM
45 points
2 comments · 5 min read · EA link

Update on cause area focus working group

Bastian_Stern · Aug 10, 2023, 1:21 AM
140 points
18 comments · 5 min read · EA link

Observations on the funding landscape of EA and AI safety

Vilhelm Skoglund · Oct 2, 2023, 9:45 AM
136 points
12 comments · 15 min read · EA link

Why EA Community building

Rob Gledhill · Jun 14, 2023, 8:48 PM
73 points
7 comments · 5 min read · EA link

Offering AI safety support calls for ML professionals

Vael Gates · Feb 15, 2024, 11:48 PM
52 points
1 comment · 1 min read · EA link

Kairos is hiring a Head of Operations/Founding Generalist

Agustín Covarrubias 🔸 · Mar 12, 2025, 8:58 PM
59 points
1 comment · 5 min read · EA link

AI Safety Memes Wiki

plex · Jul 24, 2024, 6:53 PM
6 points
0 comments · 1 min read · EA link
(aisafety.info)

AI Safety Field Building vs. EA CB

kuhanj · Jun 26, 2023, 11:21 PM
80 points
16 comments · 6 min read · EA link

How the AI safety technical landscape has changed in the last year, according to some practitioners

tlevin · Jul 26, 2024, 7:06 PM
83 points
1 comment · 1 min read · EA link

Are alignment researchers devoting enough time to improving their research capacity?

Carson Jones · Nov 4, 2022, 12:58 AM
11 points
1 comment · 1 min read · EA link

ML Summer Bootcamp Reflection: Aalto EA Finland

Aayush Kucheria · Jan 12, 2023, 8:24 AM
15 points
2 comments · 9 min read · EA link

CEA seeks co-founder for AI safety group support spin-off

Agustín Covarrubias 🔸 · Apr 8, 2024, 3:42 PM
62 points
0 comments · 4 min read · EA link

Results of an informal survey on AI grantmaking

Scott Alexander · Aug 21, 2024, 1:19 PM
127 points
28 comments · 1 min read · EA link

We need non-cybersecurity people [too]

Jarrah · May 5, 2024, 12:11 AM
32 points
0 comments · 2 min read · EA link

Branding AI Safety Groups: A Field Guide

Agustín Covarrubias 🔸 · May 13, 2024, 5:17 PM
44 points
6 comments · 1 min read · EA link

AI Safety University Organizing: Early Takeaways from Thirteen Groups

Agustín Covarrubias 🔸 · Oct 2, 2024, 2:39 PM
46 points
3 comments · 9 min read · EA link

What are some low-cost outside-the-box ways to do/fund alignment research?

trevor1 · Nov 11, 2022, 5:57 AM
2 points
3 comments · 1 min read · EA link

List of AI safety newsletters and other resources

Lizka · May 1, 2023, 5:24 PM
49 points
5 comments · 4 min read · EA link

Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and developing novel research ideas

Akash · Nov 25, 2022, 8:47 PM
14 points
0 comments · 1 min read · EA link

Update on Harvard AI Safety Team and MIT AI Alignment

Xander123 · Dec 2, 2022, 6:09 AM
71 points
3 comments · 1 min read · EA link

Map of AI Safety v2

Bryce Robertson · Apr 15, 2025, 1:04 PM
52 points
2 comments · 1 min read · EA link

College technical AI safety hackathon retrospective—Georgia Tech

yixiong · Nov 14, 2024, 1:34 PM
18 points
0 comments · 5 min read · EA link
(yixiong.substack.com)

The Short Timelines Strategy for AI Safety University Groups

Josh Thorsteinson 🔸 · Mar 7, 2025, 4:26 AM
50 points
8 comments · 5 min read · EA link

Volunteer Opportunities with the AI Safety Awareness Foundation

NoahCWilson🔸 · Mar 8, 2025, 4:41 AM
7 points
0 comments · 2 min read · EA link

Introducing 11 New AI Safety Organizations—Catalyze’s Winter 24/25 London Incubation Program Cohort

Alexandra Bos · Mar 10, 2025, 7:26 PM
88 points
4 comments · 14 min read · EA link

ENAIS has launched a newsletter for AIS fieldbuilders

gergo · Nov 22, 2024, 10:45 AM
25 points
0 comments · 1 min read · EA link

[Question] Launching Applications for the Global AI Safety Fellowship 2025!

Impact Academy · Nov 27, 2024, 3:33 PM
9 points
1 comment · 1 min read · EA link

AI Safety Unconference NeurIPS 2022

Orpheus_Lummis · Nov 7, 2022, 3:39 PM
13 points
5 comments · 1 min read · EA link
(aisafetyevents.org)

Apply for ARBOx: an ML safety intensive [deadline 13 Dec ’24]

Nick Marsh · Dec 1, 2024, 6:13 PM
20 points
0 comments · 1 min read · EA link

Vael Gates: Risks from Advanced AI (June 2022)

Vael Gates · Jun 14, 2022, 12:49 AM
45 points
5 comments · 30 min read · EA link

Why I think that teaching philosophy is high impact

Eleni_A · Dec 19, 2022, 11:00 PM
17 points
2 comments · 2 min read · EA link

Air-gapping evaluation and support

Ryan Kidd · Dec 26, 2022, 10:52 PM
22 points
12 comments · 1 min read · EA link

AI Safety field-building projects I’d like to see

Akash · Sep 11, 2022, 11:45 PM
31 points
4 comments · 6 min read · EA link
(www.lesswrong.com)

*New* Canada AI Safety & Governance community

Wyatt Tessari L'Allié · Aug 29, 2022, 3:58 PM
32 points
2 comments · 1 min read · EA link

Stress Externalities More in AI Safety Pitches

NickGabs · Sep 26, 2022, 8:31 PM
31 points
9 comments · 2 min read · EA link

[Question] Best introductory overviews of AGI safety?

JakubK · Dec 13, 2022, 7:04 PM
21 points
8 comments · 2 min read · EA link
(www.lesswrong.com)

We all teach: here’s how to do it better

Michael Noetel 🔸 · Sep 30, 2022, 2:06 AM
172 points
12 comments · 24 min read · EA link

[Question] Does China have AI alignment resources/institutions? How can we prioritize creating more?

JakubK · Aug 4, 2022, 7:23 PM
18 points
9 comments · 1 min read · EA link

Analysis of AI Safety surveys for field-building insights

Ash Jafari · Dec 5, 2022, 5:37 PM
30 points
7 comments · 5 min read · EA link

What AI Safety Materials Do ML Researchers Find Compelling?

Vael Gates · Dec 28, 2022, 2:03 AM
130 points
12 comments · 1 min read · EA link

Announcing an Empirical AI Safety Program

Joshc · Sep 13, 2022, 9:39 PM
64 points
7 comments · 2 min read · EA link

AI Safety Microgrant Round

Chris Leong · Nov 14, 2022, 4:25 AM
81 points
3 comments · 3 min read · EA link

Resources I send to AI researchers about AI safety

Vael Gates · Jan 11, 2023, 1:24 AM
43 points
0 comments · 1 min read · EA link

Why people want to work on AI safety (but don’t)

Emily Grundy · Jan 24, 2023, 6:41 AM
70 points
10 comments · 7 min read · EA link

We Ran an Alignment Workshop

aiden ament · Jan 21, 2023, 5:37 AM
6 points
0 comments · 3 min read · EA link

“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers

Vael Gates · Feb 2, 2023, 1:00 AM
46 points
1 comment · 1 min read · EA link

Predicting researcher interest in AI alignment

Vael Gates · Feb 2, 2023, 12:58 AM
30 points
0 comments · 21 min read · EA link
(docs.google.com)

Interviews with 97 AI Researchers: Quantitative Analysis

Maheen Shermohammed · Feb 2, 2023, 4:50 AM
76 points
4 comments · 7 min read · EA link

A Brief Overview of AI Safety/Alignment Orgs, Fields, Researchers, and Resources for ML Researchers

Austin Witte · Feb 2, 2023, 6:19 AM
18 points
5 comments · 2 min read · EA link

Talk to me about your summer/career plans

Akash · Jan 31, 2023, 6:29 PM
31 points
0 comments · 1 min read · EA link

Problems of people new to AI safety and my project ideas to mitigate them

Igor Ivanov · Mar 3, 2023, 5:35 PM
19 points
0 comments · 7 min read · EA link

Where on the continuum of pure EA to pure AIS should you be? (Uni Group Organizers Focus)

jessica_mccurdy🔸 · Jun 26, 2023, 11:46 PM
44 points
0 comments · 5 min read · EA link

Apply for MATS Winter 2023-24!

utilistrutil · Oct 21, 2023, 2:34 AM
34 points
2 comments · 5 min read · EA link
(www.lesswrong.com)

I designed an AI safety course (for a philosophy department)

Eleni_A · Sep 23, 2023, 9:56 PM
27 points
3 comments · 2 min read · EA link

What we learned from running an Australian AI Safety Unconference

Alexander Saeri · Oct 26, 2023, 12:46 AM
34 points
0 comments · 5 min read · EA link

Alignment Megaprojects: You’re Not Even Trying to Have Ideas

Nicholas / Heather Kross · Jul 12, 2023, 11:39 PM
7 points
1 comment · 1 min read · EA link

ARENA 2.0 - Impact Report

Callum McDougall · Sep 26, 2023, 5:13 PM
17 points
0 comments · 13 min read · EA link

Part 3: A Proposed Approach for AI Safety Movement Building: Projects, Professions, Skills, and Ideas for the Future [long post][bounty for feedback]

PeterSlattery · Mar 22, 2023, 12:54 AM
22 points
8 comments · 32 min read · EA link

Recruit the World’s best for AGI Alignment

Greg_Colbourn ⏸️ · Mar 30, 2023, 4:41 PM
34 points
8 comments · 22 min read · EA link

All AGI Safety questions welcome (especially basic ones) [April 2023]

StevenKaas · Apr 8, 2023, 4:21 AM
111 points
173 comments · 1 min read · EA link

Stampy’s AI Safety Info—New Distillations #1 [March 2023]

markov · Apr 7, 2023, 11:35 AM
19 points
0 comments · 2 min read · EA link
(aisafety.info)

[Question] Platform for Project Spitballing? (e.g., for AI field building)

Marcel D · Apr 3, 2023, 3:45 PM
7 points
2 comments · 1 min read · EA link

SERI MATS—Summer 2023 Cohort

a_e_r · Apr 8, 2023, 3:32 PM
36 points
2 comments · 1 min read · EA link

Join AISafety.info’s Writing & Editing Hackathon (Aug 25-28) (Prizes to be won!)

leillustrations🔸 · Aug 5, 2023, 2:06 PM
15 points
0 comments · 1 min read · EA link

On running a city-wide university group

gergo · Nov 6, 2023, 9:43 AM
26 points
3 comments · 9 min read · EA link

20+ tips, tricks, lessons and thoughts on hosting hackathons

gergo · Nov 6, 2023, 10:59 AM
14 points
0 comments · 11 min read · EA link

Rethink Priorities is looking for a (Co-)Founder for a New Project: Field Building in Universities for AI Policy Careers in the US

KevinN · Aug 28, 2023, 4:01 PM
59 points
0 comments · 6 min read · EA link
(careers.rethinkpriorities.org)

Dutch AI Safety Coordination Forum: An Experiment

HenningB · Nov 21, 2023, 4:18 PM
21 points
0 comments · 4 min read · EA link

Announcing AISafety.info’s Write-a-thon (June 16-18) and Second Distillation Fellowship (July 3-October 2)

StevenKaas · Jun 3, 2023, 2:03 AM
12 points
1 comment · 1 min read · EA link

Advice for Entering AI Safety Research

stecas · Jun 2, 2023, 8:46 PM
14 points
1 comment · 1 min read · EA link

AI Safety Fundamentals: An Informal Cohort Starting Soon! (cross-posted to lesswrong.com)

Tiago · Jun 4, 2023, 6:21 PM
6 points
0 comments · 1 min read · EA link
(www.lesswrong.com)

You can run more than one fellowship per semester if you want to

gergo · Dec 12, 2023, 8:49 AM
6 points
1 comment · 3 min read · EA link

Set up an AIS newsletter for your group in 10 minutes per month (June edition)

gergo · Jun 18, 2024, 6:31 AM
34 points
0 comments · 1 min read · EA link

We are sharing a new website template for AI Safety groups!

AIS Hungary · Mar 13, 2024, 4:40 PM
10 points
2 comments · 1 min read · EA link

ML4Good UK—Applications Open

Nia · Jan 2, 2024, 6:20 PM
21 points
0 comments · 1 min read · EA link

The Hasty Start of Budapest AI Safety, 6-month update from a non-STEM founder

gergo · Jan 3, 2024, 12:56 PM
9 points
1 comment · 7 min read · EA link

Learning Math in Time for Alignment

Nicholas / Heather Kross · Jan 9, 2024, 1:02 AM
10 points
0 comments · 1 min read · EA link

[Job ad] MATS is hiring!

Ryan Kidd · Oct 9, 2024, 8:23 PM
18 points
0 comments · 5 min read · EA link

Apply to MATS 8.0!

Ryan Kidd · Mar 20, 2025, 2:17 AM
33 points
0 comments · 1 min read · EA link

Arkose: Organizational Updates & Ways to Get Involved

Arkose · Aug 1, 2024, 1:03 PM
28 points
1 comment · 1 min read · EA link

ML4Good Brasil—Applications Open

Nia · May 3, 2024, 10:39 AM
28 points
1 comment · 1 min read · EA link

MATS AI Safety Strategy Curriculum v2

DanielFilan · Oct 7, 2024, 11:01 PM
29 points
1 comment · 1 min read · EA link

MATS is hiring!

Ryan Kidd · Apr 8, 2025, 8:45 PM
14 points
2 comments · 1 min read · EA link

Talent Needs of Technical AI Safety Teams

Ryan Kidd · May 24, 2024, 12:46 AM
51 points
11 comments · 14 min read · EA link

MATS Alumni Impact Analysis

utilistrutil · Oct 2, 2024, 11:44 PM
16 points
1 comment · 1 min read · EA link

Cost-effectiveness analysis of ~1260 USD worth of social media ads for fellowship marketing

gergo · Jan 25, 2024, 3:18 PM
61 points
5 comments · 2 min read · EA link

[Question] Workshop (hackathon, residence program, etc.) about for-profit AI Safety projects?

Roman Leventov · Jan 26, 2024, 9:49 AM
13 points
1 comment · 1 min read · EA link

Amplify is hiring! Work with us to support field-building initiatives through digital marketing

gergo · Aug 28, 2024, 2:12 PM
28 points
1 comment · 4 min read · EA link

Printable resources for AI Safety tabling

gergo · Aug 28, 2024, 9:39 AM
29 points
0 comments · 1 min read · EA link

Launching Amplify: Receive marketing support for your local groups and other field-building initiatives

gergo · Aug 28, 2024, 2:12 PM
37 points
0 comments · 2 min read · EA link

AIS Hungary is hiring a part-time Technical Lead! (Deadline: Dec 31st)

gergo · Dec 17, 2024, 2:08 PM
9 points
0 comments · 2 min read · EA link

Executive Director for AIS Brussels—Expression of interest

gergo · Dec 19, 2024, 9:15 AM
29 points
0 comments · 4 min read · EA link

Executive Director for AIS France—Expression of interest

gergo · Dec 19, 2024, 8:11 AM
33 points
0 comments · 4 min read · EA link

[Job ad] LISA CEO

Ryan Kidd · Feb 9, 2025, 12:18 AM
5 points
0 comments · 1 min read · EA link

Retrospective: PIBBSS Fellowship 2024

Dušan D. Nešić (Dushan) · Dec 20, 2024, 3:55 PM
7 points
0 comments · 1 min read · EA link

Apply to the 2025 PIBBSS Summer Research Fellowship

Dušan D. Nešić (Dushan) · Dec 24, 2024, 10:28 AM
6 points
0 comments · 1 min read · EA link

Whistleblowing Twitter Bot

Mckiev 🔸 · Dec 26, 2024, 6:18 PM
11 points
1 comment · 2 min read · EA link
(www.lesswrong.com)

[Apply] What I Love About AI Safety Fieldbuilding at Cambridge (& We’re Hiring for a Leadership Role)

Harrison 🔸 · Feb 14, 2025, 5:41 PM
15 points
0 comments · 3 min read · EA link

MATS Spring 2024 Extension Retrospective

HenningB · Feb 16, 2025, 8:29 PM
13 points
0 comments · 15 min read · EA link
(www.lesswrong.com)

What are some other introductions to AI safety?

Vishakha Agrawal · Feb 17, 2025, 11:48 AM
9 points
0 comments · 7 min read · EA link
(aisafety.info)

[Presentation] Intro to AI Safety

Eitan · Jan 6, 2025, 1:04 PM
13 points
0 comments · 1 min read · EA link

What new x- or s-risk fieldbuilding organisations would you like to see? An EOI form. (FBB #3)

gergo · Feb 17, 2025, 12:37 PM
28 points
3 comments · 2 min read · EA link

Reasons for and against working on technical AI safety at a frontier AI lab

bilalchughtai · Jan 7, 2025, 1:23 PM
16 points
3 comments · 12 min read · EA link
(www.lesswrong.com)

Your group needs all the help it can get (FBB #1)

gergo · Jan 7, 2025, 4:42 PM
43 points
6 comments · 4 min read · EA link

AI Safety Collab 2025 - Feedback on Plans & Expression of Interest

Evander H. 🔸 · Jan 7, 2025, 4:41 PM
28 points
2 comments · 1 min read · EA link

Start an AIS safety field-building organization at the city or national level—an EOI form

gergo · Jan 9, 2025, 8:42 AM
38 points
4 comments · 2 min read · EA link

What We Can Do to Prevent Extinction by AI

Joe Rogero · Feb 24, 2025, 5:15 PM
22 points
3 comments · 11 min read · EA link

Why AI Safety Camp struggles with fundraising (FBB #2)

gergo · Jan 21, 2025, 5:25 PM
63 points
10 comments · 7 min read · EA link

Mistakes I made running an AI safety student group

cb · Feb 26, 2025, 3:07 PM
25 points
0 comments · 7 min read · EA link

Formalize the Hashiness Model of AGI Uncontainability

Remmelt · Nov 9, 2024, 4:10 PM
2 points
0 comments · 5 min read · EA link
(docs.google.com)

Understanding AI World Models w/ Chris Canal

Jacob-Haimes · Jan 27, 2025, 4:37 PM
5 points
0 comments · 1 min read · EA link
(kairos.fm)

Offer: Team Conflict Counseling for AI Safety Orgs

Severin · Apr 14, 2025, 3:17 PM
23 points
1 comment · 1 min read · EA link