Neel Nanda (Karma: 5,598)
I lead the DeepMind mechanistic interpretability team.
Posts
My Research Process: Key Mindsets—Truth-Seeking, Prioritisation, Moving Fast · Neel Nanda · Apr 27, 2025 · 30 points · 1 comment
How I Think About My Research Process: Explore, Understand, Distill · Neel Nanda · Apr 26, 2025 · 44 points · 2 comments
Neel Nanda’s Quick takes · Neel Nanda · Apr 6, 2025 · 8 points · 3 comments
Good Research Takes are Not Sufficient for Good Strategic Takes · Neel Nanda · Mar 22, 2025 · 117 points · 0 comments · (www.neelnanda.io)
The GDM AGI Safety+Alignment Team is Hiring for Applied Interpretability Research · Arthur Conmy · Feb 25, 2025 · 11 points · 0 comments · 7 min read
MATS Applications + Research Directions I’m Currently Excited About · Neel Nanda · Feb 6, 2025 · 31 points · 3 comments
Concrete open problems in mechanistic interpretability: a technical overview · Neel Nanda · Jul 6, 2023 · 27 points · 1 comment · 29 min read
Concrete Steps to Get Started in Transformer Mechanistic Interpretability · Neel Nanda · Dec 26, 2022 · 18 points · 0 comments · 12 min read
A Barebones Guide to Mechanistic Interpretability Prerequisites · Neel Nanda · Nov 29, 2022 · 54 points · 1 comment · 3 min read · (neelnanda.io)
An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers · Neel Nanda · Oct 18, 2022 · 19 points · 0 comments · 12 min read · (www.neelnanda.io)
Concrete Advice for Forming Inside Views on AI Safety · Neel Nanda · Aug 17, 2022 · 58 points · 4 comments · 10 min read · (www.alignmentforum.org)
Things That Make Me Enjoy Giving Career Advice · Neel Nanda · Jun 17, 2022 · 34 points · 3 comments · 9 min read
How I Formed My Own Views About AI Safety · Neel Nanda · Feb 27, 2022 · 134 points · 12 comments · 14 min read · (www.neelnanda.io)
Simplify EA Pitches to “Holy Shit, X-Risk” · Neel Nanda · Feb 11, 2022 · 186 points · 82 comments · 11 min read · (www.neelnanda.io)
My Overview of the AI Alignment Landscape: A Bird’s Eye View · Neel Nanda · Dec 15, 2021 · 45 points · 15 comments · 16 min read · (www.alignmentforum.org)
Optimisation-focused introduction to EA podcast episode · Neel Nanda · Jan 15, 2021 · 8 points · 1 comment · 1 min read · (art19.com)
Retrospective on Teaching Rationality Workshops · Neel Nanda · Jan 3, 2021 · 42 points · 9 comments · 30 min read
Local Group Event Idea: EA Community Talks · Neel Nanda · Dec 20, 2020 · 26 points · 4 comments · 5 min read
Make a Public Commitment to Writing EA Forum Posts · Neel Nanda · Nov 18, 2020 · 21 points · 11 comments · 1 min read
Helping each other become more effective · Neel Nanda · Oct 30, 2020 · 10 points · 0 comments · 11 min read · (www.neelnanda.io)