Joe_Carlsmith

Karma: 3,426

Senior research analyst at Open Philanthropy. Doctorate in philosophy from the University of Oxford. Opinions my own.

AI for AI safety

Joe_Carlsmith · Mar 14, 2025, 3:00 PM
34 points
1 comment · 1 min read · EA link
(joecarlsmith.substack.com)

Paths and waystations in AI safety

Joe_Carlsmith · Mar 11, 2025, 6:52 PM
22 points
2 comments · 1 min read · EA link
(joecarlsmith.substack.com)

When should we worry about AI power-seeking?

Joe_Carlsmith · Feb 19, 2025, 7:44 PM
21 points
2 comments · 1 min read · EA link
(joecarlsmith.substack.com)

What is it to solve the alignment problem?

Joe_Carlsmith · Feb 13, 2025, 6:42 PM
25 points
1 comment · 1 min read · EA link
(joecarlsmith.substack.com)

How do we solve the alignment problem?

Joe_Carlsmith · Feb 13, 2025, 6:27 PM
28 points
1 comment · 1 min read · EA link
(joecarlsmith.substack.com)

Fake thinking and real thinking

Joe_Carlsmith · Jan 28, 2025, 8:05 PM
75 points
3 comments · 1 min read · EA link
(joecarlsmith.substack.com)

Takes on “Alignment Faking in Large Language Models”

Joe_Carlsmith · Dec 18, 2024, 6:22 PM
72 points
1 comment · 1 min read · EA link

Incentive design and capability elicitation

Joe_Carlsmith · Nov 12, 2024, 8:56 PM
9 points
0 comments · 1 min read · EA link

Option control

Joe_Carlsmith · Nov 4, 2024, 5:54 PM
11 points
0 comments · 1 min read · EA link

Motivation control

Joe_Carlsmith · Oct 30, 2024, 5:15 PM
18 points
0 comments · 1 min read · EA link

How might we solve the alignment problem? (Part 1: Intro, summary, ontology)

Joe_Carlsmith · Oct 28, 2024, 9:57 PM
18 points
0 comments · 1 min read · EA link

Video and transcript of presentation on Otherness and control in the age of AGI

Joe_Carlsmith · Oct 8, 2024, 10:30 PM
18 points
1 comment · 1 min read · EA link

What is it to solve the alignment problem? (Notes)

Joe_Carlsmith · Aug 24, 2024, 9:19 PM
32 points
1 comment · 1 min read · EA link

Value fragility and AI takeover

Joe_Carlsmith · Aug 5, 2024, 9:28 PM
38 points
3 comments · 1 min read · EA link

A framework for thinking about AI power-seeking

Joe_Carlsmith · Jul 24, 2024, 10:41 PM
44 points
11 comments · 1 min read · EA link

Loving a world you don’t trust

Joe_Carlsmith · Jun 18, 2024, 7:31 PM
65 points
7 comments · 1 min read · EA link

On “first critical tries” in AI alignment

Joe_Carlsmith · Jun 5, 2024, 12:19 AM
29 points
3 comments · 1 min read · EA link

On attunement

Joe_Carlsmith · Mar 25, 2024, 12:47 PM
28 points
0 comments · 1 min read · EA link