
adamShimi

Karma: 238

Epistemologist specializing in the difficulties of alignment and how to solve AI X-risks. Currently at Conjecture.

Blogging at For Methods.

The Compendium, A full argument about extinction risk from AGI

adamShimi · Oct 31, 2024, 12:02 PM
9 points
1 comment · 2 min read · EA link
(www.thecompendium.ai)

How to Diversify Conceptual AI Alignment: the Model Behind Refine

adamShimi · Jul 20, 2022, 10:44 AM
43 points
0 comments · 9 min read · EA link
(www.alignmentforum.org)

Refine: An Incubator for Conceptual Alignment Research Bets

adamShimi · Apr 15, 2022, 8:59 AM
47 points
0 comments · 4 min read · EA link

Introducing the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS) Fellowship

adamShimi · Dec 18, 2021, 3:25 PM
37 points
5 comments · 10 min read · EA link

On Solving Problems Before They Appear: The Weird Epistemologies of Alignment

adamShimi · Oct 11, 2021, 8:21 AM
28 points
0 comments · 15 min read · EA link