tlevin

Karma: 1,392

(Posting in a personal capacity unless stated otherwise.) I help allocate Open Phil’s resources to improve the governance of AI, with a focus on avoiding catastrophic outcomes. Formerly: co-founder of the Cambridge Boston Alignment Initiative, which supports AI alignment/safety research and outreach programs at Harvard, MIT, and beyond; co-president of Harvard EA; Director of Governance Programs at the Harvard AI Safety Team and MIT AI Alignment; and occasional AI governance researcher. I’m also a proud GWWC pledger and vegan.

EU policymakers reach an agreement on the AI Act

tlevin · 15 Dec 2023 6:03 UTC
108 points
13 comments · 1 min read · EA link

Notes on nukes, IR, and AI from “Arsenals of Folly” (and other books)

tlevin · 4 Sep 2023 19:02 UTC
20 points
2 comments · 6 min read · EA link

levin’s Quick takes

tlevin · 25 Aug 2023 19:57 UTC
6 points
1 comment · 1 min read · EA link

Apply to HAIST/MAIA’s AI Governance Workshop in DC (Feb 17-20)

Phosphorous · 28 Jan 2023 0:45 UTC
15 points
0 comments · 1 min read · EA link
(www.lesswrong.com)

Getting Actual Value from “Info Value”: Example from a Failed Experiment

Nikola · 26 Jan 2023 17:48 UTC
63 points
0 comments · 3 min read · EA link

Announcing the Cambridge Boston Alignment Initiative [Hiring!]

kuhanj · 2 Dec 2022 1:07 UTC
83 points
0 comments · 1 min read · EA link

Common-sense cases where “hypothetical future people” matter

tlevin · 12 Aug 2022 14:05 UTC
107 points
21 comments · 4 min read · EA link

[Question] What work has been done on the post-AGI distribution of wealth?

tlevin · 6 Jul 2022 18:59 UTC
16 points
3 comments · 1 min read · EA link

(Even) More Early-Career EAs Should Try AI Safety Technical Research

tlevin · 30 Jun 2022 21:14 UTC
86 points
40 comments · 11 min read · EA link

University Groups Should Do More Retreats

tlevin · 6 Apr 2022 19:20 UTC
85 points
15 comments · 4 min read · EA link