
Evan R. Murphy

Karma: 604

I’m doing research and other work focused on AI safety/security, governance, and risk reduction. Currently my top projects are (last updated Feb 26, 2025):

My general areas of interest include AI safety strategy, comparative AI alignment research, prioritizing technical alignment work, analyzing the published alignment plans of major AI labs, interpretability, deconfusion research, and other AI safety-related topics.

Research that I’ve authored or co-authored:

Before getting into AI safety, I was a software engineer for 11 years at Google and various startups. You can find details about my previous work on my LinkedIn.

While I’m not always great at responding, I’m happy to connect with other researchers or people interested in AI alignment and effective altruism. Feel free to send me a private message!

AI Risk: Can We Thread the Needle? [Recorded Talk from EA Summit Vancouver ’25]

Evan R. Murphy · 2 Oct 2025 19:05 UTC
8 points
0 comments · 2 min read · EA link

Proposal: Funding Diversification for Top Cause Areas

Evan R. Murphy · 20 Nov 2022 11:30 UTC
29 points
8 comments · 2 min read · EA link

New US Senate Bill on X-Risk Mitigation [Linkpost]

Evan R. Murphy · 4 Jul 2022 1:28 UTC
22 points
12 comments · 1 min read · EA link
(www.hsgac.senate.gov)

New series of posts answering one of Holden’s “Important, actionable research questions”

Evan R. Murphy · 12 May 2022 21:22 UTC
9 points
0 comments · 1 min read · EA link

Action: Help expand funding for AI Safety by coordinating on NSF response

Evan R. Murphy · 20 Jan 2022 20:48 UTC
20 points
7 comments · 3 min read · EA link

People in bunkers, “sardines” and why biorisks may be overrated as a global priority

Evan R. Murphy · 23 Oct 2021 0:19 UTC
22 points
6 comments · 3 min read · EA link

Evan R. Murphy’s Quick takes

Evan R. Murphy · 22 Oct 2021 0:32 UTC
1 point
5 comments · EA link