
Evan R. Murphy

Karma: 594

Formerly a software engineer at Google; now I’m doing independent AI alignment research.

Because of my focus on AI alignment, I tend to post more on LessWrong and AI Alignment Forum than I do here.

I’m always happy to connect with other researchers or people interested in AI alignment and effective altruism. Feel free to send me a private message!

People in bunkers, “sardines” and why biorisks may be overrated as a global priority

Evan R. Murphy · 23 Oct 2021 0:19 UTC
22 points
6 comments · 3 min read · EA link

Action: Help expand funding for AI Safety by coordinating on NSF response

Evan R. Murphy · 20 Jan 2022 20:48 UTC
20 points
7 comments · 3 min read · EA link

New series of posts answering one of Holden’s “Important, actionable research questions”

Evan R. Murphy · 12 May 2022 21:22 UTC
9 points
0 comments · 1 min read · EA link

New US Senate Bill on X-Risk Mitigation [Linkpost]

Evan R. Murphy · 4 Jul 2022 1:28 UTC
22 points
12 comments · 1 min read · EA link
(www.hsgac.senate.gov)

Proposal: Funding Diversification for Top Cause Areas

Evan R. Murphy · 20 Nov 2022 11:30 UTC
29 points
8 comments · 2 min read · EA link