
Evan R. Murphy

Karma: 597

Formerly a software engineer at Google; now I'm doing independent AI alignment research.

Because of my focus on AI alignment, I tend to post more on LessWrong and AI Alignment Forum than I do here.

I’m always happy to connect with other researchers or people interested in AI alignment and effective altruism. Feel free to send me a private message!

Proposal: Funding Diversification for Top Cause Areas

Evan R. Murphy · Nov 20, 2022, 11:30 AM
29 points
8 comments · 2 min read · EA link

New US Senate Bill on X-Risk Mitigation [Linkpost]

Evan R. Murphy · Jul 4, 2022, 1:28 AM
22 points
12 comments · 1 min read · EA link
(www.hsgac.senate.gov)

New series of posts answering one of Holden's "Important, actionable research questions"

Evan R. Murphy · May 12, 2022, 9:22 PM
9 points
0 comments · 1 min read · EA link

Action: Help expand funding for AI Safety by coordinating on NSF response

Evan R. Murphy · Jan 20, 2022, 8:48 PM
20 points
7 comments · 3 min read · EA link

People in bunkers, "sardines" and why biorisks may be overrated as a global priority

Evan R. Murphy · Oct 23, 2021, 12:19 AM
22 points
6 comments · 3 min read · EA link

Evan R. Murphy's Quick takes

Evan R. Murphy · Oct 22, 2021, 12:32 AM
1 point
5 comments · EA link