
evhub

Karma: 1,730

Evan Hubinger (he/​him/​his) (evanjhub@gmail.com)

I am a research scientist at Anthropic where I lead the Alignment Stress-Testing team. My posts and comments are my own and do not represent Anthropic’s positions, policies, strategies, or opinions.

Previously: MIRI, OpenAI

See: “Why I’m joining Anthropic”

Selected work:

FLI AI Alignment Podcast: Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

evhub · 1 Jul 2020 20:59 UTC
13 points
2 comments · 1 min read · EA link
(futureoflife.org)

You can talk to EA Funds before applying

evhub · 28 Sep 2021 20:39 UTC
104 points
7 comments · 1 min read · EA link

We must be very clear: fraud in the service of effective altruism is unacceptable

evhub · 10 Nov 2022 23:31 UTC
709 points
85 comments · 3 min read · EA link