
MikhailSamin

Karma: 256

I’m good at explaining alignment to people in person, including to policymakers.

I got 250k people to read HPMOR and sent 1.3k copies to winners of math and computer science competitions; I have taken the GWWC pledge; and I created a small startup that donated over $100k to effective nonprofits.

I have a background in ML and strong intuitions about the AI alignment problem. In the past, I studied a bit of international law (with a focus on human rights) and wrote appeals that won cases against the Russian government in Russian courts. I grew up running political campaigns.

I’m interested in chatting with potential collaborators and comms allies.

My website: https://contact.ms

Schedule a call with me: https://contact.ms/ea30

Saving lives near the precipice

MikhailSamin · 29 Jul 2022 15:08 UTC
18 points
10 comments · 3 min read · EA link

You won’t solve alignment without agent foundations

MikhailSamin · 6 Nov 2022 8:07 UTC
14 points
0 comments · 1 min read · EA link

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?

MikhailSamin · 27 Dec 2022 11:07 UTC
39 points
10 comments · 1 min read · EA link

Please wonder about the hard parts of the alignment problem

MikhailSamin · 11 Jul 2023 17:02 UTC
7 points
0 comments · 1 min read · EA link

A transcript of the TED talk by Eliezer Yudkowsky

MikhailSamin · 12 Jul 2023 12:12 UTC
39 points
0 comments · 1 min read · EA link