
MikhailSamin

Karma: 582

Are you interested in AI X-risk reduction and strategies? Do you have experience in comms or policy? Let’s chat!

aigsi.org develops educational materials and ads that efficiently communicate core AI safety ideas to specific demographics, with a focus on building a correct understanding of why smarter-than-human AI poses a risk of extinction. We plan to increase and leverage understanding of AI and of existential risk from AI to improve the chance that institutions address x-risk.

Early results include ads that achieve $0.10 per click to a website explaining the technical details of why AI experts are worried about extinction risk from AI, and $0.05 per engagement on ads that share simple ideas at the core of the problem.

Personally, I’m good at explaining existential risk from AI to people, including policymakers. At an e/acc event, I changed the minds of 34 people I talked to.

Previously, I got 250k people to read HPMOR and sent 1.3k copies to winners of math and computer science competitions (including dozens of IMO and IOI gold medalists); took the GWWC pledge; and created a small startup that donated >$100k to effective nonprofits.

I have a background in ML and strong intuitions about the AI alignment problem. I grew up running political campaigns and have a bit of a security mindset.

My website: contact.ms

You’re welcome to schedule a call with me before or after the conference: contact.ms/ea30

Unless its governance changes, Anthropic is untrustworthy

MikhailSamin · 2 Dec 2025 17:07 UTC
60 points
4 comments · 29 min read · EA link
(anthropic.ml)

Sharing information about Lightcone Infrastructure

MikhailSamin · 2 Nov 2025 8:51 UTC
−10 points
7 comments · 8 min read · EA link

We’ve automated x-risk-pilling people

MikhailSamin · 5 Oct 2025 11:41 UTC
0 points
9 comments · 1 min read · EA link
(whycare.aisgf.us)

How to Give in to Threats (without incentivizing them)

MikhailSamin · 24 Mar 2025 0:53 UTC
9 points
0 comments · 5 min read · EA link

Superintelligence’s goals are likely to be random

MikhailSamin · 14 Mar 2025 1:17 UTC
2 points
0 comments · 5 min read · EA link

No one has the ball on 1500 Russian olympiad winners who’ve received HPMOR

MikhailSamin · 23 Jan 2025 16:40 UTC
32 points
10 comments · 1 min read · EA link

Claude 3 claims it’s conscious, doesn’t want to die or be modified

MikhailSamin · 4 Mar 2024 23:05 UTC
8 points
3 comments · 14 min read · EA link

FTX expects to return all customer money; clawbacks may go away

MikhailSamin · 14 Feb 2024 3:43 UTC
38 points
23 comments · 1 min read · EA link
(www.nytimes.com)

An EA used deceptive messaging to advance her project; we need mechanisms to avoid deontologically dubious plans

MikhailSamin · 13 Feb 2024 23:11 UTC
18 points
39 comments · 5 min read · EA link

NYT is suing OpenAI & Microsoft for alleged copyright infringement; some quick thoughts

MikhailSamin · 28 Dec 2023 18:37 UTC
29 points
0 comments · 1 min read · EA link

Some quick thoughts on “AI is easy to control”

MikhailSamin · 7 Dec 2023 12:23 UTC
5 points
4 comments · 7 min read · EA link

It’s OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral Patienthood

MikhailSamin · 13 Nov 2023 16:51 UTC
−4 points
34 comments · 7 min read · EA link

A transcript of the TED talk by Eliezer Yudkowsky

MikhailSamin · 12 Jul 2023 12:12 UTC
39 points
2 comments · 4 min read · EA link

Try to solve the hard parts of the alignment problem

MikhailSamin · 11 Jul 2023 17:02 UTC
8 points
0 comments · 5 min read · EA link

[Question] I have thousands of copies of HPMOR in Russian. How to use them with the most impact?

MikhailSamin · 27 Dec 2022 11:07 UTC
39 points
10 comments · 1 min read · EA link

You won’t solve alignment without agent foundations

MikhailSamin · 6 Nov 2022 8:07 UTC
14 points
0 comments · 8 min read · EA link

Saving lives near the precipice

MikhailSamin · 29 Jul 2022 15:08 UTC
18 points
10 comments · 3 min read · EA link

Samin’s Quick takes

MikhailSamin · 24 Jul 2022 17:15 UTC
1 point
37 comments · EA link