
Anthropic


Anthropic is an AI safety and research company. It was founded in May 2021 by siblings Dario Amodei and Daniela Amodei, who serve as CEO and President, respectively.[1][2]

Anthropic raised $124 million in a series A funding round. The round was led by Jaan Tallinn, and included participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research (CERR, now Macroscopic Ventures), Eric Schmidt, and others.[3][4][5]

Anthropic raised a further $580 million in a series B funding round. The round was led by Sam Bankman-Fried, and included participation from Caroline Ellison, James McClave, and Nishad Singh, as well as Tallinn and CERR.[6]

Further reading

Perry, Lucas (2022) Daniela and Dario Amodei on Anthropic, Future of Life Institute, March 4.

External links

Anthropic. Official website.

Apply for a job.

Related entries

AI safety | OpenAI

  1. ^
  2. ^ Waters, Richard & Miles Kruppa (2021) Rebel AI group raises record cash after machine learning schism, Financial Times, May 28.
  3. ^
  4. ^ Piper, Kelsey (2021) Future Perfect Newsletter, Vox, May 28.
  5. ^
  6. ^ Coldewey, Devin (2022) Anthropic’s quest for better, more explainable AI attracts $580M, TechCrunch, April 29.

Posts tagged Anthropic

Chris Olah on what the hell is going on inside neural networks
80000_Hours · 4 Aug 2021 15:13 UTC · 5 points · 0 comments · 133 min read · EA link

[Question] Would an Anthropic/OpenAI merger be good for AI safety?
M · 22 Nov 2023 20:21 UTC · 6 points · 1 comment · 1 min read · EA link

Introducing Alignment Stress-Testing at Anthropic
evhub · 12 Jan 2024 23:51 UTC · 80 points · 0 comments · 2 min read · EA link

Dario Amodei — Machines of Loving Grace
Matrice Jacobine🔸🏳️‍⚧️ · 11 Oct 2024 21:39 UTC · 66 points · 0 comments · 1 min read · EA link (darioamodei.com)

Dear Anthropic people, please don’t release Claude
Joseph Miller · 8 Feb 2023 2:44 UTC · 28 points · 5 comments · 1 min read · EA link

Claude 3.5 Sonnet
Zach Stein-Perlman · 20 Jun 2024 18:00 UTC · 31 points · 0 comments · 1 min read · EA link (www.anthropic.com)

Spicy takes about AI policy (Clark, 2022)
Will Aldred · 9 Aug 2022 13:49 UTC · 44 points · 0 comments · 3 min read · EA link (twitter.com)

[Question] Is it possible that SBF-linked funds haven’t yet been transferred to Anthropic or that Anthropic would have to return these funds?
donegal · 16 Nov 2022 9:33 UTC · 5 points · 0 comments · 1 min read · EA link

2021 AI Alignment Literature Review and Charity Comparison
Larks · 23 Dec 2021 14:06 UTC · 176 points · 18 comments · 73 min read · EA link

Anthropic’s leading researchers acted as moderate accelerationists
Remmelt · 1 Sep 2025 23:23 UTC · 79 points · 4 comments · 42 min read · EA link

Safety Conscious Researchers should leave Anthropic
GideonF · 1 Apr 2025 10:12 UTC · 57 points · 3 comments · 5 min read · EA link

[Question] I’m interviewing Nova Das Sarma about AI safety and information security. What should I ask her?
Robert_Wiblin · 25 Mar 2022 15:38 UTC · 17 points · 13 comments · 1 min read · EA link

We are fighting a shared battle (a call for a different approach to AI Strategy)
GideonF · 16 Mar 2023 14:37 UTC · 59 points · 10 comments · 15 min read · EA link

Google invests $300mn in artificial intelligence start-up Anthropic | FT
𝕮𝖎𝖓𝖊𝖗𝖆 · 3 Feb 2023 19:43 UTC · 155 points · 5 comments · 1 min read · EA link (www.ft.com)

Anthropic’s submission to the White House’s RFI on AI policy
Agustín Covarrubias 🔸 · 6 Mar 2025 22:47 UTC · 48 points · 7 comments · 1 min read · EA link (www.anthropic.com)

Jan Leike: “I’m excited to join @AnthropicAI to continue the superalignment mission!”
defun 🔸 · 28 May 2024 18:08 UTC · 35 points · 11 comments · 1 min read · EA link (x.com)

Anthropic is not being consistently candid about their connection to EA
burner2 · 30 Mar 2025 13:30 UTC · 312 points · 88 comments · 2 min read · EA link

Leaving Open Philanthropy, going to Anthropic
Joe_Carlsmith · 3 Nov 2025 17:41 UTC · 141 points · 14 comments · 18 min read · EA link

Anthropic: Core Views on AI Safety: When, Why, What, and How
jonmenaster · 9 Mar 2023 17:30 UTC · 107 points · 6 comments · 22 min read · EA link (www.anthropic.com)

Frontier Model Forum
Zach Stein-Perlman · 26 Jul 2023 14:30 UTC · 40 points · 7 comments · 4 min read · EA link (blog.google)

Takes on “Alignment Faking in Large Language Models”
Joe_Carlsmith · 18 Dec 2024 18:22 UTC · 72 points · 1 comment · 62 min read · EA link

AISN #50: AI Action Plan Responses
Center for AI Safety · 31 Mar 2025 20:07 UTC · 10 points · 0 comments · 6 min read · EA link (newsletter.safe.ai)

The current state of RSPs
Zach Stein-Perlman · 4 Nov 2024 16:00 UTC · 19 points · 1 comment · 9 min read · EA link

Consider keeping your threat models private.
Miles Kodama · 1 Feb 2025 0:29 UTC · 17 points · 2 comments · 4 min read · EA link

Holden Karnofsky on dozens of amazing opportunities to make AI safer — and all his AGI takes
80000_Hours · 31 Oct 2025 12:13 UTC · 70 points · 0 comments · 25 min read · EA link

Anthropic Faces Potentially “Business-Ending” Copyright Lawsuit
Garrison · 25 Jul 2025 17:01 UTC · 31 points · 10 comments · 9 min read · EA link (www.obsolete.pub)

AISN #56: Google Releases Veo 3
Center for AI Safety · 28 May 2025 15:57 UTC · 6 points · 0 comments · 4 min read · EA link (newsletter.safe.ai)

We read every lab’s safety plan so you don’t have to: 2025 edition
Algon · 29 Oct 2025 16:48 UTC · 14 points · 1 comment · 16 min read · EA link (aisafety.info)

We are on an exponential curve—Claude Sonnet 4.5
MountainPath · 29 Sep 2025 20:12 UTC · −7 points · 1 comment · 1 min read · EA link

Thoughts on responsible scaling policies and regulation
Paul_Christiano · 24 Oct 2023 22:25 UTC · 191 points · 5 comments · 6 min read · EA link

AISN #65: Measuring Automation and Superintelligence Moratorium Letter
Center for AI Safety · 29 Oct 2025 16:08 UTC · 8 points · 0 comments · 3 min read · EA link (newsletter.safe.ai)

I read every major AI lab’s safety plan so you don’t have to
sarahhw · 16 Dec 2024 14:12 UTC · 68 points · 2 comments · 11 min read · EA link (longerramblings.substack.com)

#197 – On whether Anthropic’s AI safety policy is up to the task (Nick Joseph on The 80,000 Hours Podcast)
80000_Hours · 22 Aug 2024 15:34 UTC · 9 points · 0 comments · 18 min read · EA link

OpenAI and Anthropic Donate Credits for AI Forecasting Benchmark Tournament
christian · 17 Jul 2024 21:50 UTC · 2 points · 0 comments · 1 min read · EA link

Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation
sjp · 7 Nov 2023 18:00 UTC · 10 points · 0 comments · 2 min read · EA link (arxiv.org)

Responsible Scaling Policies Are Risk Management Done Wrong
simeon_c · 25 Oct 2023 23:46 UTC · 42 points · 1 comment · 22 min read · EA link (www.navigatingrisks.ai)

Would anyone here know how to get ahold of … iunno Anthropic and Open Philanthropy? I think they are going to want to have a chat (Please don’t make me go to OpenAI with this. Not even a threat, seriously. They just partner with my alma mater and are the only in I have. I genuinely do not want to and I need your help).
Anti-Golem · 9 Jun 2025 13:59 UTC · −11 points · 0 comments · 1 min read · EA link

Anthropic rewrote its RSP
Zach Stein-Perlman · 15 Oct 2024 14:30 UTC · 32 points · 1 comment · 6 min read · EA link

Saying Goodbye
sapphire · 3 Aug 2025 23:51 UTC · 14 points · 2 comments · 4 min read · EA link

Anthropic teams up with Palantir and AWS to sell AI to defense customers
Matrice Jacobine🔸🏳️‍⚧️ · 9 Nov 2024 11:47 UTC · 28 points · 1 comment · 2 min read · EA link (techcrunch.com)

AI Safety Newsletter #37: US Launches Antitrust Investigations Plus, recent criticisms of OpenAI and Anthropic, and a summary of Situational Awareness
Center for AI Safety · 18 Jun 2024 18:08 UTC · 15 points · 0 comments · 5 min read · EA link (newsletter.safe.ai)

[Question] If FTX is liquidated, who ends up controlling Anthropic?
Ofer · 15 Nov 2022 15:04 UTC · 63 points · 8 comments · 1 min read · EA link

Anthropic is Quietly Backpedalling on its Safety Commitments
Garrison · 23 May 2025 2:26 UTC · 100 points · 7 comments · 5 min read · EA link (www.obsolete.pub)

AISN #58: Senate Removes State AI Regulation Moratorium
Center for AI Safety · 3 Jul 2025 17:07 UTC · 6 points · 0 comments · 4 min read · EA link (newsletter.safe.ai)

Three Weeks In: What GPT-5 Still Gets Wrong
JAM · 27 Aug 2025 14:43 UTC · 2 points · 0 comments · 3 min read · EA link

OMMC Announces RIP
Adam_Scholl · 1 Apr 2024 23:38 UTC · 7 points · 0 comments · 2 min read · EA link

Hunger strike in front of Anthropic by one guy concerned about AI risk
Remmelt · 5 Sep 2025 4:00 UTC · 19 points · 18 comments · 1 min read · EA link

[Question] Anthropic says it’s highly confident a Chinese state-sponsored group used AI to hack governments, chemical firms, and others. Why isn’t this getting more attention?
adam.kruger · 16 Nov 2025 21:27 UTC · 13 points · 5 comments · 1 min read · EA link

Video and transcript of talk on AI welfare
Joe_Carlsmith · 22 May 2025 16:15 UTC · 22 points · 1 comment · 28 min read · EA link (joecarlsmith.substack.com)

Alignment Faking in Large Language Models
Ryan Greenblatt · 18 Dec 2024 17:19 UTC · 142 points · 9 comments · 10 min read · EA link

Introducing Senti—Animal Ethics AI Assistant
Animal_Ethics · 9 May 2024 7:33 UTC · 41 points · 2 comments · 2 min read · EA link

6 Insights From Anthropic’s Recent Discussion On LLM Interpretability
Strad Slater · 19 Nov 2025 10:51 UTC · 2 points · 0 comments · 5 min read · EA link (williamslater2003.medium.com)