
Anthropic

Last edit: 22 Jul 2022 19:13 UTC by Leo

Anthropic is an AI safety and research company. It was founded in May 2021 by siblings Dario Amodei and Daniela Amodei, who serve as CEO and President, respectively.[1][2]

Anthropic raised $124 million in a Series A funding round. The round was led by Jaan Tallinn and included participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research (now Polaris Ventures), Eric Schmidt, and others.[3][4][5]

Anthropic raised a further $580 million in a Series B funding round. The round was led by Sam Bankman-Fried and included participation from Caroline Ellison, James McClave, and Nishad Singh, as well as Tallinn and the Center for Emerging Risk Research.[6]

Further reading

Perry, Lucas (2022) Daniela and Dario Amodei on Anthropic, Future of Life Institute, March 4.

External links

Anthropic. Official website.

Apply for a job.

Related entries

AI safety | OpenAI

  1. ^
  2. ^ Waters, Richard & Miles Kruppa (2021) Rebel AI group raises record cash after machine learning schism, Financial Times, May 28.
  3. ^
  4. ^ Piper, Kelsey (2021) Future Perfect Newsletter, Vox, May 28.
  5. ^
  6. ^ Coldewey, Devin (2022) Anthropic’s quest for better, more explainable AI attracts $580M, TechCrunch, April 29.

Chris Olah on what the hell is going on inside neural networks

80000_Hours · 4 Aug 2021 15:13 UTC
5 points
0 comments · 133 min read · EA link

[Question] Would an Anthropic/OpenAI merger be good for AI safety?

M · 22 Nov 2023 20:21 UTC
6 points
1 comment · 1 min read · EA link

Claude 3.5 Sonnet

Zach Stein-Perlman · 20 Jun 2024 18:00 UTC
31 points
0 comments · 1 min read · EA link
(www.anthropic.com)

Dario Amodei — Machines of Loving Grace

Matrice Jacobine · 11 Oct 2024 21:39 UTC
66 points
0 comments · 1 min read · EA link
(darioamodei.com)

Dear Anthropic people, please don’t release Claude

Joseph Miller · 8 Feb 2023 2:44 UTC
27 points
5 comments · 1 min read · EA link

Spicy takes about AI policy (Clark, 2022)

Will Aldred · 9 Aug 2022 13:49 UTC
44 points
0 comments · 3 min read · EA link
(twitter.com)

Introducing Alignment Stress-Testing at Anthropic

evhub · 12 Jan 2024 23:51 UTC
80 points
0 comments · 1 min read · EA link

We are fighting a shared battle (a call for a different approach to AI Strategy)

Gideon Futerman · 16 Mar 2023 14:37 UTC
59 points
11 comments · 15 min read · EA link

Frontier Model Forum

Zach Stein-Perlman · 26 Jul 2023 14:30 UTC
40 points
7 comments · 1 min read · EA link
(blog.google)

[Question] Is it possible that SBF-linked funds haven’t yet been transferred to Anthropic or that Anthropic would have to return these funds?

donegal · 16 Nov 2022 9:33 UTC
5 points
0 comments · 1 min read · EA link

[Question] I’m interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

Robert_Wiblin · 25 Mar 2022 15:38 UTC
17 points
13 comments · 1 min read · EA link

Google invests $300mn in artificial intelligence start-up Anthropic | FT

𝕮𝖎𝖓𝖊𝖗𝖆 · 3 Feb 2023 19:43 UTC
155 points
5 comments · 1 min read · EA link
(www.ft.com)

2021 AI Alignment Literature Review and Charity Comparison

Larks · 23 Dec 2021 14:06 UTC
176 points
18 comments · 73 min read · EA link

Anthropic: Core Views on AI Safety: When, Why, What, and How

jonmenaster · 9 Mar 2023 17:30 UTC
107 points
6 comments · 22 min read · EA link
(www.anthropic.com)

Jan Leike: “I’m excited to join @AnthropicAI to continue the superalignment mission!”

defun 🔸 · 28 May 2024 18:08 UTC
35 points
11 comments · 1 min read · EA link
(x.com)

Anthropic teams up with Palantir and AWS to sell AI to defense customers

Matrice Jacobine · 9 Nov 2024 11:47 UTC
26 points
1 comment · 2 min read · EA link
(techcrunch.com)

[Question] If FTX is liquidated, who ends up controlling Anthropic?

Ofer · 15 Nov 2022 15:04 UTC
63 points
8 comments · 1 min read · EA link

Podcast: Tamera Lanham on AI risk, threat models, alignment proposals, externalized reasoning oversight, and working at Anthropic

Akash · 20 Dec 2022 21:39 UTC
14 points
1 comment · 1 min read · EA link

Thoughts on responsible scaling policies and regulation

Paul_Christiano · 24 Oct 2023 22:25 UTC
179 points
5 comments · 6 min read · EA link

Responsible Scaling Policies Are Risk Management Done Wrong

simeon_c · 25 Oct 2023 23:46 UTC
42 points
1 comment · 1 min read · EA link
(www.navigatingrisks.ai)

Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation

soroushjp · 7 Nov 2023 18:00 UTC
10 points
0 comments · 2 min read · EA link
(arxiv.org)

AI Safety Newsletter #37: US Launches Antitrust Investigations Plus, recent criticisms of OpenAI and Anthropic, and a summary of Situational Awareness

Center for AI Safety · 18 Jun 2024 18:08 UTC
15 points
0 comments · 5 min read · EA link
(newsletter.safe.ai)

OMMC Announces RIP

Adam_Scholl · 1 Apr 2024 23:38 UTC
7 points
0 comments · 2 min read · EA link

Anthropic rewrote its RSP

Zach Stein-Perlman · 15 Oct 2024 14:30 UTC
29 points
1 comment · 1 min read · EA link

#197 – On whether Anthropic’s AI safety policy is up to the task (Nick Joseph on The 80,000 Hours Podcast)

80000_Hours · 22 Aug 2024 15:34 UTC
9 points
0 comments · 18 min read · EA link

Introducing Senti—Animal Ethics AI Assistant

Animal_Ethics · 9 May 2024 7:33 UTC
40 points
2 comments · 2 min read · EA link

OpenAI and Anthropic Donate Credits for AI Forecasting Benchmark Tournament

christian · 17 Jul 2024 21:50 UTC
2 points
0 comments · 1 min read · EA link

The current state of RSPs

Zach Stein-Perlman · 4 Nov 2024 16:00 UTC
19 points
1 comment · 1 min read · EA link