Anthropic

Last edit: 22 Jul 2022 19:13 UTC by Leo

Anthropic is an AI safety and research company. It was founded in May 2021 by siblings Dario Amodei and Daniela Amodei, who serve as CEO and President, respectively.[1][2]

Anthropic raised $124 million in a Series A funding round. The round was led by Jaan Tallinn and included participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research (now Polaris Ventures), Eric Schmidt, and others.[3][4][5]

Anthropic later raised a further $580 million in a Series B funding round. The round was led by Sam Bankman-Fried and included participation from Caroline Ellison, James McClave, and Nishad Singh, as well as Tallinn and the Center for Emerging Risk Research.[6]

Further reading

Perry, Lucas (2022) Daniela and Dario Amodei on Anthropic, Future of Life Institute, March 4.

External links

Anthropic. Official website.

Apply for a job.

Related entries

AI safety | OpenAI

  1. ^
  2. ^ Waters, Richard & Miles Kruppa (2021) Rebel AI group raises record cash after machine learning schism, Financial Times, May 28.
  3. ^
  4. ^ Piper, Kelsey (2021) Future Perfect Newsletter, Vox, May 28.
  5. ^
  6. ^ Coldewey, Devin (2022) Anthropic’s quest for better, more explainable AI attracts $580M, TechCrunch, April 29.

Chris Olah on what the hell is going on inside neural networks

80000_Hours · 4 Aug 2021 15:13 UTC · 5 points · 0 comments · 135 min read · EA link

[Question] Would an Anthropic/OpenAI merger be good for AI safety?

M · 22 Nov 2023 20:21 UTC · 5 points · 0 comments · 1 min read · EA link

Spicy takes about AI policy (Clark, 2022)

Will Aldred · 9 Aug 2022 13:49 UTC · 44 points · 0 comments · 3 min read · EA link (twitter.com)

Dear Anthropic people, please don’t release Claude

Joseph Miller · 8 Feb 2023 2:44 UTC · 26 points · 5 comments · 1 min read · EA link

Introducing Alignment Stress-Testing at Anthropic

evhub · 12 Jan 2024 23:51 UTC · 80 points · 0 comments · 1 min read · EA link

We are fighting a shared battle (a call for a different approach to AI Strategy)

GideonF · 16 Mar 2023 14:37 UTC · 59 points · 11 comments · 15 min read · EA link

Frontier Model Forum

Zach Stein-Perlman · 26 Jul 2023 14:30 UTC · 40 points · 7 comments · 1 min read · EA link (blog.google)

[Question] I’m interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

Robert_Wiblin · 25 Mar 2022 15:38 UTC · 17 points · 14 comments · 1 min read · EA link

Google invests $300mn in artificial intelligence start-up Anthropic | FT

𝕮𝖎𝖓𝖊𝖗𝖆 · 3 Feb 2023 19:43 UTC · 155 points · 6 comments · 1 min read · EA link (www.ft.com)

2021 AI Alignment Literature Review and Charity Comparison

Larks · 23 Dec 2021 14:06 UTC · 176 points · 18 comments · 75 min read · EA link

[Question] Is it possible that SBF-linked funds haven’t yet been transferred to Anthropic or that Anthropic would have to return these funds?

donegal · 16 Nov 2022 9:33 UTC · 5 points · 0 comments · 1 min read · EA link

Anthropic: Core Views on AI Safety: When, Why, What, and How

jonmenaster · 9 Mar 2023 17:30 UTC · 107 points · 6 comments · 22 min read · EA link (www.anthropic.com)

Podcast: Tamera Lanham on AI risk, threat models, alignment proposals, externalized reasoning oversight, and working at Anthropic

Akash · 20 Dec 2022 21:39 UTC · 14 points · 1 comment · 1 min read · EA link

Scalable And Transferable Black-Box Jailbreaks For Language Models Via Persona Modulation

soroushjp · 7 Nov 2023 18:00 UTC · 5 points · 0 comments · 2 min read · EA link (arxiv.org)

Call to demand answers from Anthropic about joining the AI race

sergia · 2 Mar 2023 17:26 UTC · 14 points · 71 comments · 1 min read · EA link (forum.effectivealtruism.org)

Thoughts on responsible scaling policies and regulation

Paul_Christiano · 24 Oct 2023 22:25 UTC · 177 points · 5 comments · 6 min read · EA link

Responsible Scaling Policies Are Risk Management Done Wrong

simeon_c · 25 Oct 2023 23:46 UTC · 42 points · 1 comment · 1 min read · EA link (www.navigatingrisks.ai)

[Question] If FTX is liquidated, who ends up controlling Anthropic?

Ofer · 15 Nov 2022 15:04 UTC · 63 points · 8 comments · 1 min read · EA link