Thoughts on the AI Safety Summit company policy requests and responses

So8res · 31 Oct 2023 23:54 UTC
42 points
3 comments · 1 min read · EA link

Metaculus Presents: Transformative Science at Startup Speed

christian · 31 Oct 2023 21:12 UTC
5 points
0 comments · 1 min read · EA link

AISN #25: White House Executive Order on AI, UK AI Safety Summit, and Progress on Voluntary Evaluations of AI Risks

Center for AI Safety · 31 Oct 2023 19:24 UTC
21 points
0 comments · 6 min read · EA link
(newsletter.safe.ai)

The UK AI Safety Summit tomorrow

SebastianSchmidt · 31 Oct 2023 19:09 UTC
17 points
2 comments · 2 min read · EA link

Global Catastrophic Biological Risks: A Guide for Philanthropists [Founders Pledge]

christian.r · 31 Oct 2023 15:42 UTC
31 points
0 comments · 6 min read · EA link
(www.founderspledge.com)

[Question] Let’s celebrate some wins

Nathan Young · 31 Oct 2023 10:43 UTC
37 points
12 comments · 1 min read · EA link

[Closed] Agent Foundations track in MATS

Vanessa · 31 Oct 2023 8:14 UTC
19 points
0 comments · 1 min read · EA link
(www.matsprogram.org)

Updates from Campaign for AI Safety

Jolyn Khoo · 31 Oct 2023 5:46 UTC
14 points
1 comment · 2 min read · EA link
(www.campaignforaisafety.org)

[Question] Is there any work on cause prioritization that takes into account timelines being worldview-dependent?

Chris Leong · 31 Oct 2023 2:25 UTC
13 points
2 comments · 1 min read · EA link

Shrimp paste might cause more animal deaths than any other food product. Who’s working on this?

Angelina Li · 30 Oct 2023 21:53 UTC
119 points
22 comments · 4 min read · EA link

M&A in AI

Hauke Hillebrandt · 30 Oct 2023 17:43 UTC
9 points
1 comment · 6 min read · EA link

Will releasing the weights of large language models grant widespread access to pandemic agents?

Jeff Kaufman · 30 Oct 2023 17:42 UTC
56 points
18 comments · 1 min read · EA link
(arxiv.org)

doebem: Charity Evaluation and Effective Giving in Brazil

Bruno Sterenberg · 30 Oct 2023 16:14 UTC
61 points
6 comments · 4 min read · EA link

Improving the Welfare of AIs: A Nearcasted Proposal

Ryan Greenblatt · 30 Oct 2023 14:57 UTC
43 points
0 comments · 20 min read · EA link
(www.lesswrong.com)

Response to “Coordinated pausing: An evaluation-based coordination scheme for frontier AI developers”

Matthew Wearden · 30 Oct 2023 12:49 UTC
7 points
1 comment · 6 min read · EA link
(matthewwearden.co.uk)

Is x-risk the most cost-effective if we count only the next few generations?

Laura Duffy · 30 Oct 2023 12:43 UTC
120 points
7 comments · 20 min read · EA link
(docs.google.com)

President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

Tristan Williams · 30 Oct 2023 11:15 UTC
143 points
8 comments · 3 min read · EA link
(www.whitehouse.gov)

Epistemic and Statistical Uncertainty in CEAs—Draft

EdoArad · 30 Oct 2023 10:26 UTC
27 points
2 comments · 3 min read · EA link

Foundation that support overseas volunteer financially

Martingale · 30 Oct 2023 0:00 UTC
−11 points
1 comment · 1 min read · EA link

One-time action with long-term consequences. California citizen-led ballot initiative to fund research of psychedelic-assisted therapy

Ivan Madan · 29 Oct 2023 9:16 UTC
12 points
0 comments · 1 min read · EA link

The AI Boom Mainly Benefits Big Firms, but long-term, markets will concentrate

Hauke Hillebrandt · 29 Oct 2023 8:38 UTC
12 points
0 comments · 1 min read · EA link

Book Review: Orality and Literacy: The Technologizing of the Word

Fergus Fettes · 28 Oct 2023 20:17 UTC
7 points
0 comments · 16 min read · EA link

Regrant up to $600,000 to AI safety projects with GiveWiki

Dawn Drescher · 28 Oct 2023 19:56 UTC
22 points
0 comments · 3 min read · EA link

AI Safety Hub Serbia Official Opening

Dušan D. Nešić (Dushan) · 28 Oct 2023 17:10 UTC
26 points
3 comments · 1 min read · EA link
(forum.effectivealtruism.org)

AI Safety Hub Serbia Official Opening

Dušan D. Nešić (Dushan) · 28 Oct 2023 17:03 UTC
20 points
1 comment · 1 min read · EA link
(forum.effectivealtruism.org)

The Effective Altruist Case for Using Genetic Enhancement to End Poverty

Ives Parr · 28 Oct 2023 16:36 UTC
−3 points
21 comments · 35 min read · EA link

Summary: Existential risk from power-seeking AI by Joseph Carlsmith

rileyharris · 28 Oct 2023 15:05 UTC
11 points
0 comments · 6 min read · EA link
(www.millionyearview.com)

Announcing the 2023 autumn cohort of Hi-Med’s Career Fellowship

High Impact Medicine · 28 Oct 2023 13:33 UTC
10 points
0 comments · 1 min read · EA link

Why I don’t prioritize consciousness research

Vasco Grilo · 28 Oct 2023 8:07 UTC
45 points
5 comments · 6 min read · EA link
(magnusvinding.com)

How we work, #1: Cost-effectiveness is generally the most important factor in our recommendations

GiveWell · 27 Oct 2023 21:40 UTC
36 points
8 comments · 6 min read · EA link
(blog.givewell.org)

AI safety field-building survey: Talent needs, infrastructure needs, and relationship to EA

michel · 27 Oct 2023 21:08 UTC
67 points
3 comments · 9 min read · EA link

New report on the state of AI safety in China

Geoffrey Miller · 27 Oct 2023 20:20 UTC
22 points
0 comments · 3 min read · EA link
(concordia-consulting.com)

[Question] Things I can do that are somewhat useful.

Jack Sands · 27 Oct 2023 17:53 UTC
1 point
5 comments · 1 min read · EA link

[Question] Examples of Corrupt Charities

Jack Sands · 27 Oct 2023 17:53 UTC
1 point
2 comments · 1 min read · EA link

Efficacy of AI Activism: Have We Ever Said No?

charlieh943 · 27 Oct 2023 16:52 UTC
78 points
25 comments · 20 min read · EA link

We’re Not Ready: thoughts on “pausing” and responsible scaling policies

Holden Karnofsky · 27 Oct 2023 15:19 UTC
150 points
23 comments · 1 min read · EA link

AI Existential Safety Fellowships

mmfli · 27 Oct 2023 12:14 UTC
15 points
1 comment · 1 min read · EA link

My Left Kidney

MathiasKB · 27 Oct 2023 11:31 UTC
106 points
9 comments · 1 min read · EA link
(www.astralcodexten.com)

Existential Hope and Existential Risk: Exploring the value of optimistic approaches to shaping the long-term future

Vilhelm Skoglund · 27 Oct 2023 9:07 UTC
36 points
3 comments · 24 min read · EA link

Giving Game: Giving What We Can Melbourne

GraceAdams · 27 Oct 2023 5:45 UTC
6 points
0 comments · 1 min read · EA link

Impact Evaluation in EA

callum · 26 Oct 2023 23:18 UTC
137 points
12 comments · 5 min read · EA link

Traps that doom your new org

John Salter · 26 Oct 2023 20:44 UTC
35 points
3 comments · 6 min read · EA link

OpenAI’s new Preparedness team is hiring

leopold · 26 Oct 2023 20:41 UTC
85 points
13 comments · 1 min read · EA link

EA Global: London 2024

Eli_Nathan · 26 Oct 2023 20:38 UTC
5 points
4 comments · 1 min read · EA link

EA Global: Bay Area 2024 (Global Catastrophic Risks)

Eli_Nathan · 26 Oct 2023 20:18 UTC
5 points
2 comments · 1 min read · EA link

Announcing EA Global plans for 2024

Eli_Nathan · 26 Oct 2023 17:35 UTC
118 points
13 comments · 4 min read · EA link

RA Bounty: Looking for feedback on screenplay about AI Risk

Writer · 26 Oct 2023 14:27 UTC
8 points
0 comments · 1 min read · EA link

UK Prime Minister Rishi Sunak’s Speech on AI

Tobias Häberli · 26 Oct 2023 10:34 UTC
112 points
6 comments · 8 min read · EA link
(www.gov.uk)

Schlep Blindness in EA

John Salter · 26 Oct 2023 8:49 UTC
75 points
22 comments · 1 min read · EA link

Apply to the Constellation Visiting Researcher Program and Astra Fellowship, in Berkeley this Winter

Anjay F · 26 Oct 2023 3:14 UTC
61 points
4 comments · 1 min read · EA link