Expert trap (Part 2 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge

Pawel Sysiak · 9 Jun 2023 22:53 UTC
3 points
0 comments · 7 min read · EA link

Announcement: You can now listen to the “AI Safety Fundamentals” courses

peterhartree · 9 Jun 2023 16:32 UTC
101 points
8 comments · 1 min read · EA link

効果的利他主義とは何か [What is effective altruism?]

EA Japan · 9 Jun 2023 14:59 UTC
4 points
0 comments · 1 min read · EA link

Strawmen, steelmen, and mithrilmen: getting the principle of charity right

MichaelPlant · 9 Jun 2023 13:02 UTC
89 points
11 comments · 2 min read · EA link

[Question] How does AI progress affect other EA cause areas?

Luis Mota Freitas · 9 Jun 2023 12:43 UTC
95 points
13 comments · 1 min read · EA link

Have your say on the Australian Government’s AI Policy [Brisbane]

Michael Noetel 🔸 · 9 Jun 2023 0:15 UTC
6 points
0 comments · 1 min read · EA link

[Applications Open] CEA’s University Group Accelerator Program (UGAP)

jessica_mccurdy · 8 Jun 2023 23:28 UTC
31 points
0 comments · 1 min read · EA link

A survey of concrete risks derived from Artificial Intelligence

Guillem Bas · 8 Jun 2023 22:09 UTC
36 points
2 comments · 6 min read · EA link
(riesgoscatastroficosglobales.com)

Wild Animal Welfare Scenarios for AI Doom

utilistrutil · 8 Jun 2023 19:41 UTC
52 points
2 comments · 3 min read · EA link

How to DIY a Funder’s Circle

Kirsten · 8 Jun 2023 19:23 UTC
29 points
0 comments · 2 min read · EA link
(ealifestyles.substack.com)

X-risk Agnosticism

Richard Y Chappell🔸 · 8 Jun 2023 15:02 UTC
34 points
1 comment · 5 min read · EA link
(rychappell.substack.com)

[Question] What are your biggest challenges with online fundraising?

Paweł Biegun · 8 Jun 2023 13:55 UTC
2 points
0 comments · 1 min read · EA link

Profit for Good - an FAQ Responsive to EA Feedback

Brad West🔸 · 8 Jun 2023 13:52 UTC
17 points
0 comments · 1 min read · EA link

UK government to host first global summit on AI Safety

DavidNash · 8 Jun 2023 13:24 UTC
78 points
1 comment · 5 min read · EA link
(www.gov.uk)

if you’re reading this it’s too late (a new theory on what is causing the Great Stagnation)

rogersbacon1 · 8 Jun 2023 11:54 UTC
−2 points
0 comments · 13 min read · EA link
(www.secretorum.life)

Notes on how I want to handle criticism

Lizka · 8 Jun 2023 11:47 UTC
63 points
3 comments · 4 min read · EA link

An Exercise to Build Intuitions on AGI Risk

Lauro Langosco · 8 Jun 2023 11:20 UTC
4 points
0 comments · 8 min read · EA link
(www.alignmentforum.org)

Beware popular discussions of AI “sentience”

David Mathers🔸 · 8 Jun 2023 8:57 UTC
42 points
6 comments · 9 min read · EA link

EA Strategy Fortnight (June 12-24)

Ben_West🔸 · 7 Jun 2023 23:07 UTC
141 points
26 comments · 3 min read · EA link

Reflective Equilibria and the Hunt for a Formalized Pragmatism

BenjaminCaulfield · 7 Jun 2023 22:55 UTC
4 points
1 comment · 12 min read · EA link

GiveDirectly Unveils Challenges in Cash Transfer Program, Pledges Solutions to Support Impoverished Communities in the Democratic Republic of Congo: My Two Cents

Vee · 7 Jun 2023 22:22 UTC
2 points
9 comments · 4 min read · EA link

The current alignment plan, and how we might improve it | EAG Bay Area 23

Buck · 7 Jun 2023 21:03 UTC
66 points
0 comments · 33 min read · EA link

Seeking important GH or IDEV working papers to evaluate

ryancbriggs · 7 Jun 2023 19:29 UTC
36 points
4 comments · 3 min read · EA link

Could AI accelerate economic growth?

Tom_Davidson · 7 Jun 2023 19:07 UTC
28 points
0 comments · 6 min read · EA link

Understanding how hard alignment is may be the most important research direction right now

Aron · 7 Jun 2023 19:05 UTC
26 points
3 comments · 6 min read · EA link
(coordinationishard.substack.com)

A note of caution about recent AI risk coverage

Sean_o_h · 7 Jun 2023 17:05 UTC
283 points
29 comments · 3 min read · EA link

The Three M’s: Measurement, Multiplication, Maximization

quinn · 7 Jun 2023 15:50 UTC
23 points
3 comments · 1 min read · EA link

Successif: helping mid-career and senior professionals have impactful careers

ClaireB · 7 Jun 2023 15:16 UTC
126 points
17 comments · 10 min read · EA link

Large epistemological concerns I should maybe have about EA a priori

Luise · 7 Jun 2023 14:11 UTC
115 points
16 comments · 8 min read · EA link

[Question] Would it make sense for EA funding to be not so much focused on top talent?

Franziska Fischer · 7 Jun 2023 13:56 UTC
37 points
7 comments · 1 min read · EA link

Article Summary: Current and Near-Term AI as a Potential Existential Risk Factor

AndreFerretti · 7 Jun 2023 13:53 UTC
12 points
1 comment · 1 min read · EA link
(dl.acm.org)

Rethink Priorities is hiring a Compute Governance Researcher or Research Assistant

MichaelA🔸 · 7 Jun 2023 13:22 UTC
36 points
2 comments · 8 min read · EA link
(careers.rethinkpriorities.org)

Mapping out collapse research

FJehn · 7 Jun 2023 12:10 UTC
18 points
2 comments · 11 min read · EA link
(existentialcrunch.substack.com)

Unveiling the Challenges and Potential of Research in Nigeria: Nurturing Talent in Resource-Limited Settings

emmannaemeka · 7 Jun 2023 11:05 UTC
51 points
3 comments · 4 min read · EA link

Free One-to-One Coaching for Procrastination (~60 Spaces Available)

John Salter · 7 Jun 2023 10:57 UTC
31 points
7 comments · 1 min read · EA link

[job ad] AAC is looking for a Learning and Digital Manager

SofiaBalderson · 7 Jun 2023 8:31 UTC
10 points
1 comment · 1 min read · EA link

Functions of a community standard in the 2%/8% fuzzies/utilons debate

DirectedEvolution · 7 Jun 2023 5:23 UTC
2 points
8 comments · 9 min read · EA link

Launching Lightspeed Grants (Apply by July 6th)

Habryka · 7 Jun 2023 2:53 UTC
117 points
6 comments · 1 min read · EA link

Cultivate an obsession with the object level

richard_ngo · 7 Jun 2023 1:39 UTC
24 points
0 comments · 1 min read · EA link

Is there a case for Pronatalism as an effective cause area?

ben.smith · 7 Jun 2023 0:42 UTC
18 points
1 comment · 4 min read · EA link

Tim Cook was asked about extinction risks from AI

Saul Munn · 6 Jun 2023 18:46 UTC
8 points
1 comment · 1 min read · EA link

A Playbook for AI Risk Reduction (focused on misaligned AI)

Holden Karnofsky · 6 Jun 2023 18:05 UTC
81 points
17 comments · 1 min read · EA link

Malaria question from an aspiring novelist

Andylute · 6 Jun 2023 16:11 UTC
4 points
1 comment · 1 min read · EA link

AISN #9: Statement on Extinction Risks, Competitive Pressures, and When Will AI Reach Human-Level?

Center for AI Safety · 6 Jun 2023 15:56 UTC
12 points
2 comments · 7 min read · EA link
(newsletter.safe.ai)

Transformative AGI by 2043 is <1% likely

Ted Sanders · 6 Jun 2023 15:51 UTC
92 points
92 comments · 5 min read · EA link
(arxiv.org)

Expert trap: What is it? (Part 1 of 3) – how hindsight, hierarchy, and confirmation biases break conductivity and accuracy of knowledge

Pawel Sysiak · 6 Jun 2023 15:05 UTC
3 points
0 comments · 8 min read · EA link

Stampy’s AI Safety Info – New Distillations #3 [May 2023]

markov · 6 Jun 2023 14:27 UTC
10 points
2 comments · 1 min read · EA link
(aisafety.info)

[Question] Debates at EAGxNYC

Kaleem · 6 Jun 2023 14:17 UTC
35 points
14 comments · 1 min read · EA link

Agentic Mess (A Failure Story)

Karl von Wendt · 6 Jun 2023 13:16 UTC
30 points
3 comments · 1 min read · EA link

EA Architect: Dissertation on Improving the Social Dynamics of Confined Spaces & Shelters Precedents Report

t46 · 6 Jun 2023 11:58 UTC
42 points
5 comments · 8 min read · EA link