
Open Philanthropy AI Worldviews Contest

Last edit: 15 Mar 2023 18:53 UTC by Lorenzo Buonanno

The goal of the contest is to surface novel considerations that could influence Open Philanthropy’s views on AI timelines and AI risk. Open Philanthropy plans to distribute $225,000 in prize money across six winning entries. This is the same contest that was preannounced in late 2022, and is itself the spiritual successor to the now-defunct Future Fund Worldview Prize.

The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry.

Related entries

Future Fund Worldview Prize | Criticism and Red Teaming Contest | AI risk | AI forecasting | prize

Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest

Jason Schukraft · 30 Sep 2023 3:51 UTC
74 points
30 comments · 2 min read · EA link

Transformative AGI by 2043 is <1% likely

Ted Sanders · 6 Jun 2023 15:51 UTC
92 points
92 comments · 5 min read · EA link
(arxiv.org)

The Control Problem: Unsolved or Unsolvable?

Remmelt · 2 Jun 2023 15:42 UTC
4 points
9 comments · 14 min read · EA link

A compute-based framework for thinking about the future of AI

Matthew_Barnett · 31 May 2023 22:00 UTC
96 points
36 comments · 19 min read · EA link

Primitive Global Discourse Framework, Constitutional AI using legal frameworks, and Monoculture—A loss of control over the role of AGI in society

broptross · 1 Jun 2023 5:12 UTC
2 points
0 comments · 12 min read · EA link

“The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models

Froolow · 19 May 2023 12:03 UTC
48 points
4 comments · 21 min read · EA link

Deceptive Alignment is <1% Likely by Default

DavidW · 21 Feb 2023 15:07 UTC
54 points
25 comments · 14 min read · EA link

Announcing the Open Philanthropy AI Worldviews Contest

Jason Schukraft · 10 Mar 2023 2:33 UTC
137 points
33 comments · 3 min read · EA link
(www.openphilanthropy.org)

Without a trajectory change, the development of AGI is likely to go badly

Max H · 30 May 2023 0:21 UTC
1 point
0 comments · 13 min read · EA link

Pessimism about AI Safety

Max_He-Ho · 2 Apr 2023 7:57 UTC
5 points
0 comments · 25 min read · EA link
(www.lesswrong.com)

Status Quo Engines—AI essay

Ilana_Goldowitz_Jimenez · 28 May 2023 14:33 UTC
1 point
0 comments · 15 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg_Colbourn · 2 May 2023 10:17 UTC
68 points
35 comments · 13 min read · EA link

Implications of AGI on Subjective Human Experience

Erica S. · 30 May 2023 18:47 UTC
2 points
0 comments · 19 min read · EA link
(docs.google.com)

Reminder: AI Worldviews Contest Closes May 31

Jason Schukraft · 8 May 2023 17:40 UTC
20 points
0 comments · 1 min read · EA link

Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest

Jason Schukraft · 21 Nov 2022 21:45 UTC
291 points
26 comments · 1 min read · EA link

P(doom|AGI) is high: why the default outcome of AGI is doom

Greg_Colbourn · 2 May 2023 10:40 UTC
13 points
28 comments · 3 min read · EA link

Language Agents Reduce the Risk of Existential Catastrophe

cdkg · 29 May 2023 9:59 UTC
29 points
6 comments · 26 min read · EA link

Cooperation, Avoidance, and Indifference: Alternate Futures for Misaligned AGI

Kiel Brennan-Marquez · 10 Dec 2022 20:32 UTC
4 points
1 comment · 18 min read · EA link

2023 Open Philanthropy AI Worldviews Contest: Odds of Artificial General Intelligence by 2043

srhoades10 · 14 Mar 2023 20:32 UTC
19 points
0 comments · 46 min read · EA link

AI Doom and David Hume: A Defence of Empiricism in AI Safety

Matt Beard · 30 May 2023 20:45 UTC
33 points
6 comments · 12 min read · EA link

Considerations on transformative AI and explosive growth from a semiconductor-industry perspective

Muireall · 31 May 2023 1:11 UTC
23 points
1 comment · 2 min read · EA link
(muireall.space)

Abstraction is Bigger than Natural Abstraction

NicholasKross · 31 May 2023 0:00 UTC
2 points
0 comments · 1 min read · EA link

Beyond Humans: Why All Sentient Beings Matter in Existential Risk

Teun_Van_Der_Weij · 31 May 2023 21:21 UTC
10 points
0 comments · 13 min read · EA link

A moral backlash against AI will probably slow down AGI development

Geoffrey Miller · 31 May 2023 21:31 UTC
141 points
22 comments · 14 min read · EA link