
Open Philanthropy AI Worldviews Contest

Last edit: 6 Jun 2025 20:37 UTC by Dane Valerie

The Open Philanthropy AI Worldviews Contest was a 2023 essay competition run by Open Philanthropy to surface novel considerations that could influence its views on AI timelines and AI risk. A total of $225,000 in prize money was awarded across six winning entries. The contest was pre-announced in November 2022 and served as a spiritual successor to the Worldview Prize run by the now-defunct Future Fund.

Contest details

Submissions were accepted from September 23, 2022 through May 31, 2023. Eligible entries had to be original, written in English, and first published within that window. There was no official word limit, though entrants were cautioned that essays over 5,000 words would be harder for the judges to engage with. Co-authored submissions were allowed, and participants could submit multiple entries but could win at most one prize.

Entries had to address one of the following questions:

  1. What is the probability that AGI will be developed by January 1, 2043?

  2. Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?

Each essay had to focus on a single question. Judging emphasized how well a submission surfaced or clarified considerations that shifted the judges' beliefs about that question.

Winners

First Prizes ($50k each)

Second Prizes ($37.5k each)

Third Prizes ($25k each)

Caveats and comments

The judges did not endorse every conclusion in the winning entries. Many essays advanced multiple claims, some of which the judges found more persuasive than others. In some cases, entries were valued for clearly articulating viewpoints that the judges did not personally hold.

Given the diversity of submissions, a different judging panel might well have selected different winners. Open Philanthropy explicitly discouraged readers from treating the prizewinners' topics as signals of its institutional priorities or grantmaking direction.

Related entries

Future Fund Worldview Prize | Criticism and Red Teaming Contest | AI risk | AI forecasting | prize | existential risk | longtermism

Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest
Jason Schukraft · 30 Sep 2023 3:51 UTC · 74 points · 30 comments · 2 min read · EA link

Transformative AGI by 2043 is <1% likely
Ted Sanders · 6 Jun 2023 15:51 UTC · 98 points · 92 comments · 5 min read · EA link (arxiv.org)

The Control Problem: Unsolved or Unsolvable?
Remmelt · 2 Jun 2023 15:42 UTC · 4 points · 9 comments · 13 min read · EA link

A compute-based framework for thinking about the future of AI
Matthew_Barnett · 31 May 2023 22:00 UTC · 96 points · 36 comments · 19 min read · EA link

Announcing the Open Philanthropy AI Worldviews Contest
Jason Schukraft · 10 Mar 2023 2:33 UTC · 137 points · 33 comments · 3 min read · EA link (www.openphilanthropy.org)

Deceptive Alignment is <1% Likely by Default
DavidW · 21 Feb 2023 15:07 UTC · 54 points · 26 comments · 14 min read · EA link

Pessimism about AI Safety
Max_He-Ho · 2 Apr 2023 7:57 UTC · 5 points · 0 comments · 25 min read · EA link (www.lesswrong.com)

“The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models
Froolow · 19 May 2023 12:03 UTC · 48 points · 4 comments · 21 min read · EA link

Primitive Global Discourse Framework, Constitutional AI using legal frameworks, and Monoculture—A loss of control over the role of AGI in society
broptross · 1 Jun 2023 5:12 UTC · 2 points · 0 comments · 12 min read · EA link

Without a trajectory change, the development of AGI is likely to go badly
Max H · 30 May 2023 0:21 UTC · 1 point · 0 comments · 13 min read · EA link

Reminder: AI Worldviews Contest Closes May 31
Jason Schukraft · 8 May 2023 17:40 UTC · 20 points · 0 comments · 1 min read · EA link

Status Quo Engines—AI essay
Ilana_Goldowitz_Jimenez · 28 May 2023 14:33 UTC · 1 point · 1 comment · 15 min read · EA link

Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest
Jason Schukraft · 21 Nov 2022 21:45 UTC · 291 points · 26 comments · 1 min read · EA link

P(doom|AGI) is high: why the default outcome of AGI is doom
Greg_Colbourn ⏸️ · 2 May 2023 10:40 UTC · 15 points · 28 comments · 3 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now
Greg_Colbourn ⏸️ · 2 May 2023 10:17 UTC · 69 points · 35 comments · 13 min read · EA link

Implications of AGI on Subjective Human Experience
Erica S. · 30 May 2023 18:47 UTC · 2 points · 0 comments · 19 min read · EA link (docs.google.com)

2023 Open Philanthropy AI Worldviews Contest: Odds of Artificial General Intelligence by 2043
srhoades10 · 14 Mar 2023 20:32 UTC · 19 points · 0 comments · 46 min read · EA link

A moral backlash against AI will probably slow down AGI development
Geoffrey Miller · 31 May 2023 21:31 UTC · 147 points · 22 comments · 14 min read · EA link

Cooperation, Avoidance, and Indifference: Alternate Futures for Misaligned AGI
Kiel Brennan-Marquez · 10 Dec 2022 20:32 UTC · 4 points · 1 comment · 18 min read · EA link

Considerations on transformative AI and explosive growth from a semiconductor-industry perspective
Muireall · 31 May 2023 1:11 UTC · 23 points · 1 comment · 2 min read · EA link (muireall.space)

Beyond Humans: Why All Sentient Beings Matter in Existential Risk
Teun van der Weij · 31 May 2023 21:21 UTC · 12 points · 0 comments · 13 min read · EA link

Language Agents Reduce the Risk of Existential Catastrophe
cdkg · 29 May 2023 9:59 UTC · 29 points · 6 comments · 26 min read · EA link