
Open Philanthropy AI Worldviews Contest


The Open Philanthropy AI Worldviews Contest was a 2023 competition organized to surface novel considerations that could influence Open Philanthropy's views on AI timelines and AI risk. A total of $225,000 in prize money was awarded across six winning entries. The contest was preannounced in November 2022 and served as a spiritual successor to the now-defunct Future Fund Worldview Prize.

Contest details

Submissions were accepted from September 23, 2022 through May 31, 2023. Eligible entries had to be original, written in English, and first published during that window. There was no official word limit, though essays over 5,000 words were considered harder for the judges to engage with fully. Coauthored submissions were allowed, and participants could submit multiple entries but could win at most one prize.

Entries had to address one of the following questions:

  1. What is the probability that AGI is developed by January 1, 2043?

  2. Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?

Each essay was required to focus on a single question. Judging emphasized how well a submission uncovered or clarified considerations that changed the judges' beliefs about either question.

Winners

Two prizes were awarded in each tier:

First Prizes ($50,000 each)

Second Prizes ($37,500 each)

Third Prizes ($25,000 each)

Caveats and comments

The judges did not endorse all conclusions in the winning entries. Many essays advanced multiple claims, of which some were judged more persuasive than others. In certain cases, entries were valued for their clarity in presenting viewpoints the judges did not personally agree with.

Given the diversity of submissions, a different panel of judges might well have selected different winners. Open Philanthropy explicitly discouraged readers from treating the prizewinners' topics as signals of its institutional priorities or grantmaking direction.

Related entries

Future Fund Worldview Prize | Criticism and Red Teaming Contest | AI risk | AI forecasting | prize | existential risk | longtermism

Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest
Jason Schukraft · Sep 30, 2023, 3:51 AM · 74 points · 30 comments · 2 min read · EA link

Transformative AGI by 2043 is <1% likely
Ted Sanders · Jun 6, 2023, 3:51 PM · 98 points · 92 comments · 5 min read · EA link (arxiv.org)

The Control Problem: Unsolved or Unsolvable?
Remmelt · Jun 2, 2023, 3:42 PM · 4 points · 9 comments · 14 min read · EA link

A compute-based framework for thinking about the future of AI
Matthew_Barnett · May 31, 2023, 10:00 PM · 96 points · 36 comments · 19 min read · EA link

Announcing the Open Philanthropy AI Worldviews Contest
Jason Schukraft · Mar 10, 2023, 2:33 AM · 137 points · 33 comments · 3 min read · EA link (www.openphilanthropy.org)

Deceptive Alignment is <1% Likely by Default
DavidW · Feb 21, 2023, 3:07 PM · 54 points · 26 comments · 14 min read · EA link

Pessimism about AI Safety
Max_He-Ho · Apr 2, 2023, 7:57 AM · 5 points · 0 comments · 25 min read · EA link (www.lesswrong.com)

“The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models
Froolow · May 19, 2023, 12:03 PM · 48 points · 4 comments · 21 min read · EA link

Primitive Global Discourse Framework, Constitutional AI using legal frameworks, and Monoculture—A loss of control over the role of AGI in society
broptross · Jun 1, 2023, 5:12 AM · 2 points · 0 comments · 12 min read · EA link

Without a trajectory change, the development of AGI is likely to go badly
Max H · May 30, 2023, 12:21 AM · 1 point · 0 comments · 13 min read · EA link

Reminder: AI Worldviews Contest Closes May 31
Jason Schukraft · May 8, 2023, 5:40 PM · 20 points · 0 comments · 1 min read · EA link

Status Quo Engines—AI essay
Ilana_Goldowitz_Jimenez · May 28, 2023, 2:33 PM · 1 point · 0 comments · 15 min read · EA link

Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest
Jason Schukraft · Nov 21, 2022, 9:45 PM · 291 points · 26 comments · 1 min read · EA link

P(doom|AGI) is high: why the default outcome of AGI is doom
Greg_Colbourn ⏸️ · May 2, 2023, 10:40 AM · 13 points · 28 comments · 3 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now
Greg_Colbourn ⏸️ · May 2, 2023, 10:17 AM · 68 points · 35 comments · 13 min read · EA link

Implications of AGI on Subjective Human Experience
Erica S. · May 30, 2023, 6:47 PM · 2 points · 0 comments · 19 min read · EA link (docs.google.com)

2023 Open Philanthropy AI Worldviews Contest: Odds of Artificial General Intelligence by 2043
srhoades10 · Mar 14, 2023, 8:32 PM · 19 points · 0 comments · 46 min read · EA link

A moral backlash against AI will probably slow down AGI development
Geoffrey Miller · May 31, 2023, 9:31 PM · 145 points · 22 comments · 14 min read · EA link

Cooperation, Avoidance, and Indifference: Alternate Futures for Misaligned AGI
Kiel Brennan-Marquez · Dec 10, 2022, 8:32 PM · 4 points · 1 comment · 18 min read · EA link

Considerations on transformative AI and explosive growth from a semiconductor-industry perspective
Muireall · May 31, 2023, 1:11 AM · 23 points · 1 comment · 2 min read · EA link (muireall.space)

Abstraction is Bigger than Natural Abstraction
Nicholas Kross · May 31, 2023, 12:00 AM · 2 points · 0 comments · 1 min read · EA link

Beyond Humans: Why All Sentient Beings Matter in Existential Risk
Teun van der Weij · May 31, 2023, 9:21 PM · 12 points · 0 comments · 13 min read · EA link

Language Agents Reduce the Risk of Existential Catastrophe
cdkg · May 29, 2023, 9:59 AM · 29 points · 6 comments · 26 min read · EA link