The Open Philanthropy AI Worldviews Contest was a 2023 competition organized to surface novel considerations that could influence Open Philanthropy’s views on AI timelines and AI risk. A total of $225,000 in prize money was awarded across six winning entries. The contest, preannounced in 2022, served as a spiritual successor to the defunct Future Fund Worldview Prize.
Contest details
Entries were due by May 31, 2023 and had to be original, written in English, and first published no earlier than September 23, 2022. There was no official word limit, though essays over 5,000 words were considered harder to engage with. Coauthored submissions were allowed, and participants could submit multiple entries but win at most one prize.
Entries had to address one of the following questions:
What is the probability that AGI is developed by January 1, 2043?
Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?
Each essay was required to focus on a single question. Judging emphasized how well a submission surfaced or clarified considerations that could change a judge’s beliefs about that question.
Winners
First Prizes ($50,000)
AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years by Basil Halperin, Zachary Mazlish, and Trevor Chow
Evolution provides no evidence for the sharp left turn by Quintin Pope
Second Prizes ($37,500)
Deceptive Alignment is <1% Likely by Default by David Wheaton
AGI Catastrophe and Takeover: Some Reference Class-Based Priors by Zach Freitas-Groff
Third Prizes ($25,000)
Imitation Learning is Probably Existentially Safe by Michael Cohen
‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting by Alex Bates
Caveats and comments
The judges did not endorse all conclusions in the winning entries. Many essays advanced multiple claims, some of which were judged more persuasive than others. In some cases, entries were valued for clearly articulating viewpoints the judges did not personally hold.
Given the diversity of submissions, a different panel might well have selected different winners. Open Philanthropy explicitly cautioned readers against treating the prizewinners’ topics as signals of its institutional priorities or grantmaking direction.
Related entries
Future Fund Worldview Prize | Criticism and Red Teaming Contest | AI risk | AI forecasting | prize | existential risk | longtermism