
Open Philanthropy AI Worldviews Contest

Last edit: Mar 15, 2023, 6:53 PM by Lorenzo Buonanno🔸

The goal of the contest is to surface novel considerations that could influence Open Philanthropy’s views on AI timelines and AI risk. Open Philanthropy plans to distribute $225,000 in prize money across six winning entries. This is the same contest that was preannounced in late 2022, which is itself the spiritual successor to the now-defunct Future Fund Worldview Prize.

The contest deadline is May 31, 2023. All work posted for the first time on or after September 23, 2022 is eligible. Use this form to submit your entry.

Related entries

Future Fund Worldview Prize | Criticism and Red Teaming Contest | AI risk | AI forecasting | prize

Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest

Jason Schukraft · Sep 30, 2023, 3:51 AM
74 points
30 comments · 2 min read · EA link

Transformative AGI by 2043 is <1% likely

Ted Sanders · Jun 6, 2023, 3:51 PM
98 points
92 comments · 5 min read · EA link
(arxiv.org)

The Control Problem: Unsolved or Unsolvable?

Remmelt · Jun 2, 2023, 3:42 PM
4 points
9 comments · 14 min read · EA link

A compute-based framework for thinking about the future of AI

Matthew_Barnett · May 31, 2023, 10:00 PM
96 points
36 comments · 19 min read · EA link

Primitive Global Discourse Framework, Constitutional AI using legal frameworks, and Monoculture—A loss of control over the role of AGI in society

broptross · Jun 1, 2023, 5:12 AM
2 points
0 comments · 12 min read · EA link

“The Race to the End of Humanity” – Structural Uncertainty Analysis in AI Risk Models

Froolow · May 19, 2023, 12:03 PM
48 points
4 comments · 21 min read · EA link

Deceptive Alignment is <1% Likely by Default

DavidW · Feb 21, 2023, 3:07 PM
54 points
26 comments · 14 min read · EA link

Announcing the Open Philanthropy AI Worldviews Contest

Jason Schukraft · Mar 10, 2023, 2:33 AM
137 points
33 comments · 3 min read · EA link
(www.openphilanthropy.org)

Without a trajectory change, the development of AGI is likely to go badly

Max H · May 30, 2023, 12:21 AM
1 point
0 comments · 13 min read · EA link

Pessimism about AI Safety

Max_He-Ho · Apr 2, 2023, 7:57 AM
5 points
0 comments · 25 min read · EA link
(www.lesswrong.com)

Status Quo Engines—AI essay

Ilana_Goldowitz_Jimenez · May 28, 2023, 2:33 PM
1 point
0 comments · 15 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg_Colbourn ⏸️ · May 2, 2023, 10:17 AM
68 points
35 comments · 13 min read · EA link

Implications of AGI on Subjective Human Experience

Erica S. · May 30, 2023, 6:47 PM
2 points
0 comments · 19 min read · EA link
(docs.google.com)

Reminder: AI Worldviews Contest Closes May 31

Jason Schukraft · May 8, 2023, 5:40 PM
20 points
0 comments · 1 min read · EA link

Pre-Announcing the 2023 Open Philanthropy AI Worldviews Contest

Jason Schukraft · Nov 21, 2022, 9:45 PM
291 points
26 comments · 1 min read · EA link

P(doom|AGI) is high: why the default outcome of AGI is doom

Greg_Colbourn ⏸️ · May 2, 2023, 10:40 AM
13 points
28 comments · 3 min read · EA link

Language Agents Reduce the Risk of Existential Catastrophe

cdkg · May 29, 2023, 9:59 AM
29 points
6 comments · 26 min read · EA link

Cooperation, Avoidance, and Indifference: Alternate Futures for Misaligned AGI

Kiel Brennan-Marquez · Dec 10, 2022, 8:32 PM
4 points
1 comment · 18 min read · EA link

2023 Open Philanthropy AI Worldviews Contest: Odds of Artificial General Intelligence by 2043

srhoades10 · Mar 14, 2023, 8:32 PM
19 points
0 comments · 46 min read · EA link

Considerations on transformative AI and explosive growth from a semiconductor-industry perspective

Muireall · May 31, 2023, 1:11 AM
23 points
1 comment · 2 min read · EA link
(muireall.space)

Abstraction is Bigger than Natural Abstraction

Nicholas / Heather Kross · May 31, 2023, 12:00 AM
2 points
0 comments · 1 min read · EA link

Beyond Humans: Why All Sentient Beings Matter in Existential Risk

Teun van der Weij · May 31, 2023, 9:21 PM
12 points
0 comments · 13 min read · EA link

A moral backlash against AI will probably slow down AGI development

Geoffrey Miller · May 31, 2023, 9:31 PM
143 points
22 comments · 14 min read · EA link