EA Forum Prize: Winners for October 2019

CEA is pleased to announce the winners of the October 2019 EA Forum Prize!

In first place (for a prize of $750): “Reality is often underpowered,” by Gregory Lewis.

In second place (for a prize of $500): “Technical AGI safety research outside AI,” by Richard Ngo.

In third place (for a prize of $250): “Shapley values: Better than counterfactuals,” by Nuno Sempere.

The following users were each awarded a Comment Prize ($50):

See this post for the previous round of prizes.

What is the EA Forum Prize?

Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.

The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum’s users.

About the winning posts and comments

Note: I write this section in the first person, based on my own thoughts, rather than by attempting to summarize the views of the other judges.

Reality is often underpowered

In this post, Lewis makes a powerful argument that we ought to take notice when we find ourselves working with whatever data we can scrounge from data-poor environments, and that we should consider other ways of developing our judgments and predictions.

Some elements of this post I especially appreciated:

  • The author’s points are applicable to work in many different cause areas, and he explicitly points out ways in which they are more or less applicable depending on the problem at hand.

  • He opens with a memorable story before making his general points (I expect that this practice will often make Forum posts more memorable, and thus more likely to be applied when they matter).

  • Rather than simply identifying a problem, he points out ways in which we might be able to overcome it, including a section with “final EA takeaways”; I love to see posts that, when relevant, end with a set of actionable suggestions.

Technical AGI safety research outside AI

To quote one commenter, “I think posts of this type (which list options for people who want to work in a cause area) are valuable”. I have a sense that fields of research are more likely to thrive when they can present scholars with interesting open problems, and Ngo takes the extra step of identifying problems that might appeal to people who might not otherwise consider working on AGI safety. This post is a good idea, executed well, and I don’t have much else to say; I will note, though, the abundant hyperlinks to sources inside and outside of EA.

Shapley values: Better than counterfactuals

To be honest, my favorite part of this post may be the very honest epistemic status (“enthusiasm on the verge of partisanship”).

...but the rest of the post was also quite good: many, many examples, plus a helpful link to a calculator that readers could use to try applying Shapley values themselves. As with “Reality is often underpowered”, the advice here could be used in many different situations (the examples help to lay out how Shapley values might help us understand the impact of giving, hiring, direct work, public communication…).
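The post and its linked calculator are the place to go for the full treatment; purely as a reader aid, here is a minimal Python sketch of how a Shapley value is computed. The `shapley_values` helper and the donor/charity numbers are my own illustration, not code or figures from the post:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Return each player's Shapley value for a characteristic function
    `value`, which maps a frozenset of players to the value that
    coalition produces on its own."""
    n = len(players)
    result = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weight = probability that exactly this coalition
                # precedes i in a uniformly random ordering of players.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {i}) - value(s))
        result[i] = total
    return result

# Hypothetical numbers, not taken from the post: a donor and a charity
# jointly produce 10 units of impact; neither produces anything alone.
v = lambda s: 10.0 if s == frozenset({"donor", "charity"}) else 0.0
print(shapley_values(["donor", "charity"], v))  # {'donor': 5.0, 'charity': 5.0}
```

In this symmetric two-player game, each party’s Shapley value is half the joint impact, whereas a naive counterfactual analysis would credit each party with all 10 units (since neither could produce anything without the other), double-counting the total.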

I was also pleased to see the author’s replies to commenters (and the fact that they edited their epistemic status after one exchange).

The winning comments

I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.

The voting process

The winning posts were chosen by five people:

  • Aaron Gertler, a Forum moderator (Denise Melchin has decided to step back from the panel for the foreseeable future).

  • Two of the highest-karma users at the time the new Forum was launched (Peter Hurford and Rob Wiblin).

  • Two users who have a recent history of strong posts and comments (Larks and Khorton).

All posts published in the titular month qualified for voting, save for those in the following categories:

  • Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)

  • Posts linking to others’ content with little or no additional commentary

  • Posts which accrued zero or negative net karma after being posted

    • Example: a post which had 2 karma upon publication and wound up with 2 karma or less

Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though these criteria broadly align with the goals outlined above.

Judges each had ten votes to distribute between the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
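For readers who find the arithmetic easier to follow as code, here is a minimal sketch of the vote-budget rule. The function name and constant are mine, not part of any Forum tooling:

```python
MAX_VOTES_PER_POST = 3  # no judge may give a single post more than three votes

def vote_budget(votes_cast_last_month: int, base: int = 10) -> int:
    """Votes a judge may distribute this month: the base allotment of ten,
    plus however many votes they left unused last month."""
    return base + (base - votes_cast_last_month)

print(vote_budget(7))  # 13, matching the example above
```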

------

The winning comments were chosen by Aaron Gertler, though the other judges had the chance to evaluate the winners beforehand and veto comments they didn’t think should win.

Feedback

If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact Aaron Gertler.