EA Forum Prize: Winners for February 2020

CEA is pleased to announce the winners of the February 2020 EA Forum Prize!

In first place (for a prize of $750): “My personal cruxes for working on AI safety,” by Buck Shlegeris.

In second place (for a prize of $500): “Biases in our estimates of Scale, Neglectedness and Solvability?,” by Michael St. Jules.

In third place (for a prize of $250): “A Qualitative Analysis of Value Drift in EA,” by Marisa Jurczyk.

The following users were each awarded a Comment Prize ($50):

For the previous round of prizes, see this post.

What is the EA Forum Prize?

Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.

The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum’s users.

About the winning posts and comments

Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.

My personal cruxes for working on AI safety

“I edited [the transcript] for style and clarity, and also to occasionally have me say smarter things than I actually said.”

The “enhanced transcript” format seems very promising for other Forum content, and I hope to see more people try it out!

As for this enhanced transcript: here, Buck reasons through a difficult problem using techniques we encourage, laying out his “cruxes,” or points that would lead him to change his mind if he came to believe they were false. This practice encourages discussion, since it makes it easier for people to figure out where their views differ from yours and which points are most important to discuss. (You can see this both in the Q&A section of the transcript and in comments on the post itself.)

I also really appreciated Buck’s introduction to the talk, where he suggested how listeners might best learn from his work, as well as his summary at the end of the post.

Finally, I’ll quote one of the commenters on the post:

I think the part I like the most, even more than the awesome deconstruction of arguments and their underlying hypotheses, is the sheer number of times you said “I don’t know” or “I’m not sure” or “this might be false”.

Also: Congratulations to Buck for winning the top prize twice in three months!

Biases in our estimates of Scale, Neglectedness and Solvability?

Cause prioritization is still a young field, and it’s great to see someone come in and apply a simple, reasonable critique that may improve many different research projects in a concrete way.

It’s also great to check the comments and realize that Michael edited the post after publishing to improve it further, a practice I’d like to see more of!

Aside from that, this is just a lot of solid math applied to an important subject, with implications for anyone who wants to work on prioritization research. If we want to be effective, we need strong epistemic norms, and avoiding biased estimates is a key part of that.

A Qualitative Analysis of Value Drift in EA

Value drift isn’t discussed often on the Forum, but I’d like to see that change.

I remember meeting quite a few people when I started to learn about EA (in 2013), and then realizing later on that I hadn’t heard from some of them in years, even though they were highly aligned and interested in EA work when I met them.

If we can figure out how to make that sort of thing happen less often, we’ll have a better chance of keeping the movement strong over the long haul.

Marisa’s piece doesn’t try to draw any strong conclusions (which makes sense, given the sample size and the exploratory nature of the research), but I appreciated its beautiful formatting. I also like how she:

  • References non-EA research on social movements. (This is something the community as a whole may not be doing enough of.)

  • Includes a set of direct quotes from interviewees. (Actual human speech offers nuance and detail that are hard to match with a summary of multiple answers.)

  • Offers future research directions for people who see this post and want to work on similar issues.

The winning comments

I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.

The voting process

The winning posts were chosen by five people:

All posts published in the titular month qualified for voting, save for those in the following categories:

  • Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)

  • Posts linking to others’ content with little or no additional commentary

  • Posts which accrued zero or negative net karma after being posted

    • Example: a post that had 2 karma upon publication and ended with 2 karma or less

Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though these broadly align with the goals outlined above.

Judges each had ten votes to distribute between the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes made last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.

The winning comments were chosen by Aaron Gertler, though the other judges had the chance to nominate other comments and to veto comments they didn’t think should win.


If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact me.

Also: if you haven’t yet, please consider filling out the EA Forum Feedback survey! There’s a section focused on the Prize, in addition to many other questions that will help us improve the Forum.
