EA Forum Prize: Winners for October 2019
CEA is pleased to announce the winners of the October 2019 EA Forum Prize!
In first place (for a prize of $750): “Reality is often underpowered,” by Gregory Lewis.
In second place (for a prize of $500): “Technical AGI safety research outside AI,” by Richard Ngo.
In third place (for a prize of $250): “Shapley values: Better than counterfactuals,” by Nuno Sempere.
The following users were each awarded a Comment Prize ($50):
Will Bradshaw and Oscar Horta, for a conversation on terminology around wild animal suffering
Raemon, on donor lotteries
harald, on bone marrow data
Max Daniel, for his comments on Atomic Obsession
See this post for the previous round of prizes.
What is the EA Forum Prize?
Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.
The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum’s users.
About the winning posts and comments
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
Reality is often underpowered
In this post, Lewis argues powerfully that we ought to be more careful when we find ourselves working with whatever data we can scrounge from data-poor environments, and that we should consider other ways of developing our judgments and predictions.
Some elements of this post I especially appreciated:
The author’s points are applicable to work in many different cause areas, and he explicitly points out ways in which they are more or less applicable depending on the problem at hand.
He opens with a memorable story before making his general points (I expect that this practice will often make Forum posts more memorable, and thus more likely to be applied when they matter).
Rather than simply identifying a problem, he points out ways in which we might be able to overcome it, including a section with “final EA takeaways”; I love to see posts that, when relevant, end with a set of actionable suggestions.
Technical AGI safety research outside AI
To quote one commenter, “I think posts of this type (which list options for people who want to work in a cause area) are valuable”. I have a sense that fields of research are more likely to thrive when they can present scholars with interesting open problems, and Ngo takes the extra step of identifying problems that might appeal to people who might not otherwise consider working on AGI safety. This post is a good idea, executed well, and I don’t have much else to say — but I will note the abundant hyperlinks to sources inside and outside of EA.
Shapley values: Better than counterfactuals
To be honest, my favorite part of this post may be the very honest epistemic status (“enthusiasm on the verge of partisanship”).
...but the rest of the post was also quite good: many, many examples, plus a helpful link to a calculator that readers could use to try applying Shapley values themselves. As with “Reality is often underpowered”, the advice here could be used in many different situations (the examples help to lay out how Shapley values might help us understand the impact of giving, hiring, direct work, public communication…).
I was also pleased to see the author’s replies to commenters (and the fact that they edited their epistemic status after one exchange).
The winning comments
I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.
The voting process
The winning posts were chosen by five people:
Aaron Gertler, a Forum moderator (Denise Melchin has decided to step back from the panel for the foreseeable future).
Two of the highest-karma users at the time the new Forum was launched (Peter Hurford and Rob Wiblin).
Two users who have a recent history of strong posts and comments (Larks and Khorton).
All posts published in the titular month qualified for voting, save for those in the following categories:
Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
Posts linking to others’ content with little or no additional commentary
Posts which accrued zero or negative net karma after being posted
Example: a post that had 2 karma upon publication and ended up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own criteria for choosing posts, though they broadly agreed with the goals outlined above.
Judges each had ten votes to distribute among the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes cast last month]. For example, a judge who cast 7 votes last month would have 13 this month (10 + 3 extra). No judge could cast more than three votes for any single post.
------
The winning comments were chosen by Aaron Gertler, though the other judges had the chance to evaluate the winners beforehand and veto comments they didn’t think should win.
Feedback
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact Aaron Gertler.
Max Daniel is listed as one of the four recipients of a Comment Prize, but no comment is listed.
The prize was meant to refer to his full set of comments on that thread, rather than any particular comment. But to reduce possible confusion, I’ve linked to a particular comment.
Thanks. Just to be clear: before your edit, there was no thread linked, or at least no link showed up on my browser. I mention this in case it reflects a bug with the site rather than an oversight.
Sounds like a bug; I’ll keep an eye out for other instances. What browser were you using?