EA Forum Prize: Winners for February 2020
CEA is pleased to announce the winners of the February 2020 EA Forum Prize!
In first place (for a prize of $750): “My personal cruxes for working on AI safety,” by Buck Shlegeris.
In second place (for a prize of $500): “Biases in our estimates of Scale, Neglectedness and Solvability?,” by Michael St. Jules.
In third place (for a prize of $250): “A Qualitative Analysis of Value Drift in EA,” by Marisa Jurczyk.
The following users were each awarded a Comment Prize ($50):
Matthew Dahlhausen on clean cookstoves
Will Bradshaw on the effects of increased longevity
technicalities summarizing reasons to be concerned about AI risk
Clay Shentrup on electoral reform
For the previous round of prizes, see this post.
What is the EA Forum Prize?
Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.
The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum’s users.
About the winning posts and comments
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
My personal cruxes for working on AI safety
“I edited [the transcript] for style and clarity, and also to occasionally have me say smarter things than I actually said.”
The “enhanced transcript” format seems very promising for other Forum content, and I hope to see more people try it out!
In this enhanced transcript, Buck reasons through a difficult problem using techniques we encourage — laying out his “cruxes,” or points that would lead him to change his mind if he came to believe they were false. This practice encourages discussion, since it makes it easier for people to figure out where their views differ from yours and which points are most important to discuss. (You can see this both in the Q&A section of the transcript and in comments on the post itself.)
I also really appreciated Buck’s introduction to the talk, where he suggested to listeners how they might best learn from his work, as well as his concluding summary at the end of the post.
Finally, I’ll quote one of the commenters on the post:
I think the part I like the most, even more than the awesome deconstruction of arguments and their underlying hypotheses, is the sheer number of times you said “I don’t know” or “I’m not sure” or “this might be false”.
Also: Congratulations to Buck for winning the top prize twice in three months!
Biases in our estimates of Scale, Neglectedness and Solvability?
Cause prioritization is still a young field, and it’s great to see someone come in and apply a simple, reasonable critique that may improve many different research projects in a concrete way.
It’s also great to check the comments and realize that Michael edited the post after publishing to improve it further — a practice I’d like to see more of!
Aside from that, this post simply applies a lot of solid math to an important subject, with implications for anyone who wants to work on prioritization research. If we want to be effective, we need to have strong epistemic norms, and avoiding biased estimates is a key part of that.
A Qualitative Analysis of Value Drift in EA
Value drift isn’t discussed often on the Forum, but I’d like to see that change.
I remember meeting quite a few people when I started to learn about EA (in 2013), and then realizing later on that I hadn’t heard from some of them in years — even though they were highly aligned and interested in EA work when I met them.
If we can figure out how to make that sort of thing happen less often, we’ll have a better chance of keeping the movement strong over the long haul.
Marisa’s piece doesn’t try to draw any strong conclusions — which makes sense, given the sample size and the exploratory nature of the research — but I appreciate its beautiful formatting. I also like how she:
References non-EA research on social movements. (This is something the community as a whole may not be doing enough of.)
Includes a set of direct quotes from interviewees. (Actual human speech offers nuance and detail that are hard to match with a summary of multiple answers.)
Offers future research directions for people who see this post and want to work on similar issues.
The winning comments
I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.
The voting process
The winning posts were chosen by five people:
One Forum moderator (Aaron Gertler).
Two of the highest-karma users at the time the new Forum was launched (Peter Hurford and Rob Wiblin).
Two users who have a history of strong posts and comments (Larks and Khorton).
All posts published in the titular month qualified for voting, save for those in the following categories:
Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
Posts linking to others’ content with little or no additional commentary
Posts which accrued zero or negative net karma after being posted
Example: a post which had 2 karma upon publication and wound up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.
Judges each had ten votes to distribute among the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes cast last month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
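For concreteness, the carry-over arithmetic above can be sketched as follows (a hypothetical helper of my own, not part of any actual Forum tooling):

```python
# A minimal sketch of the vote-accounting rule described above.
# The function name and structure are assumptions for illustration.
def votes_available(votes_cast_last_month, base=10):
    """Votes a judge may cast this month: the base ten,
    plus any votes left unused last month."""
    return base + (base - votes_cast_last_month)

# A judge who cast 7 of 10 votes last month carries 3 over: 13 total.
assert votes_available(7) == 13
# A judge who used all 10 votes starts fresh with 10.
assert votes_available(10) == 10
```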
——
The winning comments were chosen by Aaron Gertler, though the other judges had the chance to nominate other comments and to veto comments they didn’t think should win.
Feedback
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact me.
Also: if you haven’t yet, please consider filling out the EA Forum Feedback survey! There’s a section focused on the Prize, in addition to many other questions that will help us improve the Forum.