EA Forum Prize: Winners for June 2019
CEA is pleased to announce the winners of the June 2019 EA Forum Prize!
In first place (for a prize of $999): “Information security careers for GCR reduction,” by Claire Zabel and Luke Muehlhauser.
In second place (for a prize of $500): A collection of posts on risks from nuclear war (1, 2, 3, 4, 5), by Luisa Rodriguez.
In third place (for a prize of $250): “Invertebrate Sentience: A Useful Empirical Resource,” by Jason Schukraft.
For the previous round of prizes, see our May post.
Note: Claire and Luke asked that their prize money be donated to the Long-Term Future Fund, because Open Phil has a policy against grantmakers receiving funds from Open Phil grantees.
What is the EA Forum Prize?
Certain posts exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.
The Prize is an incentive to create posts like this. But more importantly, we see it as an opportunity to showcase excellent content as an example and inspiration to the Forum’s users.
About the winning posts
Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.
—
Career-choice posts like “Information security careers for GCR reduction” help members of the EA community in several ways:
They give people who are in a position to choose or change their career a sense of what areas might be useful.
They alert people who already work in a certain area that they might be able to have more impact than they had realized.
They help funders and organizations that have influence over the career choices of others to make better decisions. (For example, by sponsoring a scholarship for a PhD focused on a particular field, or conducting research on how firms in that field choose new hires.)
In this particular post, Claire and Luke open with an excellent summary, explaining not only why they consider the career path promising but also the timelines involved (“some organizations [...] would hire [infosec experts] now, if they found them”). They also clarify their uncertainties and provide a follow-up step for anyone with an interest in the field, whatever their level of experience.
—
In June, Luisa Rodriguez published a book’s worth of material about risks from nuclear war, beginning with “Which nuclear wars should worry us most?”
Her collection of posts was well-organized:
Each post included a summary of its individual findings and an explanation of how it fit into the overall series.
The articles were packed with links and footnotes.
She went back to add corrections on multiple posts when she (or commenters) noticed mistakes. While it’s good to acknowledge in a comment when you change your mind, it’s even better to adjust the post in this way, so that readers don’t need to dig into comments to get the most up-to-date version of a post.
Overall, though, I don’t think I need to say much about the posts — they speak for themselves. (If that’s not enough, Alex Tabarrok also speaks for them.)
Note on voting: Fewer judges voted for at least one post in Luisa’s series than voted for this month’s first-place post; that was the standard I used to select winners for June. We may change the way we handle voting on “series” posts in the future.
—
Jason Schukraft published several posts on invertebrate sentience in June, but judges were especially impressed by “A Useful Empirical Resource”, which ably summarizes more than 1000 citations’ worth of research on the behavior of ants, bees, cows, and many other creatures.
Questions about the experience of other species are inherently very difficult to think about (we still don’t know what it’s like to be a bat), but I appreciate the work of Schukraft (and other contributors from Rethink Priorities) to attack the problem from many different angles. Learning about the spatial memory of spiders hasn’t helped me settle whether I should stop squashing the ones I find in my shower, but as I read this post, I felt myself developing a more sophisticated model of how I’d define “consciousness”, and how my moral intuitions related to different features of cognition.
I’d also like to point out the author’s exemplary “Limitations” section. Including this makes the post far more useful, by helping readers understand how they should update on its findings, providing a jumping-off point for discussion, and giving future researchers a sense of how they might be able to improve on the work.
The voting process
Prizes were chosen by six people:
Two Forum moderators (Aaron Gertler and Denise Melchin).
Two of the highest-karma users at the time the new Forum was launched (Peter Hurford and Rob Wiblin). Joey Savoie decided to leave the panel this month to focus on other work.
Two users who have a recent history of strong posts and comments (Larks and Khorton).
All posts published in the month of June qualified for voting, save for those in the following categories:
Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)
Linkposts with no additional content
Posts which accrued zero or negative net karma after being posted
Example: a post which had 2 karma upon publication and wound up with 2 karma or less
Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agreed with the goals outlined above.
Winners were chosen by an initial round of approval voting, followed by a runoff vote to resolve ties.
Adding Comments to the Forum Prize
Starting next month (in the July prize post), we will be experimenting with a new prize structure:
First-place post: $750
Second-place post: $500
Third-place post: $250
An additional $250 will be split among the authors of several comments that… well, to plagiarize myself, “exemplify the kinds of comments we’d like to see”.
Comments provide a substantial fraction of the Forum’s value, and give users a way to contribute even if they don’t have the time or desire to publish original work. We’d like to reward comments that are especially well-thought-out, and which we believe add a lot of additional value to the post they accompany.
We also hope that a “comment prize” will make it easier to recognize people who contribute their ideas without publishing full-fledged research posts. It would be easy for the Prize to become something akin to a research award, but while we do value research, we also care about the culture of discussion and constructive criticism which has formed on the Forum through hundreds of small-scale interactions.
The number of comments that receive a prize may vary from month to month. We may sometimes reward a series of related comments instead of a single comment (as we did for Luisa’s series of posts on nuclear war this month).
Feedback
If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact Aaron Gertler.
(To be clear, I don’t mean this as a complaint, but as an observation that may call for some changes.)
I think these winners were quite reasonable. That said, I find it a bit awkward that these posts are even competing with the more common blog posts. I could imagine this being pretty frustrating for almost anyone who isn’t either at an EA org or being paid by an EA group to spend a significant amount of time working on a piece. If these winners were all valid entries, then I have little hope for almost any “casual” entry to have a chance here.
On a related note, if the norm is to rate the “top serious EA organization documents,” this seems quite difficult to do for a few different reasons. For one, “Information security careers for GCR reduction” seems like a very different class of thing to me than “Invertebrate Sentience”. Second, if we keep doing this, I’d imagine we’d eventually want some domain experts, or at least a somewhat different ranking setup than the one used for the many smaller posts.
I feel like it would be pretty fair either to exclude major EA orgs from this competition in the future, or to have a separate tier, like a “best emerging artist” award (but for writing).
Just a thought for future prizes.
I think the “Information security careers for GCR reduction” post is a relatively weak choice for first place, and it made me update reasonably strongly downwards on the signal value of the prize.
It’s not that the post is bad, but I didn’t perceive it as contributing much to intellectual progress in any major way; to me it mostly parsed as an organizational announcement. The post obviously got a lot of upvotes, which is good because it was an important announcement, but I think a large part of that is because it was written by Open Phil (which is what makes it an important announcement) [Edit: I believe this less strongly now than I did at the time of writing this comment. See my short thread with Peter_Hurford]. I expect the same post written by someone else would not have received much prominence, and would have been very unlikely to be selected for a prize.
I think it’s particularly bad for prizes to go to posts that would have been impossible to write without coming from an established organization. I’m much less sure about posts that could have been written by someone else, but that happened to be written by someone in a full-time role at an EA organization.
Just to explain why I voted for this piece as one of the judges: I like career profiles that emphasize opportunities in a space that I don’t think many people have considered, especially careers that might scalably employ a large number of EAs. I personally don’t think the OpenPhil-affiliated authorship was a key determinant of my judging decision, though it may have played a small role. I disagree that it “would have been impossible to write when not coming from an established organization”. I agree with Aaron that “Aligning Recommender Systems” falls into a similar category for me, and this post on plant-based food jobs also felt helpful in a similar way.
This updated me a bit, and I think I now at least partially retract that part of my comment.
I’m not sure about this. One of last month’s winners, “Aligning Recommender Systems,” also outlined an argument for EAs gaining experience/pursuing careers in a field that hadn’t been covered much or at all by prior authors, and was highly upvoted. As far as I know, neither author works for an EA organization (though I don’t know much about their background, and would appreciate someone correcting me if I’m wrong).
How do you feel about posts which would have been almost impossible to write for authors who weren’t in some other exceptional circumstance?
For example, during the first month of prize selection, one winner was Adam Gleave, who wrote a great post about deciding what to do with his winnings from the EA Donor Lottery. I’d guess that only someone with unusual financial resources would have been able to make such large donations (and get statements from ALLFED, etc.), which left me uncertain at the time about whether Adam’s post should have qualified.
The main difference here seems to be that he sacrificed a lot of his free time to conduct research and write a post, but I still expect that other authors with equal willingness to research and write wouldn’t have gotten as much attention.
--
On another note, I think that some posts in this category are highly valuable. For example, someone working at an org might write a very detailed post on operations that they couldn’t have written without experience running large-scale EA events. If this kind of post wouldn’t be written in someone’s spare time without incentives (which I know is a big assumption), I’d like to provide those incentives.
A question for the prize winners (if they read this and have time):
Did you find that this award helps to motivate you, and do you have thoughts on whether the prize should be changed in the future?
See https://forum.effectivealtruism.org/posts/u55Misp5ZjtkTQrXQ/ea-forum-prize-winners-for-june-2019#fDmiPdXBgA99oPp5g
Yep, I’d generally agree with that. One possible distinction is that I could see value in recognizing posts that have high EV but don’t necessarily match “intellectual progress” in one way or another.
My comment applied to the fact that all three winners were tough to compete with for most people. However, there is the related point that the Information Security Careers post in particular is odd, because it was useful because of the reputation of the writers (I’d agree this seemed necessary).
Ozzie,
Thanks for this feedback! I was thinking about exactly the same issue as I counted the votes and wrote up this post.
--
Back when we were setting up initial rules for the Prize, I wasn’t sure whether to allow posts written on “org time” (that is, by employees of EA organizations who were paid by their employers for Forum work). Eventually, I decided to err on the side of making almost all posts eligible as a starting point, but to keep an eye on which types of posts were winning.
This is the first month (out of eight) that all winning posts have come from the employees of EA orgs; since the Prize began in November, roughly half of the winning posts have come from Forum contributors who (as far as I know) weren’t employed in direct work at the time, or were writing about subjects unrelated to their direct work. Some posts in the other half were written by org employees who drew on their work experience, but where I’m not sure whether they were paid to do the writing itself (e.g. November’s winning post on EAF’s hiring process).
This doesn’t indicate that posts from employees of EA orgs should necessarily remain in the same category, but I did want to note that this month was anomalous. (We certainly don’t intend to be rating “the top serious EA organization documents.”)
---
Some thoughts on ways we could address this concern:
1. The comment prize, which we’ll be starting up next month, should help us highlight contributions that didn’t require as much time to make, and I could imagine scaling it up over time (in the sense of “amount awarded for comments relative to posts”). I noted this in my initial post:
2. Some organizations have been unusually thorough in posting on the Forum, and this is something we’d like to highlight and encourage (whether through a prize or some other means). For example, researchers from Rethink Priorities have spent a lot of additional time formatting posts and responding to comments, rather than only cross-posting research from their website.
3. It’s possible that posts produced by organizations should be in a separate category, though it’s tricky to define when this is the case. For example, Open Phil is a very different kind of research organization than a smaller org like ALLFED or AI Impacts, and I’m uncertain how to define people who are freelance researchers working off of a small grant or commission. It’s also hard to tell when something was or was not written on “paid time” by the employee of an EA organization.
Personally, I have a higher bar on voting for posts that come from org employees, but I’ll disclose that I did vote for each of the winning posts this month — I thought that the invertebrate sentience and nuclear risk series were especially outstanding, even by the standards of EA research organizations.
This is something I and the other judges will be discussing in future months, and if you have further thoughts, I’d appreciate hearing them!
Makes sense. I’m excited for the comment prize.
I think the main “organization” posts I have in mind are almost a different class; they’re using the EA Forum as an academic journal rather than as a blog. There could be some self-selection then, perhaps via a separate category or website where people self-select for a different kind of feedback. I’m going to be chatting with people about this.
I like the idea of having separate categories for professional and amateur work (or some other categorization). I’d still like to encourage professional work to be posted here, but encouraging non-professional work is also important.
Is there any data on how prize winners generally feel about winning? Does the prize help motivate them to either write the material or post it here?
I’d just point out, from the perspective of running a research org whose posts have won the Prize multiple times (Rethink Priorities), that I do find the Prize to be very motivating, and I really value getting a quick “elite commonsense” take on whether our research is valuable.
(Disclaimer: I am also a judge in this contest but I do not vote on Rethink Priorities content.)
One direct example, from a December winner:
Anecdotally, I’ve heard from a few other winners and authors that the Prize is a motivating factor to spend more time/effort on posts, but I couldn’t find public statements to that effect.