EA Forum Prize: Winners for November 2020

CEA is pleased to announce the winners of the November 2020 EA Forum Prize!

The following users were each awarded a Comment Prize ($75):

  • Dan Stein on the pros and cons of advertising carbon offset charities

  • Saulius on the importance of a study’s limitations

  • Richard Ngo, summarizing a complex post in a way the author and others found useful

  • Michael Aird for providing a good example on his own thread to kick off a discussion of people’s end-of-year giving decisions

See here for a list of all prize announcements and winning posts.

What is the EA Forum Prize?

Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.

The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum’s users.

About the winning posts and comments

Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.

Problem Area Report: Pain

We all expect to experience some pain in our lives. For most of us, especially those in high-income countries, these experiences will be mild, bearable, and short. Others are not so fortunate. Millions suffer excruciating pain. Millions more suffer moderate or severe pain. They suffer despite the fact that cheap and effective treatments exist.

This report briefly discusses the measurement of pain, then explores three major causes of pain and what might be done to relieve them.

This report is excellent throughout, but these were a few of my favorite features:

  • The post begins with a clear summary of its major focus areas. The summary is relatively brief, but still detailed enough that someone could read it and come away with useful knowledge even if they didn’t have time for the full post.

  • The post gives readers many suggestions they can follow up on — from open research questions to career paths and volunteer opportunities.

  • The post acknowledges that the writers haven’t evaluated donation opportunities, but still provides a few weak suggestions for people who feel moved to make a direct contribution to the cause right away. This seems like a good balance between making an overconfident recommendation and leaving potential donors with no idea what to do (and it gives up-and-coming charity researchers something to chew on).

These are all a bit meta, but the actual substance of the post should be very exciting for people seeking to do good in the short term; I highly recommend reading the full report.

Why those who care about catastrophic and existential risk should care about autonomous weapons

I and the Future of Life Institute (FLI) have gathered the strong impression that parts of the effective altruism ecosystem are skeptical of the importance of the issue of autonomous weapons systems. This post explains why we think those interested in avoiding catastrophic and existential risk, especially risk stemming from emerging technologies, may want to have this issue higher on their list of concerns.

As one of the skeptics Anthony mentions in his introduction, I left this post thinking about the issue of autonomous weapons very differently — even if it didn’t fully persuade me, it introduced me to a number of interesting frames. These lines stand out:

Managing to avoid an arms race in autonomous weapons – via multi-stakeholder international agreement and other means – would set a very powerful precedent for avoiding one more generally.

[...]

Good governance of AWSs will take exactly the sort of multilateral cooperation, including getting militaries onboard, that is likely to be necessary with an overall AI/AGI (figurative) arms race. The methods, institutions, and ideas necessary to govern AGI in a beneficial and stable multilateral system are very unlikely to arise quickly or from nowhere.

Features I especially liked:

  • The post makes good use of the history of weapons systems and governments’ responses to them. As a result, it feels less speculative than many future-oriented technology posts on the Forum and elsewhere in the community.

  • Anthony was very responsive to comments and feedback, which further enhanced the quality of the information available in the post.

Thoughts on whether we’re living at the most influential time in history

I think [Will MacAskill’s] main argument against [the Hinge of History hypothesis] is deeply flawed. The comment section of Will’s post contains a number of commenters making some of the same criticisms I’m going to make. I’m writing this post because I think the rebuttals can be phrased in some different, potentially clearer ways, and because I think that the weaknesses in Will’s argument should be more widely discussed.

I really liked the last sentence of this excerpt — there’s a lot of value to be had in noticing a good discussion in Forum comments and then converting it into a post that is easier to follow and can help the discussants make further progress. Other folks should do more of this!

More things I liked:

  • An extremely clear edit to the post acknowledges that one of the original criticisms no longer applies to the newest version of the work in question. Updating a published post is an excellent, under-utilized practice.

  • When Buck found a way to apply one of his points to another topic within EA (patient philanthropy), he made sure to mention it. If you’re trying to produce general arguments about important topics, it’s good to check on other implications of those arguments, and to report your findings.

  • Like Anthony, Buck engaged closely with the comments. While he made many good points along the way, I also like the context-not-required comment where he reacted to one of his own mistakes with a no-holds-barred apology. It’s good to be able to say “oops”.

£4bn for the global poor: the UK’s 0.7%

The UK Chancellor of the Exchequer announced that the government will reduce the amount it spends on international development from 0.7% of GNI to 0.5% (read more, e.g., here). This means that the government will spend £10bn on aid instead of £14bn.

This post sets out an attempt to undo this decision.

If I ever see a chance to take swift action on a political problem that is of special interest to EAs, I hope that I react as well as Sanjay did to the UK’s move toward reducing development funding. While I think the post could have been a bit more clear on how readers could follow up, I really appreciate the dedication to speed:

This post is being written quickly, as we may not have much time to act.

I also liked Sanjay’s mention of a second option in a comment (if the bill passes, trying to ensure it does so with an amendment making it temporary). Too often in politics, we focus on the single outcome we want rather than considering ways to moderate the outcome we don’t want; it’s good to see this campaign avoid that.

Review of FHI’s Summer Research Fellowship 2020

This post reviews the Future of Humanity Institute’s Summer Research Fellowship 2020 in detail. If you’re in a hurry, we recommend reading the summary of activities, lessons learned, and comparing costs and benefits sections for a quicker take.

Once again, the opening lines of a post demonstrate something I like about it. Rose offers readers a shortcut to getting some of her post’s core content, and that’s an extremely useful feature. There’s a ton of content on this forum, too much for any one person to read thoroughly (even me!), and shortcuts are a godsend for people who want to keep up but have limited time.

Anyway, there are some other things I liked about this post:

  • The “Lessons Learned” section, with its many content-dense bullet points. One of the best summaries I can remember seeing on the Forum, in that it is a tiny fraction of the total length of the post but gets at almost all of the most important information.

  • The acknowledgement of fellows’ occasional mental health difficulties, and what seemed to me like very reasonable guidance on reducing these in other similar programs.

  • The list of mistakes, which felt like something I’d be likely to learn from if I were organizing a similar program.

The winning comments

I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.

The voting process

The winning posts were chosen by five people.

All posts published during November 2020 qualified for voting, save for those in the following categories:

  • Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)

  • Posts linking to others’ content with little or no additional commentary

  • Posts which accrued zero or negative net karma after being posted

    • Example: a post which had 2 karma upon publication and wound up with 2 karma or less

Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though they broadly agree with the goals outlined above.

Judges each had ten votes to distribute among the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes they cast the previous month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.
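
To make the arithmetic concrete, here is a minimal sketch of that vote-allocation rule in Python; the function name and structure are purely illustrative, not part of any system the Forum actually runs:

    def votes_available(votes_cast_last_month: int, base: int = 10) -> int:
        """Total votes a judge can cast this month: the standard ten,
        plus "extra" votes equal to 10 minus the votes cast last month."""
        return base + (base - votes_cast_last_month)

    # The example from above: a judge who cast 7 votes last month has 13 now.
    assert votes_available(7) == 13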

The winning comments were chosen by Aaron Gertler, though other judges had the chance to nominate other comments and to veto comments they didn’t think should win.

Feedback

If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact me.
