EA Forum Prize: Winners for July 2020

This post is arriving late — my fault, not that of any other judge. We’re catching up on a Prize backlog and expect to be current again by the time October prizes are given.

CEA is pleased to announce the winners of the July 2020 EA Forum Prize!

The following users were each awarded a Comment Prize ($75):

  • Jason Schukraft on theories about the first historical instance of suffering

  • Bara Hanzalova on region-level cause prioritization research

  • John Halstead on how recent climate research influenced his current model

  • Asya Bergal on value alignment and public discussion of uncertainty

See here for a list of all prize announcements and winning posts.

What is the EA Forum Prize?

Certain posts and comments exemplify the kind of content we most want to see on the EA Forum. They are well-researched and well-organized; they care about informing readers, not just persuading them.

The Prize is an incentive to create content like this. But more importantly, we see it as an opportunity to showcase excellent work as an example and inspiration to the Forum’s users.

About the winning posts and comments

Note: I write this section in first person based on my own thoughts, rather than by attempting to summarize the views of the other judges.

Collection of good 2012-2017 EA Forum posts

This post is exactly what it sounds like, and I don’t have much to say, other than:

  1. I thought the list was full of great selections, and found that the pieces I hadn’t heard of before made for excellent reading.

  2. I really liked the idea. As the author notes, Forum karma calculations have changed over time. This makes it tricky to compare older and newer articles when looking through All Posts for good content to read in front of the fire on a cold winter morning (at least, that’s how I like to imagine people using the Forum). Highlighting comparatively high-karma posts from the past is already useful; adding categories and editorial judgment is even better.

I recommend following the author’s instructions: “reading the titles and clicking on the ones that seem interesting and relevant to you.” That said, the collection contains quite a few of my favorite suggestions.

Use resilience, instead of imprecision, to communicate uncertainty

In these important cases, one is hamstrung if one only has ‘quick and dirty’ ways to communicate uncertainty in one’s arsenal: our powers of judgement are feeble enough without saddling them with lossy and ambiguous communication too.

Communicating uncertainty is a core part of EA thinking. We are almost never certain about many important things (e.g. the risks posed by different existential threats), so we need to make a lot of statements like “there’s an X% chance that Y will happen by the year Z.”

We often round our estimates to avoid sounding too certain — an 11.2% chance is more likely to sound overconfident or silly than a 10% chance, or “around a 10% chance.” However, rounding our estimates and using vague terms like “around” and “roughly” stop us from communicating precisely, leading to the loss of valuable information. We may be wrong about our 11.2% estimate, but unless we came up with that number at random or by badly misinterpreting the evidence, it’s likely to be more accurate than the rounded 10%. (Or, as the author puts it, “you lose half the information if you round percents to per-tenths.”)
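One way to unpack that claim (this is my own back-of-the-envelope reading, not a derivation the author spells out): treat a forecast as picking one value out of a set of equally likely options. Stated to the nearest percent, it distinguishes about 100 values; rounded to the nearest ten percent, only about 10.

    log_2(100) ≈ 6.64 bits  (percent-level precision)
    log_2(10)  ≈ 3.32 bits  (per-tenth precision)
    3.32 / 6.64 = 0.5       (roughly half the information)

This is a simplification (real forecasts don’t use the whole 0–100% range uniformly), but it conveys why the author treats rounding as a genuine loss rather than mere politeness.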

That was my attempt to summarize the first few paragraphs of the post, but I recommend reading it all; the author makes a lot of great points about ways to get around the problem of sounding overconfident, how the value of precision becomes clear when many forecasts are being made, etc. Along the way, he:

  • Cites several relevant papers, despite having a mathematically solid argument even without direct evidence.

  • Shares examples of how to carry out the types of forecasting communication he recommends (for example, by including confidence intervals).

  • Clarifies that some occasions don’t really call for precision (and goes into further detail on pros and cons in the comments).

3 suggestions about jargon in EA

Before using jargon [...] see whether you can say the same idea without the jargon, at least in your own head. This may help you realise that you’re unsure what the jargon means. Or it may help you realise that the idea is easy to convey without the jargon.

EA is a hotbed of specialized language (“jargon”). Sometimes, this lets us quickly convey complex ideas and make rapid progress; sometimes, it leads us to talk past each other, or the people we hope to convince. I really enjoyed this post on how to avoid the bad side of jargon. Some of my favorite features:

  • The author provides examples of how common terms are misused (when you say “existential risk,” do you mean the same thing as Nick Bostrom when he says it? What exactly is the “unilateralist’s curse”?).

  • The author takes his own medicine: He recommends linking to the definitions of uncommon words when you first use them, and does this himself throughout the post.

  • The author presents stories to back up his points, which made them especially resonant to me (I would not want to be the member of the EA community who tried to claim that EA had originated a common economic concept… in front of someone who studied economics).

The academic contribution to AI safety seems large

I argue that [AI safety] is less neglected than it seems, because some academic work is related, and academia is enormous.

This post proposes an intriguing theory — that academics working in areas “adjacent” to AI safety likely contribute as much as the smaller community of dedicated safety researchers — and backs it up with strong reasoning, multiple quantitative models, and many references to external academic work from sources that aren’t often quoted on the Forum.

I don’t know how I feel about the tentative conclusions reached by the author, but it seems to me that anyone who reads this post closely will have enough information to begin forming their own conclusions. It’s a great example of scout mindset; the information here isn’t meant to persuade readers so much as to lay out both sides of a debate. I also really appreciated the “caveats and future work” section — adding this to a post makes it easier for other authors to follow up, and thus encourages progress on difficult questions.

Objections to value alignment between effective altruists

EA members gesture at moral uncertainty as if all worldviews are considered equal under their watch, but in fact the survey data reveals cognitive homogeneity. Homogeneity churns out blind spots.

When members of a community share a number of common values, they tend to work together more easily. But they might also develop collective blind spots and have trouble incorporating new ideas that clash with their established values or positions. This post lays out a number of ways in which “value alignment” can be dangerous, while describing phenomena that worry the author.

While I’d have liked more concrete examples of some of the author’s concerns, I appreciated the primary sources she brought in — ample evidence that value alignment is both a high priority and rather vaguely defined, which is troubling given the importance placed upon it by the community. I also share the author’s concerns about the infrequency with which EA researchers attempt to publish in peer-reviewed journals (one of many points I’d be interested to see targeted by a follow-up post).

Finally, I thought this part of the conclusion was nicely put:

“I do not propose a change to EAs basic premise. Instead of optimising towards a particular objective, EA could maximise the chance of identifying that objective.”

The winning comments

I won’t write up an analysis of each comment. Instead, here are my thoughts on selecting comments for the prize.

The voting process

The winning posts were chosen by five judges.

All posts published in the titular month qualified for voting, save for those in the following categories:

  • Procedural posts from CEA and EA Funds (for example, posts announcing a new application round for one of the Funds)

  • Posts linking to others’ content with little or no additional commentary

  • Posts which accrued zero or negative net karma after being posted

    • Example: a post which had 2 karma upon publication and wound up with 2 karma or less

Voters recused themselves from voting on posts written by themselves or their colleagues. Otherwise, they used their own individual criteria for choosing posts, though these broadly aligned with the goals outlined above.

Judges each had ten votes to distribute among the month’s posts. They also had a number of “extra” votes equal to [10 - the number of votes they cast the previous month]. For example, a judge who cast 7 votes last month would have 13 this month. No judge could cast more than three votes for any single post.

——

The winning comments were chosen by Aaron Gertler, though the other judges had the chance to suggest other comments or veto comments they didn’t think should win.

Feedback

If you have thoughts on how the Prize has changed the way you read or write on the Forum, or ideas for ways we should change the current format, please write a comment or contact me.
