My notes on what I liked about the post, from the announcement:
“2018 AI Alignment Literature Review and Charity Comparison” is an elegant summary of a complicated cause area. It should serve as a useful resource for people who want to learn about the field of AI alignment; we hope it also sets an example for other authors who want to summarize research.
The post is not only well-written but also well-organized, with several features that make it easier to read and understand. The author:
- Offers suggestions on how to effectively read the post.
- Hides their conclusions, encouraging readers to draw their own first.
- Discloses relevant information about their background, including the standards by which they evaluate research and their connections with AI organizations.
These features all fit with the Forum’s goal of “information before persuasion”, letting readers gain value from the post even if they disagree with some of the author’s beliefs.
This post was awarded an EA Forum Prize; see the prize announcement for more details.