My notes on what I liked about the post, from the announcement:
“2018 AI Alignment Literature Review and Charity Comparison” is an elegant summary of a complicated cause area. It should serve as a useful resource for people who want to learn about the field of AI alignment; we hope it also sets an example for other authors who want to summarize research.
The post isn’t only well-written, but also well-organized, with several features that make it easier to read and understand. The author:
- Offers suggestions on how to read the post effectively.
- Hides their conclusions, encouraging readers to draw their own first.
- Discloses relevant information about their background, including the standards by which they evaluate research and their connections with AI organizations.
These features all fit with the Forum’s goal of “information before persuasion”, letting readers gain value from the post even if they disagree with some of the author’s beliefs.
This post was awarded an EA Forum Prize; see the prize announcement for more details.