Executive summary: Surveys of the EA and AI alignment communities reveal important insights about cause prioritization, research directions, demographics, personality traits, and moral foundations that can help guide the future work and priorities of both communities.
Key points:
1. Alignment researchers generally don’t believe current research is on track to solve alignment before AGI, suggesting additional approaches should be pursued, especially neglected ones.
2. Alignment researchers view capabilities research and alignment as not mutually exclusive, contrary to their predictions of the community’s views.
3. EAs and alignment researchers significantly overestimate how much intelligence is valued in their communities compared to other skills like collaboration and work ethic.
4. EAs are lukewarm on longtermist causes compared to global health/development and animal welfare, despite predicting the opposite.
5. Most alignment researchers don’t expect AGI within 5 years, but incorrectly predict the community does expect this.
6. EAs and alignment researchers differ in moral foundations, with EAs higher in compassion and alignment researchers higher in liberty. Both are low in traditionalism.
7. The communities differ in key personality traits and demographics. Alignment skews much more heavily male (9:1) and younger compared to EA (2:1 male:female).
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
7. The communities differ in key personality traits and demographics. Alignment skews much more heavily male (9:1) and younger compared to EA (2:1 male:female).
Though the article does disclaim this point ever-so-slightly …
“While this gender distribution is not unfamiliar in engineering spaces …”
… it would be difficult to overstate how profoundly a simple skew in gender can shape everything that results therefrom; one should keep the ‘base-rate fallacy’ in mind when drawing conclusions, since the sample may or may not reflect the EA community as a whole (e.g. if the forums introduce an implicit selection bias into the sample — a toy calculation below illustrates how strong such an effect would need to be). Note the lack of comments pertaining to the findings regarding gender!
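To make the sampling-bias worry concrete, here is a minimal back-of-the-envelope sketch (not from the original post): it asks how lopsided response rates would have to be for a population with EA’s roughly 2:1 male:female ratio to produce the alignment sample’s 9:1 ratio. Only the two ratios come from the summary above; the function name, framing, and any response-rate figures are hypothetical.

```python
# Toy sanity check on the base-rate worry: if the underlying population looked
# like the EA survey's ~2:1 male:female ratio, how much more often would men
# have to respond than women for the sample to show a ~9:1 ratio?
# (Hypothetical exercise: only the 2:1 and 9:1 figures come from the post.)

def required_response_ratio(population_mf_ratio: float, sample_mf_ratio: float) -> float:
    """How many times more likely men must be to respond than women for a
    population with `population_mf_ratio` (men per woman) to yield a sample
    with `sample_mf_ratio` (men per woman)."""
    # Observed ratio = population ratio * (male response rate / female response rate),
    # so the response-rate multiplier is simply their quotient.
    return sample_mf_ratio / population_mf_ratio

if __name__ == "__main__":
    pop = 2.0    # EA community: roughly 2 men per woman (from the survey)
    samp = 9.0   # alignment sample: roughly 9 men per woman (from the survey)
    mult = required_response_ratio(pop, samp)
    print(f"Men would need to respond {mult:.1f}x as often as women "
          f"to produce the observed skew.")  # -> 4.5x
```

A 4.5x differential response rate is large but not unimaginable for a self-selected forum sample, which is the point: without knowing response rates, the sample ratio alone cannot pin down the population ratio.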