Executive summary: This evidence-based analysis argues that AI safety grantmakers are significantly underqualified to evaluate political advocacy projects because their staffing skews heavily toward academic researchers. It calls for a strategic overhaul of hiring practices to bring in more professionals with direct political experience, warning that without this change, suboptimal funding decisions could jeopardize the effectiveness of AI governance efforts.
Key points:
The author’s census finds nearly four academic researchers for every one political advocacy expert at major AI safety grantmaking organizations, a ratio the author argues biases funding decisions.
Despite clear needs and opportunities for advocacy, grantmakers disproportionately fund academic research, potentially due to their own research-oriented backgrounds rather than objective impact considerations.
While grantmakers occasionally consult external political experts, these consultations are informal, inconsistently influential, and often involve junior personnel, failing to substitute for in-house advocacy expertise.
The lack of formal procedures and incentives to balance perspectives within teams increases the likelihood of decisions based on social comfort and internal relationships rather than strategic need.
The author urges funders to aggressively recruit seasoned political advocacy professionals into grantmaking teams and to advertise these roles in mainstream political job markets.
The piece critiques broader Effective Altruism practices, warning that without reform, EA’s grantmaking processes risk reinforcing epistemic bubbles and undermining high-stakes efforts like preventing AI-driven existential risks.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.