GiveWell’s Holden Karnofsky assessed the Singularity Institute in 2012 and provided a thoughtful, extensive critique of its mission and approach, which remains tied for the top post on LessWrong. It seems the EA meta-charity evaluators are still hesitant to name AI safety (and, more broadly, existential risk reduction) as a potentially effective target for donations. What are you doing to change that?