To complement Tyler’s comment—the field of AI safety is not similar to that of global health and poverty in this regard. When evaluating health interventions, you’re considering solutions to widespread, well-understood problems, on time scales of a few decades at most. In contrast, AI safety (from the EA perspective) mostly deals with future technologies, and has made little measurable progress in mitigating their dangers. There’s no direct evidence you can use to judge AI safety orgs with high confidence. So at best you’re going to get evaluations that are much less robust and subject to much more disagreement.