What’s the GiveWell of AI Safety?
To complement Tyler’s comment: the field of AI safety is not comparable to global health and poverty in this regard. With health interventions, you’re evaluating solutions to widespread, well-measured problems on time scales of a few decades at most. In contrast, AI safety (from the EA perspective) mostly deals with future technologies, and the field has made little measurable progress in mitigating their dangers. There’s no direct evidence you can use to judge AI safety orgs with high confidence. So, at best, you’re going to get evaluations that are much less robust and attract much more disagreement.
There isn’t one exactly, but poking around the grants made by Open Philanthropy and EA Funds will give you a good idea of which orgs and projects look promising to the experts who disburse those funds.