This post contains an extensive discussion of the difficulty of evaluating AI charities because they do not share all of their work due to info hazards (in the “Openness” section as well as the MIRI review). Will you have access to work that is not shared with the general public, and how will you approach evaluating research that is not shared with you or not shared with the public?
We won’t generally have access to work that isn’t shared with the general public, though we may incidentally learn about such work when individual fund members have private conversations with researchers. Thus far, we’ve evaluated organizations based on the quality of their past research and the quality of their team.
We may also evaluate private research by assessing the quality of its general direction and of the team pursuing it. For example, I think the discourse around AI safety could use a lot of deconfusion. I also recognize that such deconfusion could be an infohazard, but I nevertheless want such research to be carried out, and I think MIRI is one of the most competent organizations around to do it.
In the event that our decision about whether to fund an organization hinges on the content of their private research, we’ll probably reach out and ask whether they’re willing to disclose it.