We won’t generally have access to work that isn’t shared with the general public, though we may incidentally gain access to such work through private conversations that individual fund members have with researchers. Thus far, we’ve evaluated organizations based on the quality of their past research and the quality of their team.
We may also evaluate private research by assessing the quality of its general direction and of the team pursuing it. For example, I think the discourse around AI safety could use a lot of deconfusion. I recognize that such deconfusion work could be an infohazard, but I nevertheless want it carried out, and I think MIRI is one of the most competent organizations around to do it.
In the event that our decision about whether to fund an organization hinges on the content of their private research, we’ll probably reach out and ask whether they’re willing to disclose it.
This is also broadly representative of how I think about evaluating opportunities.