I expect different people on the fund will have quite different answers to this, so here is my perspective:
I don’t expect to score projects or applications on any straightforward rubric, any more than a startup VC should for the companies they invest in. Obviously, things like general competence, past track record, a clear value proposition, and neglectedness matter, but by and large I expect to recommend grants based on my models of what is globally important, and on my expectation of whether the grantee’s proposed plan will actually work, doing something that I guess you could call “model-driven granting.”
What this means in practice is that the things I look for in a potential grantee will differ quite a bit depending on what precisely they are planning to do with the resources. I expect there will be many applicants who display strong competence and rationality, but who are running on assumptions that I don’t share, or are trying to solve problems that I don’t think are important, and I don’t plan to make recommendations unless my personal models predict that the plan the grantee is pursuing will actually work. This obviously means I will have to invest significant time and resources into actually understanding what grantees are trying to achieve, which I am currently planning to make room for.
I can imagine some exceptions to this, though. I think we will run across potential grantees who are asking for money mostly to increase their own slack, and who have a past track record of doing valuable work. I am quite open to grants like this, think they are quite valuable, and expect to give out multiple grants in this space (barring logistical problems with doing so). In that case, I expect to mostly ask myself whether additional slack and freedom would make a large difference in that person’s output, which will again differ quite a bit from person to person.
One other type of grant that I am open to is rewards for past impact. I think rewarding people for past good deeds is quite important for setting up long-term incentives, and evaluating whether an intervention had a positive impact is obviously a lot easier after the project is completed than before. In this case I again expect to rely heavily on my personal models of whether the completed project had a significant positive impact, and will base my recommendations on that estimate.
I think this approach will sadly make it harder for potential grantees to evaluate whether I am likely to recommend them for a grant, but I think it is less likely to give rise to various goodharting and prestige-optimization problems, and it will allow me to make much more targeted grants than a more rubric-driven approach would. It’s also really the only approach that I expect will teach me which interventions do and don’t work in the long run, by exposing my models to the real world and seeing whether my concrete predictions of how various projects will go come true.
What generally are your criteria for evaluating opportunities?
My answer above is also broadly representative of how I think about evaluating opportunities.
I also think this sort of question might be useful to ask on a more individual basis: I expect each fund manager to have a different answer that informs which projects they put forward to the group for funding, and which projects they’d encourage you to tell them about.