Our threshold for funding is set at GiveWell-recommended charities. That is, if we don’t think a project is plausibly better than e.g. AMF, we don’t plan to recommend it.
This is because a pernicious failure mode for the project would be moving money away from good, proven projects and towards bad, unproven projects. By only recommending projects that could (in expectation) be better than AMF, we mitigate that failure mode.
In terms of funding level, the application asks how much money each project needs and what it plans to do with it, and we plan to keep asking this going forward. The goal is to ensure that the projects have room for more funding. We don’t plan to recommend specific funding levels, but I could see us doing so if donors would find it valuable.
Also, to clarify the point about the project’s crowdedness: I could see our uncrowdedness ranking improving as we learn more about the funding space. It’s certainly plausible that the project will turn out to be uncrowded.
Thanks. To be sure I’m reading that right: do you mean projects that you think are better in expectation than AMF, or projects that you think someone might reasonably consider better in expectation than AMF?
I expect that if most or all of your recommendations get funded, it would be useful to have recommendations for how much funding each can absorb before it becomes, in expectation, worse than AMF at the margin. If not all of your recommendations get funded, it would be useful to have a further ranking between them. It may be that donors are happy making these judgements, but just as you are likely to have a comparative advantage in identifying the projects, you’ll probably also be well placed to identify funding requirements and trade-offs between your recommendations.
Projects that the EAV team and expert evaluators think might be better in expectation than AMF. I used the other phrasing because we do two stages of evaluation. At the first stage, we discard projects that are “not plausibly better in expectation than AMF”, meaning it is not plausible that further evidence would show the project to be more worth funding.
We should talk on Skype about how to accurately model the crossover point at which a project goes from better than AMF to worse than AMF as it receives more funding. I agree that this would be valuable, but I don’t yet know how to determine it.
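To make the shape of the problem concrete, here is a very rough sketch in Python. The exponential diminishing-returns model and every number in it are placeholder assumptions, not estimates we’ve actually made:

```python
import math

def marginal_cost_effectiveness(funding, initial_ce, decay_rate):
    """Illustrative diminishing-returns model: value per marginal dollar
    decays exponentially as total funding grows. Purely an assumption."""
    return initial_ce * math.exp(-decay_rate * funding)

def crossover_funding(initial_ce, decay_rate, amf_ce, step=1_000, cap=10_000_000):
    """Smallest funding level at which the project's marginal
    cost-effectiveness falls below AMF's, found by simple scanning."""
    funding = 0
    while funding <= cap:
        if marginal_cost_effectiveness(funding, initial_ce, decay_rate) < amf_ce:
            return funding
        funding += step
    return None  # still clears the AMF bar at the cap

# Hypothetical project: starts out 3x as cost-effective as AMF (AMF normalised
# to 1), with returns decaying as money is absorbed.
print(crossover_funding(initial_ce=3.0, decay_rate=1e-5, amf_ce=1.0))  # ~110,000
```

The hard part, of course, is estimating the decay in marginal returns for a real project, which is exactly the thing I don’t yet know how to do.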
This is a great initiative, and a helpful write-up. Thanks Kerry.
So you want to find ventures that are expected to be better than the most effective charity (or thereabouts) in the world?
I’m a bit worried that you will rule out many fantastically valuable ventures that might otherwise be discouraged or never get off the ground.
If these ventures were to use only EA funds or mainly EA funds, then that would be right.
However, if a venture has (let’s say) a 10% chance of growing out of the EA world and attracting funding that wouldn’t otherwise be attracted, and is only 1/5 as effective as AMF, but in that case lasts for 40 years and wouldn’t happen otherwise, then it could still be worth funding?
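To put some rough numbers on that (the leverage figure below is entirely made up, just to show how the arithmetic could come out):

```python
# Toy expected-value comparison for the scenario above. The 10% breakout
# chance, 1/5-of-AMF effectiveness and 40-year duration come from the comment;
# the leverage factor (outside dollars attracted per EA dollar) is a made-up
# assumption purely for illustration.

amf_value_per_dollar = 1.0      # normalise AMF to 1 unit of value per dollar
venture_value_per_dollar = 0.2  # venture is 1/5 as effective as AMF
p_breakout = 0.10               # chance of growing beyond EA funding
years_of_outside_funding = 40   # how long the non-EA funding stream lasts
outside_dollars_per_year_per_ea_dollar = 2.0  # hypothetical leverage

# Value unlocked per EA dollar if the venture does break out
value_if_breakout = (venture_value_per_dollar
                     * years_of_outside_funding
                     * outside_dollars_per_year_per_ea_dollar)

expected_value_per_ea_dollar = p_breakout * value_if_breakout
print(expected_value_per_ea_dollar)  # 1.6 > 1.0: beats AMF under these assumptions
```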
Further, the learning from the process might be worth something significant if it’s a necessary step towards becoming an uber-effective incubator?
Obviously you want to take the highest expected value anyway, so this might be an academic discussion.
Haha, I had a look at the people behind this; forget what I said. I’m sure that between all the funders/backers you’ve got more than enough expertise to identify projects that are better than AMF. Good luck!