Meta: I’m requesting feedback and gauging interest. I’m not a grantmaker.
You can use prediction markets to improve grantmaking. The assumption is that having accurate predictions about project outcomes benefits the grantmaking process.
Here’s how I imagine the protocol could work:
Someone proposes an idea for a project.
They apply for a grant and make specific, measurable predictions about the outcomes they aim to achieve.
Examples of grant proposals and predictions (taken from here):
Project: Funding a well-executed podcast featuring innovative thinking from a range of cause areas in effective altruism.
Prediction: The podcast will reach 10,000 unique listeners in its first 12 months and score an average rating of 4.5/5 across major platforms.
Project: Funding a very promising biology PhD student to attend a one-month program run by a prestigious US think tank.
Prediction: The student will publish two policy-relevant research briefs within 12 months of attending the program.
Project: A 12-month stipend and budget for an EA to develop programs increasing the positive impact of biomedical engineers and scientists.
Prediction: Three biomedical researchers involved in the program will identify or implement career changes aimed at improving global health outcomes.
Project: Stipends for 4 full-time-equivalent (FTE) employees and operational expenses for an independent research organization conducting EA cause prioritization research.
Prediction: Two new donors with a combined giving potential of $5M+ will use this organization’s recommendations to allocate funds.
A prediction market is created for each proposed outcome, conditional on the project receiving funding. A portion of the potential grant money is staked to subsidize the markets and give people an incentive to trade. A minimal sketch of how these pieces could fit together follows.
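To make the mechanism concrete, here is a minimal Python sketch of how a proposal, its measurable predictions, and the subsidized conditional markets might fit together. Everything here is an assumption of mine rather than any existing platform's API: the class names, the use of an LMSR (logarithmic market scoring rule) market maker as the subsidy mechanism, and the 2% subsidy fraction are all illustrative choices.

```python
# Illustrative sketch only: names, LMSR choice, and subsidy fraction are assumptions.
import math
from dataclasses import dataclass, field


@dataclass
class Prediction:
    statement: str          # e.g. "10,000 unique listeners within 12 months"
    resolve_by_month: int   # when the question can be resolved


@dataclass
class ConditionalMarket:
    """Binary LMSR market that only resolves if the grant is funded."""
    prediction: Prediction
    liquidity: float        # LMSR b-parameter, funded from the staked subsidy
    q_yes: float = 0.0      # outstanding YES shares
    q_no: float = 0.0       # outstanding NO shares

    def cost(self, q_yes: float, q_no: float) -> float:
        b = self.liquidity
        return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

    def price_yes(self) -> float:
        """Current implied probability of the outcome, conditional on funding."""
        b = self.liquidity
        e_yes = math.exp(self.q_yes / b)
        e_no = math.exp(self.q_no / b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares: float) -> float:
        """Return the cost a trader pays to buy `shares` YES shares."""
        before = self.cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self.cost(self.q_yes, self.q_no) - before


@dataclass
class GrantProposal:
    title: str
    requested_amount: float
    predictions: list[Prediction] = field(default_factory=list)

    def open_markets(self, subsidy_fraction: float = 0.02) -> list[ConditionalMarket]:
        """Stake a small fraction of the potential grant as LMSR liquidity."""
        subsidy = subsidy_fraction * self.requested_amount / max(len(self.predictions), 1)
        # LMSR worst-case loss for a binary market is b * ln(2), so back out b.
        b = subsidy / math.log(2)
        return [ConditionalMarket(prediction=p, liquidity=b) for p in self.predictions]


# Example usage, mirroring the podcast proposal above:
proposal = GrantProposal(
    title="EA ideas podcast",
    requested_amount=100_000,
    predictions=[Prediction("10,000 unique listeners in first 12 months", resolve_by_month=12)],
)
(market,) = proposal.open_markets()
market.buy_yes(200)                   # a trader who expects success buys YES
print(round(market.price_yes(), 2))   # implied P(outcome | funded)
```

The LMSR here is just one standard way to implement "stake some grant money to make people trade": the staked amount bounds the market maker's worst-case loss, so the subsidy is exactly what the grantmaker is willing to pay for the information.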
Obvious criticisms are that:
Markets can be gamed, so the potential grantee shouldn’t be allowed to bet.
Exploratory projects and research can’t make predictions like this.
A lot of people need to participate in the market.
I’m also broadly a fan of this sort of direction, but have come to prefer some alternatives. Some points:
1. I believe some of this is already being done at OP. Some grantmakers make specific predictions, and some of those might later be evaluated. I think these are mostly private. My impression is that people at OP believe they have critical information that can’t be made public, and I also assume it might be awkward to make any of this public.
2. Personally, I’d flag that writing and resolving custom questions for each specific grant can be a lot of work. In comparison, it can be great when you can use general-purpose questions, like “how much will this organization grow over time?” or “based on a public ranking of the value of each org, where will this org be?”
3. While OP doesn’t seem to post public prediction market questions on specific grants, they do sponsor Metaculus questions and similar on key strategic questions. For example, there are tournaments on AI risk, bio, etc. I’m overall a fan of this.
4. In the future, AI forecasters could do interesting things here. OP could take the best ones, and these could then make private forecasts of many elements of any program.
Re 2: I agree that this is a lot of work, but it’s small compared to how much money goes into grants. Some of the predictions are also quite straightforward to resolve.
Well, glad to hear that they are using it.
I believe that an alternative could be funding a general direction (e.g., funding everything in AI safety), but I don’t think these approaches are mutually exclusive.