Thanks for the writeup, Nathan; I am indeed excited about the possibility of making better grants through forecasting/futarchic mechanisms. So I’ll start from the other direction: instead of reaching for futarchy as a hammer, start by asking what major problems grantmakers currently face.
The problem that seems most important to solve: “finding projects that turn out to be orders of magnitude more successful/impactful than the rest”. Paul Graham describes funding seed-stage startups as “farming black swans”, which rings true to me. Looking at two example rounds from ACX Grants, which I’ve been involved in:
ACX Grants: Many of the projects look good, but a handful seem to have gotten outlier success; I would count Lars and Will’s Valuebase, the Oxfendazole group, and our own Manifold as having gone on to raise millions in further funding.
ACX Forecasting Mini-grants: Still a bit early to tell, but OPTIC and BaseRateTimes (which we missed!) seem to have hit their goals and are continuing to work on cool things.
So right now, I’m most interested in mechanisms that help us find such founders/projects. Just daydreaming here, is there any kind of prediction mechanism that can turn out a report as informative as the ACX Grants 1-year project update? The information value in most prediction markets is “% chance given by the market”, which misses out on the valuable qualitative sketches given by a retroactive writeup.
Other promising things:
Asking grantees to set up markets for their own outcomes, e.g. “If funded, will we successfully publish a paper that receives >10 citations within 1 year?” This might clarify exactly what goals the grantees are trying to hit.
Doing some kind of impact analysis for alignment work in past years; imagine a kind of “AI Safety Nobel Prizes” which identify what work turned out to be the most important. This would give future forecasting tools something concrete to predict on.
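To make the first idea concrete, here is a minimal sketch of what a grantee-defined outcome market could look like as a data structure. Everything here (the `OutcomeMarket` class, its fields, the resolution logic) is hypothetical and for illustration only; it is not any real platform's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OutcomeMarket:
    """A grantee-defined market tied to one concrete, checkable goal.
    (Hypothetical structure for illustration; not a real platform API.)"""
    grantee: str
    question: str              # the yes/no goal the grantee commits to
    deadline_months: int       # resolution horizon after funding
    resolution: Optional[bool] = None  # None until the deadline passes

    def resolve(self, goal_met: bool) -> None:
        """Record the outcome once the deadline has passed."""
        self.resolution = goal_met

# Example: a grantee commits to a measurable publication goal up front.
market = OutcomeMarket(
    grantee="Example Lab",
    question="If funded, will we publish a paper that receives >10 citations within 1 year?",
    deadline_months=12,
)
market.resolve(goal_met=True)
print(market.resolution)  # True
```

The point of forcing the goal into this shape is that it has to be unambiguous enough to resolve, which is exactly the clarifying pressure described above.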
Many of the projects look good, but a handful seem to have gotten outlier success; I would count Lars and Will’s Valuebase, the Oxfendazole group, and our own Manifold as having gone on to raise millions in further funding.
Do you think there was a sense that this might be the case?
So right now, I’m most interested in mechanisms that help us find such founders/projects. Just daydreaming here, is there any kind of prediction mechanism that can turn out a report as informative as the ACX Grants 1-year project update?
I guess you could encourage anyone to make markets, not just the funders, then have some way to select the 10 most interesting markets. If you wanted, you could try running an LLM to generate text for some kind of premortem. Seems a bit galaxy-brained, though.
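One way to sketch “select the 10 most interesting markets”: score each market by how uncertain it is (probability near 50%) weighted by how much attention it has drawn. The scoring rule below is my own illustrative heuristic, not an established metric, and the example markets are made up:

```python
import math

def interestingness(prob: float, num_traders: int) -> float:
    """Illustrative heuristic: binary entropy of the market probability
    (peaks at p=0.5) scaled by log of trader count as an attention proxy."""
    if prob <= 0.0 or prob >= 1.0:
        entropy = 0.0  # fully resolved markets carry no remaining uncertainty
    else:
        entropy = -(prob * math.log2(prob) + (1 - prob) * math.log2(1 - prob))
    return entropy * math.log1p(num_traders)

# (question, market probability, number of traders) -- made-up examples
markets = [
    ("Will grantee A publish the paper?", 0.93, 40),
    ("Will grantee B ship the tool?", 0.48, 25),
    ("Will grantee C hit 1000 users?", 0.55, 8),
]

# Rank markets by the heuristic and take the top 10.
top = sorted(markets, key=lambda m: interestingness(m[1], m[2]), reverse=True)
for question, prob, traders in top[:10]:
    print(f"{question}  p={prob:.2f}, traders={traders}")
```

Under this scoring, a contested market with many traders outranks a near-certain one, which matches the intuition that the 50/50 markets are where a forecast is actually informative.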