The “Retrospective grant evaluations of longtermist projects” idea seems like something that would work really well in conjunction with an impact market like Manifund. Retroactive evaluations must be done extremely well for impact markets to function.
Since this could be a really difficult/expensive process, randomized conditional prediction markets could also help (full explanation here). Here’s an example scheme I cooked up, with a toy code sketch after the steps:
1. Subsidize prediction markets on all of the following:
   - Conditional on Project A being retroactively evaluated by the Retroactive Evaluation Team (RET), how much impact will it have[1]?
   - Conditional on Project B being retroactively evaluated by the RET, how much impact will it have?
   - etc.
2. Randomly pick one project (say, Project G) and fund its retroactive evaluation.
3. For all the other projects’ markets, refund all of the investors and, to quote DYNOMIGHT, “use the SWEET PREDICTIVE KNOWLEDGE … for STAGGERING SCIENTIFIC PROGRESS and MAXIMAL STATUS ENHANCEMENT.”
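To make the mechanics concrete, here’s a toy sketch in Python. Everything in it is an assumption for illustration’s sake: the payout rule, the subsidy amount, and the RET’s verdict are all made up, and none of it is any real market’s API.

```python
import random

# Toy sketch of the scheme above. The payout rule, subsidy, and the
# RET's verdict are placeholder assumptions, not a real market's API.

projects = ["A", "B", "C", "D", "E", "F", "G"]

# One subsidized conditional market per project:
# project -> list of (trader, forecasted_impact, stake)
markets = {p: [] for p in projects}

def bet(project, trader, forecast, stake):
    markets[project].append((trader, forecast, stake))

bet("A", "alice", 12.0, 100.0)
bet("A", "bob", 4.0, 50.0)
bet("G", "carol", 3.5, 80.0)

def settle(bets, true_impact, subsidy=100.0):
    """Placeholder payout rule: split stakes plus subsidy by forecast accuracy."""
    pot = subsidy + sum(stake for _, _, stake in bets)
    weights = [stake / (1.0 + abs(forecast - true_impact))
               for _, forecast, stake in bets]
    total = sum(weights)
    return {trader: round(pot * w / total, 2)
            for (trader, _, _), w in zip(bets, weights)}

# Randomly pick ONE project; only its market resolves, so only one
# (expensive) retroactive evaluation ever has to be funded.
chosen = random.choice(projects)

for project, bets in markets.items():
    if not bets:
        continue
    if project == chosen:
        # 7.0 stands in for whatever impact number the RET reports.
        print(f"Project {project} resolves:", settle(bets, true_impact=7.0))
    else:
        # Condition never triggered: void the market, refund stakes,
        # and keep the forecasts as the sweet predictive knowledge.
        for trader, _, stake in bets:
            print(f"Refund {trader}'s {stake} on Project {project}")
```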
Obviously, the amount of impact would need to be metricized in some way. Again obviously, this is an incredibly difficult problem that I’m handwaving away.
The one idea that comes to mind is having the RET evaluate a subset of n projects and rank their relative impact, where 1 < n < (the total number of projects). Then, change the questions to “Conditional on Project A/B/C/etc. being retroactively evaluated, will it be ranked highest?” That avoids actually putting a number on it, but it comes with its own host of problems.
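For what it’s worth, here’s one hypothetical way that resolution could work, with a random ranking standing in for whatever the RET actually produces:

```python
import random

# Hypothetical resolution of the ranking variant: the RET ranks a random
# subset of n projects and never produces an absolute impact number.

projects = ["A", "B", "C", "D", "E", "F", "G"]
n = 3  # subset size, with 1 < n < len(projects)

evaluated = random.sample(projects, n)  # projects the RET actually examines
ranking = random.sample(evaluated, n)   # stand-in for the RET's ranking, best first

for p in projects:
    if p not in evaluated:
        print(p, "-> condition false, refund the market")
    elif p == ranking[0]:
        print(p, "-> YES (ranked highest of the evaluated subset)")
    else:
        print(p, "-> NO")
```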
Yeah, we’d absolutely love to see (and fund!) retroactive evals of longtermist projects; as Saul says, these are absolutely necessary for impact certs. For example, the ACX Minigrants impact certs round is going to need evals for distributing the $40k in retroactive funding. Scott is going to be the one to decide how the funding is divvied up, but I’d love to sponsor external evals as well.