Sure. One straightforward thing to do is forecast the potential of each idea (i.e., evaluate how promising it is), and then implement the best ideas, or try to convince other people to do so.
Normally, this would run into incentive problems: if forecasting accuracy isn't evaluated, the incentive is just to make whatever forecast would otherwise benefit the forecaster. But if you have a bunch of aligned EAs, that isn't much of a problem.
Still, one might run into the problem that the forecasters are in fact subtly bad; maybe you suspect they're missing a bunch of gears about how politics and organizations work. In that case, we can still try to amplify some research process we do trust, like a funder or incubator who does their own evaluation. For example, we could get a bunch of forecasters to predict whether, after much more rigorous research, some more senior and expensive evaluators would also find a cause candidate exciting, and then carry out the expensive evaluation only for the ideas forecasted to be the most promising.
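To make that concrete, here is a minimal sketch of that prioritization step in Python. Everything in it is hypothetical: the candidate names, the individual forecasts, the mean aggregation, and the evaluation budget are placeholders, not a real pipeline.

```python
# Sketch: spend a limited budget of expensive evaluations on the
# cause candidates with the highest crowd-forecasted probability
# that senior evaluators would find them exciting.

# Hypothetical per-forecaster probabilities of
# P(evaluators find the candidate exciting after rigorous research).
forecasts = {
    "candidate_a": [0.70, 0.60, 0.80],
    "candidate_b": [0.20, 0.30, 0.25],
    "candidate_c": [0.55, 0.50, 0.60],
}

def aggregate(probs):
    """Aggregate individual forecasts; a simple mean here,
    though a median or extremized pool could be used instead."""
    return sum(probs) / len(probs)

# Rank candidates by aggregated forecast and evaluate only the top few.
EVALUATION_BUDGET = 2  # hypothetical number of evaluations we can afford
ranked = sorted(forecasts, key=lambda c: aggregate(forecasts[c]), reverse=True)
to_evaluate = ranked[:EVALUATION_BUDGET]

print("Send to senior evaluators:", to_evaluate)
# Once the evaluations come back, forecasters can be scored against the
# evaluators' actual verdicts, which keeps forecasting accuracy honest.
```

The key design choice is that the forecasting question resolves against the trusted evaluators' judgment, so the crowd is amplifying a process we trust rather than substituting its own taste.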
At the same time, I'm interested in altruistic uses for scalable forecasting, and cause candidates seem like a rich field to experiment on. But right now these are just ideas, without concrete plans to follow up on them.
Thanks. I hadn't seen those amplification posts before; they seem very interesting!