We (Ozzie Gooen of the Quantified Uncertainty Research Institute and I) might later be interested in expanding this work and eventually using it for forecasting, e.g., predicting whether each candidate would still seem promising after much more rigorous research.
Thanks for putting this together, this is great!
Can you expand a little bit on what you mean by this and how it might work? I’m not sure what you mean by ‘forecasting’ in this context.
On the first day, alexrjl went to Carl Shulman and said: “I have looked at 100 cause candidates, and here are the five I predict have the highest probability of being evaluated favorably by you.”
And Carl Shulman looked alexrjl in the eye and said: “These are all shit, kiddo.”
On the seventh day, alexrjl came back and said: “I have read through 1,000 cause candidates in the EA Forum, LessWrong, the old Felicifia forum, and all of Brian Tomasik’s writings. And here are the three I predict have the highest probability of being evaluated favorably by you.”
And Carl Shulman looked alexrjl in the eye and said: “David Pearce already came up with your #1 twenty years ago, but on further inspection it was revealed not to be promising. Ideas #2 and #3 are not worth much because of such and such.”
On the seventh day of the seventh week, alexrjl came back and said: “I have scraped Wikipedia, Reddit, all books ever written, and otherwise the good half of the internet for keywords related to new cause areas, and came up with 1,000,000 candidates. Here is my top proposal.”
And Carl Shulman answered: “Mmh, I guess this could be competitive with OpenPhil’s last dollar.”
At this point, alexrjl attained nirvana.
Sure. So one straightforward thing one can do is forecast the potential of each idea (i.e., evaluate how promising it is), and then just implement the best ideas, or try to convince other people to do so.
Normally, this would run into incentive problems: if forecasting accuracy isn’t evaluated, the incentive is just to make whatever forecast would otherwise benefit the forecaster. But if you have a bunch of aligned EAs, that isn’t that much of a problem.
Still, one might run into the problem that the forecasters are in fact subtly bad; maybe you suspect that they’re missing a bunch of gears about how politics and organizations work. In that case, we can still try to amplify some research process we do trust, like a funder or incubator who does their own evaluation. For example, we could get a bunch of forecasters to predict whether, after much more rigorous research, a more rigorous, senior, and expensive evaluator also finds a cause candidate exciting, and then only carry out the expensive evaluation for the ideas forecasted to be the most promising.
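To make that concrete, here is a minimal sketch in Python of the selection-and-scoring loop I have in mind; the candidate names, probabilities, aggregation rule, and verdicts below are all made up for illustration:

```python
# A minimal sketch of the amplification setup described above: cheap forecasts
# of whether an expensive evaluator would find each cause candidate exciting,
# used to decide which candidates get the expensive evaluation at all.
# All names and numbers here are hypothetical.

from statistics import mean

# Hypothetical data: each forecaster's probability that the senior evaluator
# would rate the candidate as promising after rigorous research.
forecasts = {
    "candidate_A": [0.70, 0.55, 0.80],
    "candidate_B": [0.10, 0.20, 0.15],
    "candidate_C": [0.45, 0.60, 0.50],
    "candidate_D": [0.05, 0.10, 0.02],
}

def aggregate(probabilities):
    """Pool individual forecasts; a simple mean here, though one could use a
    geometric mean of odds or an extremized pool instead."""
    return mean(probabilities)

def select_for_expensive_evaluation(forecasts, k=2):
    """Return the k candidates with the highest pooled probability of a
    favorable expensive evaluation."""
    ranked = sorted(forecasts, key=lambda c: aggregate(forecasts[c]), reverse=True)
    return ranked[:k]

def brier_score(probability, outcome):
    """Score one forecast against the evaluator's eventual verdict
    (outcome is 1 if the evaluator found the candidate exciting, else 0)."""
    return (probability - outcome) ** 2

if __name__ == "__main__":
    shortlist = select_for_expensive_evaluation(forecasts, k=2)
    print("Send to the expensive evaluator:", shortlist)

    # Once the evaluator reports back on the shortlisted candidates, the
    # forecasters can be scored, which keeps the forecasts honest over time.
    verdicts = {"candidate_A": 1, "candidate_C": 0}  # hypothetical outcomes
    for candidate, outcome in verdicts.items():
        scores = [brier_score(p, outcome) for p in forecasts[candidate]]
        print(candidate, "forecaster Brier scores:", [round(s, 3) for s in scores])
```

One could of course swap in a fancier aggregation or scoring rule; the point is just that the expensive evaluation only ever happens for the top of the forecasted ranking, and the eventual verdicts double as a check on the forecasters.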
Simultaneously, I’m interested in altruistic uses for scalable forecasting, and cause candidates seem like a rich field to experiment on. But, right now, these are just ideas, without concrete plans to follow up on them.
Thanks. I hadn’t seen those amplification posts before, seems very interesting!