Thanks for the replies! Some quick responses.

First, again, overall, I think we generally agree on most of this stuff.
Perhaps, but I think you gain a ton of info from actually trying to do stuff and iterating. I think prioritization work can sometimes seem more intuitively great than it ends up being, relative to the iteration strategy.
I agree to an extent. But there are some very profound prioritization questions that haven’t been researched much, and that I don’t expect experimentation to shed much light on in the next few years. I’d still like us to do experimentation (if I were in charge of a $50M fund, I’d start spending it soon, just not as quickly as I would otherwise). For example:
How promising is it to improve the wisdom/intelligence of EAs vs. others?
How promising are brain-computer interfaces vs. rationality training vs. forecasting?
What is a good strategy for encouraging AI that helps with epistemics, and where could philanthropists have the most impact?
What kinds of benefits can we generically expect from forecasting/epistemics? How much should we aim for EAs to spend here?
I would love for this to be true! Am open to changing my mind based on a compelling analysis.
We might be disagreeing a bit on what the bar for “valuable for EA decision-making” is. I see a lot of forecasting as being like accounting: it rarely leads to a clear, large decision, but it’s good to do and it steers organizations in better directions. I personally rely heavily on prediction markets for key understandings of EA topics, and people like Scott Alexander and Zvi seem to as well. I know less about the inner workings of OP, but the fact that they continue to pay for predictions on their own questions seems like a good sign. All that said, I think that ~95%+ of Manifold and a lot of Metaculus is not useful at all.
I think you might be understating how fungible OpenPhil’s efforts are between AI safety (particularly the governance team) and forecasting.
I’m not sure how much to focus on OP’s narrow choices here. I found it surprising that Javier went from governance to forecasting, and that previously it was the (very small) governance team that did forecasting. It’s possible that if I evaluated the situation and had control over it, I’d recommend that OP move marginal resources from forecasting to governance. But I’m much less interested in that question than in “is forecasting competitive with some EA activities, and how can we do it well?”
Seems unclear what should count as internal research for EA, e.g. are you counting OP worldview diversification team / AI strategy research in general?
Yep, I’d count these.