I kinda disagree with yesterday-me on how important these arguments are. I’m not entirely sure why. I think writing out this post helped me see how limited they are, and decision-relevant evidence related to specific cases will likely overwhelm them. But anyway:
Clarification
Overall I’d guess most of the disagreement can be rounded off to you thinking that AI safety is known to be the top priority, and so the benefits of forecasting in terms of prioritization are pretty small. Is that fair?
I don’t try to argue the object-level. I’m instead suggesting reasons why direct work could be a higher priority under a greater range of uncertainty than people might think.
If this is true, it doesn’t necessarily mean that people should deprioritise forecasting. But it does mean that if your estimates are already within the range where direct work is the higher priority, and expected evidence seems unlikely to shift estimates out of that range, then forecasting is marginally wastefwl (there’s a toy sketch of this point after these clarifications).
The Future Fund’s estimates and their resilience (or those of a large part of the community) might not be within that range, however, in which case they should probably prioritise forecasting.
I’m only saying “if you think this, then that”. The arguments could still be valid (and therefore potentially usefwl), even if the premises don’t hold in specific cases.
I’m not saying “forecasting is wastefwl”, I’m saying “here are some reasons that may help you analyse”. My opinions shouldn’t matter to the value of the post, since I explicitly say that people shouldn’t defer to me.
The arguments are entirely filtered for anti-forecasting, because I expect people to already be aware of the pro-forecasting arguments I currently have on offer, and I only wish to provide tools that they may not already have.
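To make the threshold point above a bit more concrete, here is a minimal toy sketch. It is purely my illustration, not part of the original argument: the numbers, the threshold, and names like best_action and forecast_changes_decision are all made up. The idea is just that if every credence your forecasting effort could plausibly leave you with lands on the same side of your decision threshold as your current estimate, the forecast can’t change what you do, so its marginal decision-value is roughly zero.

```python
# Toy value-of-information check (my illustration, with hypothetical numbers).
# Suppose the choice "direct work vs. reallocate" flips only when your credence
# that AI safety is the top priority crosses some threshold.

def best_action(p, threshold=0.3):
    """Hypothetical decision rule: do direct work iff credence p exceeds the threshold."""
    return "direct work" if p > threshold else "reallocate"

def forecast_changes_decision(prior, reachable_posteriors, threshold=0.3):
    """True iff some plausible post-forecast credence flips the choice made under the prior."""
    current = best_action(prior, threshold)
    return any(best_action(q, threshold) != current for q in reachable_posteriors)

# If your credence is 0.8 and forecasting could at most move it within [0.6, 1.0],
# the decision never flips, so the forecast's marginal decision-value is ~zero.
print(forecast_changes_decision(0.8, [0.6, 0.7, 0.9, 1.0]))   # False
# If you sit near the threshold, evidence can flip the decision and pays its way.
print(forecast_changes_decision(0.35, [0.2, 0.5]))            # True
```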
Role-based socioepistemology, and “forecasters” vs “explorers”
I’m supposed to try to figure out what a good research community looks like, and that will involve different people filling different roles. I believe there are tangible methodological differences between optimal forecasting and optimal exploring, and I want to refine my model of what those differences are.
When I talk about “forecasters”, it’s usually because I want to contrast that with what I think good methodologies for “explorers” are. Truth is, I have no idea how to do good forecasting, so it usually ends up being rather strawman-ish.
When I say “explorer” I think of people like V.S. Ramachandran, Feynman, Kary Mullis, and people who aren’t afraid of being wrong a bunch in order to be extremely right on occasion.
Forecasters, by contrast, need to produce work that people can safely defer to and use for prioritising between consequential decisions, so the negative impact of being wrong is much greater.
Exploring helps forecasting more than the other way around
The way I usually update my estimates on the importance of doing X (e.g. animal advocacy or AI alignment) is by spending most of my time actually doing X and thereby learning how worthwhile it is.
If X hits diminishing returns, or I uncover evidence that reduces my confidence in X, then I’ll spend more resources looking for alternative paths.
This way, I still get evidence related to prioritisation and forecasting, but I also make progress on object-level projects. The forecasting-related flow of information from project-execution is often sufficient that I don’t need to spend much time explicitly forecasting.
(I realise the terms are insufficiently well-defined, but hopefwly I communicate my intention.)
It seems plausible that if something like this algorithm were widely adopted in the community, we would not only make progress on important projects faster, but also uncover more evidence relevant to prioritisation and forecasting.
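For what it’s worth, here is a rough sketch of the loop I’m gesturing at, purely as an illustration: the project values, the noise model, and names like work_on and allocate_effort are all made up, and the real “evidence from doing the work” is of course qualitative rather than a single number.

```python
import random

# Hypothetical stand-ins: in reality "doing X" yields rich qualitative evidence;
# here it just returns a noisy observation of a made-up true value.
TRUE_VALUE = {"AI alignment": 0.8, "animal advocacy": 0.6, "forecasting": 0.4}

def work_on(project):
    """Object-level work doubles as evidence about how worthwhile the project is."""
    return TRUE_VALUE[project] + random.gauss(0, 0.1)

def update_estimate(old, observation, weight=0.2):
    """Simple exponential-moving-average update from new evidence."""
    return (1 - weight) * old + weight * observation

def allocate_effort(estimates, periods=50, switch_threshold=0.5):
    """Mostly work on the best-looking project; only scan alternatives when its
    estimated value drops below switch_threshold (diminishing returns, or
    evidence that reduces confidence in it)."""
    log = []
    for _ in range(periods):
        best = max(estimates, key=estimates.get)
        estimates[best] = update_estimate(estimates[best], work_on(best))
        log.append(best)
        if estimates[best] < switch_threshold:
            for alt in estimates:
                if alt != best:
                    estimates[alt] = update_estimate(estimates[alt], work_on(alt))
    return log, estimates

log, final = allocate_effort({"AI alignment": 0.7, "animal advocacy": 0.7, "forecasting": 0.7})
print(final)  # estimates get updated largely as a by-product of doing the work
print(log.count("AI alignment"), "of", len(log), "periods spent on the top project")
```

The point of the sketch is only that prioritisation-relevant estimates keep getting updated as a by-product of object-level work, with explicit exploration reserved for when the current path starts looking worse.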