Some thoughts as I was reading your post:
> I don’t see forecasting as being a very targeted use of time for making the world better
I disagree with this. I think one generator of the disagreement is that I like to think of what I do when forecasting as “improving my models of the world”, with assigning probabilities being a tool I use to cull inaccurate models. One past example where I think this happened was with estimates of [nuclear risk](https://forum.effectivealtruism.org/posts/2nDTrDPZJBEerZGrk/samotsvety-nuclear-risk-update-october-2022), where I think forecasting was a useful lens & affected some decisions.
> this contest can make people less effective because it incentivises them to work on forecasting when they otherwise would have worked on generating solutions
At the same time, forecasting can make it more legible or more apparent to people that working on AI safety is important. It could also end up concluding that it is *not* that important. I’m also interested in other ways to do that, like [this contest](https://forum.effectivealtruism.org/posts/noDYmqoDxYk5TXoNm/usd5k-challenge-to-quantify-the-impact-of-80-000-hours-top) to quantify the value of different career paths.
> [some stuff about how direct work would be better]
So I agree that conditional on AI being a top priority, direct work is more important. As Misha mentioned, however, one still has to determine to what extent that is the case. Forecasting isn’t the only tool to do that, though.
> These words don’t have strict definitions, but I think of “forecasting” as spending optimisation on predicting what will happen, and “problem-solving” as trying to change what will happen (by generating new ideas and solutions). Forecasting is differentiating between what’s already known, problem-solving is generating something that doesn’t exist yet.
Idk, man, I like the “acquiring better models of the world” framing better.
You might also get some mileage out of this old blogpost of mine: Building Blocks of Utility Maximization; maybe rewording your objections in terms of the specific parts of expected utility maximization will be clarifying.
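To make that framing concrete (this is just the textbook decomposition of expected utility maximization, not anything specific from that post): the decision rule is

$$a^* = \arg\max_{a \in A} \sum_{s} P(s \mid a)\, U(s)$$

and on my reading, forecasting mostly sharpens the $P(s \mid a)$ terms, while problem-solving mostly expands the action set $A$; better models of the world feed into both.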
Overall I’d guess most of the disagreement can be rounded off to you thinking that AI safety is known to be the top priority, and so the benefits of forecasting in terms of prioritization are pretty small. Is that fair?
I kinda disagree with yesterday-me on how important these arguments are. I’m not entirely sure why. I think writing out this post helped me see how limited they are, and that decision-relevant evidence about specific cases will likely overwhelm them. But anyway:
Clarification
> Overall I’d guess most of the disagreement can be rounded off to you thinking that AI safety is known to be the top priority, and so the benefits of forecasting in terms of prioritization are pretty small. Is that fair?

I’m not trying to argue the object level. I’m instead suggesting reasons why direct work could be a higher priority under a greater range of uncertainty than people might think.
If this is true, it doesn’t necessarily mean that people should deprioritise forecasting. But it does mean that if your estimates are already within the range where direct work is higher priority, and expected evidence seems unlikely to shift your estimates out of that range, then forecasting is marginally wastefwl.
The Future Fund’s estimates and their resilience (or those of a large part of the community) might not be within that range, however, in which case they should probably prioritise forecasting.
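A toy sketch of the “estimates already within the range” point, with entirely made-up numbers (none of them from the post; it’s only meant to show when the expected value of a forecast rounds to zero):

```python
# Toy value-of-information calculation with made-up numbers (hypothetical, not from the post).
# Decision: direct work pays off only if some proposition H holds (current credence p);
# a "forecast" is modelled as one noisy binary signal about H, observed before deciding.

def posterior(p, tpr, fpr, signal):
    """Bayesian update of credence p on a binary signal with given true/false positive rates."""
    if signal:
        return p * tpr / (p * tpr + (1 - p) * fpr)
    return p * (1 - tpr) / (p * (1 - tpr) + (1 - p) * (1 - fpr))

VALUE_DIRECT, VALUE_ALT = 1.0, 0.5   # hypothetical payoffs; direct work only pays off if H holds

def best_action_value(p):
    """Value of acting optimally given credence p."""
    return max(p * VALUE_DIRECT, VALUE_ALT)

p = 0.8              # current credence in H (already inside the "direct work wins" range)
tpr, fpr = 0.8, 0.3  # made-up signal quality

value_without_forecast = best_action_value(p)
p_signal = p * tpr + (1 - p) * fpr
value_with_forecast = (p_signal * best_action_value(posterior(p, tpr, fpr, True))
                       + (1 - p_signal) * best_action_value(posterior(p, tpr, fpr, False)))

print(f"expected value of forecasting first: {value_with_forecast - value_without_forecast:.3f}")
# With these numbers the signal can't flip the decision, so the value is ~0:
# the "estimates already within the range" case. A lower p or a sharper signal changes that.
```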
I’m only saying “if you think this, then that”. The arguments could still be valid (and therefore potentially usefwl), even if the premises don’t hold in specific cases.
I’m not saying “forecasting is wastefwl”; I’m saying “here are some reasons that may help you analyse”. My opinions shouldn’t matter to the value of the post, since I explicitly say that people shouldn’t defer to me.
The arguments are entirely filtered for anti-forecasting, because I expect people to already be aware of the pro-forecasting arguments I currently have on offer, and I only wish to provide tools that they may not already have.
Role-based socioepistemology, and “forecasters” vs “explorers”
I’m supposed to try to figure out what a good research community looks like, and that will involve different people filling different roles. I believe there are tangible methodological differences between optimal forecasting and optimal exploring, and I want to refine my model of what those differences are.
When I talk about “forecasters”, it’s usually because I want to contrast that with what I think good methodologies for “explorers” are. Truth is, I have no idea how to do good forecasting, so my picture of it usually ends up being rather strawman-ish.
When I say “explorer” I think of people like V.S. Ramachandran, Feynman, and Kary Mullis: people who aren’t afraid of being wrong a bunch in order to be extremely right on occasion.
Forecasters, by contrast, need to produce work that people can safely defer to and use for prioritising between consequential decisions, so the negative impact of being wrong is much greater.
Exploring helps forecasting more than the other way around
The way I usually update my estimates on the importance of doing X (e.g. animal advocacy or AI alignment) is by spending most of my time actually doing X and thereby learning how worthwhile it is.
If X hits diminishing returns, or I uncover evidence that reduces my confidence in X, then I’ll spend more resources looking for alternative paths.
This way, I still get evidence related to prioritisation and forecasting, but I also make progress on object-level projects. The flow of forecasting-relevant information from project execution is often sufficient that I don’t need to spend much time explicitly forecasting.
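Stated very roughly in code (my own restatement of the loop above; the thresholds and splits are arbitrary placeholders, not a recommendation):

```python
# Rough restatement of the loop above; thresholds and split ratios are arbitrary placeholders.

def allocate_next_unit(observed_returns, credence_in_project,
                       returns_floor=0.2, credence_floor=0.5):
    """Split the next unit of effort between executing the current project and exploring alternatives."""
    if observed_returns < returns_floor or credence_in_project < credence_floor:
        # Diminishing returns or weakened confidence: shift resources toward looking for alternative paths.
        return {"execute": 0.5, "explore": 0.5}
    # Otherwise keep executing; prioritisation-relevant evidence arrives as a byproduct of the work itself.
    return {"execute": 0.9, "explore": 0.1}

# e.g. healthy returns and high confidence -> mostly keep doing the object-level work
print(allocate_next_unit(observed_returns=0.7, credence_in_project=0.9))
```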
(I realise the terms are insufficiently well-defined, but hopefwly I communicate my intention.)
It seems plausible that if something like this algorithm were widely adopted in the community, we would not only make progress on important projects faster, but also uncover more evidence related to prioritisation and forecasting.