Is marginal work on AI forecasting useful? With so much brainpower being spent on moving a single number up or down, I'd expect it to hit diminishing returns pretty fast. To what extent is forecasting a massive brain drain, such that people who are sufficiently convinced should just get to work on the object-level problems? How sensitive are your priorities over object-level projects to AI forecasting estimates (as in, how many more years out would your estimate of X have to be before they changed)?
Update: I added some arguments against forecasting here, but they are very general, and I suspect they will be overwhelmed by evidence related to specific cases.