Sure, so we agree?
Ah, sorry, I misunderstood that as criticism.
Do you think that forecasting like this will hurt the information landscape on average?
I’m a big fan of developments like QRI’s process of making tools that make it increasingly easy to translate natural thoughts into more usable forms. In my dream world, if you told me your beliefs it would be in the form of a set of distributions that I could run a Monte Carlo sim on, having potentially substituted my own opinions if I felt differently confident than you (and maybe beyond that there are still neater ways of unpacking my credences that even better tools could reveal).
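To gesture at what I mean, here's a minimal sketch, not tied to any particular tool; the parameter names, distributions, and payoff formula are all purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo samples

# "Your" stated beliefs, expressed as distributions rather than point estimates.
# (These quantities and values are hypothetical, just to show the shape of the idea.)
your_beliefs = {
    "annual_growth": lambda: rng.normal(0.03, 0.01, N),         # mean 3%, sd 1%
    "cost_per_unit": lambda: rng.lognormal(np.log(50), 0.4, N),  # median ~50
}

# If I feel differently confident about one input, I swap in my own distribution
# before running the simulation.
my_overrides = {
    "cost_per_unit": lambda: rng.lognormal(np.log(50), 0.8, N),  # wider uncertainty
}

beliefs = {**your_beliefs, **my_overrides}

# Run the Monte Carlo sim on the combined beliefs (toy payoff formula).
growth = beliefs["annual_growth"]()
cost = beliefs["cost_per_unit"]()
payoff = (1 + growth) ** 10 / cost

print(f"median payoff: {np.median(payoff):.4f}")
print(f"90% interval: {np.percentile(payoff, 5):.4f} to {np.percentile(payoff, 95):.4f}")
```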
Absent that, I’m a fan of forecasting, but I worry that overnormalising the naive I-say-a-number-and-you-have-no-idea-how-I-reached-it-or-how-confident-I-am-in-it form of it might get in the way of developing it into something better.
I’m still highly sceptical of neglectedness as anything but a first-pass heuristic for how prioritisation organisations might direct their early research. Firstly, there are so many ways a field can be ‘not neglected’ and still highly leveraged (e.g. GiveWell and Giving What We Can were only able to have the comparative impact they did because the global health field had been vigorously researched, but no-one had systematically done the individual-level prioritisation they did with the research results). Secondly, it encourages EA to reject established learning in a way I find dangerously hubristic (‘FTX weren’t irresponsible; they just took a neglected approach to fundraising!’).
If we must keep using this heuristic, it helps to introduce supporting heuristics like the one you mention, to which I’d add ‘look at the amount of input a field has received relative to the amount of input it needs to solve the issue’. Climate has had far more input than AI safety, but it’s unclear to me whether the proportion of input it’s had relative to what it needs is higher.
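As a toy version of that comparison (the figures below are placeholders purely to show the calculation, not actual estimates of either field):

```python
# 'Saturation' heuristic: input so far divided by input needed.
# All numbers are hypothetical placeholders, not estimates.
fields = {
    "climate":   {"input_so_far": 1000, "input_needed": 20000},
    "ai_safety": {"input_so_far": 10,   "input_needed": 100},
}

for name, f in fields.items():
    saturation = f["input_so_far"] / f["input_needed"]
    print(f"{name}: {saturation:.0%} of needed input so far")
```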