> I weakly disagree here. I am very much in the “make up statistics and be clear about that” camp.
I’m sympathetic to that camp, but I think it has major epistemic issues that largely go unaddressed:
- It systematically biases away from extreme probabilities (it’s hard to assert a credence below some very small value, but many real-world probabilities are that low, and plenty of post-hoc credences look like they should have been below it)
- By focusing on very specific pathways towards some outcome, it diverts attention towards easily definable issues, and hence away from the prospects of more complex pathways to the same or value-equivalent outcomes.[1]
- It strongly emphasises point credence estimates over distributions, the latter of which are IMO well worth the extra effort, at least whenever you’re broadcasting your credences to the rest of the world (a quick sketch below).
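To make that last point concrete, here’s a minimal sketch in Python with entirely made-up numbers (not anyone’s actual model): chaining point estimates through a conjunctive model yields one tidy headline figure, while propagating even rough distributions through the same steps shows how wide the plausible range actually is.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Three hypothetical, made-up step probabilities in a conjunctive model.
point_estimates = [0.5, 0.3, 0.2]
print("Point-estimate product:", np.prod(point_estimates))  # 0.03

# The same steps expressed as rough, illustrative Beta distributions
# encoding uncertainty about each probability.
samples = (
    rng.beta(5, 5, n)    # centred near 0.5, fairly uncertain
    * rng.beta(3, 7, n)  # centred near 0.3
    * rng.beta(2, 8, n)  # centred near 0.2
)
print("Median:", np.median(samples))
print("90% interval:", np.quantile(samples, [0.05, 0.95]))
```

The single number and the interval tell very different stories, which is exactly the information a bare point credence throws away.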
By the way, I find this a strange remark:
> Seems like a lot of specific, quite technical criticisms.
This sounds like exactly the sort of criticism that’s most valuable for a project like this! If their methodology were sound it might be more valuable to present a more holistic set of criticisms and some contrary credences, but David and titotal aren’t exactly nitpicking syntactic errors; IMO they’re finding concrete reasons to be deeply suspicious of virtually every step of the AI 2027 methodology.
[1] For e.g., I think it’s a huge concern that the EA movement has been pulling people away from non-extinction global catastrophic risk work because it focused for so long on extinction being the only plausible way we could fail to become interstellar, subject to the latter being possible. I’ve been arguing for years now that the extinction focus is too blunt a tool, at least for the level of investigation the question has received from longtermists and x-riskers.
Ah, sorry, I misunderstood that as criticism.
I’m a big fan of developments like QRI’s process of making tools that make it increasingly easy to translate natural thoughts into more usable forms. In my dream world, if you told me your beliefs it would be in the form of a set of distributions that I could run a Monte Carlo sim on, potentially substituting my own opinions wherever I felt differently confident than you (and maybe beyond that there are still neater ways of unpacking my credences that even better tools could reveal).
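As a toy sketch of that dream world (Python; every name and distribution here is a hypothetical placeholder, not anyone’s actual beliefs or any existing tool’s API): you’d receive a model as a set of samplers, swap in your own wherever you’re differently confident, and re-run the Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Someone's stated beliefs, expressed as samplers rather than point numbers.
# All names and parameters are hypothetical placeholders.
their_beliefs = {
    "p_step_1": lambda: rng.beta(4, 6, N),
    "p_step_2": lambda: rng.beta(2, 8, N),
    "impact_if_both": lambda: rng.lognormal(mean=0.0, sigma=1.0, size=N),
}

def expected_impact(beliefs):
    """Toy combination rule: probability of both steps times impact."""
    return beliefs["p_step_1"]() * beliefs["p_step_2"]() * beliefs["impact_if_both"]()

# Run it with their credences as stated...
theirs = expected_impact(their_beliefs)

# ...then substitute my own distribution where I'm differently confident.
my_beliefs = dict(their_beliefs, p_step_2=lambda: rng.beta(1, 19, N))
mine = expected_impact(my_beliefs)

for label, s in [("theirs", theirs), ("mine", mine)]:
    print(label, "median:", np.median(s), "90% interval:", np.quantile(s, [0.05, 0.95]))
```

The point isn’t this particular toy model, it’s the workflow: the disagreement becomes “which distribution do you swap in and what does that do to the output”, rather than a clash of opaque point numbers.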
Absent that, I’m a fan of forecasting, but I worry that overnormalising the naive I-say-a-number-and-you-have-no-idea-how-I-reached-it-or-how-confident-I-am-in-it form of it might get in the way of developing it into something better.