I weakly disagree here. I am very much in the “make up statistics and be clear about that” camp.
I’m sympathetic to that camp, but I think it has major epistemic issues that largely go unaddressed:
It systematically biases away from extreme probabilities (it’s hard to assert a probability of less than 10^-3, for example, but many real-world probabilities are that small, and many post-hoc credences look like they should have been below this)
By focusing on very specific pathways towards some outcome, it diverts attention towards easily definable issues, and hence away from the prospects of more complex pathways leading to the same or value-equivalent outcomes.[1]
It strongly emphasises point credence estimates over distributions, the latter of which are IMO well worth the extra effort, at least whenever you’re broadcasting your credences to the rest of the world (a toy illustration of the difference follows below).
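As a toy illustration of that last point, here is a minimal sketch, assuming Python with NumPy; the three multiplicative factors and every number in it are invented purely for illustration, and the only point is what a single headline number hides relative to the underlying distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Three invented multiplicative factors, each expressed as a full lognormal
# distribution (medians 1.5, 0.3 and 0.1) rather than as a single number.
factor_a = rng.lognormal(mean=np.log(1.5), sigma=0.5, size=N)
factor_b = rng.lognormal(mean=np.log(0.3), sigma=0.7, size=N)
factor_c = rng.lognormal(mean=np.log(0.1), sigma=1.0, size=N)

combined = factor_a * factor_b * factor_c

point_estimate = 1.5 * 0.3 * 0.1  # multiplying the headline numbers
print(f"product of point estimates: {point_estimate:.3f}")
print(f"median of the distribution: {np.median(combined):.3f}")   # matches the point estimate
print(f"mean of the distribution:   {combined.mean():.3f}")       # pulled up by the right tail
print(f"95th percentile:            {np.percentile(combined, 95):.3f}")
```

The point estimate and the median agree, but the mean and the tail, which are often what the argument turns on, only become visible once the distributions are kept.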
By the way, I find this a strange remark:
Seems like a lot of specific, quite technical criticisms.
This sounds like exactly the most valuable sort of criticism of a project like this! If their methodology were sound it might be more valuable to present a more holistic set of criticisms and some contrary credences, but David and titotal aren’t exactly nitpicking syntactic errors; IMO they’re finding concrete reasons to be deeply suspicious of virtually every step of the AI 2027 methodology.
For example, I think it’s a huge concern that the EA movement has been pulling people away from non-extinction global catastrophic risk work because it focused for so long on extinction as the only plausible way we could fail to become interstellar, subject to the latter being possible. I’ve been arguing for years now that the extinction focus is too blunt a tool, at least given the level of investigation the question has received from longtermists and x-riskers.
Yeah, I think you make good points. I think that forecasts are useful on balance, and that people should then investigate them. Do you think that forecasting like this will hurt the information landscape on average?
Personally, people engaged in this kind of forecasting generally seem more capable of changing their minds. I think the AI 2027 folks would probably be pretty capable of acknowledging they were wrong, which seems like a healthy thing. Probably more so than the media and academia?
Seems like a lot of specific, quite technical criticisms.
Sure, so we agree?
(Maybe you think I’m being derogatory, but no, I’m just allowing people who scroll down to the comments to see that I think this article contains a lot of specific, quite technical criticisms. If in doubt, I say things I think are true.)
Ah, sorry, I misunderstood that as criticism.
Do you think that forecasting like this will hurt the information landscape on average?
I’m a big fan of developments like QRI’s process of making tools that make it increasingly easy to translate natural thoughts into more usable forms. In my dream world, if you told me your beliefs, it would be in the form of a set of distributions that I could run a Monte Carlo sim on, having potentially substituted my own opinions wherever I felt differently confident than you (and maybe beyond that there are still neater ways of unpacking my credences that even better tools could reveal). A toy sketch of that sort of workflow is below.
Absent that, I’m a fan of forecasting, but I worry that overnormalising the naive I-say-a-number-and-you-have-no-idea-how-I-reached-it-or-how-confident-I-am-in-it form of it might get in the way of developing it into something better.
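A minimal sketch of that dream-world workflow, again assuming Python with NumPy; the model structure, the five-year cut-off, and every parameter are invented for illustration and taken from no real forecast. The point is only that once beliefs are broadcast as distributions, a reader can substitute their own and rerun the simulation:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

def forecast(timeline_sampler):
    """Share of Monte Carlo samples in which a (made-up) outcome occurs within 5 years."""
    years_to_capability = timeline_sampler(N)                         # the broadcast belief
    adoption_lag = rng.lognormal(mean=np.log(2), sigma=0.5, size=N)   # a second uncertain input
    return np.mean(years_to_capability + adoption_lag < 5.0)

# The forecaster's stated distribution over timelines...
forecasters_view = lambda n: rng.lognormal(mean=np.log(3), sigma=0.4, size=n)
# ...and a reader's more sceptical substitute for the same quantity.
readers_view = lambda n: rng.lognormal(mean=np.log(8), sigma=0.8, size=n)

print(f"forecaster's credence: {forecast(forecasters_view):.2f}")
print(f"reader's credence:     {forecast(readers_view):.2f}")
```

The sampler is just a parameter: anyone who disagrees swaps in one distribution and reruns, rather than arguing over a single number whose provenance is opaque.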
I dunno, I think that sounds galaxy-brained to me. I think that giving numbers is better than not giving them, and that thinking carefully about the numbers is better still. I don’t really buy your second-order concerns (or I think they could easily go in the opposite direction).