Seems like a lot of specific, quite technical criticisms. I don’t endorse Thorstad’s work in general (nor anti-endorse it), but often when he cites things I find them valuable. This has enough material that it seems worth reading.
I think my main disagreement is here:
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so” … I think the rationalist mantra of “If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics” will turn out to hurt our information landscape much more than it helps.
I weakly disagree here. I am very much in the “make up statistics and be clear about that” camp. I do disagree a bit with AI 2027 in that they don’t always label their forecasts with their median (which, it turns out, wasn’t 2027).
I think it is worth having and tracking individual predictions, though I acknowledge the risk that people will take them too seriously. That said, once you aggregate enough forecasters I think this info does become publishable (Katja Grace’s AI survey contains a lot of forecasts and is literally published).
I weakly disagree here. I am very much in the “make up statistics and be clear about that” camp.
I’m sympathetic to that camp, but I think it has major epistemic issues that largely go unaddressed:
It systematically biases away from extreme probabilities: it’s hard to assert a probability below 10⁻³, for example, yet many real-world probabilities are that low, and in hindsight many credences look like they should have been. (A toy numerical sketch of this follows the list.)
By focusing on very specific pathways towards some outcome, it diverts attention towards easily definable issues, and hence away from more complex pathways to the same or value-equivalent outcomes.[1]
It strongly emphasises point credence estimates over distributions, the latter of which are IMO well worth the extra effort, at least whenever you’re broadcasting your credences to the rest of the world (see the distribution sketch after this list).
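To illustrate the first point, here is a minimal Python sketch with entirely made-up numbers (nothing here comes from AI 2027 or the critiques): if a scenario requires several steps and a forecaster is reluctant to state anything more extreme than 10% per step, the compound estimate can end up inflated by orders of magnitude.

```python
import numpy as np

# Hypothetical scenario (numbers made up): five independent steps,
# each with a true probability of 0.02.
true_step_probs = np.full(5, 0.02)

# A forecaster unwilling to assert anything below 10% per step
# effectively clamps each estimate to a floor of 0.10.
floored_step_probs = np.clip(true_step_probs, 0.10, None)

true_scenario_prob = true_step_probs.prod()        # 0.02**5 = 3.2e-09
floored_scenario_prob = floored_step_probs.prod()  # 0.10**5 = 1.0e-05

print(f"true compound probability:    {true_scenario_prob:.1e}")
print(f"floored compound probability: {floored_scenario_prob:.1e}")
print(f"overestimate factor:          {floored_scenario_prob / true_scenario_prob:,.0f}x")
```

With these made-up inputs the floor inflates the compound probability by a factor of a few thousand, which is the kind of bias that only shows up when you let yourself write down genuinely small numbers.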
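And to illustrate the last point, a sketch of what a distributional forecast conveys beyond a point credence. The lognormal and its parameters are purely hypothetical stand-ins, not anyone’s actual forecast:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forecast (illustrative only): "years until the milestone"
# modelled as a lognormal centred near 4 years.
years_until = rng.lognormal(mean=np.log(4), sigma=0.8, size=100_000)

median = np.median(years_until)
p10, p90 = np.percentile(years_until, [10, 90])

# A point credence reports only the middle of this distribution...
print(f"median: {median:.1f} years")
# ...while the distribution also conveys how wide the uncertainty really is.
print(f"80% interval: {p10:.1f} to {p90:.1f} years")
```

Publishing the interval alongside the median makes it much harder for readers to mistake a wide, uncertain forecast for a confident one.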
By the way, I find this a strange remark:
Seems like a lot of specific, quite technical criticisms.
This sounds like exactly the sort of criticism that’s most valuable for a project like this! If their methodology were sound it might be more valuable to present a more holistic set of criticisms and some contrary credences, but David and titotal aren’t exactly nitpicking syntactic errors; IMO they’re finding concrete reasons to be deeply suspicious of virtually every step of the AI 2027 methodology.
For example, I think it’s a huge concern that the EA movement has been pulling people away from non-extinction global catastrophic work because it focused for so long on extinction being the only plausible way we could fail to become interstellar, subject to the latter being possible. I’ve been arguing for years now that the extinction focus is too blunt a tool, at least for the level of investigation the question has received from longtermists and x-riskers.
Yeah, I think you make good points. I think forecasts like these are useful on balance, and that people should then investigate them. Do you think that forecasting like this will hurt the information landscape on average?
Personally, people engaged in this kind of forecasting generally seem more capable of changing their minds. I think the AI 2027 folks would probably be pretty capable of acknowledging they were wrong, which seems like a healthy thing. Probably more so than the media and academia?
Seems like a lot of specific, quite technical criticisms.
Sure, so we agree?
(Maybe you think I’m being derogatory, but no, I’m just allowing people who scroll down to the comments to see that I think this article contains a lot of specific, quite technical criticisms. If in doubt, I say things I think are true.)