Rather than go through this paragraph-by-paragraph, let me pick one particular thing.
Your overall thesis is that there's little or no evidence behind many models of AI risk.
By Ad-hoc predictive models, you've cited a sum total of 2 blog posts criticising AI 2027 - hardly the "academic consensus" you ask for in that paragraph.
You also don't actually point to any specific or characteristic issues from those 2 blog posts in that paragraph, instead appealing to heuristics, concept handles, and accusations. I would honestly describe that paragraph as a narrative argument.
Overall I disagree and am also downvoting this post as not a helpful contribution.
Thank you for your criticism.

The point of this post is not to address specific issues of AI 2027, but narrative arguments and ad-hoc models in general. AI 2027 contains both, and thus exemplifies them well. By choosing a model not grounded in the reference literature, and thus not in established consensus, the authors risk incorporating their own biases and assumptions into the model. This risk is present in all ad-hoc models, not just AI 2027, which is why such models should be met with strong skepticism until supported by wider consensus.
You make a good observation that the criticisms of AI 2027 do not form an "academic consensus" either. This is because AI 2027 itself is not an academic publication, nor has it been the topic of any major academic discussion. It is possible for non-academic works to contain valuable contributions; as I say in an above comment, peer review is not magic. Furthermore, even new and original models that were "ad-hoc" when first published can turn out to be good. However, the lack of wider adoption of this model among scientists suggests it is not viewed as a solid foundation to build on. Of course, this lack of adoption does not by itself explain why that is the case, so I have included links to commentary by other people in the EA community that describes the concrete issues in their model. Again, these issues are not the main point of my post, and are provided only for the reader's convenience.
You also don't actually point to any specific or characteristic issues from those 2 blog posts in that paragraph, instead appealing to heuristics, concept handles, and accusations. I would honestly describe that paragraph as a narrative argument.
A narrative argument presents the argument in the form of a story, like AI 2027's science-fiction scenario or the parables in Yudkowsky and Soares's book. I'm not sure which part of my text you would characterize as a story; could you elaborate on that?