That progress is incredibly fast, and new architectures explicitly aimed at creating AGI are being proposed and implemented. (I'm agnostic about whether LLMs will scale past human reasoning—it seems very plausible they won't. But I don't think it matters, because that isn't the only research direction with substantial resources behind it that creates existential risks.)
Interesting—what do you have in mind for fast-progressing architectures explicitly aimed at creating AGI?
On your second point, about x-risks from non-LLM AI, am I right in thinking that you would also hope to catch dual-use scientific AI (for instance) in a compute governance scheme and/or pause? That's a considerably broader remit than I've seen advocates of a pause or compute restrictions argue for, and it seems much harder to achieve both politically and technically.
If regulators or model review firms have any flexibility (which seems very plausible), and the danger of AGI is recognized (which seems increasingly likely), then once there is any sign of promising progress towards AGI, safety review of the models would occur—as it should, as in any other engineering discipline, albeit in this case more like civil engineering, where lives are on the line, than software engineering, where they usually aren't.
And considering other risks, as I argued in my piece, there's an existing requirement for countries to ban bioweapons development, again, as there should be. I'm simply proposing that countries fulfill that obligation, in this case by requiring review of potentially dangerous ML research that can be applied to certain classes of virology.
On the last part of your comment—if AGI doesn’t come out of LLMs then what would the justification for a pause be?