More from Existential Risk Observatory (@XRobservatory) on Twitter:
It was a landmark speech by @RishiSunak: the first real recognition of existential risk by a world leader. But even better are the press questions at the end:
@itvnews: “If the risks are as big as you say, shouldn’t we at the very least slow down AI development, at least long enough to understand and control the risks?”
@SkyNews: “Is it fair to say we know enough already to call for a moratorium on artificial general intelligence? Would you back a moratorium on AGI?”
Sky again: “Given the harms and the risk you pointed out in this report, and some of those are profound, surely there must be some red lines we can draw at this point. Which ones are yours?”
@TheSun: “You say we shouldn’t be losing sleep over this stuff. If not, why not?”
@theipaper: “You haven’t really talked about whether your government is actually going to regulate. Will there be an AI Bill or similar in the King’s Speech?”
iNews again: “On the details of that regulation: does the government remain committed to this idea of responsible scaling, whereby you sort of test models after they’ve been developed, or is it time to start thinking about how you intervene to stop the most dangerous models being developed at all?”
Who would have thought this a year ago? The public debate about AI x-risk has far outdone everyone’s expectations. Next step: convincing answers.