I think my point is more like “if anyone gets anywhere near advanced AI, governments will have something to say about it—they will be a central player in shaping its development and deployment.” It seems very unlikely to me that governments would not notice or do anything about such a potentially transformative technology. It seems very unlikely to me that a company could train and deploy an advanced AI system of the kind you’re thinking about without governments regulating and directing it. On funding specifically, I would probably be >50% on governments getting involved in meaningful private-public collaboration if we get closer to substantial leaps in capabilities (though it seems unlikely to me that AI progress will get to that point by 2030).
On your regulation question, I’d note that the EU AI Act, likely to pass next year, already proposes the following requirements applying to companies providing (eg selling, licensing or selling access to) ‘general purpose AI systems’ (eg large foundation models):
Risk Management System
Data and data governance
Technical documentation
Record-keeping
Transparency and provision of information to users
Human oversight
Accuracy, robustness and cybersecurity
So they’ll already have to do (post-training) safety testing before deployment. Regulating the training of these models is different and harder, but even that seems plausible to me at some point, if training runs become ever larger and potentially more consequential. Consider the analogy that we already regulate biological experiments.
I think that our disagreement comes from what we mean by “regulating and directing it.”
My rough model of what usually happens in national governments (and not the EU, which is a lot more independent from its citizens than the typical national government) is that there are two scenarios:
Scenario 1, in which national governments regulate or act on something nobody cares about (in particular, not the media). That creates a lot of degrees of freedom and the possibility of doing fairly ambitious things (cf Secret Congress).
Scenario 2, in which national governments regulate things that many people care about, which attracts attention, and then little gets done: most measures end up fairly weak, etc. In this scenario my rough model is that national governments do the smallest thing that satisfies their electorate + key stakeholders.
I feel like we’re extremely likely to be in scenario 2 regarding AI, and thus that no significant measure will be taken, which is why I put the emphasis on “no strong [positive] effect” on AI safety. So basically I feel like the best you can probably do in national policy is something like “prevent them from doing bad things” (which is really valuable if that’s a big risk) or “do mildly good things”. But to me, it’s quite unlikely that we go from a world where we die to a world where we don’t die thanks to a theory of change focused on national policy.
The EU AI Act is a bit different in that, as I said above, the EU is much less tied to the daily worries of citizens and is thus operating under fewer constraints. So I think it’s indeed plausible that the EU does something ambitious on GPAIS, but unfortunately I think it’s unlikely that the US will replicate something similar domestically, and the EU’s compliance mechanisms are not very likely to cut the worst risks from UK and US companies.
“Regulating the training of these models is different and harder, but even that seems plausible to me at some point”
I think it’s plausible but not likely, and given that it would be the intervention that cuts the most risk, I tend to prefer corporate governance, which seems significantly more tractable and neglected to me.
Out of curiosity, could you point to a specific event you’d expect to see “if we get closer to substantial leaps in capabilities”? I think it’s a useful exercise for disagreeing fruitfully on timelines, and I’d be happy to bet on some events if we disagree on one.