I think that our disagreement comes from what we mean by “regulating and directing it.”
My rough model of what usually happens in national governments (as opposed to the EU, which is a lot more independent from its citizens than the typical national government) is that there are two scenarios:
Scenario 1, in which national governments regulate or act on something nobody cares about (in particular, not the media). That creates a lot of degrees of freedom and the possibility of doing fairly ambitious things (cf. Secret Congress).
Scenario 2, in which national governments regulate things that many people care about, which attracts attention, and then nothing gets done; most measures end up fairly weak, etc. In this scenario my rough model is that national governments do the smallest thing that satisfies their electorate + key stakeholders.
I feel like we’re extremely likely to be in Scenario 2 regarding AI, and thus that no significant measure will be taken, which is why I put the emphasis on “no strong [positive] effect” on AI safety. So basically I feel like the best you can probably do in national policy is something like “prevent them from doing bad things” (which is really valuable if it’s a big risk) or “do mildly good things”. But to me, it’s quite unlikely that we go from a world where we die to a world where we don’t die thanks to a theory of change focused on national policy.
The EU AI Act is a bit different in that, as I said above, the EU is much less tied to the daily worries of citizens and is thus operating under fewer constraints. So I think it’s indeed plausible that the EU does something ambitious on GPAIS, but unfortunately I think it’s unlikely that the US will replicate something similar domestically, and the EU’s compliance mechanisms are not very likely to cut the worst risks from UK and US companies.
“Regulating the training of these models is different and harder, but even that seems plausible to me at some point.”
I think it’s plausible but not likely, and even though it would be the intervention that cuts the most risk, I tend to prefer corporate governance, which seems significantly more tractable and neglected to me.
Out of curiosity, could you point to a specific event you’d expect to see “if we get closer to substantial leaps in capabilities”? I think it’s a useful exercise for disagreeing fruitfully on timelines, and I’d be happy to bet on some events if we disagree on one.