Regarding whether the EU regulation will matter: I suppose many, if not most, important people in AI governance will learn about it, and this will at least impact their thinking/plans/recommendations for their governments somehow, right?
I don’t think this is the case: most actors involved are not incentivized to care about balance, transferability, and effectiveness. Making a gross generalization, we could say civil society groups are concerned about the issues that their often uninformed donors care about. Industry groups, on the other hand, are hoping to minimize short-term costs from the regulatory requirements, given financial markets’ pressure, by reducing either the law’s scope of application or its requirements for whatever remains in scope. (There are more stakeholders and angles than just civil society and industry, of course, but in practice most discussions nowadays end up being about whether a solution is pro- or anti-innovation.) Policymakers themselves generally don’t specialize in AI governance but instead have a broad range of topics to deal with. Their concerns involve political positioning, and they care about balance, transferability, and effectiveness only to the extent that these improve their positioning.
Of course, policymakers and all the people within civil society and industry groups are individuals with their own beliefs, which affect their actions (that is why having EA or longtermist individuals in these roles would matter). They therefore sometimes sacrifice the party’s or organization’s mission in favor of doing what they think is right, particularly in civil society, where there aren’t enough resources to monitor compliance with HQ’s talking points and “doing the right thing” can be argued to be part of the staff’s mandate.
In this system, balance is generally achieved thanks to both “sides” pushing as hard as they can in opposite directions. A vibrant civil society and democratically elected policymakers can offset the industry’s resource advantage in lobbying. Moreover, transferability is an increasing function of balance. So neither balance nor transferability is the primary concern.
Effectiveness, however, seems to be purely accidental: aside from occasional individuals skewing the interpretation of their mandate in order to push for it, there is little incentive or pressure in the system to produce effective policies.
Do you think it’s possible that ineffective laws by the EU might lead European governments to invest more in their own AI regulation efforts?
The EU AI Act will reduce political demand for national AI regulations among Member States and beyond. Because it is a Regulation (as opposed to a Directive), it applies in all Member States in the same way, so additional national AI regulations would literally layer on top of, rather than complement or substitute for, the EU rules. Countries outside the EU would also have less demand for regulations because of a potential de jure Brussels effect, though this effect would have to offset the hypothetical “regulatory competition” effect, i.e., lawmakers trying to be the first to have invented a legislative framework for topic X. Ineffective EU laws’ impact on political demand will be smaller than effective ones’, but not by enough to offset the primary effect.
So maybe that would actually end up being good?
“Effectiveness” to the longtermist/EA community is different from “effectiveness” to the rest of society. For example, AGI-concerned individuals care more about requirements related to safety and alignment than about measures to foster digital skills in rural areas. So it is possible that whatever we call ineffective is hailed as a major success by decisionmakers and cuts the demand for further policymaking for the next 20 years. I am very interested in the topic of experimentation and adaptiveness/future-proofing in policy, but since it requires decisionmakers i) acknowledging ignorance, or admitting that current decisions might not be the best ones, and ii) considering time horizons of >8 years, it is politically difficult to achieve in representative democracies.
That’s very helpful and makes sense, thanks! It would be interesting to learn about case studies where decisionmakers acknowledged ignorance and acted with longer time horizons in mind. I suppose these will end up being cases that are not much publicly debated, with the consultants involved having longer time horizons.
Thanks for your questions, Max! Hope this helps.