Which incentives should be used to encourage compliance with UK AI legislation?
Summary
As the UK drafts new legislation to regulate the risks posed by frontier AI systems, ensuring widespread compliance from AI companies is critical. This paper discusses options for incentivising compliance through a mix of rewards and penalties.
Key Challenges:
The UK’s relatively small market size could discourage companies from complying if regulations are perceived as overly burdensome or punitive.
Companies might deliberately not comply if they think the risk of non-compliance being detected is sufficiently low, even if the scale of the externalised harm is high.
The evolving and flexible nature of AI safety leaves gaps between minimal legal compliance and good-faith adherence, requiring mechanisms to encourage higher standards.
Proposed Incentives and Disincentives:
Model Release Restrictions: Prevent non-compliant AI models from being released in the UK, while coordinating with international partners to ensure global enforcement of safety standards.
Access to Other Markets: Promote mutual recognition agreements and explore trade-based incentives to make UK compliance advantageous globally.
Fines: The maximum fine size should be set as the cost of compute used to train the largest model that the company in question has developed.
Tax Breaks: Offer enhanced R&D tax credits or conditional incentives for companies exceeding compliance baselines.
Personal Liability: Draw on UK finance law precedents to hold executives accountable for compliance failures, encouraging responsible leadership.
Public Procurement: Link access to lucrative government contracts to companies’ adherence to AI safety standards.
Problem
The UK government is currently drafting AI legislation focused on risks posed by frontier AI systems. This will likely make existing voluntary commitments legally binding. The main things that this may mandate are:
Third-party safety testing
Allowing AISI to carry out pre-deployment testing
Not developing or releasing models that pose severe risks that can’t be mitigated
In some ways this is not a big ask, since it mostly mandates things that the main companies in scope of the legislation have already said they would do. Despite this, there are reasons to expect some risk of non-compliance by default:
There are signs of companies not following through on their commitments.
New entrants onto the market who didn’t make these voluntary commitments might also be less likely to comply.
More demanding regulatory requirements are likely to be added over time.
This risk of non-compliance is challenging to mitigate because:
The UK is a small market. As with overly-burdensome regulation, overly harsh compliance mechanisms could push companies out of the UK. Even if regulatory requirements are minimal, excessive punishment for violations could make it more likely that companies choose not to operate in the UK to avoid large penalties from accidental non-compliance. The flipside of this is that positive incentives to encourage compliance can also counterbalance negative impacts from burdensome legislative requirements.
Rational non-compliance and risk taking. If a company knows that a model they’re releasing poses a potentially severe risk that the regulator does not know about, there are scenarios where it could make sense for them to knowingly not comply and release the model anyway. Given that many of the major negative impacts are externalised, if a company assesses that the probability of the risk materialising is low enough, and disincentives aren’t strong enough, it could be rational for the company to selfishly gamble and release the model anyway to reap the commercial benefits. Effective compliance mechanisms rebalance this risk-benefit calculation so that it isn’t worthwhile for an AI company to deploy a model it is only 90% sure is safe, accepting a 10% risk of catastrophe (a simple expected-value sketch of this trade-off follows this list).
Lack of specificity. Because the science of AI safety is still at an early stage, any AI legislation, and a regulator’s interpretation of it, is expected to avoid being overly prescriptive. Given this degree of flexibility, there will likely be a significant gap between doing the bare minimum to satisfy a regulator and complying in good faith. Effective compliance mechanisms encourage the latter.
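To make the rational non-compliance point concrete, here is a minimal expected-value sketch. All of the numbers are illustrative assumptions chosen only to show the mechanics, not estimates from this paper.

```python
# Illustrative expected-value sketch of the "rational non-compliance" problem.
# Every number below is a hypothetical assumption, not a real estimate.

def expected_payoff(revenue, p_harm, harm_cost_borne, p_caught, fine):
    """Company's expected payoff from releasing a risky model anyway.

    revenue          : commercial benefit of releasing the model
    p_harm           : probability the severe risk materialises
    harm_cost_borne  : share of the harm the company itself bears (most is externalised)
    p_caught         : probability the regulator detects the non-compliance
    fine             : penalty if the non-compliance is detected
    """
    return revenue - p_harm * harm_cost_borne - p_caught * fine

revenue = 500e6          # assumed commercial upside of release
p_harm = 0.10            # company is only 90% sure the model is safe
harm_cost_borne = 50e6   # small internalised slice of a much larger societal harm
p_caught = 0.5           # assumed chance the regulator finds the violation

# Without a meaningful fine, gambling on release looks profitable to the company.
print(expected_payoff(revenue, p_harm, harm_cost_borne, p_caught, fine=0))      # 495,000,000

# A sufficiently large fine flips the calculation and makes the gamble unattractive.
print(expected_payoff(revenue, p_harm, harm_cost_borne, p_caught, fine=1.2e9))  # -105,000,000
```

The compliance mechanisms discussed below are attempts to make the second case, rather than the first, the one companies actually face.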
This paper looks to find the most appropriate mechanisms that the UK government could use to incentivise AI companies to comply with the legislation, with the above challenges in mind.
The following assumes that monitoring and enforcement are imperfect, but effective enough such that a proportion of non-compliance is found and penalised. It assumes an arbitrary set of regulatory requirements, and that an unspecified AI regulator is enforcing them.
Compliance mechanism longlist
The table below sets out a list of potential incentives and disincentives that the UK government could use, indicating which are likely to be viable and strong enough to influence company behaviour. The top options are fleshed out in more detail, with some additional considerations, in the subsequent section.
| Area | Carrot(s) to reward above-minimum compliance | Stick(s) to punish non-compliance | Viability | Expected impact on company behaviour if implemented |
|---|---|---|---|---|
| Model release | UK model release permitted | Model prevented from being accessed in the UK | Medium—This is the default for non-compliant products but it’s hard to stop people using VPNs | High—This loses the company a non-negligible amount of income |
| Access to other markets | Automatic (partial) compliance with regulation in other countries | Model release prohibition is coordinated with other countries | Medium—It will be challenging to agree this with the EU or US | High—This would lose the company a lot of income |
| Financial incentives and penalties | Tax breaks, R&D subsidies | Fines | High—Fines are a common disincentive. Tax breaks may be more challenging. | High—If fines are large enough then companies will adjust their decisions to avoid them. |
| Personal liability | n/a | Jail time and professional bans for company executives | High—There is precedent from the financial sector | High—Executives don’t want to go to jail. |
| Public procurement | Going above minimum regulatory requirements becomes a factor in AI public procurement frameworks | Company blocked from bidding on all public contracts | High—This seems straightforward to implement and the Procurement Act 2023 permits this. | High—Public contracts often involve large sums of money |
| Wider regulatory requirements | Less strict interpretation of GDPR requirements | Increased restrictions on acquisitions and mergers | Low—It will be hard for an AI regulator to coordinate this with other regulators | Medium—Given the limitations on how far this could go, it’s unlikely to provide a strong incentive |
| Burden of compliance monitoring | If a company has a history of always being compliant then they are subject to fewer audits. | If non-compliance is found then the frequency of future audits increases. | High—This is a decision that an AI regulator will likely be able to make at their discretion | Medium—Monitoring and reporting is unlikely to require major resources from the company |
| Speed of future approvals | Fast-tracked approvals | Slower approvals due to increased scrutiny or less regulator resource given to them | Low—Doesn’t work if the system isn’t banned by default | Medium—If this is relevant then companies likely care a lot about being able to release models quickly |
| Public image | Public praise from politicians, compliance certifications | Public criticism, non-compliant companies noted in accountability reports | High—Although it may look weak if praise is being given for compliance with the legal baseline | Medium—This could impact profits if the non-compliance is egregious and the public decides that the company has acted recklessly |
| Access to UK resources (e.g. UK Biobank, data gathered from UK consumers, cloud services, data centres) | Priority access | Restricted access | Medium—Blocking access to UK cloud services seems viable but it will be difficult to enforce deletion of UK consumer data | Low—UK resources probably aren’t worth that much to AI companies |
| Access to talent | Lower bar for talent visas | Restrict talent visas for non-compliant companies | Low—It will be challenging for an AI regulator to coordinate this with the Home Office and the UK wants to attract talent | Low—There is a lot of domestic talent in the UK. |
| Regulatory fees | Offer reduced regulatory fees for approvals | Higher regulatory fees | Medium—There might not be regulatory fees for AI regulation | Low—Fees will be small relative to company overheads. |
Additional detail on shortlisted compliance mechanisms
Model release
If a company doesn’t take the actions required by the legislation then they likely won’t be allowed to release their model in the UK, or if it has already been released then it will be pulled from the market. There will obviously be examples of minor non-compliance where using this lever would be overkill, e.g. if the company doesn’t fill in a form correctly. If the legislation includes a mixture of model-specific and general process requirements then it’s not obvious whether model release should be prevented if the company has made a minor infraction in their overarching risk management process. Given the expected narrowness of the bill, the general process requirements are likely to be very targeted towards risk reduction. Therefore, if a company deviates from these in a significant way, it will likely be reasonable to block model release until they are compliant again.
It’s hard to stop people using VPNs if a model is banned in the UK but has already been released in other countries. If the goal is to disincentivise companies from non-compliance then this isn’t a problem as the majority of users likely won’t bother using a VPN and will use a different model instead. However, if the goal is to prevent unsafe models from being released then many threat actors relevant to different risk scenarios may be determined enough to use the model via a VPN.
VPN use will only be possible if the model has been released in other countries. The most important of these is the US. In the near term, all the major AI companies likely to be releasing models that could pose a catastrophic risk are based in the US. The US is the only country that has enough leverage with the companies in its jurisdiction to prevent model release anywhere.
For many high impact risks where the impact is global, such as AI-assisted bioweapon development, the UK government unilaterally banning model release doesn’t do much to directly reduce risk to UK citizens. This just emphasises the importance of coordinating action in this area with other countries.
Access to other markets
Leveraging access to other markets as an incentive for compliance with UK AI legislation is challenging but potentially impactful. While the UK obviously can’t directly dictate regulations in other jurisdictions, creative solutions could align its compliance incentives with other regions, making it advantageous for companies to adhere to UK standards.
One option is to establish mutual recognition agreements with key markets like the EU and the US. If UK-compliant companies could automatically meet part of the regulatory requirements in these regions, the prospect of reduced regulatory burden would provide a strong incentive to comply. For example, a certification process that aligns closely with international safety standards could act as a “passport” for smoother entry into other major markets. This would require intensive diplomatic and technical coordination but could significantly enhance the appeal of adhering to UK regulations.
Simultaneously, if the UK successfully positions itself as a global leader in AI safety standards and UK compliance becomes a recognised mark of quality and safety, companies could use it as a competitive advantage when entering markets that prioritise AI safety. For instance, regions with developing AI frameworks might look to the UK as a benchmark, favouring partnerships with companies that have already demonstrated adherence to its rigorous standards.
Additionally, the UK could explore using trade agreements to reinforce these incentives. New agreements could include provisions favouring companies that meet UK compliance standards, such as expedited market access for compliant companies. This might also involve collaborations with like-minded countries on global AI governance, creating a bloc of markets where compliance with UK legislation provides tangible benefits. While these measures can’t completely overcome the fragmentation of AI regulation across jurisdictions, they could make UK compliance more attractive.
Financial incentives and penalties
AI companies have deep pockets. Fines or subsidies therefore need to be big in order to affect decision-making.
Fines
Fines could be scaled by the annual turnover of the company, as a proxy for the size of fine required to provide a suitable disincentive (profit is a poor alternative proxy, given that some AI companies aren’t yet profitable). However, this isn’t straightforward for organisations like Google DeepMind, where it will be challenging to separate the annual turnover of the AI team from all the other activities of the parent company. It would be easier for Google to eat the cost of a fine than it would be for a company that only develops AI models, but it seems disproportionate to set the maximum fine based on the entire turnover of Google, which was over $300b in 2023. If the goal is to disincentivise bad behaviour, setting the maximum fine at around the same order of magnitude as the operating costs of the AI arm of the company seems more appropriate, since this should be enough to make non-compliance unprofitable.
The maximum fine could instead scale with the amount of compute used to train the largest model released by that company. However, this could scale too quickly, since compute is getting cheaper over time and the amount of compute used to train frontier models is projected to increase by orders of magnitude. The maximum fine does still need to grow over time, in line with the increasing amounts of money being invested in AI; otherwise it would become a smaller and smaller proportion of company turnover and provide an increasingly weak disincentive.
Instead the maximum fine should be set based on the estimated cost of training the largest model that the company in question has trained (not just released). Converting the amount of compute into an estimate of how much it cost the company isn’t straightforward, but given that this is just a cap it doesn’t have to be a precise figure. The lowest cost per FLOP that is commercially available over the past 12 months should be used.
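As a rough illustration of how such a cap could be computed, the sketch below multiplies a reported training compute figure by the cheapest commercial price per FLOP over the trailing 12 months. The FLOP count and price used here are hypothetical placeholders, not real estimates.

```python
# Illustrative calculation of a maximum fine tied to training compute.
# Both inputs are hypothetical placeholders, not real figures.

def max_fine_cap(training_flop, cheapest_usd_per_flop):
    """Cap the fine at the estimated cost of training the company's largest model,
    priced at the lowest commercially available cost per FLOP over the past 12 months."""
    return training_flop * cheapest_usd_per_flop

reported_flop = 5e25            # compute the company reports for its largest model (assumed)
cheapest_usd_per_flop = 1e-17   # assumed cheapest commercial price over the trailing 12 months

cap = max_fine_cap(reported_flop, cheapest_usd_per_flop)
print(f"Maximum fine: ${cap:,.0f}")   # $500,000,000 under these assumptions
```

Because this figure only sets a ceiling, the imprecision in both inputs matters less than it would if it determined the fine itself.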
This is only viable if the legislation compels companies to inform the UK government how much compute they used to train a given model. Verification methods will be required to make sure that this is approximately accurate.
If the estimated cost of training the largest model from that company is used as the maximum fine size, the actual fine should be based on how far the company deviated from what is required by legislation. This will be a decision for a regulator to make. As part of this decision they should take into account whether the company followed voluntary codes of practice or industry standards. If a company has chosen not to follow a relevant voluntary code of practice and something has gone wrong then they should be subject to much harsher penalties.
One downside of this approach is that it doesn’t work well for narrow models that don’t require as much compute but still pose significant risks, such as narrow biological AI tools. If models like these are in scope of the legislation, then they will likely be treated separately. Therefore, for narrow tools the maximum fine size should possibly be set based on total company turnover.
Tax breaks
Companies that demonstrate robust compliance with regulatory standards—especially those who voluntarily exceed these requirements—could qualify for enhanced Research and Development (R&D) tax credits. This would support continued innovation while ensuring companies prioritise safety and ethical considerations in their AI development processes.
Tax breaks could be scaled based on measurable compliance outcomes, such as achieving specific safety benchmarks or implementing industry best practices. Companies would need to demonstrate compliance through regular reporting and third-party audits.
Tax incentives could align with existing frameworks like the Patent Box, which already offers reduced tax rates for intellectual property income.
Personal liability
Introducing personal liability for senior executives in AI companies could be a powerful tool for ensuring compliance with UK AI legislation. The precedent for this approach already exists in UK finance law, particularly through the Senior Managers and Certification Regime (SMCR). This framework holds individual executives accountable for failures in their areas of responsibility, with penalties ranging from fines to bans on working in the sector, and even jail time in extreme cases.
Adapting this concept to AI governance makes sense. AI systems, particularly those at the frontier, have the potential to cause harm on a scale that rivals financial crises. By making executives personally accountable, the government can create a strong incentive for companies to prioritise compliance and ethical considerations over short-term gains. For example, if a senior leader signs off on deploying an AI model that later causes significant harm due to a known safety risk, they could face personal repercussions—not just their company.
This approach also sends a clear message: cutting corners on safety and compliance isn’t just a corporate risk—it’s a personal one. For executives, the prospect of being fined, banned from working in the sector, or even facing imprisonment is a powerful motivator to ensure their teams adhere to both the letter and spirit of the law. It creates a culture of accountability, where leaders think twice before pushing risky decisions that could endanger people or markets.
Of course, implementing this isn’t without challenges. The complexity of AI systems and the ambiguity around emerging risks mean that assigning blame won’t always be straightforward. Regulators will need clear guidelines for when personal liability applies, ensuring it’s reserved for cases of wilful neglect or gross misconduct rather than honest mistakes. Nonetheless, leveraging personal liability—backed by the success of similar frameworks in the finance sector—could be a crucial element in the UK’s strategy to enforce robust AI governance.
Public procurement
Public procurement is a significant lever the UK government can use to incentivise compliance with AI legislation. With billions of pounds spent annually on procurement, securing government contracts is often a major priority for companies. By tying access to these contracts to compliance with AI safety standards, the government can effectively encourage good behaviour while ensuring taxpayer money supports responsible businesses.
This approach has been successfully used in other sectors, such as construction and defence, where companies are required to meet stringent safety, environmental, or ethical standards before bidding for contracts. In the context of AI, companies could be rewarded for exceeding minimum compliance requirements, such as implementing robust risk management practices, conducting thorough safety audits, or adhering to voluntary industry codes.
For companies, the financial incentive is clear. Government contracts are not only lucrative but often stable and long-term, providing significant revenue streams. Losing access to this market due to non-compliance would be a major setback, especially for AI firms looking to establish themselves as leaders in public sector innovation.
To make this work, the government could integrate AI compliance into existing procurement frameworks under the Procurement Act 2023. Award criteria for AI projects could include a track record of compliance or adherence to a responsible scaling policy that meets specified requirements. Companies falling short of these standards could be disqualified from bidding.
An Aside: Regulatory sandboxes
In the conversation around how to ensure that AI is compliant with legislation, someone is likely to suggest regulatory sandboxes. Regulatory sandboxes are useful when there is a body of existing legislation, but it’s unclear how to apply it to a new technology that the original legislation wasn’t designed to address. They don’t exempt participating companies from legislation; rather, they provide a controlled environment where the regulator can adapt the interpretation of the legislation and the requirements imposed to fulfil their regulatory duties.
Regulatory sandboxes are effective when things are banned by default, and it’s unclear what process a product needs to go through before it can be sold. This isn’t currently the case for general-purpose AI systems, which do not require an approvals process to be released for normal usage (though this differs for businesses using these systems in regulated domains like finance). AI companies have little incentive to take part in a regulatory sandbox if they can already operate freely under existing laws.
While sandboxes work for exploring innovative ways to use AI in sectors like healthcare, they are less suitable for technologies where the legislation is directly targeted at the technology itself. For narrowly targeted AI legislation focusing on catastrophic harm (like what the UK government is planning), it doesn’t make sense to exempt large, powerful AI systems from compliance until they are so different from current models that they are nearly out of the legislation’s scope. In such cases, any flexibility or lighter requirements should be built into the legislation and its interpretation by regulators from the outset. At that point, this isn’t a regulatory sandbox but rather a feature of the legislation itself, making sandboxes a more suitable proposal for the future when technologies might evolve beyond current regulatory frameworks.