AI Discrimination Requirements: A Regulatory Review

This article is the fifth in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (e.g. incident reporting, safety evals, model registries, etc.). We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll publish individual posts on our website and release a comprehensive report at the end of this series.

What are discrimination requirements for AI? Why do they matter?

Discrimination requirements for AI are rules and guidelines aimed at preventing AI systems from perpetuating or amplifying societal biases and unfairly disadvantaging certain groups of people based on protected characteristics like race, gender, age, religion, disability status, or sexual orientation. As AI increasingly powers high-stakes decision-making in areas like hiring, lending, healthcare, criminal justice, and public benefits, these systems are likely to adversely impact certain subsets of the population unless algorithmic bias is actively managed.

For example, an algorithm designed to identify strong resumes for a job application is likely to learn correlations between the sex of a candidate and historical judgments of resume quality, reflecting existing societal biases (and therefore perpetuating them). As a result, certain classes of individuals may be adversely impacted by an algorithm that encodes inherently discriminatory word associations.
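A minimal synthetic sketch of this dynamic (all names and numbers here are hypothetical, not drawn from any real system): a screening score can disadvantage one group through a correlated proxy feature even when the protected attribute is never read by the scorer.

```python
import random

random.seed(0)

def make_resume():
    """Hypothetical resume: a protected group, a proxy feature that
    correlates with group membership (e.g. gendered phrasing), and a
    true skill level that is independent of group."""
    group = random.choice(["A", "B"])
    p = 0.8 if group == "A" else 0.2  # proxy correlates with group
    proxy = 1 if random.random() < p else 0
    skill = random.random()  # true qualification, same distribution for both
    return {"group": group, "proxy": proxy, "skill": skill}

resumes = [make_resume() for _ in range(10_000)]

def score(resume):
    # A scorer trained on biased historical outcomes may reward the proxy.
    # Note: the protected attribute "group" is never read here.
    return resume["skill"] + 0.5 * resume["proxy"]

def mean_score(group):
    scores = [score(r) for r in resumes if r["group"] == group]
    return sum(scores) / len(scores)

# Average scores diverge by group anyway, because the proxy feature
# carries the protected information.
print(mean_score("A") - mean_score("B"))  # substantially above zero
```

This is why simply dropping the protected column ("fairness through unawareness") is generally insufficient: correlated proxies reintroduce the bias.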

Other examples of algorithmic discrimination include:

  • Biases in the type of online ads presented to website users

  • Biases in the error rates of facial recognition technology by race and gender

  • Biases in algorithms designed to predict risk in criminal justice

The use of discriminatory factors such as sex, ethnicity, or age has long been expressly prohibited by anti-discrimination legislation around the globe, such as Title VII of the US Civil Rights Act of 1964, the UN's ILO Convention 111, or Article 21 of the EU Charter of Fundamental Rights. As enforced by most developed countries, such legislation typically protects individuals from employment or occupational discrimination based on these factors.

To expand these legislative precedents to the rapidly developing domain of algorithmic and AI discrimination, a new crop of anti-discrimination legislation is being passed by leading governmental bodies. This new wave of legislation focuses on regulating the behavior of the algorithms underlying certain protected use cases, such as resume screening, creditworthiness evaluations, or public benefit allocations.

As momentum grows to address AI bias, governments are starting to pass laws and release guidance aimed at preventing automated discrimination. But this is still an emerging area where much more work is needed to translate principles into practice, spanning both technical measures and non-technical policy development.

What are current regulatory policies around discrimination requirements for AI?


China

Two major pieces of Chinese legislation have made references to combating AI discrimination. Though the language around discrimination was scrapped from the first, the 2023 generative AI regulations include binding but non-specific language requiring compliance with anti-discrimination policies for AI training and inference.

  1. Algorithmic Recommendation Provisions, Article 10: The initial interim draft of this legislation prohibited the use of “discriminatory or biased user tags” in algorithmic recommendation systems. However, this language was removed in the final version effective in March 2022.

  2. Generative AI Measures, Article 4.2: This measure requires the following: “During processes such as algorithm design, the selection of training data, model generation and optimization, and the provision of services, effective measures are to be employed to prevent the creation of discrimination such as by race, ethnicity, faith, nationality, region, sex, age, profession, or health”.

The EU

The EU AI Act addresses discriminatory practices by classifying AI systems according to their use case. In particular, it designates AI systems with the potential for discriminatory outcomes as high-risk and bars them from discriminating, including:

  • AI systems that could produce adverse outcomes for the health and safety of persons, or could lead to discriminatory practices.

  • AI systems used in education or vocational training, “notably for determining access to educational…institutions or to evaluate persons on a precondition for their education”.

  • AI systems used in employment, “notably for recruitment…for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships”.

  • AI systems used to evaluate the credit score or creditworthiness of natural persons, or for allocating public assistance benefits.

  • AI systems used in migration, asylum, and border control management.

In particular, AI systems that provide social scoring of natural persons (which pose a significant discriminatory risk) are deemed unacceptable systems and are banned.

The US

The US government is actively addressing AI discrimination via two primary initiatives by the executive branch. However, both of these initiatives are non-binding and non-specific: in particular, the Executive Order directs several agencies to publish guidelines, but doesn’t identify any specific requirements or enforcement mechanisms.

  1. The AI Bill of Rights contains an entire section on Algorithmic Discrimination Protections. In particular, it emphasizes that consumers should be protected from discrimination based on their “race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.” Though this document is non-binding, it establishes general principles for the US executive branch to build on in more specific regulations.

  2. The Executive Order on AI directs various executive agencies to publish reports or guidance on preventing discrimination within their respective domains within 90 to 180 days of its publication. These include the following directly responsible parties:

    1. Section 7.1: “The Attorney General and the Assistant Attorney General in charge of the Civil Rights Division will publish guidance preventing discrimination in automated systems.”

    2. Section 7.2.b.i: “The Secretary of HHS (The Department of Health and Human Services) will publish guidance regarding non-discrimination in allocating public benefits.”

    3. Section 7.2.b.ii: “The Secretary of Agriculture will publish guidance regarding non-discrimination in allocating public benefits.”

    4. Section 7.3: “The Secretary of Labor will publish guidance regarding non-discrimination in hiring involving AI.”

How will discrimination requirements for AI evolve in the near term?

The effectiveness of de-biasing techniques is highly variable, and depends heavily on the quality of the data.

  • Unfair datasets are a root cause of algorithmic bias. However, it can be extraordinarily difficult to acquire more equitable data, and rebalancing datasets to mitigate bias will typically lower overall performance.

  • Many underlying sources of bias can be difficult to mitigate. An Amazon study found that even after removing direct causes of gender bias from a hiring algorithm, such as making the algorithm neutral to phrases like “women’s chess club captain”, the algorithm still found implicit male associations with phrases such as “executed” and “captured” on resumes.
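One common rebalancing technique is reweighting: rather than discarding data, each (group, label) cell is weighted so that group membership and outcome look statistically independent in the weighted dataset. A minimal sketch of this approach (in the style of the known "reweighing" pre-processing method; the dataset below is hypothetical):

```python
from collections import Counter

# Hypothetical labeled dataset: (group, label) pairs where group A has far
# more positive outcomes than group B in the historical data.
data = [("A", 1)] * 700 + [("A", 0)] * 100 + [("B", 1)] * 100 + [("B", 0)] * 100

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
cell_counts = Counter(data)

def weight(group, label):
    """Weight = expected cell count under independence / observed count."""
    expected = group_counts[group] * label_counts[label] / n
    return expected / cell_counts[(group, label)]

weights = [weight(g, y) for g, y in data]

def weighted_pos_rate(group):
    """Positive-outcome rate for a group in the reweighted data."""
    num = sum(w for (g, y), w in zip(data, weights) if g == group and y == 1)
    den = sum(w for (g, y), w in zip(data, weights) if g == group)
    return num / den

# After reweighting, the weighted positive rate is equal across groups.
print(weighted_pos_rate("A"), weighted_pos_rate("B"))  # both ≈ 0.8
```

The trade-off noted above applies here: a model trained on these weights no longer fits the observed data distribution as closely, which is typically where the loss in overall performance comes from.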

Given access to the underlying algorithms, it is substantially easier to prove discriminatory bias in an algorithmic system than in a human-driven one.

There are no established required practices or judicial precedents to evaluate the level of discriminatory bias across AI algorithms.

It is likely that the required practices to evaluate discriminatory bias will be established in the judicial system.

  • Judicial frameworks have typically been established over time via landmark or precedent-setting discrimination cases. For example, the McDonnell Douglas Burden-Shifting Framework and the Mixed Motive Framework are two separate judicial approaches to establish workplace discrimination. These developed independently to handle different forms of discrimination lawsuits.

  • We expect that in the next 5 years, we’ll begin to see class-action lawsuits against corporations running high-risk algorithms (as defined by the EU) that may be discriminatory. Accordingly, we expect one or more standardized frameworks for evaluating biased algorithms to emerge from US courts.