Regulation of AI Use for Personal Data Protection: Comparison of Global Strategies and Opportunities for Latin America

Acknowledgement Note:

This project was carried out as part of the “Carreras con Impacto” program during the 14-week mentorship phase. You can find more information about the program in this entry.

Problem Context

Artificial intelligence (AI) presents enormous potential in various areas, but it also poses significant risks, especially concerning the manipulation of personal data. In response to these challenges, regions such as the United States, the European Union, and China have begun implementing regulatory frameworks aimed at mitigating these risks, each with a different approach.

In the United States, Executive Order No. 14110 of 2023 promotes the responsible use of AI, highlighting the need to balance security, privacy, and technological innovation. However, public concern about AI’s impact on daily life remains evident: a 2023 Ipsos survey reported that 52% of respondents expressed anxiety about these technologies, a 13-percentage-point increase over the 2022 survey.

Similarly, the European Union has adopted a pioneering approach to privacy protection with the enactment of the General Data Protection Regulation (GDPR), which enshrines privacy as a fundamental right of its citizens. This is complemented by the recent Regulation (EU) 2024/1689, also known as the “Artificial Intelligence Act,” which classifies AI systems by level of risk (unacceptable, high, limited, minimal) and regulates their use to protect privacy, fundamental rights, and citizens’ safety.

In China, the regulatory approach is more restrictive, focusing on the protection of state secrets and cybersecurity. The Chinese government maintains strict control over the development and use of AI, reflecting a conservative stance compared to other regions, as highlighted in the 2023 “Interim Measures for the Management of Generative AI Services.”

Despite the regulatory efforts in these nations, significant challenges remain in the legal management of AI. A clear example is the use of AI-generated romantic chatbots (such as Crushon AI), which, according to experts like Josep Curto and the Mozilla Foundation, pose at least four major privacy risks for users:

  1. Generative AI chatbots collect the maximum amount of personal information possible, including photos and intimate data.

  2. The terms of use for these programs are ambiguous, allowing data to be sold to third parties.

  3. Some of these applications use trackers that send geographic information to countries involved in espionage; in fact, many of these apps can share the information obtained with the government or military without a court order (López, 2024).

  4. There is a lack of transparency regarding the AI models used.

In the case of Latin America, the region still lacks a solid legal framework regulating the use of AI and protecting personal data. Although countries such as Argentina, Brazil, Chile, and Peru have established preliminary guidelines for AI governance and promoted resolutions at the United Nations General Assembly, no robust regional framework yet exists. The absence of international treaties and the limited application of human rights instruments show that Latin America is still in the early stages of AI governance. To date, the Inter-American Court of Human Rights has not explicitly recognized the right to personal data protection under Article 11 of the American Convention on Human Rights, although there are indications that it may broaden the protection of the right to privacy in the wake of the COVID-19 pandemic (Contreras, 2024).

In this context, the regulatory frameworks of the United States, the European Union, and China can serve as models for Latin American countries, which still lack strong legislation on the matter. The implementation of specific regulations on AI management is urgent, not only to ensure the protection of citizens’ privacy but also to prevent risks related to espionage, identity theft, and the misuse of personal data.

Therefore, this research aims to understand and compare the regulatory frameworks of the United States, the European Union, and China focused on the manipulation of personal data through the use of artificial intelligence. Based on this analysis, the goal is to develop a series of recommendations applicable to the regulatory framework of Latin America to prevent such risks and protect individuals’ rights, ensuring transparency and accountability in the use of this technology.

Objectives

1. General Objective:

Analyze and compare the regulatory frameworks associated with the management of AI technologies used for personal data handling in the U.S., the European Union, and China, with the aim of developing proposals applicable to the regulatory framework of Latin America.

2. Specific Objectives:

  • Analyze the political characteristics related to the management and development of AI technologies based on personal data in the U.S., the European Union, and China.

  • Compare the regulatory frameworks governing the handling of personal data, as a risk posed by AI, in the U.S., the European Union, and China.

  • Propose recommendations applicable to the regulatory framework of Latin America to prevent such risks.

Methodology

This research project was developed using a multidimensional and geopolitical approach, with the objective of comparing the regulatory frameworks of China, the United States, and the European Union regarding the protection of personal data in the use of AI.

The study began with an extensive literature review, which included an analysis of the type of government in each selected region to understand how these political structures influence the regulation of artificial intelligence and its use in personal data protection.

Subsequently, an analysis of the regulations in force up until July 31, 2024, was conducted. The regulations were obtained from the official government websites of each region. This process included a detailed review of the regulatory framework, identifying those regulations related to the management of the development and use of artificial intelligence, as well as the entities responsible for its regulation.

A comparative analysis of the specific regulatory framework for personal data protection in each region was then carried out, identifying similarities and differences in regulatory approaches. The comparison focused on identifying the personal data protected in each region and their form of government, in order to understand their strategy and make recommendation proposals for the governments of Latin America.

Finally, the information obtained from the literature review allowed the development of a series of recommendations applicable to those governments in the early stages of regulating the use and development of AI-based technologies and user data protection, as is the case for most countries in Latin America.

As part of the information search and filtering process, relevant keywords for the research were used, including “personal data protection,” “privacy,” “artificial intelligence,” and “manipulation.”

Analysis of Governmental Characteristics and Their Implication in the Management of AI Use for Personal Data Protection

Since understanding the political regime of each country under study helps explain how it approaches the regulation of AI use for personal data protection, the comparative analysis began with a review of the political characteristics of the systems of government in these world powers (Table 1).

Table 1. Comparison of Political Regimes by Region. Comparative table of the types of political regimes of world powers in the development and use of AI.

| Region | Political Regime |
| --- | --- |
| United States | Federal Constitutional Republic |
| China | Socialism with Chinese Characteristics |
| European Union | Representative Democracy |

First, the United States, according to its Constitution, is a Federal Constitutional Republic, with a presidential system and a structure based on the separation of powers. This model grants each state the autonomy to enact its own laws, including those related to personal data protection in the use of AI.

China, on the other hand, follows the model of “socialism with Chinese characteristics” under the principle of “one country, two systems.” This political orientation places a strong emphasis on preserving socialist values and limits any threats to these principles, which may present a series of challenges regarding the integration and regulation of advanced technologies, such as generative artificial intelligence.

In contrast, the European Union operates under a system of representative democracy, which seeks to foster a highly competitive social market economy with the goal of promoting balanced and sustainable economic and social development among all its member countries. This approach is reflected in the region’s efforts to enact regulations focused on emerging technologies like AI to facilitate its use and ongoing development.

Analysis of the Regulatory Frameworks Associated with AI Use in the U.S., European Union, and China, and Their Focus on Personal Data Protection

After understanding the different political regimes and their impact on data protection in AI use, the next step was to identify the main regulations regarding artificial intelligence in the countries under study, with a particular focus on determining whether these regulations aim to protect users’ personal data.

Table 2. Comparison of Existing Regulations Related to AI Use and Personal Data Protection by Region. This table provides an overview of the main AI regulations in the United States, the European Union, and China, highlighting the provisions related to personal data protection and their management of manipulation risks.

| Region | Regulation | Data protection | Personal data protected | Risk of manipulation by AI | Measures for data protection |
| --- | --- | --- | --- | --- | --- |
| United States | AI Risk Management Framework (2023), federal, voluntary | Yes | Unauthorized use, disclosure, or de-anonymization of biometric, health, location, or other personally identifiable information or confidential data. | Yes | Governance, content provenance, pre-deployment testing, and incident disclosure. |
| United States | Executive Order No. 14110 (2023), “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” federal | Yes | Identities, locations, habits, and desires of individuals. | Yes | Participation of the whole of society, including government, the private sector, academia, and civil society. Agencies will use available technical and policy tools, including privacy-enhancing technologies (PETs), as appropriate, to protect privacy and address legal and social risks, including the large-scale collection and misuse of personal data. |
| United States | Executive Order No. 13859 (2019), “Maintaining American Leadership in Artificial Intelligence,” federal | Yes | Not specified | No | AI research agencies and users will identify improvements in the access and quality of AI data and models, while protecting user privacy. |
| United States | National Security Commission on Artificial Intelligence (NSCAI), 2021 report | Yes | Robust anonymity, whereabouts, and behavioral patterns of an individual | Yes | Evaluate and mitigate risks in the design, development, and testing of AI systems; elements of the IC, DHS, and FBI must implement measures to mitigate these risks and document any that remain. |
| United States | Generative AI Copyright Disclosure Act of 2024 | Yes | Copyright | Yes | Disclose the copyrighted works used in the training process. |
| United States | SB-1047, Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, California (09/03/2024) | Yes | Preservation of an unredacted copy of the safety and security protocol for as long as the covered model is available for commercial, public, or foreseeably public use, plus an additional period of 5 years, including records and dates of any updates or revisions. | Yes | Existing law requires the Secretary to assess the impact of the proliferation of deepfakes, defined as audio or video content generated or manipulated by artificial intelligence. |
| European Union | Regulation (EU) 2024/1689, AI Act | Yes | Biometric data; material distortion of behavior through manipulation techniques; social behavior or personal or personality characteristics. | Yes | High-risk AI systems are subject to strict obligations before they can be placed on the market. In addition, AI-generated text published to inform the public on matters of general interest must be labeled as artificially generated content. |
| China | Interim Measures for the Management of Generative Artificial Intelligence Services, in force since August 15, 2023 | Yes | State and trade secrets, personal privacy, and personal information acquired in the performance of duties in accordance with the law. | No | Conduct security assessments in accordance with national regulations; providers must explain the source, scale, type, labeling rules, algorithmic mechanism, and other relevant information about the training data as necessary. |
| China | Artificial Intelligence Law of the People’s Republic of China (draft) | Yes | Personal behavioral habits, interests, and economic, health, or credit information | Yes | AI developers and providers will not collect unnecessary personal information, nor unlawfully retain input data or usage records that could identify users. |
| China | Provisions on Deep Synthesis, in force since 10/01/2023 | Yes | Not specified | Yes | Visible labels on synthetically generated content. |
| China | Regulation on the Management of Algorithmic Recommendations for Internet Information Services | Yes | Not specified | Yes | Algorithm providers will periodically review the mechanisms, models, data, and outcomes of their algorithms, and will not distribute models that lead users to addiction, excessive consumption, or other behavior that violates laws, regulations, or ethical principles. |

As can be seen in Table 2, Europe leads in regulatory development with the official publication of a dedicated framework for artificial intelligence. China has set a similar process in motion by publishing a draft law to regulate the technology, while the United States has consistently issued public policies on the matter, facing the ongoing challenge of fostering technological innovation while regulating emerging technologies.

It is important to note that the main regulations in Table 2 do aim to protect users’ personal data in the use of AI technologies. The United States has the largest number of regulations in this area, although many are bills that had not been passed as of the date of this research. In China, priority is placed on protecting government data, a focus uncommon among the regions studied. Each regulation also sets out specific measures to safeguard users’ personal data from AI use, with the European Union’s process standing out: AI systems that compromise personal data are classified as high-risk and, as a preventive measure, are subject to strict obligations before they can be placed on the market.

Regarding data protection measures, the United States stands out with Executive Order No. 14110 of 2023, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This order calls for the use of privacy-enhancing technologies (PETs), where relevant, to protect privacy and mitigate legal and social risks such as the collection and misuse of personal data. Notably, this reliance on PETs draws on guidance issued by the United Kingdom, showing how major powers can use other countries’ regulations as a reference under the principle of reciprocity.

In the case of the United States, there are independent regulations at both the federal and state levels. This system has been the subject of heated debate, such as that surrounding the recent SB-1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” of California. Nancy Pelosi, Democratic representative for California (2024), expressed her opposition, arguing that the bill threatens to impose punitive measures on developers in a field that requires innovation. Similarly, the company OpenAI (2024) voiced concern about a potential talent exodus from the United States and Silicon Valley, advocating federal-level AI legislation rather than a fragmented approach in which each of the 50 states has its own regulations. On the other hand, Scott Wiener, a Democratic state senator, supported the bill, arguing that it balances innovation with safety in the development of these technologies. The final decision lies with California Governor Gavin Newsom, who has until September 30, 2024, to sign or veto the law.

From the analysis of current regulatory frameworks, we can conclude that although the approaches to regulating artificial intelligence in the United States, China, and Europe vary in their priorities and methods, they all share the common goal of seeking a balance between promoting technological innovation and protecting human rights and fundamental values, particularly the right to privacy (Perez, 2024).

Table 3. Characteristics Associated with AI Regulatory Entities by Region. This table provides a comparison of the main entities responsible for AI regulation in the United States, China, and the European Union, along with their specific approaches.

| Characteristic | United States | China | European Union |
| --- | --- | --- | --- |
| Number of entities | 2 | 3 | 2 |
| Entities | Federal Trade Commission (FTC); National Institute of Standards and Technology (NIST) | Cyberspace Administration of China; National Development and Reform Commission; Ministry of Science and Technology | European Artificial Intelligence Office; European Data Protection Supervisor |
| Approach | FTC: the only federal agency with both consumer protection and competition jurisdiction, enforced through application of the law. NIST: development of standards and frameworks for AI. | Cyberspace Administration of China: planning and coordinating the management and supervision of generative AI at the national level, in accordance with its responsibilities and the law. National Development and Reform Commission: strengthening the management of generative AI services at the national level, in accordance with its responsibilities and the law. Ministry of Science and Technology: formulating public policy and overseeing the implementation of AI at the national level. | European Artificial Intelligence Office: monitoring, supervision, and governance of AI across the EU. European Data Protection Supervisor: AI market surveillance authority. |

Following the identification of existing regulations regarding AI use (Table 2), the next step was to identify the regulatory entities responsible for enforcing these regulations in each region (Table 3). At the federal level, there are two main entities in the United States: the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST). The FTC, in particular, is responsible for protecting consumers from any violation of their rights, including their privacy.

In the case of the People’s Republic of China, there are three key entities for AI regulation: the Cyberspace Administration of China, the National Development and Reform Commission, and the Ministry of Science and Technology. Rather than delegating AI regulation to new entities, the Chinese government has kept supervision and policy-making within existing government bodies.

As for the European Union, its regulatory structure resembles that of the United States, with two main entities in charge of AI regulation. The European Artificial Intelligence Office is responsible for supervision at the Union level, covering all 27 member states.

Identification of the Legal Framework Related to Data Protection in the U.S., European Union, and China and its Focus on AI Algorithms

The second focus of the analysis of the existing regulations in the countries under study was based on the identification and comparison of different data protection regulations, with a particular emphasis on determining whether these regulations take into account the risks associated with the use of AI algorithms.

Table 4. Comparison of Existing Regulations Related to Personal Data Protection and AI Use by Region. This table provides an overview of the main data protection regulations in the United States, the European Union, and China, highlighting the provisions related to AI use and its management.

| Region | Regulation | Personal data protected | Risk of manipulation by AI | Legal liability (sanctions) |
| --- | --- | --- | --- | --- |
| United States | California Consumer Privacy Act (CCPA) | Identifiers such as real name, alias, postal address, unique personal identifier, online identifier, Internet Protocol (IP) address, email address, account name, Social Security number, driver’s license number, passport number, biometric information, browsing history, and geolocation data. | Manipulation risk is addressed, but the law does not specify whether it covers manipulation by AI. | Administrative fines or civil penalties |
| European Union | Regulation (EU) 2016/679, General Data Protection Regulation (GDPR) | Numerus apertus (open-ended) | Yes | Administrative fines |
| European Union | Regulation (EU) 2022/2065, Digital Services Act (DSA) | Protection of minors online; mitigation of systemic risks such as manipulation or disinformation | Yes | Administrative fines |
| China | Cybersecurity Law of the People’s Republic of China | Numerus apertus (open-ended) | No | Administrative fines |
| China | Data Security Law of the People’s Republic of China | Data processing, including collection, storage, use, transmission, provision, and disclosure | No | Administrative fines |
| China | Personal Information Protection Law of the People’s Republic of China | Sensitive personal information | Yes | Confiscation and administrative fines |

The comparison of the data protection frameworks in the analyzed countries (Table 4) highlights legal liability based primarily on administrative fines, with the additional confiscation of assets in the case of China. These fines are applied directly to providers that have not taken the necessary precautions or followed each country’s guidelines, and in the most severe cases can reach up to 7% of total annual turnover.

The European Union stands out in this analysis thanks to the General Data Protection Regulation (GDPR), a community law that regulates the protection of individuals with respect to the processing of their personal data and the free movement of such data within the European Union and the European Economic Area. It is worth noting that this Regulation explicitly mentions the possible adoption of its principles by non-EU countries.

Table 5. Technical Methods for Data Protection Indicated in the United Kingdom’s Privacy-Enhancing Technologies (PET). This table provides an overview of the technical methods used for privacy protection within the regulatory framework of the United Kingdom, in relation to Privacy-Enhancing Technologies (PETs).

Privacy-Enhancing Technologies (PETs) incorporate fundamental data protection principles by minimizing the use of personal information and maximizing data security.

| Technical method | Protection against privacy risks |
| --- | --- |
| Homomorphic Encryption (HE) | Allows computations to be performed on encrypted data without the need to decrypt it beforehand. |
| Secure Multi-Party Computation (SMPC) | A protocol that allows two or more parties to jointly process combined data without any party having to share all of its data with the others. |
| Federated Learning | A technique that allows multiple parties to train AI models on their own data, then combine some of the patterns identified by those models into a single, accurate “global” model, without sharing training data with each other. |
| Trusted Execution Environments (TEEs) | Allow code to be executed and data to be accessed in isolation from the rest of the system. |
| Zero-Knowledge Proofs (ZKP) | Provide data minimization by allowing an individual to prove private information about themselves without revealing what it actually is. |
| Differential Privacy | A technique that allows information from large groups of people to be collected and analyzed while ensuring privacy protection at the individual level. |
| Synthetic Data | Generation of “artificial” data produced by data-synthesis algorithms that replicate the patterns and statistical properties of real data, which may still include personal data. |
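To make the differential-privacy entry in Table 5 concrete, the sketch below implements the classic Laplace mechanism for a counting query (whose sensitivity is 1). This is a minimal pedagogical example, not any regulator’s prescribed method; the function names, the true count of 100, and the epsilon values are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Laplace mechanism for a counting query: sensitivity 1, noise scale 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
# Smaller epsilon means stronger privacy and therefore more noise on average.
err_weak_privacy = sum(abs(dp_count(100, 2.0, rng) - 100) for _ in range(2000)) / 2000
err_strong_privacy = sum(abs(dp_count(100, 0.1, rng) - 100) for _ in range(2000)) / 2000
print(err_weak_privacy < err_strong_privacy)  # prints True
```

The key trade-off the table alludes to is visible here: the privacy parameter epsilon directly controls how much the released statistic deviates from the truth.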

According to the European Union Agency for Cybersecurity (ENISA), Privacy-Enhancing Technologies (PETs) are described as “software and hardware solutions that encompass processes, methods, or technical knowledge designed to fulfill a specific privacy or data protection function, or to protect individuals from privacy risks.” These technologies have become a key tool for professionals working with personal data, especially in sensitive sectors, by minimizing the risks associated with handling personal information.

PETs offer a set of innovative tools that allow organizations to manage large volumes of personal data securely and in compliance with current regulations (Table 5). Among the most notable solutions are homomorphic encryption, which protects data even during calculations, and secure multi-party computation, which enables collaborative processing without revealing sensitive information between parties. Additionally, techniques such as federated learning, zero-knowledge proofs, and synthetic data allow extracting value from data without compromising individuals’ privacy, strengthening trust in organizations and promoting innovation.
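The federated-learning idea mentioned above can be sketched in a few lines: each party trains on its own data and only model parameters are shared and averaged. The hospitals, parameter values, and function name below are hypothetical, and this shows only the aggregation step of FedAvg, not the local training.

```python
def federated_average(local_models: list[list[float]]) -> list[float]:
    """FedAvg in its simplest form: average the parameter vectors produced by
    each party's local training. Only parameters travel; raw data stays local."""
    n_parties = len(local_models)
    n_params = len(local_models[0])
    return [sum(m[i] for m in local_models) / n_parties for i in range(n_params)]

# Two hypothetical hospitals each fit a tiny linear model on their own records.
hospital_a = [0.9, 2.1]   # [slope, intercept] learned from A's private data
hospital_b = [1.1, 1.9]   # [slope, intercept] learned from B's private data

global_model = federated_average([hospital_a, hospital_b])
print(global_model)
```

With these made-up parameters, the global model lands midway between the two local fits, capturing patterns from both datasets without either hospital ever seeing the other’s records.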

Adopting PETs allows organizations to ensure compliance with fundamental data protection principles, such as data minimization and purpose limitation, while enabling new forms of analysis that do not compromise privacy. This not only protects individuals’ privacy but also fosters trust in organizations and facilitates the development of data-driven innovations.
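Secure multi-party computation, cited above as a way to analyze data without compromising privacy, can likewise be illustrated with a toy additive secret-sharing protocol: each party splits its private value into random shares, and only the sum is ever reconstructed. The salary figures and party count are made up, and this sketch omits the networking and adversary model of a real MPC system.

```python
import random

MODULUS = 2**61 - 1  # a large prime; all share arithmetic happens modulo this value
rng = random.Random(7)

def make_shares(secret: int, n_parties: int) -> list[int]:
    """Split a secret into n additive shares: individually random, jointly exact."""
    shares = [rng.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

# Three hypothetical parties want the total of their salaries without revealing them.
salaries = [52_000, 61_000, 47_000]
all_shares = [make_shares(s, 3) for s in salaries]

# Party i sums the i-th share of every participant; no single share reveals anything.
partial_sums = [sum(shares[i] for shares in all_shares) % MODULUS for i in range(3)]
total = sum(partial_sums) % MODULUS
print(total)  # prints 160000
```

Each party learns only the final total, which is exactly the data-minimization property the text describes.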

However, the use of PETs is not without risks. Although these technologies represent significant advances, they should not be considered foolproof solutions. Their implementation must be carried out legally, fairly, and transparently, carefully evaluating their impact, purpose, and regulatory compliance. Some PETs, still under development, present limitations in terms of scalability and resistance to attacks, which may compromise their effectiveness. Additionally, a lack of adequate technical expertise can lead to configuration errors that affect the balance between privacy and utility.

Finally, it is essential to consider that gaps between theory and practice can increase risks. Therefore, it is crucial to monitor vulnerabilities and have strong organizational and legal measures in place to ensure that PETs are effective and do not compromise individuals’ rights.

Proposals for Recommendations and Identification of Opportunities Associated with Personal Data Protection in Latin American Countries

As is the case in most developing regions, Latin America has not developed a comprehensive regulatory framework for artificial intelligence at the international treaty level, as it lacks both a general AI treaty and a unified regulatory framework for personal data protection and cybersecurity. Although there are soft law initiatives, the application of international human rights treaties, such as the American Convention, has been limited. Thus, despite the Inter-American Court expressing interest in expanding privacy protection rights, no clear precedents have yet been set regarding the technological challenges posed by AI (Contreras, 2024).

Currently, countries like Argentina, Brazil, Chile, Colombia, Mexico, and Uruguay are leading the development of regulatory frameworks around AI, prioritizing ethical development, promoting citizen participation, and seeking to ensure that this technology provides societal benefits. However, despite the development of national strategies and plans, a significant gap persists between the formulation of public policies and their effective implementation. Challenges such as a lack of inter-institutional coordination and insufficient allocation of adequate resources continue to be major obstacles to establishing a solid and effective regulatory framework in the region (Perez, 2024).

Thus, a series of proposals were developed regarding personal data management and AI use in the region, based on an analysis of the regulatory frameworks of the U.S., the European Union, and China. These recommendations are listed below:

  • Adopt a Rights-Based Approach: Taking the European Union as a reference and its focus on the protection of fundamental rights, Latin American governments should prioritize human rights protection when implementing a regulatory framework for AI. This involves safeguarding privacy, ensuring the right to non-discrimination, and guaranteeing transparency in the use of algorithms that may influence personal decisions. When implementing these regulations, it is crucial to protect vulnerable populations, such as minors. Defining a minimum age, applying preventive measures against data manipulation, and establishing legal liability in case of non-compliance are fundamental steps to protect this group’s data.

  • Establish or Designate Specialized Regulatory Entities: Following the EU example, it is recommended that Latin American governments create new specialized regulatory entities for AI or expand the mandate of existing entities, such as data protection agencies, to include AI system oversight. These entities must be equipped with adequate resources and trained personnel, both in technical and ethical matters.

  • Promote Algorithm Transparency: As observed in the approaches of the U.S. and the EU, ensuring transparency in AI systems is essential. Both companies and government agencies using AI should be required to explain how their algorithms work and how they influence decision-making based on personal data. It is also crucial to implement policies that require algorithm auditability, especially in sensitive sectors such as health, finance, security, and public services.

  • Develop Gradual and Adaptable Regulatory Frameworks: Using the U.S. as a reference, Latin American countries can opt for a gradual regulatory approach, starting with establishing guidelines and principles that promote the ethical development and use of AI. This approach would provide the necessary flexibility to adapt to rapid technological advances, ensuring up-to-date regulation.

  • Strengthen International Cooperation: It is suggested to promote collaboration among Latin American countries to develop a common regional regulatory framework that can be harmonized with international standards, such as the EU General Data Protection Regulation. Such collaboration would facilitate trade and investment in AI technologies while ensuring uniform personal data protection throughout the region.

  • Promote Education and Public Awareness: Develop educational programs and public awareness campaigns about citizens’ rights concerning the use of their data by AI systems, following the example of emerging initiatives in the EU. In addition, promoting digital literacy and an understanding of privacy and data protection rights would empower citizens in the face of the growing use of AI, ensuring informed decision-making and self-protection against potential abuses in the use of emerging technologies.

  • Implement Robust Cybersecurity Policies: Considering the work carried out by China’s Cyberspace Administration (CAC), it is important for Latin American countries to adopt strong cybersecurity measures to protect personal data and avoid vulnerabilities in AI systems. It is also essential to strengthen national cybersecurity legislation to complement efforts in AI regulation, ensuring a comprehensive data protection framework.

  • Establish Artificial Intelligence Governance: Criminal liability should be established for companies that violate fundamental rights related to user data privacy. This would go beyond fines or administrative sanctions, ensuring that the company’s legal representative can be prosecuted in court for such non-compliance. While this measure may seem severe, it would set an important precedent, pushing developers and providers of emerging technologies to implement exhaustive quality controls before bringing their products to market. It is also recommended to create an Ethics Commission to act as an auditor for all services using generative AI. This commission would be responsible for rigorously evaluating a service’s compliance with quality standards before launch, using the ethical regulatory frameworks of the U.S., the EU, and China as references. A service that does not meet the established requirements could not be commercialized.

Latin America can learn from the experiences of the U.S., the EU, and China to develop its own regulatory framework that integrates the positive aspects of each model, adapting them to its own needs and realities. It is essential that the region’s governments collaborate with the private sector and other entities to build a future where AI is a driving force for development and societal well-being.

Perspectives

Long-term Influence in the Region

  • The goal of this project is to serve as a guide for governments in the process of regulating AI, by identifying and analyzing existing regulations related to personal data protection. Additionally, it aims to be a useful tool for citizens by detailing the personal data protected by each regulation, allowing for a better understanding of sensitive data and the prevention of risks associated with its manipulation.

  • Artificial intelligence is in a phase of exponential growth, making it crucial that regulations evolve at the same pace. Otherwise, this technology could become a global catastrophic risk.

  • The study of regions leading the development of generative AI language models could be deepened by analyzing further strategies and recommendations useful to governments just beginning the AI regulation process (as in Latin America), while considering the political and social characteristics of the nations involved. A clear example is China: despite having a culture and system of government different from those of the United States and the European Union, it has implemented effective strategies that have allowed it to successfully commercialize its own services and products.

  • It will also be important to analyze the practices and policies of leading developers like OpenAI, Google, and Anthropic to compare whether they comply with the regulations set by their respective governments and take the necessary measures to prevent risks related to the manipulation of personal data when using generative AI.

  • Another key aspect to analyze is the possibility of establishing criminal liability for developers and providers who violate users’ privacy through their generative AI products and services. To date, administrative sanctions have been the primary control mechanism, but is that enough to regulate innovation in this technology?

  • Finally, it would be valuable to analyze case studies in which regional bodies, such as the Inter-American Court of Human Rights (IACtHR), have weighed in on the regulation of these technologies. While the Court has yet to issue specific rulings on personal data protection in the context of AI, important aspects of its jurisprudence on the right to privacy and data protection should be considered.

Adaptation Based on Technological Advances and New Needs

  • The project has the potential to adapt to new technological advances and government needs through continuous research, similar to organizations like the Global Catastrophic Risk Observatory and Epoch AI, which focus on researching AI’s development trajectory for the benefit of society.

  • The project can also evolve through collaboration with specialists in AI governance, enabling the proposal of regulations or bills for approval.

  • Additionally, the project’s findings could be integrated into national public policies, serving as a foundation for creating specific legal frameworks. This would allow for the initiation or continuation of laws dedicated to AI and data protection, as well as guidelines for algorithm impact assessments and digital education programs for citizens.

  • The research results could also be used to organize conferences and raise public awareness about the importance of protecting personal data when using generative AI, and about preventing the manipulation of such data for fraud or deception that could harm people’s dignity or cause other damage.

Potential Challenges in Implementation or Expansion

  • One of the most significant challenges in implementing new laws, particularly in countries just beginning the regulatory process, is the variability of legal and regulatory frameworks across the region. This disparity can hinder the creation of unified regulations, leading to jurisdictional conflicts or difficulties in cross-border cooperation.

  • Another challenge for Latin America is the sustained financing of projects, especially in countries with limited resources. Moreover, resistance from certain industrial or economic sectors that view regulation as an obstacle could delay or complicate the adoption of the proposed policies.

Opportunities for Future Alliances

  • It is important to foster collaborations with universities and research centers, both locally and internationally, to develop joint research, pilot projects, and promote the training of experts in key areas such as ethics, law, and AI-related technologies.

  • Additionally, encouraging public participation in programs like “Carreras con Impacto” and “Global Catastrophic Risks” would help generate interest in mitigating risks that pose significant threats to humanity and the planet, such as those arising from unsafe AI.

  • Finally, involving the private sector and non-governmental organizations (NGOs) in policy formulation is recommended, ensuring an inclusive perspective that considers the interests of all actors involved in AI development.

Limitations

Time and Availability:

The development of the project was constrained by the time allocated for mentorship and limited personal availability due to parallel responsibilities. This restriction affected the ability to conduct an exhaustive analysis of all the areas originally planned.

Areas Not Addressed:

Due to the aforementioned restrictions, the project did not cover several important aspects, including:

  • A detailed analysis of specific case studies to identify personal data breaches through AI use.

  • Precise identification of the violated articles in these cases according to regulations, limiting a deeper understanding of the current legal framework.

  • The evaluation of the reasonableness of the imposed sanctions, a crucial aspect for understanding current compliance practices and their proportionality to the severity of the infringements.

Access to Resources and Specific Information:

  • The availability and quality of data posed another significant challenge, as it was not always possible to access up-to-date and complete information on the application of sanctions and on specific regulations in each country. China was particularly challenging: the lack of direct access to its regulations on official websites forced reliance on secondary sources, which in turn limited the robustness of the comparative analysis across jurisdictions.

  • Additionally, the constant updates to regulations made it difficult to draw conclusions and generate results.

Methods and Tools Used:

  • The selection of methods for data collection and analysis was also constrained by time and resource limitations. This resulted in greater reliance on document review, making it difficult to integrate advanced data analysis tools or statistical techniques that could have strengthened the study’s findings.

Personal Skills:

  • The project also tested personal skills and abilities, as it demanded competencies across interdisciplinary areas, including those related to AI technological development. This influenced the depth and scope of the sections of the research project focused on technical and development aspects.

Despite the challenges faced, such as time and resource limitations, the project established a solid foundation for analyzing and comparing AI regulations for personal data protection across the United States, the European Union, and China. The results also enabled recommendations applicable to the regulatory frameworks of Latin America and other developing regions to mitigate these risks.

References