Summary of the AI Bill of Rights and Policy Implications

TLDR: In the High Level Overview, I attempt to glean the upshots of the document for the purposes of AI Governance work. Next, Policy Proposal Implications is an attempt to see how the document relates to various current AI Governance policy proposals. Then comes Selected Quotes and Layout, where I copied over all the headers and potentially relevant quotes to provide a way to engage with the document without having to read the whole thing. Finally, there is Further Reading, a collection of documents mentioned in the AIBoR. I hope this helps readers better understand where the executive branch stands on AI and helps in crafting policy change that is (at least somewhat) more likely to make headway.

Epistemic Status: I have read the document in its entirety and spent probably about 8 hours crafting this summary, but I am also new to the AI Governance space, so please take what I say here as my best attempt rather than any definitive take on the document, especially where I try to abstract outward and speak about the AIBoR in relation to AI policy proposals.

High Level Overview

Released as a white paper by the White House Office of Science and Technology Policy in October of 2022, the “Blueprint for an AI Bill of Rights” (AIBoR) contains: a core argument for five principles, notes on how to apply these principles, and a “technical companion” that gives further, concrete steps that can be taken “by many kinds of organizations—from governments at all levels to companies of all sizes” (4) to uphold these values. It’s trying to guide AI policy across sectors and touts itself as a “national values statement” (4)[1].

Upshots from the process

  • They spent a lot of time gathering data and responses not just from a select few experts but from a broad range of businesses, experts across multiple fields, and the public. Repeating a process like this, which lets you make claims like “the American public and experts agree that…”, could be helpful when creating policy suggestions.

Upshots from what they say

  • They are really concerned with sensitive domains[2], where systems that interact with these domains must meet a higher bar. If you can connect generative systems to these domains (e.g. by showing that GPT-4 pulls from information containing some subset of them), then you can avail yourself of the additional rules they suggest, such as:

    • “Designers, developers, and deployers of automated systems should consider limited waivers of confidentiality (including those related to trade secrets) where necessary in order to provide meaningful oversight of systems used in sensitive domains…This includes (potentially private and protected) meaningful access to source code, documentation, and related data during any associated legal discovery, subject to effective confidentiality or court orders.” (51)

    • “Some novel uses of automated systems in this context, where the algorithm is dynamically developing and where the science behind the use case is not well established, may also count as human subject experimentation, and require special review under organizational compliance bodies applying medical, scientific, and academic human subject experimentation ethics rules and governance procedures” (38)

  • They repeatedly reference the “American public’s rights” so appealing to how generative models may violate rights like privacy could be a good leverage point

  • A small but noteworthy point they mention is that the impact of AIs may be most visible “at the community level,” so “the harms of automated systems should be evaluated, protected against, and redressed at both the individual and community levels” (10). So even though they focus mostly on individual rights, reasoning from community- or population-level harm or benefit isn’t totally off the table.

  • Though it may seem like the AIBoR’s general definition of an automated system[3] would apply to AGI, generative systems like GPT are suspiciously absent from the document’s list of example automated systems[4], which makes it seem likely that they were intentionally left out of this framework, though I’m unsure why. Some senators picked up on this too and wrote a letter inquiring into the omission, but I’m currently unaware of any response from the OSTP.

The Five Principles and Their AI x-risk Applications

  1. Safe and Effective Systems

    1. This is by far the best principle to focus on. It has the most language amenable to adaptation to AI x-risk, and I would recommend at least reading my selected quotes from this principle below for more context.

  2. Algorithmic Discrimination Protections

    1. You could potentially leverage this principle to focus on how generative models have been discriminatory, but there isn’t much here to relate to x-risk, and you’d probably have trouble making the extension as LLMs from labs like OpenAI get better at avoiding these sorts of pitfalls.

    2. Perhaps you could also argue that interpretability is needed to remedy discrimination, pulling this together with the Notice and Explanation principle, but I’m not sure how successful this might be.

  3. Data Privacy

    1. The failure of LLMs to obtain consent could be leveraged under this principle, but you’d have to figure out how obtaining consent could be applied to LLMs in a way that isn’t nonsensical.

  4. Notice and Explanation

    1. On the notice side, you could apply this to generative models, but all this does is force developers to disclose when the model is being used.

    2. On the explanation side, you could try to extend this to interpretability, where application to generative models would force creators to have a deep understanding of how each output was achieved. But this application doesn’t entirely fit with the other example cases given.

  5. Human Alternatives, Consideration, and Fallback

    1. I don’t think this principle will be helpful beyond a few random quotes

Next steps

  • See the Further Reading at the bottom of this document for other documents mentioned in the AIBoR, which mostly detail the positions on AI of various institutions within the government

  • The panel attendees section lists all the people who played a role in the multitude of panels that informed the creation of this document. This could be mined for potential outreach opportunities, though many attendees are from disciplines outside AI

  • This podcast with Alondra Nelson, the person who spearheaded this project, would probably be fruitful

Policy Proposal Implications

If I had the time I’d go through Zach Stein-Perlman’s entire list, but in lieu of that I’ve created categories that attempt to capture some common policy proposals, with descriptions for anyone who may not be familiar. The list isn’t exhaustive, but it hopefully captures a fairly broad range. Below each category I assess the relevance of the AIBoR, mostly considering whether such a method is mentioned in the AIBoR, but also whether there are other sentiments that might support it. At a glance, the AIBoR doesn’t lend support to most of the various AI policy proposals; the exceptions are decided support for Regulation by Government, Regulation from Within, and Auditing.

  1. Hardware Controls: targeting specific changes to hardware that help impede certain worst cases or cap the capabilities of the hardware

    1. Assessment: Nothing here supports this; hardware is rarely (if at all) mentioned.

  2. Monitoring: tracking something that you might not want to (or can’t) regulate, like stocks of cutting-edge chips or the state of frontier AI development in other countries

    1. Assessment: There is some support for ongoing monitoring, with a whole section dedicated to it under the Safe and Effective Systems principle. But this is mostly monitoring as a means to trigger specific regulation, where the monitoring is focused on things you could control or alter, so there is not much support for this category.

  3. Regulation from Within: having some sort of internal process for risk assessment or prevention, that is created, or at least implemented, by some part of the company

    1. Assessment: While this wasn’t a prominent type of proposal among the policy proposals below (even the one mentioned is specifically an internal auditing proposal, a mix between two categories), there is a lot in the AIBoR to support this category. Many times throughout, they speak of things companies should do in a way that can be ambiguous as to whether the directive should be fulfilled by governmental regulation or instead taken as an opportunity for companies to proactively build out safe practices themselves. On the one hand, sections like Clear Organizational Oversight (19) seem to indicate the latter, with the AIBoR giving directives to be fulfilled by companies from within. The Reporting subheading that appears under each principle is a bit more ambiguous: each Reporting section gives rough guidelines for reports that companies produce themselves (and make open to the public). But put this into conversation with the “How These Principles Can Move Into Practice” sections at the end of each principle, which give examples of how the principle and its subheadings play out in the world. These sections continually point to Regulation by Government and only twice (21, 29) mention an example of Regulation from Within as a successful fulfillment, indicating that Regulation from Within may be a helpful step but might not be enough in the end.

  4. Regulation by Government: these are proposals where the government would handle the given regulatory procedure, whether it be setting requirements for information security to prevent AI model leaks, or setting up incident reporting similar to what the FAA does after a plane crash

    1. Assessment: My assessment here goes hand in hand with that of Regulation from Within. Throughout the AIBoR, they give suggestions, like those found in the Reporting sections, that seem like they could factor in as a step in Regulation by Government, but as elsewhere they stop short of saying so. They say the reports should be “open to the public” but fail to say who the reports should be for: whether they should be reviewed by the government or some other entity, or whether just publishing the report is enough. There are multiple instances of this ambiguity throughout the suggestions they make, but what solidifies Regulation by Government as a method supported by the AIBoR is the “How These Principles Can Move Into Practice” sections mentioned before. Nearly 90% of the examples there are examples of Regulation by Government, often implemented by one of the independent agencies of the US government (like the National Science Foundation or the National Highway Traffic Safety Administration), seemingly indicating this is one of the best avenues to follow for future AI policy proposals. My best guess is that they are trying to craft principles that can be broadly implemented by a wide range of actors in a wide range of situations, but that when it comes down to it, the best implementations they envision are ones where the government takes the role of crafter and implementer of these policies.

  5. Regulation by Legal Code: this could also be a subsection under Regulation by Government, but basically involves changes to the legal code like clearly spelling out who is responsible for harms done by AI

    1. Assessment: Any talk of legal liability in the AIBoR references existing laws, and it proposes no changes to the legal code that might codify liability for harms done by AI, so this category isn’t really supported.

  6. Licensing: again a category that could probably function as a subsection under Regulation by Government, but kept separate because it’s an open possibility that third parties could do the licensing. These proposals focus on precaution: making sure those working on frontier AI models (perhaps more specifically, those amassing large amounts of cutting-edge chips) go through some sort of training or process first to make sure they are prepared to handle such risks

    1. Assessment: Licensing is mentioned nowhere in the AIBoR, so there is nearly no grounding for it.

  7. Auditing: having an individual or group of individuals who evaluate whether an organization is following its promised procedures and safety protocols, whether by assessing data or testing the model itself

    1. Assessment: This solution is mentioned directly in the Independent evaluation sections of each of the first three principles, where they support a certain sort of independent or third-party auditing to assess a variety of metrics they’ve put forth. Some instances are more amenable to AI x-risk oversight, like the independent evaluations mentioned in the Safe and Effective Systems section, and others less so, like the independent evaluations mentioned in the Algorithmic Discrimination Protections section, where the assessment is geared specifically towards making sure the system is behaving in a non-discriminatory way. There is support for this assessment happening both pre- and post-deployment, as mentioned specifically in the Ethical review and use prohibitions (38) section of the Data Privacy principle, a section that also highlights the possibility of the auditing being tied not just to technical considerations but to ethical ones as well.

  8. Funding Preventative Work: mostly just funding alignment research, but also efforts towards interpretability and to improve model evaluation

    1. Assessment: There is pretty much nothing here to support this, as almost all of the AIBoR is aimed at the application layer of the product cycle, and the little that is aimed at pre-deployment assessment focuses more on Regulation by Government or Auditing.

Selected Quotes and Layout

This section presents relevant quotes from the document, largely in chronological order. It also gathers the section headers for each of the five principles to give a quicker sense of the overall arc of the document, with the relevant quotes from the summary of each principle placed right below it.

General Quotes

  • “It is intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.” (2)

  • “The appropriate application of the principles set forth in this white paper depends significantly on the context in which automated systems are being utilized. In some circumstances, application of these principles in whole or in part may not be appropriate given the intended use of automated systems to achieve government agency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of automated systems in certain settings such as AI systems used as part of school building security or automated health diagnostic systems” (2)

  • “This white paper recognizes that national security (which includes certain law enforcement and homeland security activities) and defense activities are of increased sensitivity and interest to our nation’s adversaries and are often subject to special requirements, such as those governing classified information and other protected data. Such activities require alternative, compatible safeguards through existing policies that govern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and Responsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and Framework. The implementation of these policies to national security and defense activities can be informed by the Blueprint for an AI Bill of Rights where feasible.” (2)

  • “Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public…These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent. ” (3)

    • “Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients.” (3)

  • From the outset, the framing is focused on an OpenAI flavor of AI safety, where the worries are about how AI might “threaten the rights of the American public” (3), with examples of AI harm like the “harmful bias” in hiring and credit decision algorithms or the “undermining of privacy” associated with “unchecked social media data collection” (3).

  • They are at least somewhat familiar with value alignment when it comes to AI, mentioning “these principles are a blueprint for building and deploying automated systems that are aligned with democratic values and protect civil rights, civil liberties, and privacy” (4).

  • “Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions” (8)

  • “The Blueprint for an AI Bill of Rights is an exercise in envisioning a future where the American public is protected from the potential harms, and can fully enjoy the benefits, of automated systems. It describes principles that can help ensure these protections. Some of these protections are already required by the U.S. Constitution or implemented under existing U.S. laws. For example, government surveillance, and data search and seizure are subject to legal requirements and judicial oversight. There are Constitutional requirements for human review of criminal investigative matters and statutory requirements for judicial review. Civil rights laws protect the American people against discrimination.” (8)

  • “An “automated system” is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities” (10)

  • “AI and other data-driven automated systems most directly collect data on, make inferences about, and may cause harm to individuals. But the overall magnitude of their impacts may be most readily visible at the level of communities. Accordingly, the concept of community is integral to the scope of the Blueprint for an AI Bill of Rights. United States law and policy have long employed approaches for protecting the rights of individuals, but existing frameworks have sometimes struggled to provide protections when effects manifest most clearly at a community level. For these reasons, the Blueprint for an AI Bill of Rights asserts that the harms of automated systems should be evaluated, protected against, and redressed at both the individual and community levels.” (10)

1. Safe and Effective Systems

“Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards…Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.” (5, 15)

  • Consultation

    • “The public should be consulted in the design, implementation, deployment, acquisition, and maintenance phases of automated system development, with emphasis on early-stage consultation before a system is introduced or a large change implemented” (18)

  • Testing

  • Risk identification and mitigation

  • Ongoing monitoring

    • “Automated systems should have ongoing monitoring procedures, including recalibration procedures, in place to ensure that their performance does not fall below an acceptable level over time, based on changing real-world conditions or deployment contexts, post-deployment modification, or unexpected conditions. This ongoing monitoring should include continuous evaluation of performance metrics and harm assessments, updates of any systems, and retraining of any machine learning models as necessary, as well as ensuring that fallback mechanisms are in place to allow reversion to a previously working system” (19)

  • Clear organizational oversight.

    • “Entities responsible for the development or use of automated systems should lay out clear governance structures and procedures. This includes clearly-stated governance procedures before deploying the system, as well as responsibility of specific individuals or entities to oversee ongoing assessment and mitigation” (19)

    • “In some cases, it may be appropriate for an independent ethics review to be conducted before deployment.” (19)

  • Relevant and High-Quality Data

    • “Additionally, justification should be documented for each data attribute and source to explain why it is appropriate to use” (19)

  • Independent evaluation.

    • “Automated systems should be designed to allow for independent evaluation (e.g., via application programming interfaces). Independent evaluators, such as researchers, journalists, ethics review boards, inspectors general, and third-party auditors, should be given access to the system and samples of associated data, in a manner consistent with privacy, security, law, or regulation (including, e.g., intellectual property law), in order to perform such evaluations. Mechanisms should be included to ensure that system access for evaluation is: provided in a timely manner to the deployment-ready version of the system; trusted to provide genuine, unfiltered access to the full system; and truly independent such that evaluator access cannot be revoked without reasonable and verified justification” (20)

  • Reporting (see the illustrative sketch at the end of this section)

    • “Entities responsible for the development or use of automated systems should provide regularly-updated reports that include: an overview of the system, including how it is embedded in the organization’s business processes or other activities, system goals, any human-run procedures that form a part of the system, and specific performance expectations; a description of any data used to train machine learning models or for other purposes, including how data sources were processed and interpreted, a summary of what data might be missing, incomplete, or erroneous, and data relevancy justifications; the results of public consultation such as concerns raised and any decisions made due to these concerns; risk identification and management assessments and any steps taken to mitigate potential harms; the results of performance testing including, but not limited to, accuracy, differential demographic impact, resulting error rates (overall and per demographic group), and comparisons to previously deployed systems; ongoing monitoring procedures and regular performance testing reports, including monitoring frequency, results, and actions taken; and the procedures for and results from independent evaluations. Reporting should be provided in a plain language and machine-readable manner” (20)

  • Examples of this principle in play

    • “The law and policy landscape for motor vehicles shows that strong safety regulations—and measures to address harms when they occur—can enhance innovation in the context of complex technologies. Cars, like automated digital systems, comprise a complex collection of components. The National Highway Traffic Safety Administration, through its rigorous standards and independent evaluation, helps make sure vehicles on our roads are safe without limiting manufacturers’ ability to innovate. At the same time, rules of the road are implemented locally to impose contextually appropriate requirements on drivers, such as slowing down near schools or playgrounds.” (21)

    • “The National Science Foundation (NSF) funds extensive research to help foster the development of automated systems that adhere to and advance their safety, security and effectiveness” (22)

    • “Some state legislatures have placed strong transparency and validity requirements on the use of pretrial risk assessments.” (22)
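
One illustrative note on the Reporting guidance above: the AIBoR asks that reports be provided “in a plain language and machine-readable manner” (20). Purely as a sketch of my own (the AIBoR specifies no format, and every field name below is my invention), the report contents listed in that quote could be laid out as machine-readable data roughly like this:

```python
# Purely illustrative sketch: one possible machine-readable layout for the
# "regularly-updated reports" described under the Reporting subheading above.
# The AIBoR does not prescribe a schema; all field names are invented here.
import json

report = {
    "system_overview": {
        "embedding_in_business_processes": "How the system fits into the organization's activities.",
        "system_goals": "Stated goals of the automated system.",
        "human_run_procedures": [],
        "performance_expectations": "Specific performance expectations.",
    },
    "training_data": {
        "sources_and_processing": "How data sources were processed and interpreted.",
        "known_gaps": "What data might be missing, incomplete, or erroneous.",
        "relevancy_justifications": [],
    },
    "public_consultation": {"concerns_raised": [], "decisions_made": []},
    "risk_management": {"identified_risks": [], "mitigation_steps": []},
    "performance_testing": {
        "accuracy": None,
        "error_rate_overall": None,
        "error_rates_per_demographic_group": {},
        "comparison_to_previous_system": None,
    },
    "ongoing_monitoring": {"frequency": None, "results": [], "actions_taken": []},
    "independent_evaluations": [],
}

# Serializing to JSON gives a machine-readable artifact; the plain-language
# version of the same report would be written separately for a general audience.
print(json.dumps(report, indent=2))
```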

2. Algorithmic Discrimination Protections

“Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on any protected status (race, sex, religion, age, disability, etc.)...Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.” (5, 23)

  • Proactive assessment of equity in design (26)

    • Most issues mentioned here are quite specific to near-term AI risk involving discrimination, where assessments are again posed as a solution: conducting “both qualitative and quantitative evaluations of the system” (26)

  • Representative and robust data (26)

  • Guarding against proxies (26)

  • Ensuring accessibility during design, development, and deployment (27)

  • Disparity assessment (27)

    • “For every instance where the deployed automated system leads to different treatment or impacts disfavoring the identified groups, the entity governing, implementing, or using the system should document the disparity and a justification for any continued use of the system.” (27)

  • Disparity mitigation (27)

  • Ongoing monitoring and mitigation (27)

  • Independent evaluation (27)

    • “As described in the section on Safe and Effective Systems, entities should allow independent evaluation of potential algorithmic discrimination caused by automated systems they use or oversee. In the case of public sector uses, these independent evaluations should be made public unless law enforcement or national security restrictions prevent doing so.”

  • Reporting (27)

3. Data Privacy

“Ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected…Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first…surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties.” (6, 30)

  • Privacy by design and by default (33)

  • Data collection and use-case scope limits (33)

  • Risk identification and mitigation (33)

  • Heightened oversight of surveillance. (34)

  • Limited and proportionate surveillance (34)

  • Scope limits on surveillance to protect rights and democratic values (34)

  • Use-specific consent (34)

  • Brief and direct consent requests (34)

  • Data access and correction (35)

  • Consent withdrawal and data deletion (35)

  • Automated system support (35)

  • Independent evaluation. (35)

    • “As described in the section on Safe and Effective Systems, entities should allow independent evaluation of the claims made regarding data policies. These independent evaluations should be made public whenever possible.”

  • Reporting (35)

  • Extra protections related to sensitive domains (36)

    • “Data and metadata generated by or about those who are not yet legal adults is also sensitive, even if not related to a sensitive domain” (36)

    • Necessary functions only (38)

    • Ethical review and use prohibitions (38)

      • “Any use of sensitive data or decision process based in part on sensitive data that might limit rights, opportunities, or access, whether the decision is automated or not, should go through a thorough ethical review and monitoring, both in advance and by periodic review (e.g., via an independent ethics committee or similarly robust process).”

      • “Some novel uses of automated systems in this context, where the algorithm is dynamically developing and where the science behind the use case is not well established, may also count as human subject experimentation, and require special review under organizational compliance bodies applying medical, scientific, and academic human subject experimentation ethics rules and governance procedures”

    • Data quality (38)

    • Limit access to sensitive data and derived data (38)

    • Reporting (38)

4. Notice and Explanation

“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you…Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.” (6, 40)

  • Generally accessible plain language documentation (43)

  • Accountable (43)

  • Timely and up-to-date (43)

  • Brief and clear (43)

  • Tailored to the purpose (43)

  • Tailored to the target of the explanation (43)

  • Tailored to the level of risk (44)

    • “In settings where the consequences are high as determined by a risk assessment, or extensive oversight is expected (e.g., in criminal justice or some public sector settings), explanatory mechanisms should be built into the system design so that the system’s full behavior can be explained in advance (i.e., only fully transparent models should be used), rather than as an after-the-decision interpretation. In other settings, the extent of explanation provided should be tailored to the risk level.”

  • Valid (44)

  • Reporting (44)

  • Example of this in play

    • “Lenders are required by federal law to notify consumers about certain decisions made about them…The CFPB has also asserted that “[t]he law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.” (45)

5. Human Alternatives, Consideration, and Fallback

“Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access…Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions” (7, 46)

  • “No matter how rigorously an automated system is tested, there will always be situations for which the system fails. The American public deserves protection via human review against these outlying or unexpected scenarios.” (47)

  • Brief, clear, accessible notice and instructions (49)

  • Human alternatives provided when appropriate (49)

  • Proportionate (49)

  • Accessible (49)

  • Convenient (49)

  • Equitable (50)

  • Timely (50)

  • Effective (50)

  • Maintained (50)

  • Training and assessment (50)

    • “Anyone administering, interacting with, or interpreting the outputs of an automated system should receive training in that system, including how to properly interpret outputs of a system in light of its intended purpose”

  • Oversight (50)

  • Implement additional human oversight and safeguards for automated systems related to sensitive domains (51)

    • Narrowly scoped data and inferences (51)

    • Tailored to the situation (51)

    • Human consideration before any high-risk decision (51)

    • Meaningful access to examine the system. (51)

      • “Designers, developers, and deployers of automated systems should consider limited waivers of confidentiality (including those related to trade secrets) where necessary in order to provide meaningful oversight of systems used in sensitive domains…This includes (potentially private and protected) meaningful access to source code, documentation, and related data during any associated legal discovery, subject to effective confidentiality or court orders.”

    • Reporting (51)

Examples of automated systems (53)

  • Civil rights, civil liberties, or privacy, including but not limited to:

    • Speech-related systems such as automated content moderation tools

    • Surveillance and criminal justice system algorithms such as risk assessments, predictive policing, automated license plate readers, real-time facial recognition systems (especially those used in public places or during protected activities like peaceful protests), social media monitoring, and ankle monitoring devices

    • Voting-related systems such as signature matching tools

    • Systems with a potential privacy impact such as smart home systems and associated data, systems that use or collect health-related data, systems that use or collect education-related data, criminal justice system data, ad-targeting systems, and systems that perform big data analytics in order to build profiles or infer personal information about individuals

    • Any system that has the meaningful potential to lead to algorithmic discrimination

  • Equal opportunities, including but not limited to:

    • Education-related systems such as algorithms that purport to detect student cheating or plagiarism, admissions algorithms, online or virtual reality student monitoring systems, projections of student progress or outcomes, algorithms that determine access to resources or programs, and surveillance of classes (whether online or in-person)

    • Housing-related systems such as tenant screening algorithms, automated valuation systems that estimate the value of homes used in mortgage underwriting or home insurance, and automated valuations from online aggregator websites

    • Employment-related systems such as workplace algorithms that inform all aspects of the terms and conditions of employment including, but not limited to, pay or promotion, hiring or termination algorithms, virtual or augmented reality workplace training programs, and electronic workplace surveillance and management systems

  • Access to critical resources and services, including but not limited to:

    • Health and health insurance technologies such as medical AI systems and devices, AI-assisted diagnostic tools, algorithms or predictive models used to support clinical decision making, medical or insurance health risk assessments, drug addiction risk assessments and associated access algorithms, wearable technologies, wellness apps, insurance care allocation algorithms, and health insurance cost and underwriting algorithms

    • Financial system algorithms such as loan allocation algorithms, financial system access determination algorithms, credit scoring systems, insurance algorithms including risk assessments, automated interest rate determinations, and financial algorithms that apply penalties (e.g., that can garnish wages or withhold tax returns)

    • Systems that impact the safety of communities such as automated traffic control systems, electrical grid controls, smart city technologies, and industrial emissions and environmental impact control algorithms

    • Systems related to access to benefits or services or assignment of penalties such as systems that support decision-makers who adjudicate benefits such as collating or analyzing information or matching records, systems which similarly assist in the adjudication of administrative or criminal penalties, fraud detection algorithms, services or benefits access control algorithms, biometric systems used as access control, and systems which make benefits or services related decisions on a fully or partially autonomous basis (such as a determination to revoke benefits).

Panel attendees:

  • Panel 1: Consumer Rights and Protections.

    • Rashida Richardson, Senior Policy Advisor for Data and Democracy, White House Office of Science and Technology Policy

    • Devin E. Willis, Attorney, Division of Privacy and Identity Protection, Bureau of Consumer Protection, Federal Trade Commission

    • Tamika L. Butler, Principal, Tamika L. Butler Consulting

    • Jennifer Clark, Professor and Head of City and Regional Planning, Knowlton School of Engineering, Ohio State University

    • Carl Holshouser, Senior Vice President for Operations and Strategic Initiatives, TechNet

    • Surya Mattu, Senior Data Engineer and Investigative Data Journalist, The Markup

    • Mariah Montgomery, National Campaign Director, Partnership for Working Families

  • Panel 2: The Criminal Justice System

    • Suresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science and Technology Policy

    • Ben Winters, Counsel, Electronic Privacy Information Center

    • Chiraag Bains, Deputy Assistant to the President on Racial Justice & Equity

    • Sean Malinowski, Director of Policing Innovation and Reform, University of Chicago Crime Lab

    • Kristian Lum, Researcher

    • Jumana Musa, Director, Fourth Amendment Center, National Association of Criminal Defense Lawyers

    • Stanley Andrisse, Executive Director, From Prison Cells to PHD; Assistant Professor, Howard University College of Medicine

    • Myaisha Hayes, Campaign Strategies Director, MediaJustice

  • Panel 3: Equal Opportunities and Civil Justice

    • Rashida Richardson, Senior Policy Advisor for Data and Democracy, White House Office of Science and Technology Policy

    • Dominique Harrison, Director for Technology Policy, The Joint Center for Political and Economic Studies

    • Jenny Yang, Director, Office of Federal Contract Compliance Programs, Department of Labor

    • Christo Wilson, Associate Professor of Computer Science, Northeastern University

    • Frida Polli, CEO, Pymetrics

    • Karen Levy, Assistant Professor, Department of Information Science, Cornell University

    • Natasha Duarte, Project Director, Upturn

    • Elana Zeide, Assistant Professor, University of Nebraska College of Law

    • Fabian Rogers, Constituent Advocate, Office of NY State Senator Jabari Brisport and Community Advocate and Floor Captain, Atlantic Plaza Towers Tenants Association

  • Panel 4: Artificial Intelligence and Democratic Values

    • Sorelle Friedler, Assistant Director for Data and Democracy, White House Office of Science and Technology Policy

    • J. Bob Alotta, Vice President for Global Programs, Mozilla Foundation

    • Navrina Singh, Board Member, Mozilla Foundation

    • Kathy Pham Evans, Deputy Chief Technology Officer for Product and Engineering, U.S. Federal Trade Commission

    • Liz O’Sullivan, CEO, Parity AI

    • Timnit Gebru, Independent Scholar

    • Jennifer Wortman Vaughan, Senior Principal Researcher, Microsoft Research, New York City

    • Pamela Wisniewski, Associate Professor of Computer Science, University of Central Florida; Director, Socio-technical Interaction Research (STIR) Lab

    • Seny Kamara, Associate Professor of Computer Science, Brown University

  • Panel 5: Social Welfare and Development.

    • Suresh Venkatasubramanian, Assistant Director for Science and Justice, White House Office of Science and Technology Policy

    • Anne-Marie Slaughter, CEO, New America

    • Michele Evermore, Deputy Director for Policy, Office of Unemployment Insurance Modernization, Office of the Secretary, Department of Labor

    • Blake Hall, CEO and Founder, ID.Me

    • Karrie Karahalios, Professor of Computer Science, University of Illinois, Urbana-Champaign

    • Christiaan van Veen, Director of Digital Welfare State and Human Rights Project, NYU School of Law’s Center for Human Rights and Global Justice

    • Julia Simon-Mishel, Supervising Attorney, Philadelphia Legal Assistance

    • Dr. Zachary Mahafza, Research & Data Analyst, Southern Poverty Law Center

    • J. Khadijah Abdurahman, Tech Impact Network Research Fellow, AI Now Institute, UCLA C2I1, and UWA Law School

  • Panel 6: The Healthcare System

    • Alondra Nelson, Deputy Director for Science and Society, White House Office of Science and Technology Policy

    • Patrick Gaspard, President and CEO, Center for American Progress

    • Micky Tripathi, National Coordinator for Health Information Technology, U.S. Department of Health and Human Services

    • Mark Schneider, Health Innovation Advisor, ChristianaCare

    • Ziad Obermeyer, Blue Cross of California Distinguished Associate Professor of Policy and Management, University of California, Berkeley School of Public Health

    • Dorothy Roberts, George A. Weiss University Professor of Law and Sociology and the Raymond Pace and Sadie Tanner Mossell Alexander Professor of Civil Rights, University of Pennsylvania

    • David Jones, A. Bernard Ackerman Professor of the Culture of Medicine, Harvard University

    • Jamila Michener, Associate Professor of Government, Cornell University; Co-Director, Cornell Center for Health Equity

Further Reading

And finally, thanks to Jakub Kraus for his help over multiple iterations of this document; the guidance was quite helpful and much appreciated.

  1. ^

    Citation format is (page number of AIBoR), so (4) is page 4 of the document.

  2. ^

    Namely, health, work, education, criminal justice, and finance, and data pertaining to youth

  3. ^

    An automated system is “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities” including those derived from “machine learning” or other “artificial intelligence techniques” (10)

  4. ^

    As this document was released in October 2022, they should have been aware of GPT-3 and potentially other LLMs already released at the time (e.g., Chinchilla)
