- As artificial intelligence (AI) and generative AI (GAI) continue to evolve and become integral to business operations, companies must be mindful of the risks associated with deploying AI solutions.
- Although there is not yet a comprehensive law governing AI, regulators already have tools to hold businesses accountable. Regulators are focused on transparent and explainable AI solutions so that consumers and key stakeholders understand how these systems operate and make decisions.
- Because regulators are focused on these requirements, businesses should develop AI solutions aligned with industry standards, such as the NIST AI Risk Management Framework.
What Are Transparency and Explainability?
Transparency refers to a user’s ability to understand how the AI model was built, how data is used and processed, and how that data affects the model’s internal weights and biases. In other words, transparency enables users to understand “what happened” in the system.1 For example, if an AI solution hallucinates, a user should be able to identify what caused the hallucination. Further, transparency allows users to act when an AI solution generates incorrect content or could otherwise lead to negative consequences.2 Because transparent AI systems enable users to better understand how content is generated, they increase user trust and confidence in the system’s capabilities.3
Explainability refers to understanding “how” the GAI system made a decision, and it makes it easier for others to describe a model, its expected impact, and its potential biases. Like transparency, an explainable GAI model increases users’ trust in the model’s outputs.4 There are various types of explainability, including global-level and local-level explainability.5 Global-level explainability is a high-level understanding of the algorithm’s overall behavior.6 Conversely, local-level explainability is an understanding of how the algorithm reached a decision about a specific individual, such as a decision on that individual’s credit or job application.7
Transparency and explainability are related concepts but are not interchangeable terms. Transparency provides a user with information about how a solution makes decisions, which allows for external auditing and evaluation. Explainability, by contrast, offers a user a rational justification, in terms humans can understand, for why a solution made a particular decision. Both are necessary components of AI governance and increase the credibility and trustworthiness of AI solutions.
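To make the distinction concrete, the short Python sketch below contrasts a global-level view (which features drive a hypothetical credit-scoring model overall) with a local-level view (why the model scored one particular applicant the way it did). The feature names, weights, and applicant values are invented for illustration; real AI solutions are far more complex and typically require dedicated explainability tooling.

```python
# Illustrative sketch only: contrasts global- and local-level explainability
# for a hypothetical credit-scoring model. All names and numbers are assumptions.
import numpy as np

FEATURES = ["credit_score", "debt_to_income", "missed_payments"]
WEIGHTS = np.array([0.8, -0.5, -1.2])   # hypothetical learned coefficients
BIAS = 0.1

def predict_approval(x: np.ndarray) -> float:
    """Return the model's probability of approving an applicant."""
    return 1.0 / (1.0 + np.exp(-(WEIGHTS @ x + BIAS)))

# Global-level explainability: which features drive the model overall?
global_importance = dict(zip(FEATURES, np.abs(WEIGHTS)))
print("Global view (coefficient magnitudes):", global_importance)

# Local-level explainability: why did the model score THIS applicant this way?
applicant = np.array([0.9, 0.4, 1.0])   # hypothetical standardized feature values
local_contributions = dict(zip(FEATURES, WEIGHTS * applicant))
print("Approval probability:", round(predict_approval(applicant), 3))
print("Local view (per-feature contributions):", local_contributions)
```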
Using Transparency and Explainability to Manage Risk
A significant focus surrounding transparent and explainable AI solutions is managing risk. Many AI solutions are inherently opaque, akin to a black box. AI solutions such as GAI models are trained on enormous numbers of inputs, with the largest models trained on corpora measured in the trillions of tokens. Those inputs are used to adjust the connections within a neural network, which enable the model to find patterns in the data, make predictions, and generate content. The opaque nature of some AI solutions can make it difficult to manage risk and fully understand how data is used and processed, even for the machine learning researchers who build them. However, the inability to understand an AI solution is not a sufficient defense to legal liability.
To effectively manage risk with opaque systems, businesses need to document design decisions, the training data, the structure of the model, its intended use cases, and how, when, and by whom deployment, post-deployment, and end-user choices were made. Further, businesses should consider adopting a policy that notifies a human operator when a potential or actual negative outcome caused by a GAI system is detected.8 Finally, companies must ensure that the data they use does not infringe on copyrighted works.9
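As a purely illustrative example of what such documentation might look like in practice, the sketch below defines a simple, machine-readable documentation record for a hypothetical model. The field names and values are assumptions, not a required or standard schema; the point is that design decisions, training data sources, intended uses, and escalation policies can be captured in a form that is easy to audit.

```python
# Minimal, hypothetical sketch of a "model card"-style documentation record.
# Field names are illustrative, not a mandated or standard schema.
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class ModelDocumentation:
    model_name: str
    version: str
    intended_use_cases: list[str]
    training_data_sources: list[str]      # provenance of training data
    design_decisions: list[str]           # key architecture/feature choices and rationale
    known_limitations: list[str]
    human_escalation_policy: str          # who is notified when a negative outcome is detected
    approved_by: str
    approval_date: date
    change_log: list[str] = field(default_factory=list)

doc = ModelDocumentation(
    model_name="customer-support-assistant",
    version="1.2.0",
    intended_use_cases=["drafting responses to routine support tickets"],
    training_data_sources=["licensed support-ticket archive", "public product manuals"],
    design_decisions=["fine-tuned an existing base LLM rather than training from scratch"],
    known_limitations=["may hallucinate product details; outputs require human review"],
    human_escalation_policy="Operator alerted when an output is flagged as potentially harmful.",
    approved_by="AI Governance Committee",
    approval_date=date(2023, 9, 18),
)
print(json.dumps(asdict(doc), default=str, indent=2))
```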
Transparency and Explainability with Third-Party Solutions
Another challenge for businesses deploying AI solutions is managing third-party risk. AI as a service will continue to grow, and companies must be mindful of the risks associated with engaging third parties to develop or operate their AI solutions.10 Contracts with third parties should include governance structures, risk assessment frameworks, monitoring and auditing protocols, and technical safeguards.11 Further, businesses should maintain their own policies and procedures governing the use of third-party solutions, including evaluation criteria and technical safeguards. Companies should proceed with caution if a third party is not transparent about the risk metrics or methodologies used to develop or train the AI solution, as that opacity itself presents considerable risk.
Transparency and Explainability Regulatory Landscape
Although there is not yet a comprehensive law governing AI, the European Union (EU) recently voted to approve the Artificial Intelligence Act, which establishes rules surrounding explainability and transparency in AI applications. The Artificial Intelligence Act contains global-level explainability requirements. The Act requires technical documentation of an AI system, including, but not limited to, general and detailed descriptions of the AI system; detailed information about the monitoring, functioning, and control of the AI system; a detailed description of the risk management system; and a description of any change made to the system over its lifecycle.12
Further, the EU General Data Protection Regulation (GDPR) contains transparency requirements related to automated decision-making, and controllers that engage in automated decision-making must comply with them. Under the GDPR, automated decision-making means making a decision about a data subject solely by automated means, without human involvement.13 The GDPR prohibits solely automated decision-making that produces legal or similarly significant effects unless the decision is (i) necessary for the performance of a contract, (ii) authorized by EU or member state law, or (iii) based on the data subject’s explicit consent.14 Controllers must also process personal data in a transparent manner, which includes the data subject’s right to be informed of the controller’s identity and the nature of the processing, whether their personal data is being processed and, if so, the purposes of that processing, and any personal data breach that is likely to result in a high risk to their rights and freedoms.15
Beyond the EU, federal regulators in the U.S. also want businesses to provide documentation related to their AI solutions. For example, the U.S. Federal Trade Commission (FTC) recently opened an investigation into OpenAI, the creator of ChatGPT, for potential violations of Section 5 of the FTC Act.16 In short, the FTC is requiring OpenAI to provide detailed descriptions of each large language model (LLM) product, the data used to train its LLMs, the policies and procedures followed to assess the risk and safety of new LLM products, how it prevents personal information from being included in LLM training data, and how its LLM products generate information or statements about individuals.17
State and local regulators are also focused on transparent and explainable AI solutions. New York City passed Local Law 144, enacted on May 6, 2023. The law governs the use of Automated Employment Decision Tools (AEDTs) and makes it unlawful for an employer or employment agency to use an AEDT to screen candidates and employees unless (i) the tool has undergone a bias audit no more than one year before its use, (ii) a summary of the most recent bias audit is made publicly available, and (iii) each candidate and employee who resides in New York City receives notice of the AEDT’s use and an opportunity to request an alternative selection process.18
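Bias audits of this kind generally compare selection rates across demographic categories. The sketch below shows a simplified impact-ratio calculation using invented counts; the categories and numbers are hypothetical, and the DCWP’s implementing rules define the calculations an actual audit must perform.

```python
# Simplified, hypothetical impact-ratio calculation of the kind used in bias audits.
# Counts are invented; the NYC DCWP rules define the calculations actually required.

selections = {
    # category: (number of applicants selected, number of applicants in the category)
    "category_a": (48, 120),
    "category_b": (30, 100),
    "category_c": (15, 80),
}

# Selection rate per category.
rates = {cat: selected / total for cat, (selected, total) in selections.items()}

# Impact ratio: each category's selection rate divided by the highest selection rate.
highest = max(rates.values())
impact_ratios = {cat: rate / highest for cat, rate in rates.items()}

for cat in selections:
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {impact_ratios[cat]:.2f}")
```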
Recent regulatory actions demonstrate the importance of transparent and explainable AI solutions. In 2019, consumer complaints surfaced regarding Apple Card’s creditworthiness decisions. Consumers claimed that Apple Card’s creditworthiness determinations violated the federal Equal Credit Opportunity Act because women received substantially lower credit limits than men.19 Consumers further alleged that Apple relied on algorithms and machine learning that Apple employees could not explain.20 The New York Department of Financial Services (NYDFS) opened an investigation into Apple’s underwriting data and ultimately determined that there was no evidence of discrimination.21 As part of the investigation, Apple and its partner bank had to provide their policies related to creditworthiness determinations and their underwriting data.22 The bank provided its policies and explained its creditworthiness decisions for each consumer who complained.23 The bank identified the factors it relies on to make creditworthiness determinations, such as credit score, indebtedness, income, credit utilization, and missed payments, among other credit history elements.24 Further, the NYDFS found that, based on these factors, men and women with similar credit histories received similar credit limits.25
Beyond New York, California also intends to regulate AI. Earlier this month, the California Privacy Protection Agency (CPPA) released draft regulations related to risk assessments under the CPRA.26 Businesses that use artificial intelligence or automated decisionmaking technology will be subject to additional risk assessment requirements, and these new requirements are focused on the transparency and explainability of AI solutions. For example, the draft regulations provide that businesses using automated decisionmaking technology will need to provide plain-language explanations of how they evaluate their use of the technology for validity, reliability, and fairness.27 The draft regulations also require businesses to identify any third parties that provide software or other technological components for their automated decisionmaking technology.28 Further, businesses that make their artificial intelligence or automated decisionmaking technology available to other businesses must provide the “facts necessary” for those recipient businesses to conduct their own risk assessments.29 The draft regulations also provide that businesses must identify the degree and details of any human involvement in the use of their automated decisionmaking technology and whether the human can influence how the business uses the outputs the technology generates.30 Although the draft regulations are subject to change, businesses using automated decisionmaking technology or artificial intelligence should monitor these developments and ensure that they have policies and procedures in place to comply with these future requirements under the CPRA.
Recent class actions regarding training data further illustrate the importance of transparent and explainable AI solutions. Businesses that operate LLMs must proceed cautiously, as copyright owners are bringing copyright infringement claims related to LLM training data.31 Because LLMs are trained on massive amounts of text from various sources, some of that text is likely subject to copyright protection. When faced with copyright infringement claims, a business must demonstrate that its AI solutions did not infringe on copyrighted works, such as by proving that the solution was not trained on a specific work. Furthermore, because such claims are relatively new, it is unclear how these arguments will fare in court and whether a business can invoke trade secret protection over how its LLM is trained. As such, companies need to know what data their LLMs are trained on and have policies and procedures in place to ensure their models do not infringe on copyrighted works.
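One illustrative (and intentionally simplified) way to operationalize such a policy is to keep a provenance manifest for each training data source and flag sources whose licensing status is unclear before they are used. The source names, license labels, and allowed-license list below are assumptions for the sake of the example; they do not reflect any particular model’s actual training corpus or legal standard.

```python
# Minimal, hypothetical sketch of a training-data provenance check.
# Source names, license labels, and the allowed list are illustrative only.

ALLOWED_LICENSES = {"public-domain", "cc0", "licensed-by-agreement"}

corpus_manifest = [
    {"source": "internal-support-tickets", "license": "licensed-by-agreement"},
    {"source": "public-domain-books", "license": "public-domain"},
    {"source": "scraped-news-articles", "license": "unknown"},
]

cleared, flagged = [], []
for entry in corpus_manifest:
    (cleared if entry["license"] in ALLOWED_LICENSES else flagged).append(entry["source"])

print("Cleared for training:", cleared)
print("Flagged for legal review:", flagged)
```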
Minimizing the Risk of Potential Liability
Businesses that operate AI solutions face significant liability if their solutions are not transparent and explainable. Explainable and transparent AI solutions not only build trust and give stakeholders confidence in their outputs, but also help businesses minimize the risk of potential liability when they can clearly explain how their AI solutions work, how those solutions are trained, and why the solutions made certain predictions or decisions. Businesses also need policies and procedures to protect privacy and security and to make the necessary disclosures to consumers about how the AI solution operates and uses consumer data.
The authors gratefully acknowledge the contributions of Lauren Hudon, a student at Marquette University Law School and 2023 summer associate at Foley & Lardner LLP.
1 “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” National Institute of Standards and Technology, 1 January 2023, at 17.
2 Id. at 16.
3 Id. at 15.
4 Id. at 16.
5 “What is AI Transparency?” Holistic AI, 6 February 2023; CERRE Think Tank, “Towards an EU Regulatory Framework for AI Explainability [Video],” YouTube.
6 Id.
7 Id.
8 Id. at 15.
9 Id. at 16.
10 Id. at 5.
11 Id.
12 “Annex IV, Technical Documentation referred to in Article 11(1).” European Commission, 21 April 2023.
13 “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679” at 8, 23. European Commission, 22 August 2018.
14 GDPR Article 22(2)(a)-(c); see also “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679” at 21-23. European Commission, 22 August 2018.
15 See GDPR Articles 13-15, 34.
16 “Federal Trade Commission (“FTC”) Civil Investigative Demand (“CID”) Schedule FTC File No. 232-3044.” United States Federal Trade Commission. Accessed 18 September 2023.
17 Id.
18 “Use of Automated Employment Decision Tools.” New York City Department of Consumer and Worker Protection, Accessed 18 September 2023.
19 “Report on Apple Card Investigation,” at 4. New York State Department of Financial Services, March 2021.
20 Id.
21 Id. at 5-7.
22 Id.
23 Id. at 7.
24 Id.
25 Id.
26 See “Draft Risk Assessment Regulations for California Privacy Protection Agency September 8, 2023 Board Meeting.” California Privacy Protection Agency, 8 September 2023.
27 Id. at 13-15.
28 Id. at 14.
29 Id. at 16.
30 Id. at 15.
31 Marathe, I. “In OpenAI Copyright Lawsuits, Discovery Complications Likely to Take Center Stage,” The National Law Journal, 21 July 2023.