Generative AI (Gen AI) is becoming ubiquitous in business, research, health care, education and virtually every aspect of daily life. Directors are inundated with advice about how, why and where to use Gen AI to gain market share, improve the delivery of goods and services, cut costs and increase productivity, and achieve strategic goals — and with good reason. Gen AI can exponentially increase the company’s reach, effectiveness and profitability.
Gen AI is a subset of artificial intelligence that creates content (output) — in text, image, audio, video, computer code and other formats — by analyzing large sets of training data (input), identifying patterns and generating original material. Federal Reserve Board policy analyst Jean Xiao Timmerman writes in “Educational Exposure to Generative Artificial Intelligence” that Gen AI is distinct from prior forms of AI because of its ability to produce new content, rather than merely analyzing content or using that analysis to make predictions, as in earlier forms of AI.
A company that uses Gen AI must do so strategically, balancing the benefits against the risks, which include legal and compliance exposure, operational and reputational threats, and ethical considerations.
Legal and Compliance Risks
Managing the legal and compliance risks of Gen AI means ensuring the business complies with the many and varied laws and regulations that address Gen AI use across different jurisdictions and environments. That means, at a minimum, determining whether the company is complying with laws regulating copyright and other intellectual property as well as data security and privacy. Businesses can face claims that Gen AI is improperly copying the protected works on which it is trained or could face challenges establishing protection for works they create using Gen AI. In the data security and privacy context, businesses need to avoid violating laws that protect confidential information, as can happen when data input into a generative model becomes part of the model’s training set or is further used or disseminated.
The board and management also need to steer the company away from uses of Gen AI that could credibly lead to charges of bias or discrimination. Some companies have learned the hard way that outsourcing consequential decisions to Gen AI can lead to bias. CoreLogic (now Cotality), a leading provider of information services, has been defending itself in federal court since 2018 against claims that it is liable under the Federal Fair Housing Act for a discriminatory decision made by a landlord who relied on CoreLogic’s automated tenant screening software in deciding not to lease to a specific tenant. CoreLogic has since changed its policies, emphasizing in a September 30, 2024, online post that the removal of bias is one of its most pressing issues in the development and deployment of AI systems, particularly in fields like real estate, where historical biases can have far-reaching consequences in promoting systemic inequalities.
Underscoring the importance of bias-free training data, Leonardo Nicoletti and Dina Bass write in “Humans Are Biased. Generative AI Is Even Worse: Stable Diffusion’s Text-to-Image Model Amplifies Stereotypes About Race and Gender – Here’s Why That Matters” that Bloomberg conducted a 2023 evaluation of 5,100 images it created by prompting the Gen AI model Stable Diffusion with text related to jobs considered high-paying and low-paying. Compared with U.S. Bureau of Labor Statistics data, the images represented people in low-paying positions as disproportionately female and darker-skinned, and people in high-paying jobs as disproportionately male and fairer-skinned, relative to the actual workforce. Both examples underscore that a company ought not use Gen AI to accomplish something that would violate the law even if accomplished without Gen AI.
Director Duties Apply to Gen AI Governance
Directors have a critical responsibility in overseeing the ethical, strategic, legal and compliance implications of Gen AI. They must ensure that the technologies are developed and deployed in alignment with the company’s values, regulatory requirements and societal expectations. This involves addressing the risks related to bias, data privacy, transparency and accountability, while also fostering innovation and staying informed about evolving technologies. Additionally, directors need to guide management in developing policies on AI governance, ensuring the responsible use of Gen AI and mitigating any potential harm from misuse, all while creating frameworks for continuous monitoring and adaptation to emerging legal and compliance standards.
Governance of the use of Gen AI is dynamic. The technology is evolving at an exponential pace. Creative executives and other employees will, and should, experiment with using Gen AI in innovative ways and in innumerable applications. Directors are not exempt. They should educate themselves about the technology and consider the extent to which the board might use Gen AI to perform its work, including drafting public statements and preparing board agendas and materials. Weil, Gotshal & Manges LLP advises in a November 27, 2023, Governance & Securities Alert that the board needs to oversee the company’s disclosures of its use of Gen AI. By keeping informed, prioritizing innovation and exercising diligence and good judgment, directors can explore the possibilities of Gen AI and optimize its usage, while minimizing and mitigating the threats.
Action Steps
The board and management need to actively oversee and manage both legacy and newly emerging risks. The following are 10 action steps that those in the boardroom should consider to address the legal and compliance risks Gen AI can pose to the company.
Understand why the company is using Gen AI. The company needs to be clear from the outset why it is using Gen AI. Allison J. Pugh writes in The Last Human Job: The Work of Connectivity in a Disconnected World that businesses use AI for one of three reasons:
- It is better than nothing.
- It is better than humans.
- AI and humans are better together.
Pugh suggests that “connective laboring” is uniquely human in its conduct and experience, so any automated replacement is bound to be inferior; AI and other software are worse, but at least they are “better than nothing.” In other cases, the technologies can improve upon some of the characteristic flaws of people’s connective laboring and be “better than humans.” And in the received wisdom of Silicon Valley, Pugh notes, some applications of AI will not replace human labor but instead augment it, making technology and humans “better together.” Directors should ask and understand why the company is using Gen AI and to what ends.
Know and understand which Gen AI model the company is using. Tracy Rubin, T.J. Graham, Chris Chynoweth and Kristin Leavy of Cooley LLP write in “Top Ten Considerations for Companies Using Generative AI Internally” that businesses need to do their homework in understanding how each Gen AI model they plan to use handles both the input and output. This requires understanding how the model grapples with the security of the data fed into the model, how long the model retains input data and how the model uses or shares the input data. It also means evaluating how the model was trained, on what data sets, and what responsibility, if any, the model assumes for errors and their consequences. Directors should ask management the right questions and consider whether traditional contract terms are sufficient or whether the company should negotiate its own contract for better protection. Different models can carry different legal and compliance risks. One model may be appropriate for one use at the company but inappropriate for another application at the same company.
Identify and evaluate the legal and compliance risks of using Gen AI. The landscape of Gen AI risk is dynamic, leaving the board and management to navigate an evolving and complex environment with little legal or regulatory guidance. It is important that directors ask for and receive sufficient information from management to identify the legal and compliance risks posed by the company’s Gen AI deployment. Directors and management should evaluate each risk, analyze how likely it is to happen and understand the consequences for the business if it does. The International Organization for Standardization calls this conducting a risk assessment.
Determine the company’s risk appetite and risk tolerance for Gen AI. Once the company has conducted its Gen AI risk assessment, PwC recommends in its September 2023 Director’s Guide to ERM Fundamentals that the board and management should then determine the company’s appetite and tolerance for each risk, that is, the level of risk the business is willing to accept in pursuit of its strategic objectives. Mary Carmichael writes in “Risk Appetite vs. Risk Tolerance: What Is the Difference?” that every company will have its own appetite and tolerance for risk, informed by factors such as its stage in the life cycle and strategic goals, the environment and the criticality of Gen AI to the business. The same company may have a different risk appetite and tolerance for Gen AI risk in one context than it does in others. Directors must determine how tightly they want or need to control the company’s management of these risks.
Evaluate how the company will treat the legal and compliance risks of Gen AI. Once the board has assessed the Gen AI risks, including the legal and compliance threats, and articulated its appetite and tolerance for them, it should craft the company’s response to each one. The business must determine whether to accept, mitigate, transfer, share or avoid the risk. This calls for discussions among directors, informed by data provided by management. Some companies will accept more Gen AI risk than others, while others will seek to avoid (by selective choices) or transfer or share (by contract or otherwise) the risk. The board’s decision-making should focus on how best to mitigate the Gen AI threats, defining the responsibilities and accountabilities within the company to develop ethical and compliance frameworks to do so.
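The assess, set-appetite and treat sequence described above can be made concrete with a simple risk register. The following Python sketch is purely illustrative and not drawn from any cited framework: the scoring convention (likelihood times impact on 1–5 scales), the thresholds and the example risks are all hypothetical, and every board will set its own.

```python
from dataclasses import dataclass


@dataclass
class Risk:
    name: str        # e.g., "training-data copyright claim"
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        # A common, simple scoring convention: likelihood x impact.
        return self.likelihood * self.impact


def recommend_treatment(risk: Risk, appetite: int) -> str:
    """Map a scored risk against a board-set appetite threshold.

    The treatment options mirror the accept / mitigate / transfer /
    share / avoid choices; the cutoffs below are hypothetical.
    """
    if risk.score <= appetite:
        return "accept"
    if risk.score >= 20:
        return "avoid"
    if risk.impact >= 4:
        return "transfer"  # e.g., via contract terms or insurance
    return "mitigate"      # e.g., human review, policy controls


register = [
    Risk("confidential data leaking into model training", 3, 5),
    Risk("biased output in tenant screening", 2, 5),
    Risk("minor drafting errors in internal memos", 4, 1),
]

for r in register:
    print(f"{r.name}: score {r.score} -> {recommend_treatment(r, appetite=6)}")
```

Under these illustrative thresholds, the high-impact data and bias risks are flagged for transfer while the low-impact drafting risk falls within appetite; a real register would use the scales, thresholds and treatment rules the board itself has set.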
Questions Directors Should Ask About the Legal and Compliance Risks of Gen AI
1. What are the organization’s guiding principles for the use of Gen AI?
2. In what ways is Gen AI used in the business? How are the legal and compliance risks impacting the organization’s AI plans?
3. Who within the company has the authority to decide when to use Gen AI? Is the authority the same for every application? When should the deployment decision come to the board?
4. How does the organization identify and mitigate the legal and compliance risks of its use of Gen AI in specific applications?
5. What are the company’s legal and compliance risk appetite and risk tolerance when it comes to the use of Gen AI?
6. How does the business address the legal and compliance concerns related to data privacy and potential biases in the outputs of Gen AI?
7. What contract terms and indemnifications have been established in case of AI-related disputes?
8. What vetting is performed on Gen AI third-party providers for their compliance with legal and ethical standards?
9. How does the organization train its management and employees about the legal and compliance risks of the use of Gen AI?
10. How can directors and executives stay informed on the evolving legal and compliance risk environment around the use of Gen AI?
Evaluate the company’s Gen AI risk mitigation strategies. The legal and compliance risks posed by using Gen AI can require varied mitigation measures. Rubin, Graham, Chynoweth and Leavy recommend establishing a Gen AI policy, emphasizing human oversight of Gen AI (so-called “human air-gapping”), preparing for corporate diligence questions and being transparent. In “Generative AI and the Legal Landscape: Evolving Regulations and Implications,” Nick Leone and Seth Batey emphasize communicating the use of Gen AI transparently, preserving the human element and maintaining diligence records. Leone and Batey recommend keeping company data on bespoke Gen AI models, starting small and experimenting, and using Gen AI to discover new insights and make connections across the company. Kirkland & Ellis recommends involving the company’s enterprise risk management function in its 2023 “Generative Artificial Intelligence – Legal Risks and Compliance Issues” alert, while Cooley, in its September 9, 2024, post by Jimmy Gilligan, Tom Connors and Tracy Rubin, suggests carefully assessing contract terms to mitigate the risks.
Assess the company’s Gen AI use policy. Directors should ensure that the business has a Gen AI use policy. Mary Carmichael writes in “Key Considerations for Developing Organizational Generative AI Policies” that, given the rapid advancements in the technology, organizations need to provide clear guidance on its use that balances the benefits against the risks. Carmichael sets forth several considerations in developing the use policy: the policy’s scope, the critical role of data security, ethical and acceptable use of Gen AI at the company, training, transparency, compliance, a process for exceptions, how alleged violations are reported and investigated, monitoring and auditing compliance, and the cadence of policy review. The policy can, and should, include both general principles and specific directions that cascade from them. The University of California, for example, has identified eight responsible AI principles that govern how the university system uses AI, including Gen AI, across its enterprise, which annually educates over 299,000 students, conducts more than $7 billion in research and contributes $82 billion to the California economy, while employing 266,000 faculty and staff. The eight principles, posted on the university’s AI website, are appropriateness; transparency; accuracy, reliability and safety; fairness and non-discrimination; privacy and security; human values; shared benefit and prosperity; and accountability. The company’s policy should be regularly reviewed and updated.
Educate directors, management and employees about Gen AI. Gen AI is an evolving technology. Its capabilities, methodology, optimal usage and risks are changing rapidly. Directors must understand Gen AI to discharge their duties responsibly. Lawrence A. Cunningham, Arvin Maskin and James B. Carlson write in “Generative Artificial Intelligence and Corporate Boards: Cautions and Considerations” that the duty of care, which requires board members to act in an informed manner, with requisite care and in what they in good faith believe to be the best interests of the corporation and its shareholders, obligates directors to exercise good faith and act with reasonable care regarding both the company’s and the board’s use of Gen AI. Similarly, management needs to understand Gen AI to execute on the company’s strategic plans, and employees must understand it to carry out their functions responsibly.
Ensure the company is transparent about its use of Gen AI. As AI-generated content and Gen AI decision-making become ever more prevalent and integrated into daily operations, businesses must clearly communicate when and how they use Gen AI, its limitations and the potential impacts to customers, employees and regulators. Transparency can help prevent misinformation, biases and unintended consequences while demonstrating accountability in Gen AI-driven processes. Moreover, transparency can help to ensure compliance with evolving regulations and foster a responsible approach to AI usage. By prioritizing openness, the company can leverage Gen AI’s benefits while addressing the legal and compliance risks. The State of California recently enacted AB 3030, a statute that requires health care providers that use Gen AI to generate patient communications without human review to disclose to the patient that the message was generated by AI and to provide the patient a way to reach a human. Other jurisdictions are sure to follow suit.
Monitor and audit the company’s use of Gen AI. As Gen AI becomes increasingly autonomous and capable of performing complex tasks without human intervention, it will become increasingly difficult to determine legal liability when things go wrong. Regulators worldwide are preparing legislation to address AI accountability, while experts are recommending transparency as a legal, compliance and ethical best practice. Building transparency and accountability into the company’s processes is essential to limit the legal exposures and ensure that the company’s use of Gen AI complies with evolving standards. The board and management need to ensure that the company monitors and audits its use of Gen AI, in both the company’s operations and the board’s work, while course-correcting when necessary.
The increasing use of Gen AI is reshaping businesses and determining competitive advantage. Legal and compliance risk considerations can impact how organizations deploy Gen AI in decision-making, customer interactions and content creation, directly affecting operating models and innovation strategies. The 10 action steps outlined here represent an ambitious start toward addressing the legal and compliance risks of Gen AI. Directors need to work with management, legal counsel and AI ethics experts to establish governance frameworks that appropriately balance innovation with the organization’s risk management practices. By staying informed and adopting flexible approaches, directors can guide management in leveraging Gen AI responsibly, while avoiding the threats that could hinder the organization’s longer-term growth and resiliency.
Legal and compliance risks can make directors wary of supporting management’s deployment of Gen AI at scale without clear strategies and pathways. Until policymakers and regulators develop frameworks that appropriately balance innovation with protecting individual and commercial rights and public safety, some boards and management teams may choose to take a cautious path. In these instances, the business will avoid the use of Gen AI altogether until there are clearly established protections and industry best practices. The downside, of course, could be an abundance of missed opportunities.