May 24, 2023

Five Articles to Help Govern AI

Lawmakers in Washington made clear last week that they expect to build rules for AI – and soon. During a hearing of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, senators underscored the possible promises and harms of AI, and Sam Altman, the CEO of OpenAI, which released ChatGPT, said that “regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” With so much data being collected and processed, the privacy risks surrounding AI include the potential for data breaches and for automated operations to go awry and share troves of personal data. In this retrospective, we feature five articles that address legal issues arising from AI’s use, management of its risks, compliance and governance priorities, and insights on regulatory initiatives.

1. NIST Advances Soft Law for AI While World Awaits Hard Laws

The National Institute of Standards and Technology’s (NIST) voluntary AI Risk Management Framework (RMF), released in January 2023, remains the most influential U.S. federal action so far to support organizations’ responsible use of AI. In April, NIST boosted the practicality of the RMF by adding several supportive resources. In this article, we discussed, with commentary from two RMF contributors, how the RMF’s features work, how companies can use it and how it aligns with the flurry of other global measures to de-risk and legally constrain AI. We also included insights that PwC Responsible AI lead Ilana Golbin Blumenfeld shared with us about the RMF’s limits.

2. First Independent Certification of Responsible AI Launches

A Responsible AI certification program has launched to assess and validate companies’ AI uses and practices. Established by the nonprofit Responsible Artificial Intelligence Institute (RAII), the program currently focuses on two core uses popular across industries: human resources and procurement. Separate RAII certifications are tailored to evaluate automated decision making in financial services and in health care. For this article, the Cybersecurity Law Report spoke with Alyssa Lefaivre Škopac, RAII’s director of partnerships and market development, about the emphases of its certification program, the three ways that companies can use it, its pending approval by standards organizations, and how RAII expects to keep up with the fast changes in generative AI.

3. Managing Legal Issues Arising From Use of ChatGPT and Generative AI Series

When ChatGPT was introduced in November 2022, it quickly became a household name. Within months, it morphed into a more advanced version of generative AI, and Microsoft announced its integration of OpenAI’s GPT-4 model into Bing, providing a ChatGPT-like experience within the search engine. While these tools have demonstrated that generative AI has tremendous operational and business potential, a constellation of privacy and data security risks arising from their use has become visible. In this two-part guest article series, Ballard Spahr attorneys explored these legal and regulatory issues. The first article covered AI collection and use requirements under U.S. and E.U. privacy laws and regulations. Part two addressed product liability, healthcare and employment risks, issues under wiretapping laws, and practical compliance measures.

4. AI Governance Gets Real Series

In the sea of published commentary on ChatGPT and the other popular AI platforms that generate images and text, one message recurs: companies must undertake governance to ensure that their adoption of AI is responsible. This two-part article series addressed the practical realities of AI governance, with observations from front-line practitioners. Part one offered tips from a leading AI language and recommendations provider on forging a corporate culture to address AI risks, including toxic language output and misuse of available data sets. Part two presented top priorities for companies trying to craft AI compliance programs and discussed the brand-new market for automated AI governance platforms, including insights from experts at IBM, the AI Responsibility Lab and PwC. It also shared findings from an Accenture report.

5. Takeaways From the New Push for a Federal AI Law

As the race to regulate AI heated up, we delved into congressional initiatives and noted that non-partisan concerns about the transformational technology could open a path past gridlock. This article provided insights on movements in the federal, state and global drive to comprehensively regulate AI use, including five developments that occurred in October 2022, along with recommendations from AI policy experts at Uber, the Senate, Cozen Strategies and the Future of Privacy Forum on how companies can prepare during the current period of uncertainty.