NIST Advances Soft Law for AI While World Awaits Hard Laws

The National Institute of Standards and Technology's (NIST) voluntary AI Risk Management Framework (RMF), released in January, is likely to be the most influential U.S. federal action so far to support responsible use of AI. This month, NIST boosted the practicality of the RMF by adding several supportive resources. Drawing on the explanations of two RMF contributors on a recent Responsible AI Institute panel, we discuss how the new features work, how companies can use the RMF and how it aligns with the flurry of other global measures to de-risk and legally constrain AI. We also include insights that PwC Responsible AI lead Ilana Golbin Blumenfeld shared with us about the RMF's limits. See "First Independent Certification of Responsible AI Launches" (Apr. 12, 2023).