Touted as a transformative tool for boosting productivity, elevating efficiency and rapidly crunching vast amounts of data, AI is impacting seemingly every industry. Along with AI’s tremendous potential, however, comes risk.
An AI risk assessment can help identify potential issues like bias, security vulnerabilities and privacy concerns, and inform mitigation strategies. This article, with insights from Covington & Burling and PwC, offers practical guidance on the AI risk assessment process, including who to involve, timing and identifying key risks, and addresses how to use the results to help mitigate risks.
See “Navigating NIST’s AI Risk Management Framework” (Nov. 15, 2023).
Growing Use Despite Risks
Already, there have been some concerns and more than a few embarrassments associated with AI’s use – AI that flat-out lies, cites real sources that do not support the propositions for which they are offered, exhibits bias, and might just be ingesting vast chunks of a company’s copyrighted content and sharing it with others. There are also data privacy concerns with a technology that can remember and recall everything about everything.
Despite AI’s risks, companies are embracing it. “The adoption rate is unlike anything we’ve seen in the past,” Micaela McMurrough, a partner at Covington & Burling, told the Cybersecurity Law Report. Indeed, a survey by McKinsey published in March 2025 found that organizations are more likely now than they were in early 2024 to be managing AI risks involving inaccuracy, cybersecurity and intellectual property. Seventy-eight percent of respondents reported that their organizations use AI in at least one business function (up from 20 percent in 2017).
“It is great to see that companies are leaning into [AI tools] and adapting as the technology evolves,” observed McMurrough. “The goal is to benefit from the upside of these technologies while minimizing the downside risk,” she added.
What Is an AI Risk Assessment?
An Evolving Definition
An AI risk assessment can be defined as “a process to identify and evaluate potential issues with an AI system, such as bias, security risks or compliance concerns,” Ilana Golbin Blumenfeld, the Responsible AI R&D lead at PwC, told the Cybersecurity Law Report.
“There can be levels to AI risk assessment,” McMurrough noted. A broader AI risk assessment can guide the process “across the enterprise, and within that framework, specific use cases may warrant individual AI risk assessments.” For example, “there can be an assessment of risk associated with the specific use of a particular AI tool, or an assessment of an organization’s use of AI more broadly,” she explained.
New Territory
No matter how the term “AI risk assessment” is defined, it differs from more traditional risk assessments. “Traditional technology, cyber and privacy risk assessments are often performed as gap assessments against well-known standards – organizations are assessing whether there are gaps in programs or processes against known benchmarks and existing laws, regulations and guidelines, such as the NIST Cybersecurity Framework or the New York Department of Financial Services cybersecurity regulation,” explained McMurrough.
In contrast, AI risk assessments are being conducted right now in somewhat uncharted territory. “There are fewer firm guidelines or laws regarding what is expected,” McMurrough added.
Currently, “one of the biggest challenges companies face with AI risk assessments is the lack of clear standards or consistent frameworks, which can make it hard to know where to start or what ‘good’ looks like,” noted Golbin Blumenfeld.
A company’s business model, industry, jurisdiction and risk tolerance all impact how it defines and manages AI risk. As a starting point, companies should “figure out what legal frameworks apply in the jurisdictions where they operate and determine what their own risk tolerance is with respect to AI,” suggested McMurrough. “From that starting point, they can design appropriate frameworks and processes to identify and manage risk,” she added.
See “Navigating Ever-Increasing State AI Laws and Regulations” (Jan. 15, 2025).
Stakeholders to Involve
Which parties to include in the risk assessment process is a determination that is “fact- and context-dependent – it depends on your industry, the kind of data you have, how your company plans to use AI and other factors,” said McMurrough.
Any AI risk assessment is likely to be a multidepartment affair. “Stakeholders in an AI risk assessment should include a cross-functional team to cover the full lifecycle of risk management,” advised Golbin Blumenfeld. During planning, “legal, compliance, and governance teams help the business teams define scope and align with regulatory requirements,” she noted. Then, when conducting the assessment, “data scientists, engineers and cybersecurity experts evaluate technical and data-related risks,” she continued. “Once results are in, business leaders, compliance and ethics, and senior executives should be involved in interpreting findings and deciding mitigation steps,” she said.
See “AI Governance Strategies for Privacy Pros” (Apr. 17, 2024).
Timing
AI risk assessments should be performed early, “before deploying a new model, making major updates or using AI in critical areas of a business,” Golbin Blumenfeld advised. Like traditional tech, cybersecurity or privacy risk assessments, AI risk assessments also should be undertaken “when a system is being developed and designed, and not just prior to deployment,” she said.
When assessments are “triggered by external factors like new regulations, security incidents or complaints about system behavior,” they should “be performed early enough in the process where changes in the deployment of a system or the controls around it can still be made,” advised Golbin Blumenfeld.
“Reviewing too late makes it challenging to influence a specific system, and may create conflict between the business and compliance teams,” cautioned Golbin Blumenfeld.
An AI risk assessment ideally should be “repeated whenever there are significant changes – such as major model updates, new data sources, or shifts in how the AI is used,” Golbin Blumenfeld continued. “Regular reassessments should also be built into ongoing monitoring, especially for high-impact or high-risk applications,” she said.
Process Overview
An effective AI risk assessment, explained Golbin Blumenfeld, typically involves the following steps:
- Define the scope and context. Identify the AI system, its purpose, stakeholders and potential areas of impact.
- Map data flows and model behavior. Understand what data is used, how it is processed and how the model makes decisions.
- Identify potential risks. Assess for issues like bias, lack of explainability, privacy breaches, security vulnerabilities and regulatory noncompliance.
- Evaluate likelihood and impact. Prioritize risks based on their potential consequences and how likely they are to occur.
- Develop mitigation strategies. Recommend controls, design changes or oversight mechanisms to address high-priority risks.
- Document and communicate findings. Clearly record the assessment process, results and action plan for stakeholders and auditors.
- Establish ongoing monitoring. Set up regular reviews and update the assessment as the system or its use evolves.
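To make the likelihood-and-impact step concrete, the short Python sketch below shows one way a team might record and prioritize findings in a simple risk register. The field names, the 1-to-5 scoring scale and the example entries are illustrative assumptions, not part of any prescribed framework.

```python
# Illustrative sketch only -- the field names, 1-to-5 scales and example
# entries are assumptions, not part of any prescribed framework.
from dataclasses import dataclass

@dataclass
class AIRiskItem:
    system: str           # the AI system or use case being assessed
    risk: str             # e.g., bias, privacy breach, security vulnerability
    likelihood: int       # 1 (rare) to 5 (almost certain)
    impact: int           # 1 (negligible) to 5 (severe)
    mitigation: str = ""  # recommended control or design change

    @property
    def score(self) -> int:
        # Simple likelihood x impact prioritization (step 4 above)
        return self.likelihood * self.impact

register = [
    AIRiskItem("resume screener", "bias against protected groups", 4, 5,
               "bias testing plus human review of adverse decisions"),
    AIRiskItem("internal IT chatbot", "hallucinated answers", 3, 2,
               "restrict to an approved knowledge base and cite sources"),
]

# Highest-scoring risks surface first for mitigation and documentation (steps 5-6)
for item in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{item.system}: {item.risk} (score {item.score}) -> {item.mitigation}")
```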
When using AI, McMurrough said, “it is often important to understand the technical aspects of the tool behind the scenes” and address questions such as “Where is data being stored? Who has access to that data? What controls are in place to limit access to that data?”
See “AI Governance: Striking the Balance Between Innovation, Ethics and Accountability” (Feb. 12, 2025).
Alignment With Other Assessments
An AI risk assessment should not be conducted in a vacuum. It should “coordinate with the other risk assessments, to both identify AI risks that are tied to specific cyber or privacy risks as well as to alleviate duplication of information collected across the different assessments,” opined Golbin Blumenfeld.
To integrate AI risk assessment with other risk efforts, best practices include “aligning frameworks, sharing data and insights across teams, and embedding AI-specific questions into existing risk processes,” suggested Golbin Blumenfeld.
With the coordination of privacy, cybersecurity and compliance teams, a “holistic view of risk” can be created, continued Golbin Blumenfeld. “Using shared tools, common taxonomies, and cross-functional collaboration helps avoid duplication and makes sure risks aren’t missed at the intersections – like where an AI system uses personal data or introduces new attack surfaces,” she explained.
“Regular coordination and centralized documentation also support accountability and audit readiness,” said Golbin Blumenfeld. In the end, “all risk assessments should ultimately align to the broader enterprise risk framework,” she advised.
See “Unifying Risk Assessments: Breaking Silos to Enhance Efficiency and Manage Risk” (Jan. 29, 2025).
Identifying AI Risk
Recognizing Unique Risks
The focus of an AI risk assessment differs from other types of risk assessments. It “goes beyond traditional tech, cybersecurity or privacy reviews by focusing on risks unique to AI – like model bias, lack of transparency, drift over time and the unpredictability of learning systems,” explained Golbin Blumenfeld. The assessment also involves examining “how AI impacts human decision-making, fairness and accountability, which aren’t typically covered in standard assessments,” she noted.
Evaluating by Use Case
AI risk should be evaluated by use case rather than by applying a uniform approach “because the impact, context and regulatory exposure of AI systems can vary significantly depending on how and where they are used,” explained Golbin Blumenfeld. It makes sense. After all, “a chatbot for internal IT support poses very different risks than an AI tool used for hiring decisions or medical diagnostics,” she elaborated. “A uniform approach may miss critical nuances – like the need for stricter fairness or explainability standards in high-stakes use cases,” she cautioned.
Higher-risk use cases may “require deeper evaluations, more rigorous testing for bias or performance issues, or human oversight,” explained Golbin Blumenfeld. In turn, lower-risk applications “may need lighter-touch controls.” Taking a targeted approach “confirms resources are focused where the potential for harm is greatest,” she said.
Not every AI use case necessitates a bespoke process, however, noted McMurrough. A company can “have a uniform system for categorizing risk for any particular use and applying mitigation measures accordingly,” she explained. A uniform approach for assessing and mitigating AI risk can still account “for differences between use cases within the broader framework,” she noted.
In the end, the objective is to make sure resources are funneled to where the potential for harm is the greatest.
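As a rough illustration of such a uniform categorization system, the sketch below maps a few use-case attributes to a risk tier and baseline controls. The attributes, thresholds and control lists are hypothetical assumptions chosen for illustration; an actual rubric would reflect the organization’s own risk tolerance and legal obligations.

```python
# Hypothetical tiering rubric -- the attributes, thresholds and baseline
# controls are illustrative assumptions, not a published standard.
def classify_use_case(impacts_individuals: bool,
                      uses_personal_data: bool,
                      regulated_domain: bool) -> tuple[str, list[str]]:
    """Map a use case's attributes to a risk tier and baseline controls."""
    if regulated_domain or (impacts_individuals and uses_personal_data):
        return "high", ["full risk assessment", "bias and performance testing",
                        "human-in-the-loop review", "ongoing monitoring"]
    if impacts_individuals or uses_personal_data:
        return "medium", ["standard assessment", "periodic review"]
    return "low", ["lightweight checklist"]

# A hiring tool lands in the high tier; an internal document summarizer does not.
print(classify_use_case(impacts_individuals=True, uses_personal_data=True,
                        regulated_domain=False))
print(classify_use_case(impacts_individuals=False, uses_personal_data=False,
                        regulated_domain=False))
```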
Prioritizing Risks Related to Business Strategy
Companies should prioritize focusing assessment efforts on AI-related risks “that directly impact trust, compliance and business outcomes,” Golbin Blumenfeld recommended. These risks include “bias and discrimination, which can lead to legal and reputational harm; lack of explainability, especially in regulated or high-stakes decisions; data privacy and security risks; and hallucination or confabulation, where AI generates false or misleading information,” she elaborated. “Addressing these core issues helps support responsible, safe and effective AI deployment.”
In prioritizing what risks to focus on in the assessment, companies also should consider their business strategy, Golbin Blumenfeld suggested, and ask themselves what business areas are priorities, and what their short- and long-term goals are. “The risks that are most important, then, relate specifically to those that are critical to this strategy,” she said.
Keeping Legal and Regulatory Risks in Mind
AI risk assessments can “help surface circumstances where compliance with laws like the E.U. AI Act or sector-specific laws like Colorado’s Division of Insurance AI regulation may be necessary,” Golbin Blumenfeld noted.
Improper use, implementation and control of AI can create litigation and regulatory enforcement risk, both Golbin Blumenfeld and McMurrough noted. For example, Golbin Blumenfeld elaborated, missteps “can lead to noncompliance with privacy and cybersecurity laws [if AI tools] collect or process personal data without proper consent, store data insecurely, or leverage third-party tooling (including models) that may expose or leak sensitive information.” AI also could violate regulations or sector-specific cybersecurity standards “by making decisions that lack transparency or fairness,” she said.
An organization’s reliance “on inaccurate information generated by AI” can give rise to legal risk, McMurrough cautioned. Moreover, corporate use of AI can lead to “potential issues related to legal privilege, preservation obligations or confidentiality concerns,” she said.
Thus, “as part of an AI governance program, the risk assessments conducted can and should ask questions related to regulatory compliance as well as educate staff (not just developers but also business case owners, third-party risk management, risk functions themselves, and all enterprise AI users) [on] potential risks and their obligations to support enterprise-wide regulatory compliance,” urged Golbin Blumenfeld.
AI and its prospective hazards are still new to many companies. “In some cases, noncompliance is inadvertent – AI was adopted so quickly that teams did not have the opportunity to reflect on what could go awry and if the system itself was subject to any additional compliance requirements,” acknowledged Golbin Blumenfeld. To manage risk, “human oversight, clear roles and responsibilities, and clear guidance for development, use, and management of AI coupled with clear risk management processes (including AI assessments) are key,” she continued.
See “Benchmarking AI Governance Practices and Challenges” (May 7, 2025).
Formulating the Risk Questionnaire
To bring relevant AI risks to light, the assessment questionnaire should ask “about what the system does, who it impacts and whether it is being used in a sensitive or high-stakes area,” suggested Golbin Blumenfeld. The questionnaire should seek information on where “the data comes from, how the model was built and whether it’s been tested for bias or can be easily explained,” she continued. Companies should also “include questions about privacy, security, legal compliance, and who’s responsible for oversight.” It is also important to “look at how the system will be monitored and updated over time to catch issues early and keep it working as intended,” she added.
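A minimal sketch of how those themes might be organized into a questionnaire appears below; the groupings and question wording are illustrative assumptions rather than a recommended template.

```python
# Hypothetical questionnaire skeleton reflecting the themes above;
# the groupings and wording are illustrative assumptions only.
AI_RISK_QUESTIONNAIRE = {
    "purpose and impact": [
        "What does the system do, and who does it affect?",
        "Is it used in a sensitive or high-stakes area (hiring, health, credit)?",
    ],
    "data and model": [
        "Where does the training and input data come from?",
        "How was the model built, and has it been tested for bias?",
        "Can its outputs be explained to affected users?",
    ],
    "privacy, security and compliance": [
        "What personal data is collected, and under what consent or legal basis?",
        "What security controls protect the model and its data?",
        "Which laws or regulations apply to this use case?",
    ],
    "oversight and monitoring": [
        "Who is responsible for oversight?",
        "How will the system be monitored and updated over time?",
    ],
}
```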
See “Innovation and Accountability: Asking Better Questions in Implementing Generative AI” (Aug. 2, 2023).
How to Use the Risk Assessment
To be valuable, an AI risk assessment needs to inform governance and compliance decisions. “Governance and controls for a specific AI application should be in part defined by a risk-based approach, where practices are proportional to the risk,” advised Golbin Blumenfeld.
“Companies should consider how their management of AI risk fits within their broader enterprise risk management program,” according to McMurrough. That includes classifying high-risk versus low-risk use cases and clarifying responsibility for managing AI-related risk, she continued.
Foundation for Governance
The AI risk assessment should “serve as a foundation for governance and compliance decisions by identifying where oversight, controls or safeguards are most needed,” advised Golbin Blumenfeld. “The findings can inform policies on model approval, human-in-the-loop requirements, data use and monitoring practices,” she said.
“A strong enterprise AI governance program facilitates defining acceptable practices and aligns AI use with existing practices around data governance, cybersecurity, privacy and more,” continued Golbin Blumenfeld. “Good data practices such as data minimization, data anonymization/sanitization, and access controls help restrict unintended use of data,” she explained. Moreover, “devising a culture of periodic testing coupled with guidance around testing practices enables resilient model design, bias testing, transparency evaluation and enhanced quality,” she said.
See “Dos and Don’ts for Employee Use of Generative AI” (Dec. 6, 2023).
Informing Risk Mitigation
There are plenty of risk mitigation tools for companies to use. “A wide range of safeguards can be deployed to mitigate the risks of AI and other emerging technologies depending on the use cases and circumstances surrounding their use,” observed Golbin Blumenfeld.
Security measures “such as model hardening and adversarial testing are also key to preventing misuse or data leakage,” said Golbin Blumenfeld. In addition, “robust documentation practices enable more transparency around model design and use,” she noted. “Equally important are human-in-the-loop processes for critical decisions, continuous monitoring to catch model drift or errors, and clear escalation paths when issues arise,” she added.
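As one toy illustration of the continuous-monitoring idea, the sketch below flags drift by comparing a recent score distribution against a deployment-time baseline using the population stability index. The PSI metric and the 0.2 alert threshold are common rules of thumb assumed here for illustration, and the data is synthetic.

```python
# Toy sketch of continuous monitoring: compare recent prediction scores to a
# baseline and flag drift. The PSI formula and the 0.2 threshold are common
# rules of thumb, used here as assumptions; the data is synthetic.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline score distribution and a recent one."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.default_rng(0).normal(0.5, 0.10, 5000)  # scores at deployment
recent = np.random.default_rng(1).normal(0.6, 0.15, 5000)    # scores this month
psi = population_stability_index(baseline, recent)
if psi > 0.2:  # escalate per the organization's defined escalation path
    print(f"Drift alert: PSI = {psi:.2f}")
```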
Processes and controls “should be backed by strong governance frameworks and aligned with relevant regulations to make sure risks are managed responsibly and consistently across the organization,” advised Golbin Blumenfeld.
Ultimately, “companies need to reflect on their enterprise risk management frameworks to understand where AI risks are covered by existing risk domains, where new risks remain and therefore need to be incorporated, as well as where existing risks may be exacerbated,” Golbin Blumenfeld instructed. Moreover, they should “document their risk assessments, oversight processes and roles/responsibilities related to AI risk management to demonstrate accountability if issues arise,” she continued.
See “Checklist for AI Procurement” (Apr. 16, 2025).