Risk Assessment

Guide to AI Risk Assessments


Touted as a transformative tool for boosting productivity, elevating efficiency and rapidly analyzing vast amounts of data, AI is affecting seemingly every industry. Along with AI’s tremendous potential, however, comes risk.

An AI risk assessment can help identify potential issues like bias, security vulnerabilities and privacy concerns, and inform mitigation strategies. This article, with insights from Covington & Burling and PwC, offers practical guidance on the AI risk assessment process, including who to involve, timing and identifying key risks, and addresses how to use the results to help mitigate risks.

See “Navigating NIST’s AI Risk Management Framework” (Nov. 15, 2023).

Growing Use Despite Risks

Already, there have been some concerns and more than a few embarrassments associated with AI’s use – AI that flat-out lies, cites real sources that do not support the propositions for which they are offered, exhibits bias, and may be ingesting vast chunks of a company’s copyrighted content and sharing it with others. There are also data privacy concerns with a technology that can remember and recall everything about everything.

Despite AI’s risks, companies are embracing it. “The adoption rate is unlike anything we’ve seen in the past,” Micaela McMurrough, a partner at Covington & Burling, told the Cybersecurity Law Report. Indeed, a survey by McKinsey published in March 2025 found that organizations are more likely now than they were in early 2024 to be managing AI risks involving inaccuracy, cybersecurity and intellectual property. Seventy-eight percent of respondents reported that their organizations use AI in at least one business function (up from 20 percent in 2017).

“It is great to see that companies are leaning into [AI tools] and adapting as the technology evolves,” observed McMurrough. “The goal is to benefit from the upside of these technologies while minimizing the downside risk,” she added.

What Is an AI Risk Assessment?

An Evolving Definition

An AI risk assessment can be defined as “a process to identify and evaluate potential issues with an AI system, such as bias, security risks or compliance concerns,” Ilana Golbin Blumenfeld, the Responsible AI R&D lead at PwC, told the Cybersecurity Law Report.

“There can be levels to AI risk assessment,” McMurrough noted. A broader AI risk assessment can guide the process “across the enterprise, and within that framework, specific use cases may warrant individual AI risk assessments.” For example, “there can be an assessment of risk associated with the specific use of a particular AI tool, or an assessment of an organization’s use of AI more broadly,” she explained.

New Territory

No matter how the term “AI risk assessment” is defined, it differs from more traditional risk assessments. “Traditional technology, cyber and privacy risk assessments are often performed as gap assessments against well-known standards – organizations are assessing whether there are gaps in programs or processes against known benchmarks and existing laws, regulations and guidelines, such as the NIST’s Cybersecurity Framework or the New York Department of Financial Services cybersecurity regulation,” explained McMurrough.

In contrast, AI risk assessments are currently being conducted in somewhat uncharted territory. “There are fewer firm guidelines or laws regarding what is expected,” McMurrough added.

Currently, “one of the biggest challenges companies face with AI risk assessments is the lack of clear standards or consistent frameworks, which can make it hard to know where to start or what ‘good’ looks like,” noted Golbin Blumenfeld.

A company’s business model, industry, jurisdiction and risk tolerance all impact how it defines and manages AI risk. As a starting point, companies should “figure out what legal frameworks apply in the jurisdictions where they operate and determine what their own risk tolerance is with respect to AI,” suggested McMurrough. “From that starting point, they can design appropriate frameworks and processes to identify and manage risk,” she added.

See “Navigating Ever-Increasing State AI Laws and Regulations” (Jan. 15, 2025).

Stakeholders to Involve

Which parties to include in the risk assessment process is a determination that is “fact- and context-dependent – it depends on your industry, the kind of data you have, how your company plans to use AI and other factors,” said McMurrough.

Any AI risk assessment is likely to be a multidepartment affair. “Stakeholders in an AI risk assessment should include a cross-functional team to cover the full lifecycle of risk management,” advised Golbin Blumenfeld. During planning, “legal, compliance, and governance teams help the business teams define scope and align with regulatory requirements,” she noted. Then, when conducting the assessment, “data scientists, engineers and cybersecurity experts evaluate technical and data-related risks,” she continued. “Once results are in, business leaders, compliance and ethics, and senior executives should be involved in interpreting findings and deciding mitigation steps,” she said.

See “AI Governance Strategies for Privacy Pros” (Apr. 17, 2024).

Timing

AI risk assessments should be performed early, “before deploying a new model, making major updates or using AI in critical areas of a business,” Golbin Blumenfeld advised. Like traditional tech, cybersecurity or privacy risk assessments, AI risk assessments also should be undertaken “when a system is being developed and designed, and not just prior to deployment,” she said.

When assessments are “triggered by external factors like new regulations, security incidents or complaints about system behavior,” they should “be performed early enough in the process where changes in the deployment of a system or the controls around it can still be made,” advised Golbin Blumenfeld.

“Reviewing too late makes it challenging to influence a specific system, and may create conflict between the business and compliance teams,” cautioned Golbin Blumenfeld.

An AI risk assessment ideally should be “repeated whenever there are significant changes – such as major model updates, new data sources, or shifts in how the AI is used,” Golbin Blumenfeld continued. “Regular reassessments should also be built into ongoing monitoring, especially for high-impact or high-risk applications,” she said.

Process Overview

An effective AI risk assessment, explained Golbin Blumenfeld, typically involves the following steps (a simple illustrative sketch follows the list):

  • Define the scope and context. Identify the AI system, its purpose, stakeholders and potential areas of impact.
  • Map data flows and model behavior. Understand what data is used, how it is processed and how the model makes decisions.
  • Identify potential risks. Assess for issues like bias, lack of explainability, privacy breaches, security vulnerabilities and regulatory noncompliance.
  • Evaluate likelihood and impact. Prioritize risks based on their potential consequences and how likely they are to occur.
  • Develop mitigation strategies. Recommend controls, design changes or oversight mechanisms to address high-priority risks.
  • Document and communicate findings. Clearly record the assessment process, results and action plan for stakeholders and auditors.
  • Establish ongoing monitoring. Set up regular reviews and update the assessment as the system or its use evolves.
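
To make these steps concrete, the following is a minimal Python sketch of how the identification, prioritization and documentation steps might be captured in a simple risk register. The 1-to-5 scoring scale, tier thresholds and example risks are illustrative assumptions rather than a prescribed standard.

  # Minimal sketch of a risk register entry. The 1-5 scales and tier
  # thresholds are illustrative assumptions, not a prescribed standard.
  from dataclasses import dataclass, field

  @dataclass
  class AIRisk:
      description: str   # e.g., "hiring model may encode bias"
      likelihood: int    # 1 (rare) to 5 (almost certain)
      impact: int        # 1 (negligible) to 5 (severe)
      mitigations: list = field(default_factory=list)

      @property
      def score(self) -> int:
          # Simple likelihood x impact prioritization
          return self.likelihood * self.impact

      @property
      def tier(self) -> str:
          # Illustrative thresholds; a real program would set its own
          if self.score >= 15:
              return "high"
          if self.score >= 8:
              return "medium"
          return "low"

  risks = [
      AIRisk("hiring model may encode bias", likelihood=3, impact=5,
             mitigations=["bias testing", "human review of decisions"]),
      AIRisk("chatbot gives inaccurate IT support answers", likelihood=4, impact=2),
  ]

  # Document and communicate: highest-scoring risks surface first
  for r in sorted(risks, key=lambda r: r.score, reverse=True):
      print(f"[{r.tier}] score={r.score}: {r.description} -> {r.mitigations}")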

When using AI, McMurrough said, “it is often important to understand the technical aspects of the tool behind the scenes” and address questions such as “Where is data being stored? Who has access to that data? What controls are in place to limit access to that data?”

See “AI Governance: Striking the Balance Between Innovation, Ethics and Accountability” (Feb. 12, 2025).

Alignment With Other Assessments

An AI risk assessment should not be conducted in a vacuum. It should “coordinate with the other risk assessments, to both identify AI risks that are tied to specific cyber or privacy risks as well as to alleviate duplication of information collected across the different assessments,” opined Golbin Blumenfeld.

To integrate AI risk assessment with other risk efforts, best practices include “aligning frameworks, sharing data and insights across teams, and embedding AI-specific questions into existing risk processes,” suggested Golbin Blumenfeld.

With the coordination of privacy, cybersecurity and compliance teams, a “holistic view of risk” can be created, continued Golbin Blumenfeld. “Using shared tools, common taxonomies, and cross-functional collaboration helps avoid duplication and makes sure risks aren’t missed at the intersections – like where an AI system uses personal data or introduces new attack surfaces,” she explained.

“Regular coordination and centralized documentation also support accountability and audit readiness,” said Golbin Blumenfeld. In the end, “all risk assessments should ultimately align to the broader enterprise risk framework,” she advised.

See “Unifying Risk Assessments: Breaking Silos to Enhance Efficiency and Manage Risk” (Jan. 29, 2025).

Identifying AI Risk

Recognizing Unique Risks

The focus of an AI risk assessment differs from other types of risk assessments. It “goes beyond traditional tech, cybersecurity or privacy reviews by focusing on risks unique to AI – like model bias, lack of transparency, drift over time and the unpredictability of learning systems,” explained Golbin Blumenfeld. The assessment also involves examining “how AI impacts human decision-making, fairness and accountability, which aren’t typically covered in standard assessments,” she noted.

Evaluating by Use Case

AI risk should be evaluated by use case rather than by applying a uniform approach “because the impact, context and regulatory exposure of AI systems can vary significantly depending on how and where they are used,” explained Golbin Blumenfeld. It makes sense. After all, “a chatbot for internal IT support poses very different risks than an AI tool used for hiring decisions or medical diagnostics,” she elaborated. “A uniform approach may miss critical nuances – like the need for stricter fairness or explainability standards in high-stakes use cases,” she cautioned.

Higher-risk use cases may “require deeper evaluations, more rigorous testing for bias or performance issues, or human oversight,” explained Golbin Blumenfeld. In turn, lower-risk applications “may need lighter-touch controls.” Taking a targeted approach “confirms resources are focused where the potential for harm is greatest,” she said.

Not every AI use case necessitates a bespoke process, however, noted McMurrough. A company can “have a uniform system for categorizing risk for any particular use and applying mitigation measures accordingly,” she explained. A uniform approach for assessing and mitigating AI risk can still account “for differences between use cases within the broader framework,” she noted.
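
As a hypothetical illustration of the kind of uniform categorization framework McMurrough describes, the Python sketch below maps a few use-case attributes to a risk tier and a proportional set of controls. The attributes, tiers and controls are assumptions for illustration only; each company would define its own based on its legal obligations and risk tolerance.

  # Illustrative sketch of a uniform use-case categorization scheme.
  # Attributes, tiers and controls are assumptions, not a prescribed standard.
  def categorize_use_case(uses_personal_data: bool,
                          affects_individual_rights: bool,
                          regulated_domain: bool) -> str:
      """Map use-case attributes to a risk tier."""
      if affects_individual_rights or regulated_domain:
          return "high"
      if uses_personal_data:
          return "medium"
      return "low"

  CONTROLS = {
      "high": ["full AI risk assessment", "bias and explainability testing",
               "human-in-the-loop review", "ongoing monitoring"],
      "medium": ["standard assessment questionnaire", "access controls",
                 "periodic review"],
      "low": ["lightweight intake form", "acceptable-use attestation"],
  }

  use_cases = {
      "internal IT support chatbot": dict(uses_personal_data=False,
                                          affects_individual_rights=False,
                                          regulated_domain=False),
      "AI-assisted hiring tool": dict(uses_personal_data=True,
                                      affects_individual_rights=True,
                                      regulated_domain=True),
  }

  for name, attrs in use_cases.items():
      tier = categorize_use_case(**attrs)
      print(f"{name}: {tier} risk -> {CONTROLS[tier]}")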

In the end, the objective is to make sure resources are funneled to where the potential for harm is the greatest.

Prioritizing Risks Related to Business Strategy

Companies should prioritize focusing assessment efforts on AI-related risks “that directly impact trust, compliance and business outcomes,” Golbin Blumenfeld recommended. These risks include “bias and discrimination, which can lead to legal and reputational harm; lack of explainability, especially in regulated or high-stakes decisions; data privacy and security risks; and hallucination or confabulation, where AI generates false or misleading information,” she elaborated. “Addressing these core issues helps support responsible, safe and effective AI deployment.”

In prioritizing what risks to focus on in the assessment, companies also should consider their business strategy, Golbin Blumenfeld suggested, and ask themselves what business areas are priorities, and what their short- and long-term goals are. “The risks that are most important, then, relate specifically to those that are critical to this strategy,” she said.

Keeping Legal and Regulatory Risks in Mind

AI risk assessments can “help surface circumstances where compliance with laws like the E.U. AI Act or sector-specific laws like Colorado’s Division of Insurance AI regulation may be necessary,” Golbin Blumenfeld noted.

Improper use, implementation and control of AI can create litigation and regulatory enforcement risk, both Golbin Blumenfeld and McMurrough noted. For example, Golbin Blumenfeld elaborated, missteps “can lead to noncompliance with privacy and cybersecurity laws [if AI tools] collect or process personal data without proper consent, store data insecurely, or leverage third-party tooling (including models) that may expose or leak sensitive information.” AI also could violate regulations or sector-specific cybersecurity standards “by making decisions that lack transparency or fairness,” she said.

If an organization relies “on inaccurate information generated by AI,” that can give rise to legal risk, McMurrough cautioned. Moreover, corporate use of AI can lead to “potential issues related to legal privilege, preservation obligations or confidentiality concerns,” she said.

Thus, “as part of an AI governance program, the risk assessments conducted can and should ask questions related to regulatory compliance as well as educate staff (not just developers but also business case owners, third-party risk management, risk functions themselves, and all enterprise AI users) [on] potential risks and their obligations to support enterprise-wide regulatory compliance,” urged Golbin Blumenfeld.

AI and its prospective hazards are still new to many companies. “In some cases, noncompliance is inadvertent – AI was adopted so quickly that teams did not have the opportunity to reflect on what could go awry and if the system itself was subject to any additional compliance requirements,” acknowledged Golbin Blumenfeld. To manage risk, “human oversight, clear roles and responsibilities, and clear guidance for development, use, and management of AI coupled with clear risk management processes (including AI assessments) are key,” she continued.

See “Benchmarking AI Governance Practices and Challenges” (May 7, 2025).

Formulating the Risk Questionnaire

To bring relevant AI risks to light, the assessment questionnaire should ask “about what the system does, who it impacts and whether it is being used in a sensitive or high-stakes area,” suggested Golbin Blumenfeld. The questionnaire should seek information on where “the data comes from, how the model was built and whether it’s been tested for bias or can be easily explained,” she continued. Companies should also “include questions about privacy, security, legal compliance, and who’s responsible for oversight.” It is also important to “look at how the system will be monitored and updated over time to catch issues early and keep it working as intended,” she added.
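
One possible way to organize such a questionnaire, sketched below in Python, is to group questions by topic so a single intake form covers purpose, data, compliance and monitoring. The groupings and wording paraphrase the points above and are illustrative, not an authoritative template.

  # Illustrative grouping of AI risk assessment questions by topic.
  # Wording paraphrases the discussion above; adapt to your own program.
  QUESTIONNAIRE = {
      "purpose and impact": [
          "What does the system do and who does it impact?",
          "Is it used in a sensitive or high-stakes area?",
      ],
      "data and model": [
          "Where does the data come from?",
          "How was the model built?",
          "Has it been tested for bias? Can its outputs be explained?",
      ],
      "compliance and oversight": [
          "What privacy, security and legal requirements apply?",
          "Who is responsible for oversight?",
      ],
      "monitoring": [
          "How will the system be monitored and updated over time?",
      ],
  }

  for topic, questions in QUESTIONNAIRE.items():
      print(topic.upper())
      for question in questions:
          print("  -", question)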

See “Innovation and Accountability: Asking Better Questions in Implementing Generative AI” (Aug. 2, 2023).

How to Use the Risk Assessment

To be valuable, an AI risk assessment needs to inform governance and compliance decisions. “Governance and controls for a specific AI application should be in part defined by a risk-based approach, where practices are proportional to the risk,” advised Golbin Blumenfeld.

“Companies should consider how their management of AI risk fits within their broader enterprise risk management program,” according to McMurrough. That includes classifying high-risk versus low-risk use cases and clarifying responsibility for managing AI-related risk, she continued.

Foundation for Governance

The AI risk assessment should “serve as a foundation for governance and compliance decisions by identifying where oversight, controls or safeguards are most needed,” advised Golbin Blumenfeld. “The findings can inform policies on model approval, human-in-the-loop requirements, data use and monitoring practices,” she said.

“A strong enterprise AI governance program facilitates defining acceptable practices and aligns AI use with existing practices around data governance, cybersecurity, privacy and more,” continued Golbin Blumenfeld. “Good data practices such as data minimization, data anonymization/sanitization, and access controls help restrict unintended use of data,” she explained. Moreover, “devising a culture of periodic testing coupled with guidance around testing practices enables resilient model design, bias testing, transparency evaluation and enhanced quality,” she said.

See “Dos and Don’ts for Employee Use of Generative AI” (Dec. 6, 2023).

Informing Risk Mitigation

There are plenty of risk mitigation tools for companies to use. “A wide range of safeguards can be deployed to mitigate the risks of AI and other emerging technologies depending on the use cases and circumstances surrounding their use,” observed Golbin Blumenfeld.

Security measures “such as model hardening and adversarial testing are also key to preventing misuse or data leakage,” said Golbin Blumenfeld. In addition, “robust documentation practices enable more transparency around model design and use,” she noted. “Equally important are human-in-the-loop processes for critical decisions, continuous monitoring to catch model drift or errors, and clear escalation paths when issues arise,” she added.

Processes and controls “should be backed by strong governance frameworks and aligned with relevant regulations to make sure risks are managed responsibly and consistently across the organization,” advised Golbin Blumenfeld.

Ultimately, “companies need to reflect on their enterprise risk management frameworks to understand where AI risks are covered by existing risk domains, where new risks remain and therefore need to be incorporated, as well as where existing risks may be exacerbated,” Golbin Blumenfeld instructed. Moreover, they should “document their risk assessments, oversight processes and roles/responsibilities related to AI risk management to demonstrate accountability if issues arise,” she continued.

See “Checklist for AI Procurement” (Apr. 16, 2025).

Artificial Intelligence

Pain Points and New Demands in AI Contracts


As AI has become a must-have business tool, attention to contract terms is an essential part of managing its risks. Navigating provisions around rights and responsibilities in AI-related contracts, however, can be tricky given developing and unsettled variables.

“In general, the landing spot in negotiating these contracts is still all over the map and very context-dependent,” Orrick partner Matthew Coleman told the Cybersecurity Law Report.

Contract teams are trying to adapt classic technology contract terms like warranties, indemnity and scope of license to AI’s multifarious risks and features. Parties’ positions on privacy-sensitive issues like use of company data in AI development are also in flux. Novelties have emerged, like buyers demanding the right to know the logic that AI products use to generate their outputs, whether text, image or agents’ automated actions.

Business pressures to deploy the latest AI tools are an immediate top concern for contract lawyers. Amid the percussive beat of hype that AI is fundamentally transforming how companies deliver value to customers, impatience reigns among businesspeople. They welcome salespeople calling to announce new AI features added to their current products and are sidestepping contract and procurement teams, an in-house lawyer lamented during an audience discussion at the Privacy+Security Academy May 2025 conference.

This article presents insights on the current pain points for contract teams, the negotiating stances parties are taking and ways to surmount the accompanying challenges, with commentary from Coleman and conference panelists.

See “Key Legal and Business Issues in AI-Related Contracts” (Aug. 9, 2023).

Five Sticking Points in Negotiations Over AI Rights

Lawyers adapting older technology acquisition contracts to address AI’s latest risks and capabilities can encounter trouble spots across some of the traditional terms and conditions. These pose substantial hurdles to reaching agreements. “The market has not adopted cohesive ways of approaching these negotiations,” Baker McKenzie senior associate Cristina Messerschmidt, who moderated the panel, told the Cybersecurity Law Report.

Contract lawyers can lose their bearings between the ambient corporate stress over “profiting from AI” and the legal world’s discussion of AI risks. When deciding how tough to be in negotiating a contract, trust in the AI seller is a first focus, Coleman recommended. Representations, warranties, third-party audits and transparency about error rates can all clarify the level of trust merited. Another place to draw a firm line in contracting, he suggested, is “if you’re talking about highly sensitive data.”

1) Descriptions and Definitions of AI

A frequent issue with AI contracting is lack of alignment internally on a tool’s use and purpose, Wolters Kluwer assistant GC Trinity Car said during the panel. When a new AI product emerges, she brings stakeholders across different departments – including procurement, the business team and vendor management – to the table so that everyone can work together to connect the dots regarding the product’s value.

Once negotiations outside the company start, if parties maintain different views of how to define their AI tool, “I try to pull it back to a definition with applicable law. The E.U. AI Act is usually where we go,” Car told the Cybersecurity Law Report. However, adding to the complexity, industries “like healthcare, finance and automotive increasingly have regulatory bodies that define the same term [like AI system] differently,” she noted.

Parties often will agree on AI’s definition but need to hammer out the description of services, Messerschmidt observed. It is crucial to precisely characterize the AI’s capabilities and uses “because that description of services will inform how hard parties push on certain aspects of the contract,” she said.

Another reason for a detailed description of services, Gilead Sciences associate GC Joanne Charles highlighted, is that “specificity can become an issue down the line” when enforcing rights and contract provisions.

Deployers typically have no ability to change definitions set by their largest AI providers, Charles pointed out. “If there’s a huge gap between what your company understands those terms to mean and what the [vendor’s license] says, work orders and some additional documentation can help bridge the gap” and address the buyer’s concerns, she suggested. When introducing contract addenda, lawyers should look to ensure they do not introduce new conflicts with existing service agreements, she cautioned.

See “Dos and Don’ts for Employee Use of Generative AI” (Dec. 6, 2023).

2) Use of Company Data

Data use rights often prompt a pitched negotiation. Vendors want their customers’ data for training, refining and testing their AI (and beating the competition). “Buyers are pushing back on the idea that they need to give up any data [rights] to the provider,” said Coleman.

Buyers question “what data is needed to support the development of the product or service. In a lot of cases, the AI tools that are being developed can be created these days without the need for end customer data,” Coleman noted. “Customer data could accelerate the [AI tool’s development] process, but developers could use open-source datasets, publicly available datasets, or can license datasets or AI-generated synthetic datasets,” he added.

In highly regulated industries that previously resisted any data sharing, “companies, with the passing of time, have gotten a bit more comfortable with” allowing some use of the company’s data, Messerschmidt observed.

Beta testing and proof of concept (POC) pilots are proliferating among data and AI teams, catching attorneys’ attention. “Teams will say, ‘well, it’s just a pilot, so it doesn’t matter. The AI data is going to be in and out of the system in 30 days. It’s not a lot of risk,’” Car shared. When, as has happened many times, the business asserts that because it is “just a POC,” it can simply put a non-disclosure agreement around the data in place, “I die inside,” she said. Companies should protect themselves by inserting a clause restricting data use during testing, “whether you call it a POC agreement or a bid agreement,” she urged.

Companies also should push to have beta testing conducted with non-sensitive datasets, Charles advised. Lawyers need to pick their fights, but data use may be an appropriate priority, she said. There certainly should not, for example, be “a ten-day beta test where [the AI tool has] access to your entire dataset, including patient data or PHI,” she cautioned.

Existing agreements are a consideration when bringing on an AI tool as well, Messerschmidt noted. Determining the controlling rights for data use “is incredibly relevant for all these vendors that have been with your company for years and years,” so companies should investigate which older terms govern current requests to share, she advised.

See “The Tension Between Data Scraping and Data Protection in an AI-Driven World” (Feb. 26, 2025).

3) Acceptable Use Terms and Addenda

AI tools are often built using foundational large language models (LLMs). Purveyors vary in how they pass along the constraints that LLMs impose on them, noted Coleman. Some AI sellers apply an LLM’s entire terms of use to the buying company. Others take “a more surgical approach of ‘Do not use our service for these specific things restricted under OpenAI’s terms,’” he elaborated.

The largest AI players tend to impose a product-specific addendum for each service, an in-house attorney warned during the panel’s audience discussion. “With really big vendors, you probably have a dozen of these addenda, all of which control” when conflicts arise, the lawyer said.

See our two-part series on how to manage AI procurement: “Leadership and Preparation” (Sep. 18, 2024), and “Five Steps” (Oct. 2, 2024).

4) Rights to AI Logic Along With the Output

“The rights to the AI output are certainly very hotly negotiated,” Messerschmidt noted. Depending on the AI capabilities, a negotiation may focus on only certain parts of the output rather than a blanket grant to use it all, she added.

Buyers also have begun asking for rights to know the AI’s logic in producing the output, Coleman reported. “If it’s a chatbot, for example, [the rights would cover] what was said and why it was said,” he explained. There is precedent in software agreements for similar rights, which sometimes include access to code relevant to the output. Growing regulatory attention to the impact of AI decisions makes buyers’ requests for rights to know the AI’s logic sensible. Using explainable AI for a consequential decision demonstrates the company’s diligence and care if scrutinized.

AI sellers have rebuffed demands for explainability and transparency about training, however. “I have yet to see any providers that are outright willing to assign any [rights] in anything other than just the straight-up output,” Coleman reported. Developers typically contend that such logic reflects their trade secrets, or even the “inherent value and the secret sauce of the company itself,” he noted.

See “SEC Enforcement Actions Target ‘AI Washing’” (May 22, 2024).

5) Indemnity and Liability

“Resolving who takes on the legal risk for things going wrong is one of the biggest conversations” in AI contract discussions, Coleman reported. With so little history of negotiations over liability for AI risks, and so many startup companies now negotiating, procurement lawyers cannot rely on any settled practices or “reasonable expectations” to invoke at the table.

Taking Risks and Capabilities Into Account

At a recent gathering of AI tool providers, Coleman heard a wide variety of discussions about addressing buyers’ and sellers’ liability. Some AI vendors said their customers readily accepted liability for harms from their product or service, including IP infringements. Other providers said they regularly tussled over liability, claiming “that ‘every single contract we get in the door, [the buyers] try to push uncapped indemnities for any harm that’s caused,’” he recalled.

Contract negotiators should not make decisions about how firm a position they will take in talks about liability until they define the risk from the described services, Messerschmidt cautioned. “Who is adding risk to the whole problem – is it us? Is it the input? Is it just the way the model works?” Based on those answers, she added, it may be “worth it” to go through several rounds of negotiations, while in less risky cases, it might make sense to settle for less.

Third-party claims about AI’s output may be a factor before long, so it is worth tracking any developments over the next year, Messerschmidt continued.

See “How Hedge Funds Are Approaching AI Use” (Jul. 31, 2024).

Taking Competition Into Account

The emergence of AI agent development has led many smaller AI tool vendors to introduce new products in 2025. With newer technology, “intermediary companies want to offer terms that are commercially reasonable given the power imbalance between their buyers and them,” Coleman shared. But they are locked into another power imbalance, subject to the LLMs that support their AI tools.

The contracts for some newer AI vendors may not address risks in detail because the vendors “are interested in growing as fast as they can. So, they say, ‘sure, we’ll take on the risk of the indemnities that our customers are asking us for,’” Coleman observed. However, he continued, though it may be their customers’ preference, “the smaller players cannot really be the backstop” on liability. The LLM giants generally have inflexible terms of use. “Foundational models are not necessarily giving [smaller players] the same indemnity that their buyers might be asking of them – which leaves them holding the bag if anything goes wrong,” he said.

The mobile app world may offer one model for the contracting issues that arise with newer AI vendors that rely on larger LLMs.

See “Apple Overhauls Privacy for iPhone Apps, but Will It Enforce Its Policies?” (Sep. 23, 2020).

The Prospects for Standardization in AI Contracts

Contract lawyers have voiced a desire for a standard AI rider, or a widely accepted template for an AI agreement like a data processing agreement (DPA), Coleman noted. It is sensible to have “a dedicated place where all of the AI terms live,” expressing, for example, acceptable use terms, the potential recourse, the limitation of liability and informational rights covering the seller’s obligations to help the buyer comply with legal requirements, he detailed.

At the same time, lawyers have “some trepidation” about how a standard agreement might interact with all the other product addenda and contract layers that buyers and sellers may impose, Coleman added.

The market will be the main driving force for AI contracts for the near future, Messerschmidt predicted. “Unlike the GDPR, where you’ve got very specific clauses that need to be included in a DPA for Article 28 requirements, there’s nothing like that for AI,” she noted.

One force that could promote such standardization is the Colorado AI Act’s requirements for deployers, Coleman pointed out. Buyers may start invoking the law’s provisions to get certain reps and warranties in place with their providers to allow compliance.

See “How to Address the Colorado AI Act’s ‘Complex Compliance Regime’” (Jun. 5, 2024).

Managing Risk Inside the Company

The widespread business pressure in 2025 to benefit from AI tilts company leaders toward risk-taking. “Your AI choices may be the most crucial decisions not just this year but of your career,” warned PwC’s 2025 AI predictions for business.

Contract lawyers, Car noted, likely will be dealing with company leaders “who are so attracted to the shiny thing that they don’t care what the risks are.” Fear of missing out is fierce in 2025.

Company leaders also use AI in their personal lives, where they do not have the constraints that their company imposes, Charles observed. Personal use limits employees’ grasp of the business stakes. She has heard employees say that a vendor “indemnifies” their use. “They don’t understand what the terms mean, and don’t understand the risks that [AI use] creates for the enterprise,” she cautioned.

Contract lawyers can take some steps to avoid trouble, the panelists agreed. Up front, they can become a trusted advisor to business teams, engaging them on what they will need from AI technologies before any external negotiations, Charles suggested.

As the push for the shiny AI tool grows, lawyers can seek a middle ground. They can start by acknowledging the value of acquiring the AI, Car recommended. “Talk about how your advice is going to help create efficiencies. Everyone always wants to move faster and do more,” she urged. Meanwhile, she added, look out for how contract terms might create conflicts for the business.

Absent early and thoughtful engagement from the lawyers, Messerschmidt cautioned, the businesspeople aching for the career-making AI acquisition “will go ahead and do it anyway, without [attorneys’] approval and without [their] help and without [their] supervision.”

See “Unifying Risk Assessments: Breaking Silos to Enhance Efficiency and Manage Risk” (Jan. 29, 2025).

Chief Compliance Officers

Skills and Qualities of Effective Compliance Officers


The role of the CCO has evolved significantly in recent years, extending across industries. Contemporaneously, salaries have increased, though growth in compensation slowed in 2025 as compared to 2024, according to BarkerGilmore’s 2025 CCO Compensation Report.

This article synthesizes relevant findings from the report and distills insights from the firm’s webinar, which included professionals from Radical Compliance and Spark Compliance Consulting, on the current market for CCOs, compensation trends, relevant skills and experience, and challenges facing dual-hatted GC-CCOs.

See “To Work Effectively, CCOs Need Authority, Autonomy and Information” (Nov. 13, 2024).

Active CCO Market

At the time of the November 2024 presidential election, the market for compliance professionals slowed down, said John Gilmore, managing partner at BarkerGilmore. Following the election, however, search activity surged and has remained robust. BarkerGilmore placed 11 CCOs in the first quarter of 2025, which was “a big deal for us,” he said. The firm made placements in tech, financial services, healthcare and chemical industry companies, as well as two private equity sponsors.

It should continue to be a busy year for placements. Compliance has taken “a real strong hold in members of the executive leadership team,” added Gilmore. “Companies respect compliance more than I’ve ever seen them respect compliance.” Highly regulated industries like healthcare and financial services continue to have high demand for compliance professionals. However, certain industries can be “narrow-minded” when seeking a CCO.

It is hard to estimate the size of the compliance market because individuals are shifting in and out of compliance-related roles or performing compliance-related functions as part of another discipline, such as law or auditing, noted Matt Kelly, editor and CEO of Radical Compliance. For example, an information security officer may be responsible for compliance because “that’s where the risk is,” he said.

Ellen Hunt, a principal at Spark Compliance Consulting, and Haydee Olinger, a strategic adviser/coach at BarkerGilmore, are both bullish on the profession. They expect it to grow, especially as companies become more dependent on technology and the global economy evolves. Additionally, more and more boards are recognizing the importance of the CCO.

Compensation Trends

During 2024 and 2025, BarkerGilmore surveyed more than 260 CCOs from a range of industries, primarily financial services (52%) and healthcare (20%). Nearly two-thirds of the CCOs were from private companies, 24% from public companies and 11% from non-profits. A majority (56%) were male.

Half of the respondents have been in their current position for between one and five years. Most of the rest have been in their current position for more than five years. Forty-four percent said they are not considering new opportunities, which indicates a high level of job satisfaction, according to Gilmore. Of those considering new opportunities, roughly one-third are seeking better compensation and benefits. Sixty-one percent said they are not concerned about their job security. Still, there is “a lot of competition” in the compliance market, noted Gilmore. “If you’re not on your A game, you’re not getting to the finish line.”

Median total annual cash compensation (base salary plus bonus) was virtually the same for men and women. Men earned a median $382,500, consisting of a $275,000 base and a $107,500 bonus. Women earned $379,000, including a $279,000 base and a $100,000 bonus. Median total compensation varied widely by industry, with the highest being in technology ($777,000) and life sciences ($665,000), and the lowest in the non-profit sector ($253,500). The median in financial services was $350,000. Median total compensation at public companies was $626,000 – nearly double that at private companies ($350,000) and non-profits ($321,600).

The median base salary for compliance professionals and GCs rose 2.7% year-over-year. Notably, 88% of the individuals surveyed hit their target bonuses, said Gilmore. Compensation, however, is “all over the map.” It depends on team size, company size and the complexity of the relevant regulatory landscape. A survey cannot show how much a particular individual should be making. An individual must understand the responsibilities and risks of a position and what the company’s business objectives are.

See “Majority of In-House Counsel Satisfied With Compensation, but Gender Gap Remains” (Jul. 22, 2020).

Compliance Still Critical in Deregulated Environment

There is a lot of movement within the profession, said Hunt. Companies are increasingly aligning compliance efforts with their strategy and goals. Compliance remains a critical function in highly regulated industries, added Olinger. Regulations have become more complex, with more consumer-focused compliance requirements and a strong focus on risk in general.

There is often overlap between compliance and general business risk, according to Kelly. For example, while certain regulations require cybersecurity measures, companies also need such measures to protect their businesses. Even in the current deregulated environment, a strong compliance program is needed to help an organization identify, understand and mitigate risks. Still, “compliance officers might have to reframe some of what they do to show [boards] how [compliance] overlaps with strong risk management,” he observed.

“Any organization that has humans has risk. It’s just that simple,” remarked Hunt. Companies will always have to identify, manage and mitigate risk, especially as interconnectedness and complexity grow. Board members constantly look for ways for their companies to grow and become more profitable, added Olinger. Mitigating risk will be a critical part of those efforts. Businesses will always need to understand their vulnerabilities.

See “Hallmarks of High-Impact Compliance Programs and Compensation Trends for Compliance Officers Who Implement Them” (Sep. 25, 2019).

Law Degree Not a Prerequisite

Some candidates think they should have a juris doctor degree (JD) “because there’s a compensation premium if you do,” noted Kelly. There is always a salary premium for advanced education – but salary is not the only indicator of a CCO’s success, opined Hunt.

A CCO must have multiple skills, just one of which is understanding the law. For example, to be successful, a CCO must be “a strategic thinker.” A JD might help develop that skill, but so can other degrees.

Although a JD is not needed to be a successful CCO, companies often expect to hire a JD for the position, observed Gilmore. A JD definitely lends an advantage in getting interviews, but it will not ensure a job. To get hired, a candidate will have to be well-spoken, demonstrate an understanding of the relevant regulatory environment and have leadership skills. Individuals without a JD will have to be “extremely business minded” and able to explain how they will help the business achieve its goals, he stressed.

Some organizations have a “mental block” on the issue of whether a JD is required, observed Olinger. For example, a bank may think it needs someone who understands every banking law. However, “I think you can convince management that you can have a job in compliance without that degree,” she said.

“Most of what compliance officers are doing is rooting out dumb things human beings have done,” Kelly said. While knowing the law may help in handling many compliance issues, a non-lawyer CCO often can rely on the legal team for assistance.

The value of a JD also will depend on the type of lawyer the person is, added Kelly. For example, an attorney entering a compliance position after spending an entire career at a law firm may not understand how to navigate corporate culture. On the other hand, a litigator who becomes a compliance officer may be better able to inform or challenge company counsel.

Critical Non-Legal and Non-Technical Skills

Leadership Qualities

A CCO must be a “strategic leader” with “smart skills,” continued Hunt. The CCO must be able to manage polarities. For example, CCOs must be able to balance the need to be transparent with maintaining confidentiality. They must be consistent while permitting flexibility. Additionally, research by leadership consultant Ron Carucci has shown that CCOs and other executives can benefit from four qualities:

  1. Breadth: an understanding of the organization and how it works;
  2. Context: knowledge of why customers buy from the company and why they value its products or services;
  3. Choice: the ability to make good decisions and stay focused on realizing company goals; and
  4. Connection: building relationships based on trust.

See “Leadership Insights From CPOs at Google, J.P. Morgan and Dow Jones” (Sep. 15, 2021).

Soft Skills

CCOs, according to Olinger, need many soft skills, including:

  • communications;
  • leadership;
  • problem-solving;
  • “executive presence”;
  • relationship-building;
  • ethical decision-making; and
  • interpersonal skills.

CCOs also must be able to influence people by being “confident, but humble,” added Gilmore. No one likes people who are “full of themselves,” he said. A successful CCO can change mindsets from “have to” adhere to compliance policies to “want to.” One approach is to show people the “monetary return from doing it the right way.” Another is to focus on building the company’s culture, brand and reputation to ensure the long-term success of the company and its employees. CCOs should tie a strong compliance program to business success, agreed Kelly. Compliance professionals are “very aware of how things might be going wrong or how they could go wrong,” he observed. With the right tools, they can help the company avoid missteps.

Emotional Quotient

Emotional quotient (EQ) tests show whether a person is empathetic, a good relationship builder and a good listener, explained Gilmore. People often overestimate their EQ. Many candidates do poorly in interviews because “they never stop talking.” They should seek to have a “natural conversation” and “gel” with the interviewer. Hard questions are a good sign. “When all the questions are easy, they are trying to get rid of you,” he cautioned.

Technological Understanding

Compliance officers increasingly need to understand data management and how IT and business processes work, said Kelly. A compliance officer must be a student of the business and understand how data management and privacy initiatives affect the organization and its bottom line, Olinger added. That is especially critical now, when data management and AI are top-of-mind concerns. “When you understand the data and the process and how people can make the right choice in the easiest, most time-efficient way possible, you will be successful,” opined Hunt.

See “What Does It Mean to Be Technologically Competent?” (May 15, 2019).

Relevant Experience

Compliance officers benefit from a diversity of experience, as well as from networking and learning, said Olinger. Some skills are highly transferable, including communications, privacy, cybersecurity and risk management. Relevant experience includes:

  • work in international organizations;
  • risk management;
  • communications; and
  • accounting, audit and investigations.

There is no standard template for a CCO, noted Hunt. They have a variety of experiences and expertise. However, compliance professionals should not gather experience haphazardly. Instead, they should think about what they want out of their career, the type of company they want to work for, and the position and level to which they aspire. A person may have worked in multiple organizations and earned multiple certificates and qualifications, “but they don’t know how to bring that all together to represent to a future employer why that’s valuable,” she said. Compliance professionals should figure out what type of organization they would like to work for and then see what that organization would need. “The broader your portfolio of certifications or experience the better,” added Gilmore. However, although certifications will help get someone a foot in the door, that person “still has to pull off the perfect interview.”

Having experience across industries shows a person’s intellectual curiosity and ability to learn, according to Gilmore. However, a CCO will only succeed if the CCO enjoys the business. A CCO should not move to a different industry for higher pay if the CCO will not enjoy being there. Working in multiple industries shows a person’s breadth, curiosity, dedication and ability to learn, Hunt agreed. It also shows the person has the requisite soft skills – not just technical knowledge. “To John’s point, have a passion and an interest in what you’re doing instead of just doing time,” she said.

See “Transparency Needed, This Time in Roles for Privacy Professionals” (Dec. 18, 2024).

Learn the Business and Industry

A CCO should understand both the business and the corporate structure, according to Gilmore. A prospective CCO should consider working in another area of the business first. This could help the person spot issues that senior management may miss or overlook. It is important to build a deep knowledge of an industry, said Olinger. Some companies insist on specific compliance experience, noted Gilmore. For example, one fixed income manager insisted on fixed income compliance experience. Consequently, candidates will have to cast a wide net.

When seeking a CCO position, a person should first learn the company’s reputation with customers, regulators and the public, continued Gilmore. The person should also try to understand the reputation of the compliance department and what it wants to accomplish. For example, if a company has a successful, mature program, it will probably not want a change agent.

See “How CPOs Can Manage Evolving Privacy Risk and Add Value to Their Organizations” (Mar. 12, 2025).

Mitigate Risk of Getting Pigeonholed

Compliance professionals should seek other experience to avoid getting pigeonholed. People move among positions regularly, Gilmore noted. It is critical for people to show they have earned the trust of others and, as a result, been given more responsibility over time. If a person is still doing the same thing after 10 years, “that’s a problem,” he said.

Some organizations have very narrow compliance programs focused strictly on regulatory compliance, noted Hunt. Others address culture, ethics and compliance. It may be advisable to work at an organization with a broader compliance mandate. Compliance professionals should always seek opportunities to work with other functions, which helps them learn the business and understand others’ points of view. That, in turn, enhances their influence and ability to persuade.

Certain attributes of the compliance role can be selling points when changing positions, added Olinger. Compliance professionals can highlight their ability to collaborate and influence others. Similarly, the ability to handle difficult situations and difficult people is an asset in any role.

Challenges Facing a Dual-Hatted GC-CCO

Requests to recruit for GCs often include in the job description that the GC will also lead compliance, explained Gilmore. The issue with that arrangement is that GC and CCO roles are both full-time jobs. “I’ve never talked to a chief compliance officer who only works half days,” said Kelly. It is hard for one person to fulfill both roles on “an active, in-depth basis,” according to Olinger. Both roles are huge.

A combined GC-CCO will almost certainly need a deputy to assist with compliance. In those situations, because some boards can be prone to trusting the GC in the compliance role, it is incumbent on the GC to introduce the deputy to the board so members understand who is managing the function.

The success of having a deputy manage compliance under the GC will depend on the relationship between the GC-CCO and the deputy, said Hunt. It can be an issue for GCs to have to trust “people underneath them who don’t have the compensation or the title to really do the job [well],” Kelly observed. If the GC does not trust a deputy, or feels the need to compete with the person, it will not be good for the organization.

Good GCs understand that compliance is different from legal and foster trust between the legal and compliance functions. Legal needs information from compliance so that it can manage legal risk. Compliance focuses on building a compliance culture and preventing and detecting issues. A GC who takes the CCO title but handles compliance “on the side” is not serving the company, Hunt cautioned.

See our three-part series on the first 100 days as GC/CCO: “Preparing for the Role and Setting the Tone” (Apr. 14, 2021), “Developing Knowledge and Forging Key Relationships” (Apr. 21, 2021), and “Managing Daily Work, Performing Risk Assessments and Looking Ahead” (Apr. 28, 2021).

Preparing Potential Successors

A CCO who wants to groom potential successors from within the organization should help them develop the necessary skills, build a network and learn from each other, advised Hunt. The CCO should also give others opportunities, show trust in them and recognize their accomplishments.

Impact of AI

Although AI will not eliminate the compliance function, it could help with analyzing large volumes of data, risk assessments and monitoring, according to Olinger. AI is not going away, so CCOs should think about how to use it to make compliance more efficient. AI will be another tool to assist the profession, just as spreadsheets helped accountants without replacing them, added Kelly. On the other hand, “if AI will let you do more with less, the company’s not going to let you do less – the company is going to give you more,” he opined.

See “Assessing and Managing AI’s Transformation of Cybersecurity in 2025” (Mar. 19, 2025).

People Moves

Cooley Adds Tech and Privacy Litigators in San Francisco


Cooley has welcomed seven new partners – Simona Agnolucci, Benedict Hur, Joshua Anderson, Tiffany Lin, Jonathan Patchen, Michael Rome and Eduardo Santacana – to its global litigation department in San Francisco. The group joins from Willkie Farr & Gallagher.

The team is known for high-stakes complex litigation, including class actions addressing rights of publicity and privacy, intellectual property disputes and trade secret theft for a range of global technology, financial services and life sciences companies. The litigators also handle federal and state regulatory and enforcement matters, trials and arbitrations.

Most recently, Agnolucci and Hur were managing partners in Willkie’s San Francisco office. Along with Anderson, Lin and Santacana, the two represented Google in a variety of class action lawsuits alleging privacy violations, including claims under the federal Wiretap Act, the California Invasion of Privacy Act and HIPAA. These cases highlighted issues involving Google’s analytics software, transmission of personal information, communications containing individually identifiable health information, use of personal data to bolster advertising offerings, consent, web infrastructure and tracking of IP addresses.

Agnolucci also has specialized in high-stakes complex litigation involving white-collar criminal defense and represented clients in DOJ and SEC investigations, including those related to FCPA violations.

For insights from Cooley, see “‘Everyone Wants to Speak to the CISO’ and Other Realities of Addressing Vendor Breaches” (May 14, 2025); and “Connecticut AG’s Report Reveals Privacy Enforcers Reaching Deeper Into Their State Laws” (Apr. 30, 2025).

People Moves

Former Assistant U.S. Attorney Joins Stoel Rives As Partner in Portland


Ethan Knight, former Assistant U.S. Attorney for the District of Oregon, has joined Stoel Rives as a partner in the firm’s litigation group. He will be based in Portland.

Bringing more than 25 years of experience handling high-profile matters in federal and state courts, Knight focuses primarily on government investigations and white-collar crime, defending businesses, executives and public officials facing sensitive government investigations and complex legal challenges. He helps companies navigate a modern regulatory environment shaped by rapid technological advancement, AI and evolving cybersecurity threats.

Prior to joining Stoel Rives, Knight spent nearly two decades as an Assistant U.S. Attorney for Oregon, where he led the National Security and Cyber Crime Unit, overseeing prosecutions involving counterintelligence, cybercrime, international terrorism, domestic terrorism and intellectual property offenses. He also previously served as Chief of the Economic Crimes Unit, where he managed cases involving complex fraud, tax evasion, environmental crimes and public corruption.

Earlier in his career, Knight spent eight years as a Deputy District Attorney in Multnomah County, Oregon.