Government Investigations

Gen AI Chats Becoming Evidence: Law Enforcement Warrants and Subpoenas


Users should exercise caution before prompting ChatGPT or Claude. As three 2025 cases demonstrate, generative AI (Gen AI) chats are being used as evidence in criminal prosecutions, with warrants and complaints citing ChatGPT conversations in actions involving child exploitation, arson and vandalism.

Most requests that AI providers have received for users’ prompts and AI-generated chats have come from federal officials, observed Richard Salgado, a Stanford University law professor and consultant who oversaw Google’s response to national security and law enforcement demands for 13 years. The Stored Communications Act (SCA) authorizes law enforcement to force companies to disclose information identifying their users by issuing subpoenas unilaterally, but most demands for private user content so far have been issued through search warrants with a judge’s signature. “It seems like the prosecutors are giving this type of data the respect given to email and other nonpublic content,” Salgado told the Cybersecurity Law Report.

This two-part article series examines developments around the use of Gen AI chats as digital evidence, with insights from Salgado and experts from Integreon, the Electronic Privacy Information Center (EPIC), Loeb & Loeb, McCarter & English, and Winston & Strawn. This first installment shares law enforcement’s views on obtaining Gen AI chats for investigations, explains the unsettled law around access to Gen AI use records and identifies expected conflict points to watch. Part two will discuss strategies for companies to prepare for a steady increase of government and litigation requests for Gen AI user data. It will also examine OpenAI’s forceful statements in October after a court loss on producing Gen AI logs in discovery.

See “Google Settlement Shows DOJ’s Increased Focus on Data Preservation” (Dec. 7, 2022).

Three Criminal Cases Exposing Gen AI Prompts and Chats

First Known Warrant for Prompts Issued to OpenAI

A District of Maine court issued the first known federal search warrant asking OpenAI for user data in U.S. v. Hoehner. The suspect’s use of ChatGPT helped federal agents identify the individual in an investigation into a dark web child exploitation site. Homeland Security Investigations revealed it had been watching the site administrator in an undercover capacity when the suspect mentioned two prompts he submitted to ChatGPT. One prompt asked for a story about a Star Trek character meeting Sherlock Holmes; the second sought a 200,000-word humorous poem about President Donald Trump’s love of the song “Y.M.C.A.”

The investigators sought OpenAI user information tied to the two prompts, which led to a single account, though investigators ultimately identified the suspect through other clues and records without the user information from OpenAI, according to case records. The court unsealed the warrant in October 2025, Forbes reported, but resealed it in November, according to the docket.

Prompts and Chats Appear in Palisades Fire Complaint

In California, prosecutors used several of Jonathan Rinderknecht’s ChatGPT prompts and chats in their complaint accusing him of intentionally setting a small fire that rekindled a week later into the Palisades fire, which they said killed 12 people and destroyed 6,837 buildings. The day that he allegedly set the fire, he typed a question into a ChatGPT app on his phone, asking if a person would be at fault if they were smoking a cigarette and a fire erupted.

Prosecutors included other chats from the six months before the fire that indicate the accused’s thinking about fire, including a request that ChatGPT create a “dystopian” illustration of a crowd of poor people fleeing a forest fire while a crowd of rich people mock them behind a gate. The complaint included the resulting ChatGPT image.

Probable Cause Statement in Felony Property Damage Case Includes Chats

On October 1, 2025, a Missouri State University sophomore was charged with felony property damage for vandalizing 17 cars in Springfield, Missouri. The police department’s probable cause statement quotes many lines from a long ChatGPT session the student started on his phone 10 minutes after his spree. During the 3:47 a.m. chat, the teen asked, “is there any way they could know it was me,” and confessed to smashing windshields, according to the statement.

The police department’s statement ascribes feelings to the AI model, noting a point in the conversation when the accused student “begins to spiral. Even ChatGPT begins to get worried and asks him to stop talking about harming people and property.” The suspect had consented in writing to a phone search and provided his PIN, allowing the investigator to download the ChatGPT conversation and avoid having to seek a warrant.

See “CSIS’ James Lewis Discusses Balancing Law Enforcement and Privacy” (Mar. 16, 2016).

The Transition to a New Type of Digital Evidence

The number of law enforcement requests for Gen AI prompts appears small compared with the number for search and social media records. From January to June 2025, OpenAI received 119 requests for user account information, 26 requests for chat content, and one emergency request. In the second half of 2024, Google reportedly received 56,674 U.S. requests involving 109,641 accounts, but did not reveal whether any involved Gen AI use.

Investigators Likely to Pay More Attention in 2026

Investigators have not been attuned to subpoenaing AI chats, for a practical reason. “The federal government, particularly the law enforcement apparatus, is usually three to five years behind trend lines,” Winston & Strawn partner Damien Diggs, who served as U.S. Attorney for the Eastern District of Texas until 2025, told the Cybersecurity Law Report. Federal investigators have not had Gen AI on their work computers. “We were just blind to what it is and what it can do,” he recalled.

The government players who have started to pay attention see AI chats as an evolution, reported Loeb & Loeb partner Christopher Ott, a former federal prosecutor. “I’ve had unofficial conversations with people in the [DOJ], both on the agent side and the prosecutor side, about this. For the most part, they’re not thinking of it as something new. They’re saying, ‘oh, this is the same as Google search,’” he said.

With Gen AI chatbots supplanting traditional search engines, and OpenAI reporting 800 million users weekly, “we are going to see a lot more warrants like [the one issued in Hoehner],” Diggs predicted.

Another force that will drive an uptick in warrants for Gen AI prompt information, noted Integreon senior director of litigation services Robert Daniel, is that “a lot of law enforcement investigators now are younger. They’re used to social media. They know what’s out there.”

The Gen AI chats likely remain out there for investigators to request. In 2025, criminal suspects tend to know to delete their search history and wipe their “how to dispose of a body” search. However, it could be years before Gen AI users become aware that their chats could be evidence.

Multiple Ways That Prompts and Chats Help Prosecutors

The three 2025 cases show a few different ways that Gen AI chats create evidence trails that investigators can use to build criminal cases. The Missouri case, for example, highlights that chatbot phone apps may inspire longer confessional monologues than one would leave, for instance, in a phone’s note-taking app – thus creating a record of motivations that police could seize during a physical search. The California case underscores that chatbots offer individuals intimate ways to process secrets, including by generating images that might be vivid evidence to a jury. The Maine warrant shows that people’s impulse to entertain friends and online forums with tales of their Gen AI chats can also provide investigators with leads to gather more evidence.

Gen AI chats could offer more revelatory evidence than web searches, Ott highlighted. As the Missouri and California cases show, “that flow of conversation with a chatbot, even though it’s an artificial conversation, will contain more information and nuance than the static searches an investigator would get from a Google history,” he told the Cybersecurity Law Report.

While three 2025 cases show how user logs can help investigations, the public details so far do not reveal whether the Gen AI chats would be evidence admissible at trial, Salgado noted.

See “Second Circuit Quashes Warrant for Microsoft to Produce Email Content Stored Overseas” (Aug. 3, 2016).

Digital Evidence Law Remains Unsettled

While the SCA has existed for decades, “there are legacy questions when it comes to electronically stored communications that curiously, have not really been litigated out,” including the bounds of when law enforcement may obtain the contents of stored emails and other digital records, Ott noted. The dynamics with Gen AI could lead courts, with few precedents addressing reverse searches to uncover users, to take a fresh view of the statutory and constitutional constraints on law enforcement access.

The SCA’s Low Hurdle for Law Enforcement Access

The SCA is doubly forceful. It directs companies to shield stored communications. It also authorizes law enforcement – without having to show probable cause – to unilaterally access user-identifying information with a subpoena when it has reasonable grounds to believe that the information is relevant to an ongoing criminal investigation.

Under the SCA, prosecutors need a warrant to obtain “electronic communications content,” which apparently includes AI chat transcripts. Aggressive prosecutors might try to persuade a court that Gen AI chats, which are artificial, do not count as “communications” content the way an email message does, but are merely retained business records of the customer’s use of company software, Ott noted. It would be an increasingly tough argument, he added.

See “Utah Act Increases Restrictions on Access to Third-Party Data” (Apr. 10, 2019).

Constitutional Rights and Reverse Searches

The Fourth Amendment will likely govern answers to questions around reverse searches for Gen AI chats. Reverse search requests served on large data repositories seeking the identity of users unknown to law enforcement “have real potential to sweep up a lot of non-target and innocent people’s data,” effectively becoming dragnets that violate the Fourth Amendment prohibition on unreasonable government searches, said EPIC president Alan Butler.

Technology platforms have fielded two primary types of reverse searches for records: (1) all people present in specified locations; and (2) all those entering searches with specific keywords.

“Reverse warrants seeking to identify users who use terms in their queries have been very troublesome for the search engines. Law enforcement often underestimates the enormous volume of search requests that are submitted,” Salgado shared. Fulfilling the requests “can mean that the provider discloses information about gobs of users, some or maybe all of whom have nothing to do with the events being investigated. The amount of search traffic that Google gets in 10 seconds is enough to knock most systems offline, but for Google, it’s just another 10 seconds on Tuesday,” he said. Unsurprisingly, search engines often respond to subpoenas and warrants by arguing that they need to be narrowed.

“I can see Gen AI records as being very similar to search queries, where one might think a prompt is going to be unique, but it’s not,” Salgado predicted. “Police may not know the exact wording of the prompt. For keyword search queries, the warrants have often said, ‘these terms or similar terms,’” an ambiguity that can greatly inflate the results and requires the provider to guess what “similar” means, he noted.

See “California Law Enforcement Faces Higher Bar in Acquiring Electronic Information” (Nov. 11, 2015).

The Supreme Court’s Third-Party Doctrine

The Supreme Court established the third-party doctrine under the Fourth Amendment in the 1970s, holding that people categorically have no “reasonable expectation of privacy” in information that they voluntarily share with third parties. In an earlier case foreshadowing the doctrine, the Court upheld the government’s warrantless use of incriminating information that Jimmy Hoffa provided to an informant. The doctrine itself grew out of cases involving confidential records that customers give to banks to receive services, and courts later extended it to information shared with internet service providers.

In its 2018 Carpenter decision, the Supreme Court declined to extend the doctrine, strengthening Fourth Amendment protection for individuals’ location data held in advanced technology systems. The Court required law enforcement to obtain a warrant to request cell site location information. Phone users had no meaningful choice about sharing an ongoing stream of their location data with cellular service providers, the Court found.

The Fifth Circuit Court of Appeals in 2024, applying Carpenter in U.S. v. Smith, held that government investigators similarly would need warrants to force map app providers to deliver identifying data for all people in an area during a set time period. “The potential intrusiveness of even a snapshot of precise location data should not be understated,” and users of map apps have a privacy expectation over their location records, the Fifth Circuit concluded. However, the court declined to retroactively suppress the prosecutor’s use of Smith’s location history, holding that police had acted in good faith.

Three decisions since 2022 have split from the Fifth Circuit’s view. The Fourth Circuit Court of Appeals, the Supreme Court of Pennsylvania and the Colorado Supreme Court each let the government, relying on the third-party doctrine, force companies to unveil users’ location or keyword-search histories with just a subpoena.

See “Implications of the Supreme Court’s Carpenter Decision on the Treatment of Cellphone Location Records” (Jul. 25, 2018).

Questions Around Privacy Interests in Light of Training

Law enforcement could aggressively argue that because Gen AI models train on user chats, even anonymized ones, individuals have no privacy interest in the content of those chats. “As the business is going to use a person’s prompts to train up the AI, it’s not for private communication purposes” between people, Ott noted. Thus, government lawyers could argue that courts should not “treat AI prompts like an email, as neither side of the relationship is treating it like an email,” he said.

“I can see [the status of] AI chats being a constitutional issue that gets ginned up and litigated pretty heavily within the next year or two,” Diggs predicted.

Arguments for an AI Interaction Privilege

Heated litigation between OpenAI and The New York Times over copyright infringement has prompted debate about whether Gen AI chats deserve privilege protections similar to those afforded to established types of communications. “If you talk to a therapist or a lawyer or a doctor about [] problems, there’s legal privilege for it,” OpenAI CEO Sam Altman noted in August 2025, suggesting that sensitive conversations with AI deserve similar protections. He lamented that a New York federal court had ordered OpenAI to hand over to adversaries the prompt-output logs for millions of ChatGPT users.

Legislators, not courts, will probably need to establish any privilege for AI chats, noted McCarter & English partner Erin Prest. “We’ve already seen courts saying that if a doctor puts in a patient’s information to the open ChatGPT, that appears to destroy their privilege,” she said.

Commentators have suggested extending the psychotherapist-patient privilege to ChatGPT interactions that seek counsel or emotional processing. “The social benefit of candid interaction outweighs the cost of occasional lost evidence,” Nils Gilman, a policy historian, wrote in an essay featured in The New York Times. Chat providers could create a setting for “sensitive” conversations to help establish the privilege, while any use of AI to plan or execute a crime should be discoverable under judicial oversight, he contended.

Public discussion around AI chat therapy and instances of “psychosis” from chats could sway courts to balk at warrant requests, Butler noted. “Judges sometimes have a conceptual barrier to understanding what is at stake on the privacy side when it comes to a Google Maps reverse search. They think about what’s searched as ‘oh, it’s just an address,’” he observed. “The context of chatbots might bring courts along to understanding that what we are talking about fundamentally are a person’s thoughts and communications that we have for decades protected,” he posited.

Chief Privacy Officer

Tips From Big Tech Leaders on Navigating Global Privacy Regulations


It is more important than ever for the privacy officers of multinational companies to understand and navigate the increasing complexities of privacy regulation and innovation in a global context. Big Tech privacy processes can offer valuable lessons to other companies because they demonstrate best practices for building consumer trust, ensuring regulatory compliance and managing data in a way that provides a competitive advantage.

This article distills commentary from three former leaders of major tech companies, who spoke at the IAPP’s Privacy. Security. Risk. 2025 conference, on strategies for navigating multiple regulatory systems, engaging with regulators, governing AI and the importance of consumer trust.

See “How CPOs Can Manage Evolving Privacy Risk and Add Value to Their Organizations” (Mar. 12, 2025).

Navigating Global Regulations

Organizations grapple with the complex challenge of complying with sometimes conflicting privacy requirements across the globe, but there are measures they can take to ease the process.

Finding Commonalities

The privacy, cybersecurity and child safety regulatory fronts have all been moving at “1000 miles per hour” recently, said Jane Horvath, former Apple CPO and current partner at Gibson, Dunn & Crutcher. Privacy advisors should tell their clients to find the commonalities across various regulatory regimes, including those of small jurisdictions.

In the privacy space, most laws in the U.S. and elsewhere are based on the Fair Information Practice Principles (FIPPs), which include access and amendment, accountability, authority, minimization, quality and integrity, purpose specification and use limitation, security and transparency. “Build a privacy compliance program that hits most of the major areas,” Horvath recommended.

Preparing Defensible Legal Positions

An international company should develop rational and defensible legal positions for each regulatory system, while maintaining awareness that, in some jurisdictions, it might still lose if challenged, said Keith Enright, former Google CPO and current partner at Gibson, Dunn & Crutcher. A company dealing with cutting-edge technologies would be paralyzed if it tried to avoid all legal risk, he acknowledged.

Accordingly, a privacy officer should be able to tell leadership what the legal landscape looks like and to draw a “heat map” identifying individual jurisdictions where the company may face a legal challenge, Enright advised. For jurisdictions that present a challenge, privacy officers should provide leadership with credible, defensible, ethical arguments to defend the company’s actions. They should inform leadership that, “If we lose, we need the business to be prepared to do whatever that loss might signal. It might be that you have to pull back. It might mean that you have to make changes to a product faster than your normal product development cycle would allow for,” he said.

Nuanced and sophisticated legal risk analysis for a company such as Google also relies on global experts, Enright continued. “We had teams that were distributed around the world that were helping us understand: What did the law in each jurisdiction mean? How was it likely to be enforced?”

See “In‑House Perspectives on Compliance’s Role in Managing New and Emerging Risks” (Jun. 5, 2024).

Understanding and Articulating Non-Negotiables

A senior privacy officer’s job is to receive a product or business strategy from leadership and to help them understand how they can execute it in a manner consistent with non-negotiable legal guardrails, such as those applicable to protecting children, Enright explained. At Google, another non-negotiable was avoiding any criminal or otherwise illegal acts, he said. To earn leadership’s trust, privacy officers should have a very clear, well-articulated understanding of where the non-negotiables are. “What are the red lines?” he added.

Engaging Regulators to Build Trust

When presented with the opportunity to engage with a regulator, a privacy officer should maximize the value of that opening to build trust and goodwill and to create a collaborative dynamic, especially if the situation is not in a compulsory investigative or enforcement context, Enright suggested. “That is going to pay dividends when things go sideways. If the person on the other side of the table from you trusts you and believes that you are not hiding the ball and you’re being candid and open, everything goes better.” Even where the communication with a regulator arises in the context of a formal investigation or enforcement, “never miss the chance to use that engagement to build a relationship,” he advised.

When Enright engaged with regulators in Europe, he recounted, it strengthened Google’s relationships with almost all of them, which was useful during the run-up to the introduction of the GDPR. For example, “When many of Google’s competitors were somewhat panicking because they did not understand what the regulatory enforcement priorities were going to be and how disruptive GDPR was going to be, we felt like we had an incredibly clear view of what the future looked like,” he recalled. Given the preestablished relationship, he continued, “if there was an area that I was confused about, I could reach out directly and engage with one of the regulators that I had been involved with and get real, reliable, actionable intelligence that I could bring back to the business to help us guide our product strategy.”

When engaging with a regulator from another jurisdiction, privacy professionals should keep in mind geopolitical complexities and the political pressures facing that regulator, as well as “dynamics that may have nothing to do with [their] business or [their] subject matter domain,” Enright advised. Even when there are complicating factors, such as the need to protect confidential information or the prospect of enforcement or litigation, privacy professionals should treat all regulators as human beings, engage respectfully, presume goodwill and be aware that they are generally in a far more complicated situation now than previously, he said.

Governing AI and Preparing for Enforcement

Navigating Regulations

Enforcement in the AI space is on the horizon, and companies should be prepared for it. There is a lot of fluidity in global AI regulation now, which companies should take into account before investing deeply in complicated regulatory frameworks, advised Julie Brill, former FTC Commissioner and Microsoft CPO, and current owner of Brill Strategies and expert in residence at Harvard’s Law School and Innovation Labs.

The E.U. is likely to pause the next phase of regulations for high-risk AI, which will affect deployers as well as developers, Horvath noted. Comprehensive AI laws are likely to see a rethink, so it is worthwhile for a company to consider carefully before investing heavily from either a technical or an HR perspective. Specific AI regulations addressing harm to children and automated decision-making, however, will likely flourish.

See “State Privacy Enforcers Reveal Strategies, Priorities and Advice on Engagement” (Nov. 12, 2025).

Establishing Effective Governance

AI governance programs must include fundamental principles dealing with bias, human oversight and fail-safe options, advised Horvath, noting that some of the FIPPs will create tensions with AI.

Companies should also develop a risk matrix regarding AI and a list of questions that an engineer or a product developer should ask when they want to develop an AI system. Filling in the questionnaire should lead to a “red, yellow or green” response. Red means that the proposed AI project is “hot” and requires executive authority, Horvath explained. Most systems will trigger a yellow, which means the system should be reviewed by the AI steering committee, while green means it does not need committee review. The green responses are very important, she cautioned, because “if you are telling your product people that every single AI system needs to be reviewed by your AI steering committee, you are going to become very unpopular quickly.”
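
For illustration only, the sketch below shows one way such a questionnaire-driven triage could be structured in code: yes/no answers are mapped to a red, yellow or green outcome and routed accordingly. The questions, escalation rules and routing shown are hypothetical assumptions, not Horvath’s program or any specific firm’s framework.

  # Hypothetical sketch of a red/yellow/green AI project triage questionnaire.
  # The questions, scoring and routing rules are illustrative assumptions only.

  RED, YELLOW, GREEN = "red", "yellow", "green"

  # Each question flags a risk dimension; answers are simple yes/no booleans.
  QUESTIONS = {
      "uses_personal_data": "Does the system process personal or sensitive data?",
      "automated_decisions": "Does it make or materially influence decisions about individuals?",
      "child_facing": "Could minors foreseeably use or be affected by the system?",
      "external_facing": "Is the system exposed to customers or the public?",
      "trains_on_firm_data": "Will firm or client data be used to train or fine-tune a model?",
  }

  def triage(answers: dict[str, bool]) -> str:
      """Map questionnaire answers to a red/yellow/green response."""
      # Hypothetical routing: child-facing or automated-decision systems escalate
      # straight to executive review ("red"); any other flagged risk goes to the
      # AI steering committee ("yellow"); no flags means no committee review ("green").
      if answers.get("child_facing") or answers.get("automated_decisions"):
          return RED
      if any(answers.get(key, False) for key in QUESTIONS):
          return YELLOW
      return GREEN

  if __name__ == "__main__":
      proposal = {
          "uses_personal_data": True,
          "automated_decisions": False,
          "child_facing": False,
          "external_facing": False,
          "trains_on_firm_data": True,
      }
      print(triage(proposal))  # -> "yellow": route to the AI steering committee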

The best person to assist with AI governance is a privacy officer who has run a privacy governance program, Horvath added.

Enright concurred. Companies are asking their smartest people to figure out how to continue executing their business strategy in light of a risk from AI that is not fully understood – a pattern that people in privacy have seen before, he said. Privacy officers are best placed to deal with AI issues because “we understand how audits work, we understand how technical controls work and different kinds of controls. We understand how to produce evidence. We understand how regulators might use different kinds of enforcement mechanisms,” he emphasized.

Enright noted that some of his clients are open-mindedly trying to understand how to embrace AI responsibly. Others, however, are reacting to the rise of AI by over-focusing compliance efforts on it, while at the same time neglecting other compliance priorities, such as privacy, an approach which Enright predicts will inevitably lead to a negative result. “We are one to two years away from a tsunami of regulatory enforcement in the context of legacy privacy and data protection, because those laws are still on the books. Many organizations have miscalculated that,” he observed. Regulators have been as distracted by the AI revolution as the corporates, but that is not going to be a persistent state. “Regulators are going to refocus. They are going to come back, and they are going to return to the tools that have been available to them all along. And many companies that have over-rotated toward AI are going to have a ton of compliance debt,” he predicted. Companies will experience “real disruption” when they cannot respond to privacy enforcement “because they have been so distracted by trying to figure out what AI means for their organization,” he cautioned.

See “Gauging Uptake of AI in Cybersecurity” (Nov. 12, 2025).

Recognizing the Importance of Trust

Regulation sets the baseline or foundation for privacy, but trust is what sustains relationships with clients, said panel moderator Samita Patel, senior director of privacy and data compliance at Alvarez & Marsal.

Measuring Trust

An easy way to measure trust is to see whether people continue to use a company’s products and services, Horvath said. “Once you lose trust, you are going to see flight.”

Another way to measure trust is by directly engaging with people of different demographics and in different geographies, and trying to assess how they feel about the brand, Enright suggested. Draw inferences from that engagement and try to tease out insights about the way trust in a company’s brand is evolving, he added. The discussion will often be comparative, assessing how consumers feel about one brand versus another.

“It is very difficult to quantify trust in a way that gives you a static metric that you can rely upon. But you can get some very high-confidence directional signals,” to allow a company to assess whether trust is eroding or building relative to other companies, Enright continued. It is also necessary to make sure that the decision makers in an organization do not look at metrics untethered from context, because there are so many variables that can affect the way that a consumer responds to a survey or whether the individual purchases a product. “You really need to make sure that you are being sensitive to the way that context filters the way that you interpret the data,” he said.

Rebuilding Trust

It is very hard to rebuild trust after you lose it, Brill noted. After a trust-damaging incident, privacy heads should sit down with the company leaders and say, “We need to address this. Tell me what we can do in the context of your priorities,” she suggested. Do not give them recommendations or say, “You must do this.” Talk about the timelines and geographic spread of the solution and the kinds of things that can help the company to scale it up over time, she advised.

At the same time, privacy leaders should communicate with external sources. If there was a particular regulator or group of regulators that pointed out the problem, sit down with them and discuss it, Brill urged. This step is easier for the Googles, Microsofts and Apples of the world to arrange than it is for smaller companies, she acknowledged. It is also important to communicate with customers. Tell them what happened, what has been done and, maybe, bring them into the solution. Companies could tell customers, she suggested, “Here is what we are thinking about. Does this work for you? Do you think this is the right solution? Should there be a different one? We did a lot of that at Microsoft.”

Considering Data Flows to China

The legal issues surrounding data flows between Europe and the U.S. have dominated the regulatory landscape and legal analysis for years, but the focus may move toward data flows to China, Horvath predicted, noting that the Irish Data Protection Commission is currently investigating TikTok’s data flows to China.

See “Update on Digital Governance in India and China” (May 21, 2025).

Artificial Intelligence

Benchmarking AI Uptake by Compliance Functions


ACA Group (ACA), in cooperation with the National Society of Compliance Professionals (NSCP), has released its second annual AI benchmarking report (Report), which is based on a survey of nearly 250 firms and compliance professionals. ACA and the NSCP conducted the survey to cut through the hype about AI and explore how it is presently used and where it is expected to be used, explained Carlo di Florio, president of ACA and former head of the SEC Division of Examinations (Division), in a webinar reviewing the findings.

The Report covers firms’ ever-increasing adoption of AI, how firms are using AI, the key risks associated with AI and how firms are seeking to mitigate those risks. This article synthesizes the key takeaways from the webinar and the survey’s key findings.

See “Gauging Uptake of AI in Cybersecurity” (Nov. 12, 2025).

Survey Demographics

ACA and the NSCP conducted the survey in September 2025. The 244 respondents consisted of ACA clients and members of the NSCP. The survey included firms with a wide range of regulatory assets under management (AUM), noted Aaron Pinnick, senior manager at ACA, who joined di Florio on the webinar. A majority of respondents’ firms have less than $10 billion of AUM, including a plurality (39%) with between $1 billion and $10 billion of AUM. Just over half of respondents have up to 50 full- and part-time employees, including a plurality (39%) with between 11 and 50 employees. Most respondents (83%) identified their firms as either asset managers/non-alternative investment advisers (42%), private market firms (27%) or alternative investment advisers (14%).

In 2024, ACA and the NSCP conducted a similar study (2024 Study), noted Pinnick. The demographics of that study, which included 219 respondents, were similar to those of the current study.

Receptive Regulatory Environment

There is “a very business-friendly environment around AI innovation, adoption [and] capital formation,” said di Florio. President Donald Trump issued an Executive Order on removing barriers to U.S. leadership in AI. The SEC and other regulators are seeking to implement the new mandate. Notwithstanding the friendly regulatory environment, the SEC will continue to take enforcement action, especially for misleading marketing materials and disclosures concerning AI adoption – so-called “AI washing.”

Additionally, the Division has conducted an AI sweep that it will use to inform its examination methodology. Recent examinations have looked at:

  • whether firms have an acceptable use policy and how they developed and implemented it;
  • AI governance and oversight processes;
  • model testing and validation, including data inputs and whether outputs include errors, hallucinations or biases;
  • AI-related cybersecurity, information security, privacy, incident response and related controls; and
  • due diligence and oversight of AI use by vendors and third parties.

The survey responses suggest that many firms are not addressing all of these concerns, noted di Florio. The industry could benefit from a Division risk alert based on its AI sweep. SEC Chair Paul Atkins is “looking to be more helpful to the industry, so perhaps we’ll have a risk alert like that in the future,” he added.

Additionally, Atkins has launched the Project AI initiative exploring how the SEC can leverage AI in its rulemaking, examinations and enforcement processes. “We can expect to see more AI incorporated into the already very sophisticated technology that the SEC uses,” said di Florio.

See “SEC Regulatory and Examination Priorities in 2025” (Oct. 1, 2025).

Explosion in AI Adoption

In 2024, firms were largely “exploring” AI, said Pinnick. However, there was a year-over-year surge in adoption. In 2025, 71% of respondents are using AI tools, up from 45% in the 2024 Study. But adoption is concentrated among larger firms. The Report notes that just 52% of the smallest firms in the study – those with less than $1 billion of AUM – have adopted AI.

Internal Uses Predominate

The study assessed both external – i.e., client-facing – uses of AI and strictly internal applications. Internal use cases include, for example, investment research, due diligence, employee surveillance and creation of internal newsletters, said Pinnick. External uses include client chatbots and automated investment advice.

Notably, the proportion of firms solely using AI internally grew from just 37% to 60%, according to the Report. The proportion using it for both internal and external applications grew more modestly, from 8% to 11%. Additionally, the proportion that banned or significantly restricted AI has declined from 15% to 4%. There is a growing recognition that firms “can’t just ban this technology,” Pinnick observed.

Firm boards and investors have been asking how firms are using AI to add value, noted di Florio. The growth in external applications is driven by the business and competitive environment – but such applications bring significantly more regulatory scrutiny and risk.

See “Benchmarking Fund Managers’ Adoption and Governance of Generative AI” (Nov. 19, 2025).

Use of Private AI Tools Increasing

The most common AI tool is a private enterprise version of a generative AI tool such as Copilot, noted Pinnick. This has changed significantly since the 2024 Study, when firms most often used public AI models. The shift from public to private tools reflects firms’ growing investment in AI.

AI Spending Varies Widely

There is a wide range of spending on AI, noted Pinnick. Among the respondents that have formally adopted AI, a plurality (31%) said they spend less than $10,000 annually on AI. At the other end of the spectrum, 7% of respondents spend at least $1 million annually. The cost ultimately depends on a firm’s use cases and what the firm expects to get from AI, he observed. Unsurprisingly, larger firms tend to spend more on AI than small ones.

AI Use Cases

Respondents’ five most common AI applications are the same as in 2024, but the proportion of respondents using each application is higher than it was a year ago, said Pinnick. Additionally, respondents cited a greater number of use cases than a year ago. The most common application is for investment research and due diligence – 64%, up from 50% last year. According to the Report, the other top uses include:

  • compliance and risk management (49% vs. 31% in 2024);
  • marketing and communications (44% vs. 32%);
  • operations (44% vs. 30%); and
  • IT (37% vs. 26%).

AI in Compliance Functions

Most Common Applications

Nearly half of respondents now use AI in their compliance programs, observed Pinnick. The most common ways such respondents use AI include:

  • developing policies and procedures (53%);
  • monitoring/testing (48%);
  • communicating and training (46%); and
  • conducting surveillance (41%).

These are areas whose processes usually require scale – for which AI is well-suited, noted Pinnick. Generative AI offers an efficient “first pass” at these areas. Use in investigations, remediation and discipline is much less common, which is not surprising, because it typically requires much more human effort and judgment, he observed.

There are two broad approaches to incorporating AI into compliance functions, said di Florio. First, some firms with a private AI license – such as Copilot – are experimenting with developing policies, procedures, research, training and communications. Second, other firms are leveraging vendors such as ACA, which are incorporating AI into individual compliance-related modules and using it to improve interoperability among modules. Areas in which AI is being leveraged include:

  • expert network chaperoning;
  • Marketing Rule compliance;
  • anti-money laundering and know-your-customer (AML/KYC) compliance; and
  • various types of trade and communications surveillance.

See “Risk and Compliance Survey Highlights the Role of Compliance in AI Governance” (Oct. 29, 2025).

Top Challenges

According to the Report, the top three challenges of integrating AI into compliance function workflows were:

  1. cybersecurity and privacy concerns (49%);
  2. regulatory uncertainty (48%); and
  3. lack of expertise (33%).

Nearly one-quarter of respondents also expressed concern over poor data quality or outputs and/or that available tools do not meet their needs. Just 11% are concerned about cost.

See “Benchmarking AI Governance Practices and Challenges” (May 7, 2025).

Regulators’ Uptake of AI

As firms increasingly implement AI to manage compliance, regulators will increasingly use it for oversight, noted di Florio. In that regard, as part of Project AI, Division staff are thinking about how they can incorporate AI into the examination process. For example, they are likely to implement AI to improve trade blotter review and other common focus areas, including marketing, personal trading, conflicts and AML/KYC.

See “SEC Stresses Cybersecurity, AI and Crypto in Its 2025 Exam Priorities” (Dec. 18, 2024).

Top Risks: Data Privacy, Information Security and Hallucinations

ACA asked firms to rank the top risks associated with AI tools and technologies. Respondents identified data privacy (58%), information security (57%) and hallucinations (55%) as the top three risks. A significant minority also identified regulatory and compliance (46%) and/or cybersecurity (41%). On the other hand, just 12% cited third-party risk. There were notable year-over-year increases in the proportions of respondents that identified information security and/or hallucinations as key risks.

On the other hand, the proportion citing regulatory risk dropped by 13 percentage points. Additionally, the proportion citing intellectual property risk dropped from 21% to just 7%, which could reflect the ongoing shift toward private AI tools, according to the Report.

See “AI Governance: Striking the Balance Between Innovation, Ethics and Accountability” (Feb. 12, 2025).

Management of AI Risks

In the past year, firms have made meaningful progress in addressing AI risks, said Pinnick. The study examined adoption of acceptable use policies, governance, recordkeeping and efficiency gains. It also assessed how respondents’ firms are addressing the top AI-related risks they identified.

Acceptable Use Policies

An acceptable use policy is a fundamental element of risk management, said Pinnick. In the 2024 Study, half of respondents had acceptable use policies and 30% were working on one. In 2025, 70% of respondents have a policy and most of the rest (23%) are working on one.

See “Dos and Don’ts for Employee Use of Generative AI” (Dec. 6, 2023).

AI Governance

Additionally, “[AI] governance is taking shape,” continued Pinnick. In 2024, just one-third of respondents had an AI committee or governance body and just 14% were creating one. This year, 48% have one and an additional 13% are creating one. “A governance committee or working group isn’t the only way to establish good governance around AI, but it’s a helpful step in creating consistency around questions about AI use cases, AI adoption processes and AI risk appetite,” he noted.

See “AI Governance Strategies for Privacy Pros” (Apr. 17, 2024).

Recordkeeping

Twenty-nine percent of respondents in the 2024 Study said they preserved client interactions with AI tools. This year, 49% are doing so, said Pinnick. These firms primarily use AI tools either for interactions with clients or for transcriptions of client calls.

A key concern is what firms should treat as “books and records,” noted di Florio. Firms should work with counsel and tailor their interpretation of the requirements to their risk appetites. Increasingly, firms are considering summaries to be the relevant records, while transcripts are “more like taking notes,” he said. The Securities Industry and Financial Markets Association (SIFMA) has asked the SEC for clarity around application of the Books and Records Rule to modern technologies, including electronic communications and AI. SIFMA seeks to narrow what the SEC considers to be books and records. The SEC has not yet provided any guidance on this subject, however.

See “Off-Channel Communications Are Not the Only Source of Electronic Recordkeeping Violations” (May 1, 2024).

Efficiency

The 2024 Study found that CCOs were seeking to use AI to increase their efficiency, noted Pinnick. This year, 70% of respondents identified improving efficiency as their primary goal in adopting AI. In 2024, nearly one-third of respondents said AI had at least “slightly” improved program efficiency. This year, more than half said so. In each year, however, respondents that said it had “slightly improved” efficiency outnumbered those who said it “improved” or “significantly improved” efficiency.

Many firms have found AI helpful because it enables them to improve and expand their risk management controls, noted di Florio. It allows them to analyze much more data and has been “phenomenal in reducing the number of false positives,” which makes testing much more effective. As trust in AI tools increases, more firms are likely to adopt it, added Pinnick.

Human Oversight

Firms must keep a human in the loop, advised Pinnick. Of the roughly 30% of respondents with a formal program for testing AI outputs, 55% have a process for human review of outputs to ensure accuracy. This finding shows there is room for firms to increase human oversight, he said. They should establish a formal process for human input and train employees on the importance of that input and oversight.

Cybersecurity

Most respondents (71%) have incorporated AI topics into cybersecurity training. On the other hand, Pinnick noted that firms have been slow to adopt important technical controls for AI applications, including:

  • data encryption (29%);
  • vulnerability testing (27%);
  • external expert review (24%);
  • continuous monitoring (24%);
  • access controls (23%);
  • multi-factor authentication (20%);
  • network segmentation (13%); and
  • incident response plans (11%).

See “How to Create a Program to Combat Deepfakes” (Oct. 22, 2025).

Data Privacy

The four most common ways respondents address AI-related data privacy risks are:

  1. training in acceptable use and data protection (62%);
  2. using only nonpublic AI tools (61%);
  3. not using firm data for model training (35%); and
  4. assessing impact on privacy (25%).

The year-over-year increase in the proportion of respondents covering AI in training was 11%. However, significantly fewer respondents are using other available AI risk controls. For example, less than one-fifth block public AI tools, encrypt data, audit AI tools for privacy compliance, prohibit third parties from using firm data in AI tools, use access controls or employ privacy experts.

See “How Under Armour and People Inc. Took AI Governance From Crawl to Walk to Run” (Sep. 24, 2025).

Hallucinations

Of the respondents presently using AI, 28% have established a program for testing or validating outputs, notes the Report. An additional 26% are developing such a program. The most common testing methods for hallucinations include:

  • reviewing a sample of outputs by staff (55%);
  • ongoing monitoring (40%);
  • reviewing models (35%); and
  • back-testing models (11%).

Third Parties

Although firms have been focusing on their own use of AI, they should also understand how third parties are using AI and what controls they have in place, said Pinnick. Nearly half of respondents have either already established protocols or policies for addressing AI use by vendors and third parties (24%) or are developing them (21%). To mitigate third-party AI risk, firms should:

  • create an inventory of vendors’ AI applications;
  • develop appropriate policies and procedures;
  • conduct enhanced due diligence on third parties that use AI; and
  • monitor and audit use of AI by third parties.

Most respondents are either conducting (43%) or planning to conduct (32%) enhanced due diligence on vendors that provide AI solutions. Of the respondents conducting enhanced due diligence, more than 70% said such due diligence includes questions about how the vendor handles sensitive/confidential data and enhanced review of cybersecurity, privacy, and AI policies and procedures. Additionally, 62% ask about use of firm data for model training. More than one-quarter review business continuity plans, model-related documentation, and/or testing and validation reports.

See “Managing Third-Party AI Risk” (Aug. 20, 2025).