Users should exercise caution before prompting ChatGPT or Claude. As three 2025 cases demonstrate, generative AI (Gen AI) chats are being used as evidence in criminal prosecutions, with warrants and complaints citing ChatGPT conversations in actions involving child exploitation, arson and vandalism.
Most requests that AI providers have received for users’ prompts and AI-generated chats have come from federal officials, observed Richard Salgado, a Stanford University law professor and consultant who oversaw Google’s response to national security and law enforcement demands for 13 years. The Stored Communications Act (SCA) authorizes law enforcement to force companies to disclose information identifying their users by issuing subpoenas unilaterally, but most demands for private user content so far have been issued through search warrants with a judge’s signature. “It seems like the prosecutors are giving this type of data the respect given to email and other nonpublic content,” Salgado told the Cybersecurity Law Report.
This two-part article series examines developments around the use of Gen AI chats as digital evidence, with insights from Salgado and experts from Integreon, the Electronic Privacy Information Center (EPIC), Loeb & Loeb, McCarter & English, and Winston & Strawn. This first installment shares law enforcement’s views on obtaining Gen AI chats for investigations, explains the unsettled law around access to Gen AI use records and identifies expected conflict points to watch. Part two will discuss strategies for companies to prepare for a steady increase of government and litigation requests for Gen AI user data. It will also examine OpenAI’s forceful statements in October after a court loss on producing Gen AI logs in discovery.
See “Google Settlement Shows DOJ’s Increased Focus on Data Preservation” (Dec. 7, 2022).
Three Criminal Cases Exposing Gen AI Prompts and Chats
First-Known Warrant for Prompts Served on OpenAI
A District of Maine court issued the first known federal search warrant asking OpenAI for user data in U.S. v. Hoehner. The suspect’s use of ChatGPT helped federal agents identify the individual in an investigation into a dark web child exploitation site. Homeland Security Investigations revealed it had been watching the site administrator in an undercover capacity when the suspect mentioned two prompts he submitted to ChatGPT. One prompt asked for a story about a Star Trek character meeting Sherlock Holmes; the second sought a 200,000-word humorous poem about President Donald Trump’s love of the song “Y.M.C.A.”
Investigators sought from OpenAI the user information associated with the two prompts, which traced to a single account. According to case records, however, they identified the suspect through other clues and records without the user information from OpenAI. The court unsealed the warrant in October 2025, Forbes reported, but resealed it in November, according to the docket.
Prompts and Chats Appear in Palisades Fire Complaint
In California, prosecutors used several of Jonathan Rinderknecht’s ChatGPT prompts and chats in their complaint accusing him of intentionally setting a small fire that rekindled a week later into the Palisades fire, which they said killed 12 people and destroyed 6,837 buildings. The day that he allegedly set the fire, he typed a question into a ChatGPT app on his phone, asking if a person would be at fault if they were smoking a cigarette and a fire erupted.
Prosecutors included other chats from Rinderknecht’s prior six months that indicate the accused’s thinking about fire, including a request that ChatGPT create a “dystopian” illustration of a crowd of poor people fleeing a forest fire while a crowd of rich people mock them behind a gate. The complaint included the resulting ChatGPT image.
Probable Cause Statement in Felony Property Damage Case Includes Chats
On October 1, 2025, a Missouri State University sophomore was charged with felony property damage for vandalizing 17 cars in Springfield, Missouri. The police department’s probable cause statement quotes many lines from a long ChatGPT session the student started on his phone 10 minutes after his spree. During the 3:47 a.m. chat, the teen asked, “is there any way they could know it was me,” and confessed to smashing windshields, according to the statement.
The police department’s statement ascribes feelings to the AI model, noting a point in the conversation when the accused student “begins to spiral. Even ChatGPT begins to get worried and asks him to stop talking about harming people and property.” The suspect had consented in writing to a phone search and provided his PIN, allowing the investigator to download the ChatGPT conversation and avoid having to seek a warrant.
See “CSIS’ James Lewis Discusses Balancing Law Enforcement and Privacy” (Mar. 16, 2016).
The Transition to a New Type of Digital Evidence
The number of law enforcement requests for Gen AI prompts appears small compared to the volume of requests for search and social media data. From January to June 2025, OpenAI received 119 requests for user account information, 26 requests for chat content and one emergency request. In the second half of 2024, Google reportedly received, in the United States, 56,674 requests involving 109,641 accounts, but did not reveal whether any involved Gen AI use.
Investigators Likely to Pay More Attention in 2026
Investigators have not been attuned to subpoenaing AI chats, for a practical reason. “The federal government, particularly the law enforcement apparatus, is usually three to five years behind trend lines,” Winston & Strawn partner Damien Diggs, who served as U.S. Attorney for the Eastern District of Texas until 2025, told the Cybersecurity Law Report. Federal investigators have not had Gen AI on their work computers. “We were just blind to what it is and what it can do,” he recalled.
The government players who have started to pay attention see AI chats as an evolution, reported Loeb & Loeb partner Christopher Ott, a former federal prosecutor. “I’ve had unofficial conversations with people in the [DOJ], both on the agent side and the prosecutor side, about this. For the most part, they’re not thinking of it as something new. They’re saying, ‘oh, this is the same as Google search,’” he said.
With Gen AI chatbots supplanting traditional search engines, and OpenAI reporting 800 million users weekly, “we are going to see a lot more warrants like [the one issued in Hoehner],” Diggs predicted.
Another force that will drive an uptick in warrants for Gen AI prompt information, noted Integreon senior director of litigation services Robert Daniel, is that “a lot of law enforcement investigators now are younger. They’re used to social media. They know what’s out there.”
The Gen AI chats likely remain out there for the investigators to request. In 2025, criminal suspects tend to know to delete their search history and wipe a “how to dispose of a body” query. However, it could be years before Gen AI users become conscious that their chats could be evidence.
Multiple Ways That Prompts and Chats Help Prosecutors
The three 2025 cases show a few different ways that Gen AI chats create evidence trails that investigators can use to build criminal cases. The Missouri case, for example, highlights that chatbot phone apps may inspire longer confessional monologues than one would leave, for instance, in a phone’s note-taking app, creating a record of motivations that police could seize during a physical search. The California case underscores that chatbots offer individuals intimate ways to process secrets, including by generating images that might be vivid evidence to a jury. The Maine warrant shows that people’s impulse to entertain friends and online forums with tales of their Gen AI chats can also provide investigators leads to gather more evidence.
Gen AI chats could offer more revelatory evidence than web searches, Ott highlighted. As the Missouri and California cases show, “that flow of conversation with a chatbot, even though it’s an artificial conversation, will contain more information and nuance than the static searches an investigator would get from a Google history,” he told the Cybersecurity Law Report.
While the three 2025 cases show how user logs can help investigations, the public details so far do not reveal whether the Gen AI chats would be admissible as evidence at trial, Salgado noted.
See “Second Circuit Quashes Warrant for Microsoft to Produce Email Content Stored Overseas” (Aug. 3, 2016).
Digital Evidence Law Remains Unsettled
While the SCA has existed for decades, “there are legacy questions when it comes to electronically stored communications that, curiously, have not really been litigated out,” including the bounds of when law enforcement may obtain the contents of stored emails and other digital records, Ott noted. With few precedents addressing reverse searches to uncover users, the dynamics of Gen AI could lead courts to take a fresh view of the statutory and constitutional constraints on law enforcement access.
The SCA’s Low Hurdle for Law Enforcement Access
The SCA is doubly forceful. It directs companies to shield stored communications. It also authorizes law enforcement – without having to show probable cause – to unilaterally access user-identifying information with a subpoena when it has reasonable grounds to believe that the information is relevant to an ongoing criminal investigation.
Under the SCA, prosecutors need a warrant to obtain “electronic communications content,” which apparently includes AI chat transcripts. Aggressive prosecutors might try to persuade a court that Gen AI chats, which are artificial, do not count as “communications” content the way an email message does, but are merely retained business records of the customer’s use of company software, Ott noted. It would be an increasingly tough argument, he added.
See “Utah Act Increases Restrictions on Access to Third-Party Data” (Apr. 10, 2019).
Constitutional Rights and Reverse Searches
The Fourth Amendment will likely govern answers to questions around reverse searches for Gen AI chats. Reverse search requests served on large data repositories seeking the identity of users unknown to law enforcement “have real potential to sweep up a lot of non-target and innocent people’s data,” effectively becoming dragnets that violate the Fourth Amendment prohibition on unreasonable government searches, said EPIC president Alan Butler.
Technology platforms have fielded two primary types of reverse searches for records: (1) all people present in specified locations; and (2) all those entering searches with specific keywords.
“Reverse warrants seeking to identify users who use terms in their queries have been very troublesome for the search engines. Law enforcement often underestimates the enormous volume of search requests that are submitted,” Salgado shared. Fulfilling the requests “can mean that the provider discloses information about gobs of users, some or maybe all of whom have nothing to do with the events being investigated. The amount of search traffic that Google gets in 10 seconds is enough to knock most systems offline, but for Google, it’s just another 10 seconds on Tuesday,” he said. Unsurprisingly, search engines often respond to subpoenas and warrants by arguing that they need to be narrowed.
“I can see Gen AI records as being very similar to search queries, where one might think a prompt is going to be unique, but it’s not,” Salgado predicted. “Police may not know the exact wording of the prompt. For keyword search queries, the warrants have often said, ‘these terms or similar terms,’” an ambiguity that can greatly inflate the results and requires the provider to guess what “similar” means, he noted.
See “California Law Enforcement Faces Higher Bar in Acquiring Electronic Information” (Nov. 11, 2015).
The Supreme Court’s Third-Party Doctrine
The Supreme Court developed the third-party doctrine under the Fourth Amendment in the 1960s and 1970s, holding that people categorically have no “reasonable expectation of privacy” in information that they voluntarily share with third parties. In an early case foreshadowing the doctrine, the Court allowed the government warrantless use of incriminating information that Jimmy Hoffa provided to a government informant. The Court and lower courts later extended the doctrine to cover confidential information that customers give to banks and internet service providers to receive goods or services.
In the Carpenter decision of 2018, the Supreme Court reversed direction, strengthening Fourth Amendment protection around individuals’ location data in advanced technology systems. The Court required law enforcement to obtain a warrant for requests for cell site location information. Phone users had no meaningful choice about sharing an ongoing stream of their location data with the cellular service providers, the Court found.
The Fifth Circuit Court of Appeals in 2024, applying Carpenter in U.S. v. Smith, held government investigators similarly would need warrants to force map app providers to deliver identifying data for all people in an area during a set time period. “The potential intrusiveness of even a snapshot of precise location data should not be understated,” and users of map apps have a privacy expectation over their location records, the Fifth Circuit concluded. However, the court declined to retroactively suppress the prosecutor’s use of Smith’s location history, holding that police had acted in good faith.
Three decisions since 2022 have split from the Fifth Circuit’s view. The Fourth Circuit Court of Appeals, the Supreme Court of Pennsylvania and the Colorado Supreme Court each let the government force companies with just a subpoena to unveil users’ location or keyword-search histories under the third-party doctrine.
See “Implications of the Supreme Court’s Carpenter Decision on the Treatment of Cellphone Location Records” (Jul. 25, 2018).
Questions Around Privacy Interests in Light of Training
Law enforcement could aggressively argue that because Gen AI models train on user chats, even anonymized ones, individuals have no privacy interest in the content of those chats. “As the business is going to use a person’s prompts to train up the AI, it’s not for private communication purposes” between people, Ott noted. Thus, government lawyers could argue that courts should not “treat AI prompts like an email, as neither side of the relationship is treating it like an email,” he said.
“I can see [the status of] AI chats being a constitutional issue that gets ginned up and litigated pretty heavily within the next year or two,” Diggs predicted.
Arguments for an AI Interaction Privilege
One heated litigation, between OpenAI and The New York Times over copyright infringement, has prompted debate about whether Gen AI chats deserve similar privilege protections as established types of communications. “If you talk to a therapist or a lawyer or a doctor about [] problems, there’s legal privilege for it,” OpenAI CEO Sam Altman noted in August 2025, suggesting that sensitive conversations with AI deserve similar protections. He lamented that a New York federal court had ordered OpenAI to hand over to adversaries the prompt-output logs for millions of ChatGPT users.
Legislators, not courts, will probably need to establish any privilege for AI chats, noted McCarter & English partner Erin Prest. “We’ve already seen courts saying that if a doctor puts in a patient’s information to the open ChatGPT, that appears to destroy their privilege,” she said.
Commentators have suggested extending the psychotherapy/patient privilege to ChatGPT interactions that seek counsel or emotional processing. “The social benefit of candid interaction outweighs the cost of occasional lost evidence,” Nils Gilman, a policy historian, wrote in an essay featured in The New York Times. Chat providers could create a setting for “sensitive” conversations to help establish the privilege, while any use of AI to plan or execute a crime should be discoverable under judicial oversight, he contended.
Public discussion around AI chat therapy and instances of “psychosis” from chats could sway courts to balk at warrant requests, Butler noted. “Judges sometimes have a conceptual barrier to understanding what is at stake on the privacy side when it comes to a Google Maps reverse search. They think about what’s searched as ‘oh, it’s just an address,’” he observed. “The context of chatbots might bring courts along to understanding that what we are talking about fundamentally are a person’s thoughts and communications that we have for decades protected,” he posited.