Information Sharing

From CEO Deepfakes to AI Slop, AI Incident Tracking Ramps Up


After years of eye-opening statistics about cybersecurity attacks, it is AI’s turn. As AI systems proliferate across industries and into everyday activities, the tracking and tallying of incidents shows that AI’s risks are growing more layered, global, urgent and numerous.

As of July 2025, the non-profit AI Incident Database (AIID) has tagged and categorized more than 1,140 publicly reported incidents across 23 types of AI harms (based on 4,724 reports). Since the start of May 2025, AIID has added 57 new incident IDs, AIID editor Daniel Atherton told the Cybersecurity Law Report. Additionally, the Organisation for Economic Co-operation and Development (OECD) has an automated tracker that has added an average of approximately 330 AI incident reports per month to its database in 2025 to date.

Nonpublic incidents are beginning to surface, too. In April 2025, the non-profit MITRE, which has long managed the U.S. government’s cybersecurity vulnerabilities database (CVEs), launched an incident‑sharing initiative to accept confidential reports of manipulations, tampering, model jacking and other malicious acts affecting AI systems. “We’re trying to be a third-party safe space,” said Christina Liaghati, department manager for Trustworthy and Secure AI at MITRE. “The data about incidents is very difficult to get outside of organizational silos” without an organization like MITRE trying to standardize and grow reporting, she told the Cybersecurity Law Report.

If AI is truly to be a solution for companies and the world, business leaders and corporate boards likely will need to hear more stories about AI’s problems. “The ability to manage potential incidents is essential,” said Douglas Robbins, vice president of MITRE Labs, in a statement. “Standardized and rapid information sharing about incidents will allow the entire community to improve the collective defense of such systems and mitigate external harms.”

This article shares observations by Atherton and Liaghati on the maturity of AI incident tracking, how to define what counts as an AI incident, trends in adverse events and benefits for companies that report their incidents.

See “First Independent Certification of Responsible AI Launches” (Apr. 12, 2023).

AIID Tracking and Trends

The AIID, run by the Responsible AI Collaborative and edited primarily by humans, has operated since 2018. The 1,140 (and counting) incidents include 249 that occurred before 2020.

What the AIID Tracks

AIID’s collection of incidents is based almost entirely on published reports submitted by contributors and individuals, supplemented by automated searches. The site is browsable, providing a discovery tool that filters and displays incident records in spatial, table and list views.

AIID classifies incidents by the domains of risk involved (e.g., discrimination and AI system safety), the AI use or goal, the sector(s) of deployment, and whether outcomes were expected or unexpected. More granularly, its descriptions of incidents refer to three taxonomies of detailed AI harms, which cumulatively sort AI failures into 65 subtypes.

AIID is a lagging indicator of emerging AI problems because it tends to compile only incidents exposed in news reports, and not direct reports from companies that experienced an incident, Atherton stressed. “Editorial bandwidth and resource constraints” limit its comprehensiveness, he cautioned, adding that “incident count is just a focused snapshot of the available reporting on an overwhelming reality that is not and cannot be fully reported through current means.”

In addition to incidents, AIID has begun including public reports of issues and vulnerabilities involving AI use.

See “Dos and Don’ts for Employee Use of Generative AI” (Dec. 6, 2023).

Trends in Incidents

Three CEO Deepfakes Drew Alarm

Impersonations of corporate executives and other leaders using AI-generated video and voice have increased. In July 2025, the voice cloning of U.S. Secretary of State Marco Rubio for diplomatic communications drew attention in C‑suites, joining the following three earlier incidents:

  • Arup Group was scammed out of $25 million via a deepfake video call impersonating its chief financial officer (Incident 634).
  • Ferrari faced a targeted attack using a voice clone of CEO Benedetto Vigna (Incident 966).
  • WPP, the advertising giant, thwarted an attempt involving AI voice cloning and YouTube footage of its CEO (Incident 983).

“These three stories constantly get repeated” and invoked as warnings to businesses, Atherton said. The Arup case is cited frequently because the amount stolen is astounding for a deepfake-enabled theft, he added.

Romance, Crypto and Celebrity Scams Dominate

Other scams using AI-generated clones prey on emotions and interest in fame and money. “We’re seeing a massive uptick in romance scams, celebrity impersonations and cryptocurrency fraud,” Atherton noted.

AI Slop Becomes an Ambient Threat

“AI slop” refers to the surge of low-quality, misleading or fake content that has become “part of the ambient reality that we live in,” Atherton explained. For example, after the Air India crash in 2025, AI-generated videos and images circulated widely, confusing the public and reportedly misleading investigators (Incident 1125).

The AI fakes “diverted resources, time and energy away from what actually occurred,” Atherton reported. AI slop creates moments of “epistemic ambiguity” that blur the line between real and fake, reducing trustworthiness, particularly in high-stakes environments, he elaborated.

Journalists rarely identify the AI tools used, which does not help combat scams and slop; the tool behind an incident remains a major unanswered data point. “In many cases, as the editor, I simply have to say ‘unknown deepfake technology developer,’” Atherton lamented. Incident 1128 was a welcome change because journalists spotted Veo 3, the video generation tool, imprinted on the evidence, he enthused.

See our two-part series on phishing messages: “As Email Scams Surge, Training Lessons From 115 Million Phishing Messages” (Mar. 30, 2022), and “How to Measure Whether Your Company Is Ready to Catch Lots of Phish” (Apr. 6, 2022).

Unchecked Hallucinations in Law and Government

Results from AIID demonstrate that AI-generated material is passing as authoritative proof. In Norway, a municipal report containing fake citations prompted the closure of schools and kindergartens (Incident 1009). “If someone creates a document that assumes nobody reads it carefully, the consequences can be very real,” Atherton warned. Judges are on the lookout, at least. In February 2025, a court fined the lawyers for MyPillow CEO Mike Lindell for submitting a filing with 30 large language model (LLM)-fabricated citations (Incident 1145).

Chatbot Sycophancy and Mental Health Risks

Another emerging concern is the psychological impact of LLMs on users. “By default, AI systems are becoming integrated into our everyday lives,” according to Atherton, a convergence shown by Incident 1106 in June 2025, which gathered reports of users becoming delusional after prolonged interactions with chatbots. These systems, designed to validate and reassure, can mirror users’ thoughts back to them in ways that reinforce unhealthy beliefs, which researchers call sycophancy, he noted.

See “Go Phish: Employee Training Key to Fighting Social Engineering Attacks” (Aug. 9, 2023).

More Consumer Complaints

AIID receives some reports about other companies’ incidents from individuals. In many of those cases, the person filing the report indicated to the Responsible AI Collaborative that they already had shared the same incident information with the relevant company, Atherton pointed out.

OECD’s Automated Incident Tracking

The OECD runs another, mostly automated, database called the AI Incidents and Hazards Monitor (AIM). AIM has a collection of incidents similar to AIID’s. It has been adding an average of 337 incidents and hazards per month, captured by web scraping international news reports. Once it captures a list of candidate incidents, LLMs evaluate their relevance and tag the reports. AIM labels issues and vulnerabilities as “hazards.” While over 30 experts have helped set parameters for OECD’s classifications, browsing reveals that some of the automated reports on hazards miss the mark, describing AI use only, not misuse or risks.
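
The scrape-screen-tag pipeline that AIM describes can be pictured in a few lines of code. The sketch below is purely illustrative: the names (NewsItem, classify_with_llm) and the keyword stand-in for the LLM call are invented assumptions, not OECD’s implementation.

# Illustrative sketch of an automated incident-monitoring pipeline of the
# kind AIM describes: scrape news, screen with an LLM, tag the survivors.
# All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class NewsItem:
    headline: str
    body: str
    tags: list = field(default_factory=list)
    kind: str = "unreviewed"  # becomes "incident" or "hazard"

def classify_with_llm(item: NewsItem) -> str:
    # Stand-in for an LLM relevance call. A real system would prompt a
    # model with the article text plus definitions of "incident"
    # (realized harm) and "hazard" (potential harm); a keyword
    # heuristic keeps this sketch runnable.
    text = (item.headline + " " + item.body).lower()
    if any(w in text for w in ("defrauded", "injured", "scam")):
        return "incident"
    if any(w in text for w in ("could", "risk", "vulnerability")):
        return "hazard"
    return "irrelevant"

def run_pipeline(scraped: list[NewsItem]) -> list[NewsItem]:
    kept = []
    for item in scraped:
        kind = classify_with_llm(item)
        if kind == "irrelevant":
            continue  # false positives are dropped here
        item.kind = kind
        item.tags.append("ai-related")  # downstream taxonomy tagging
        kept.append(item)
    return kept

In a real deployment, the classifier would be a prompted model and the tags would come from the expert-set taxonomy, but the structure is as simple as the sketch suggests, which also explains the false positives both trackers acknowledge.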

AIID also uses automated crawling for incidents but finds false positives, Atherton said. As a researcher, his interest is the dynamics of public discussion of AI risks. However, facing the flood of AI fakery, he would not mind better automation. “I’m wading through the slop, in my wellies,” he added.

See our two-part series on a fake Zoom invite hack: “What Happened and Three Lessons” (Feb. 10, 2021), and “Eight More Lessons” (Feb. 17, 2021).

MITRE’s Approach to Information Sharing

As a leading cybersecurity research organization, MITRE has historically emphasized AI security, but now is broadening its “AI assurance” efforts. In April 2025, MITRE launched its AI Incident Sharing program to collect and analyze accidents and incidents.

An Initial Guide to AI Risks

In 2019, years before launching its AI Incident Sharing program, MITRE created the ATLAS matrix of adversary tactics, techniques and procedures (TTPs), which was modeled after MITRE’s ATT&CK framework used in cybersecurity. “We were starting to see these common patterns of incidents popping up,” Liaghati explained, “so we worked together with industry partners to start to characterize that into a standard.”

Each TTP in ATLAS is based on real-world case studies submitted by MITRE partner organizations and linked to an ATT&CK counterpart. “We don’t go out and scrape other resources,” Liaghati said.

Incident Sharing Launched

MITRE prepared for its AI incident information sharing initiative by holding sessions under the Chatham House Rule, with as many as 50 organizations present at each, Liaghati explained. Given how fast attacks can pivot, “we want them to proactively share information with us as soon as they can,” she said.

Since starting the AI Incident Sharing program, MITRE has been receiving “weekly reports,” Liaghati said.

MITRE also encourages updates to reports, if possible. Companies may recognize, upon review of the incident’s forensics, that the failure had some other root cause, Liaghati noted.

Only Participants Receive Full Reports

Unlike cybersecurity, where regulatory requirements often drive reporting, AI incident sharing remains voluntary. “It’s very much still a carrot approach,” Liaghati observed.

Only those organizations submitting to MITRE may receive access to shared indicators and the latest information. “If you want to be part of this trusted community group so you can improve your own security posture with data-driven risk intelligence, you have to submit” either incidents or demonstrated vulnerabilities, Liaghati explained. Submissions must reflect results from a “real-world deployed system or deployable system, or an actual attack on an operational system,” she clarified.

MITRE’s role as an honest broker is central to its approach. “There aren’t that many entities who can take an objective position,” Ozgur Eris, director of MITRE’s AI Innovation Center, told the Cybersecurity Law Report. “We’re aggregating information, making sense of it, and then sharing it with the people who can act on it,” offering an attractive benefit for participants, he explained.

For the broader public, MITRE has posted 32 case studies about AI incidents, Liaghati shared.

See “Can the Cybersecurity Industry Improve Cooperation to Beat Threats?” (Jan. 13, 2021).

Will Top AI Companies Participate?

The strength of MITRE’s initiative depends on the biggest AI developers participating. MITRE is engaging with them. For example, Liaghati shared, while MITRE is not directly part of the Frontier Model Forum, launched by major AI developers with its own information sharing effort, “there’s a lot of overlap in the groups involved.”

MITRE has encouraged the LLM giants to share information with its new network, particularly when they do not have mitigations for active risks. Even if an LLM company opts not to announce vulnerabilities publicly out of fear of “handing an instruction manual to an attacker,” at least, “in some cases, it is better to engage with a trusted, protected group,” she said.

See “Welcome to the GPT Store – and Its Three Million Security Uncertainties” (Mar. 27, 2024).

Defining an Incident

One of the thorniest issues MITRE faces is defining what counts as an AI incident. “This is definitely something that a lot of the community is still struggling with,” Liaghati observed. While many incidents involve security, MITRE is gathering information involving broader AI concerns, such as performance failures, reputational risks and interoperability issues in agent-based systems.

“We’re trying to make the incident database flexible across the range of assurance risks,” Liaghati said. This includes concerns like “verifying interactions between systems, logging those interactions and ensuring human-in-the-loop oversight,” she elaborated.

MITRE welcomes reports of not just malicious attacks, but also red-teaming exercises and system failures. Companies are “getting better at defining incidents and at defining vulnerabilities,” and more “have deliberately gone deep on AI security,” Liaghati reported.

See our two-part series “What the AI Executive Order Means for Companies”: Seven Key Takeaways (Nov. 8, 2023), and Examining Red‑Teaming Requirements (Nov. 15, 2023).

Private Sector Drives Reporting

Most of the incident reports MITRE receives come from the private sector. “Industry has leaned in really quickly in deploying LLMs – and sometimes in really naive ways,” Liaghati observed.

“There’s understandably a lot more risk aversion and balanced approaches in government use cases,” Liaghati noted. MITRE is working closely with government sponsors to develop incident response frameworks, but practices remain very idiosyncratic, she said.

See “Prioritizing Public-Private Partnerships in an Increasingly Complex Regulatory Environment” (Mar. 2, 2022).

Tracking New Tactics

MITRE updates the ATLAS matrix of TTPs twice a year. Its mid-2025 update added 19 tactics, many involving generative AI and supply chain vulnerabilities. New attack vectors have been detected during the window when an AI giant retrains its popular LLMs for updates, Liaghati said. “We’re continually seeing how [attackers] can take advantage of poisoning a dataset before somebody uses it,” she shared.

One case study, dubbed “ShadowRay,” offers “a really good example of supply chain attack vectors,” Liaghati continued. Attackers exploited software dependencies and a lack of authentication to steal an estimated $1 billion in computing power from companies’ AI systems.

See our two-part series on how to manage AI procurement: “Leadership and Preparation” (Sep. 18, 2024), and “Five Steps” (Oct. 2, 2024).

MITRE Shares Mitigations, Too

MITRE is trying to link all its tools. “Incidents are reactive datasets, whereas vulnerabilities are very proactive,” Liaghati noted. MITRE is updating its AI risk database in July 2025 and refining the ATLAS case studies.

Most practically, MITRE publishes a roster of preventive mitigations (security concepts and technologies) that companies should consider. “We’re not just waving the flag so everybody should freak out. No, instead, let’s capture these problems so we can understand them and then mitigate them wherever possible,” Liaghati emphasized.

See our two-part series on managing legal issues arising from use of ChatGPT and Generative AI: “E.U. and U.S. Privacy Law Considerations” (Mar. 15, 2023), and “Industry Considerations and Practical Compliance Measures” (Mar. 22, 2023).

The Path Ahead for AI Incident Tracking

While MITRE’s AI assurance work remains in an early stage, momentum is building. “We started ATLAS with about 12 industry partners,” Liaghati said. “We now have over 150 organizations involved.”

The goal is to build a shared understanding of AI risks and a collective defense against them. “We’re trying to get the standardized information out there so people can better assure and secure their systems,” Liaghati emphasized.

The details in the MITRE and AIID databases are most useful for companies’ technical and AI development teams, but those teams will need to educate AI governance teams and, eventually, top executives about the broad types of incidents and accidents that have occurred and been documented.

For now, MITRE’s case studies provide educational material for companies’ AI teams. On the AIID website, Atherton’s team posts a bimonthly summary of incident trends.  

Both MITRE and AIID are positioned to capture emerging AI trouble. As more companies participate in MITRE’s group, it likely will gain insights into the dark side of the rollout of agentic systems. “In the AI security community, some have predicted that the rapid increase of agentic systems may strengthen security because the agentic systems can monitor each other,” Liaghati noted. Others are skeptical because of the market’s eagerness to add a barely tested technology.

MITRE will proceed methodically, Liaghati said. Agentic AI risks are new and “not as demonstrated as they need to be to [be] include[d] in the ATLAS matrix yet,” she noted. However, it is a safe bet that anyone wanting to know about AI agent troubles will, before long, find some details in MITRE’s case studies and in AIID’s incident reports.

State Laws

Connecticut and Oregon’s Revised Privacy Laws: Impact Assessments, Minors and More


The trend toward more robust and nuanced privacy protections continues to grow. In June 2025, Connecticut and Oregon enacted significant amendments to their comprehensive consumer data privacy laws – joining Montana, Utah, Virginia and Colorado, all of which also made major revisions to their laws in 2025.

This second installment of a two-part article series examines some of the key changes the Connecticut Data Privacy Act (CTDPA) and the Oregon Consumer Privacy Act (OCPA) amendments introduce, including a new and unique impact assessment obligation, privacy notice requirements, prohibitions on the sale of certain data types, and heightened protections for children and minors. With insights from McDermott Will & Emery, Hintze Law, Reed Smith and Orrick, it also provides practical compliance measures that companies should consider taking before the revised provisions go into effect.

Part one covered the amendments’ broader threshold and scope, new and expanded definitions of key terms, and enhanced consumer protections.

See “Connecticut AG’s Report Reveals Privacy Enforcers Reaching Deeper Into Their State Laws” (Apr. 30, 2025).

Connecticut Adds New Impact Assessment Requirement

Pursuant to Senate Bill 1295 (SB 1295), controllers that engage in profiling for the purposes of “making a decision that produces a significant effect concerning a consumer” must also conduct an impact assessment. Impact assessments will be required for processing activities created or generated on or after August 1, 2026, and will not be retroactive.

The impact assessment requirement associated with profiling “ensures that there are checks and balances in place when [consumer-related] decisions are made so that a company actually is going through the steps of looking to see what data is used, how it is used, the results of that data, and providing guardrails to ensure that it’s being used properly in accordance with the law, but also fairly from the eyes of the consumer,” Reed Smith partner Sarah Bruno told the Cybersecurity Law Report.

It is fair to say that the Connecticut Legislature is attempting to accomplish the same objective as a data protection assessment, but for a different purpose, specific to profiling, McDermott Will & Emery partner David Saunders told the Cybersecurity Law Report.

What to Include

To the extent “reasonably known by or available to the controller,” the impact assessment shall include the following (a structural sketch in code follows the list):

  • a statement disclosing the “purpose, intended use cases, and deployment context of, and benefits afforded by” the profiling;
  • an analysis as to whether such profiling poses any “known or reasonably foreseeable heightened risk” of consumer harm and, if so, the nature of such heightened risk, and the steps taken to mitigate such risk;
  • a description of the main categories of personal data processed as “inputs” for purposes of profiling, and the “outputs” such profiling produces;
  • an overview of the main categories of personal data used to customize such profiling, if any;
  • any metrics used to evaluate the “performance and known limitations” of such profiling;
  • a description of any transparency measures taken, such as disclosures made to the consumer that profiling is occurring; and
  • a description of the “post-deployment monitoring and user safeguards” provided, such as the “oversight, use and learning processes established by the controller to address issues arising from such profiling.”
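
Because the statute enumerates the required elements, some compliance teams may find it useful to mirror them in an internal template. The dataclass below is a hypothetical sketch of such a record; the field names paraphrase the list above and are not statutory language.

# Hypothetical internal template mirroring the CTDPA impact assessment
# elements; field names are illustrative, not statutory terms.
from dataclasses import dataclass, field

@dataclass
class ProfilingImpactAssessment:
    purpose_use_cases_and_context: str  # purpose, intended use cases, deployment context
    benefits: str
    heightened_risks: list[str] = field(default_factory=list)       # known or reasonably foreseeable
    mitigation_steps: list[str] = field(default_factory=list)
    input_data_categories: list[str] = field(default_factory=list)  # "inputs"
    outputs_produced: list[str] = field(default_factory=list)
    customization_data_categories: list[str] = field(default_factory=list)
    performance_metrics: list[str] = field(default_factory=list)    # performance and known limitations
    transparency_measures: list[str] = field(default_factory=list)  # e.g., consumer disclosures
    post_deployment_monitoring: str = ""  # oversight, use and learning processes

    def ready_for_review(self) -> bool:
        # Structural completeness only; substantive adequacy is a
        # legal judgment, not a code check.
        return bool(self.purpose_use_cases_and_context and self.post_deployment_monitoring)

A structured record of this kind also eases production on demand, given that the AG can require disclosure of an assessment relevant to an investigation, as noted below.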

The CTDPA amendment’s list of what an impact assessment must include mirrors similar provisions in Colorado’s SB 24-205, which requires, in part, that deployers of AI systems conduct impact assessments for “high-risk” AI systems. The Connecticut AG also can require controllers to disclose and produce an impact assessment when “relevant to an investigation,” SB 1295 states.

See “How to Address the Colorado AI Act’s ‘Complex Compliance Regime’” (Jun. 5, 2024).

Rebuttable Presumption for Controllers

SB 1295 entitles a data controller to a rebuttable presumption that it used “reasonable care” if it complied with the data protection impact assessment (DPIA) and impact assessment requirements.

Connecticut’s New Privacy Notice Requirements

The CTDPA amendment contains new prescriptive requirements that controllers must include in their privacy notices, including:

  • the categories of third parties to whom data is sold;
  • whether any processing of personal data for targeted advertising is done;
  • whether the controller “collects, uses or sells” personal data for the purposes of training large language models;
  • whether any personal data has been sold to a third party for targeted advertising; and
  • the most recent month and year that the privacy notice was last updated.

In alignment with other state privacy law requirements, new language also has been added to clarify that the controller must make the privacy notice available through a “conspicuous hyperlink” that includes the word “privacy” on the website’s homepage and on app store pages. The CTDPA amendment further requires that the privacy notice be made available in different languages and be “reasonably accessible to, and usable by, individuals with disabilities.” The Montana Consumer Data Privacy Act (MCDPA) contains similar provisions.

Data controllers are not required to provide a privacy notice specific to Connecticut, so long as the controller provides a “generally applicable privacy notice” that meets the state law’s requirements.

Companies will have to revisit and amend their privacy notices in light of these new disclosure requirements. However, because such review is already a regular practice for many, it should not be too much of a “heavy lift,” Saunders predicted.

The CTDPA amendment also requires that if a business makes any retroactive material change to its privacy notice or data privacy practices, consumers must be notified of any personal data to be collected after the effective date of the material change. For personal data collected before that change is made, the data controller must provide an opportunity for consumers to “withdraw consent to any further and materially different collection, processing or transfer of previously collected data.”

Implications for Adtech of Oregon’s Prohibition on the Sale of Geolocation Data

Many state data privacy laws include enhanced protections around location data. When the amended OCPA, through HB 2008, takes effect, it will prohibit the sale of precise geolocation data (within a radius of 1,750 feet) of the present or past location of an individual or that individual’s device. “This is the first law that will specifically prohibit sales of precise geolocation data,” Hintze Law partner Sam Castic told the Cybersecurity Law Report.

Maryland’s Online Data Privacy Act of 2024, which takes effect in October 2025, has a similar provision, but is more general in scope, banning the sale of all sensitive data, including “precise geolocation data” (within a radius of 1,750 feet). It further prohibits the sale of the data of minors under 18 years of age.

The prohibition on the sale of geolocation data likely will have implications in the digital advertising space. There are many adtech providers that engage in location-based targeted advertising to trigger ads when somebody is near a particular retail location. So, some location-based targeted advertising practices may need to cease in Oregon and Maryland, Castic said.

This does not mean that all advertising based on precise geolocation data will need to stop, Castic clarified, so long as the data is not shared with third parties in a way that triggers a data sale. At least as it concerns Oregon and Maryland, companies can collect and use location data “as long as it’s done only internally, and they’re getting consent, but not sharing the data in a way that constitutes a sale,” he said.

“What some brands certainly will do is just ringfence these two states and say, ‘When we detect a precise geolocation in Oregon or Maryland, we just won’t do targeted advertising,’” Castic suggested.
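
Castic’s ringfencing suggestion reduces to a simple gating check before a user is added to a location-based segment. The following sketch is a hypothetical illustration, not any adtech vendor’s actual logic; the state list and the is_precise and shared_with_third_parties parameters are assumptions for the example.

# Hypothetical gating check for the "ringfence" approach: suppress
# location-based targeting when a precise geolocation resolves to a
# state that prohibits selling such data. Illustrative only.
GEO_SALE_PROHIBITED = {"OR", "MD"}  # amended OCPA; Maryland's 2024 act

def allow_location_targeting(state_code: str,
                             is_precise: bool,
                             shared_with_third_parties: bool) -> bool:
    # Returns True if the user may be added to a location-based segment.
    if not is_precise:
        return True  # coarse location is outside these prohibitions
    if state_code in GEO_SALE_PROHIBITED and shared_with_third_parties:
        return False  # sharing with third parties could constitute a "sale"
    return True

# Internal-only use with consent remains possible, per Castic:
assert allow_location_targeting("OR", is_precise=True, shared_with_third_parties=False)
assert not allow_location_targeting("MD", is_precise=True, shared_with_third_parties=True)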

See “How to Adjust to the FTC’s Crackdown on Sensitive Location Data” (Jan. 8, 2025).

Enhanced Protections for Children and Minors

Companies are having to grapple with the “very disparate approaches” across multiple states’ comprehensive data privacy laws concerning protections pertaining to children and teens. This patchwork quilt of state laws is “getting very complicated,” Castic noted.

Revised Definition of Heightened Risk of Harm to Minors

In the CTDPA’s first round of revisions in 2023, enhanced protections were added for minors, requiring controllers that offer online services, products or features to minors to use “reasonable care” to avoid a “heightened risk of harm to minors.” Montana, through the MCDPA, and Colorado incorporated similar provisions into their privacy laws. Connecticut’s provisions took effect in October 2024, while Montana’s and Colorado’s provisions go into force October 1, 2025.

Under the previous version of the CTDPA, “heightened risk of harm to minors” referred to the processing of minors’ personal data in a manner that presents “any reasonably foreseeable risk” of:

  • any unfair or deceptive treatment of, or any unlawful disparate impact on, minors;
  • any financial, physical or reputational injury to minors; or
  • any physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of minors if such intrusion would be offensive to a reasonable person.

Under SB 1295, the definition of “heightened risk of harm to minors” has been limited by adding the word “material” before “financial, physical or reputational injury” and “physical or other intrusion.” The definition also has been expanded under SB 1295 with the inclusion of the following:

  • any physical violence against minors;
  • any material harassment of minors on any online service, product or feature, in which harassment is “severe, pervasive or objectively offensive to a reasonable person”; or
  • any sexual abuse or sexual exploitation of minors.

Profiling

The data privacy laws of Connecticut, Montana and Colorado require companies to conduct a DPIA if they collect minors’ data and a heightened risk of harm to minors exists. Through SB 1295, Connecticut has gone a step further, now requiring companies also to conduct an impact assessment, in addition to a data protection assessment, when engaging in profiling that involves the personal data of minors.

Requirements for Eliminating Heightened Risk of Harm to Minors

If a controller, through conducting a DPIA or an impact assessment, determines that the online service, product or feature that is the subject of such assessment poses a heightened risk of harm to minors, it must “establish and implement a plan to mitigate or eliminate such risk.”

The AG may further request that a controller disclose its mitigation or elimination plan if it is relevant to an investigation. The controller will have no more than 90 days to produce its plan after being notified it must do so.

Prohibitions on the Sale of Minors’ Personal Data

Under the previous version of the CTDPA, companies could sell minors’ personal data and engage in targeted advertising if they obtained the minors’ opt-in consent, or parental consent for children under the age of 13. Through SB 1295, the CTDPA now establishes an outright ban on the sale of minors’ personal data, including for targeted advertising purposes.

“That’s a very strict requirement and presumably intended to protect minors from the perceived harms that can come when their data is sold, or when they receive targeted advertising,” Castic posited.

While not as restrictive as Connecticut’s requirement, a new amendment to the OCPA, through HB 2008, now prohibits controllers from selling the personal data of consumers under 16 years of age. This prohibition applies whether consent is provided or not. Any processing of the sensitive data of children under the age of 13 will need to comply with COPPA.

Prohibition on the Collection of Minors’ Geolocation Data

Under SB 1295, controllers are prohibited from collecting minors’ “precise geolocation data” unless the data is “strictly necessary” for the controller to provide the online service, product or feature. In that case, use of the collected data must be limited to the “time necessary to provide” the service. The controller must also provide the minor with a “signal” indicating that such precise geolocation data is being collected, which must be provided for the entire duration of the collection period.

See our three-part series “Children’s Privacy Grows Up”: Examining New Laws That Now Protect Older Teens (Jan. 15, 2025), FTC Amends COPPA Rule and Targets Data Sharing (Jan. 29, 2025), and Seven Compliance Areas for Protecting Teens (Feb. 12, 2025).

Compliance Measures

The amendments to the CTDPA and OCPA require companies, especially small companies that previously were not in scope, to make immediate operational changes to their privacy policies and procedures.

As an overarching consideration, companies that ignore consumers’ rights will create heightened enforcement risk for themselves, Bruno noted. State AGs may bring an enforcement action “if they feel consumer rights aren’t being respected, opt-outs are not flowing through properly, or data subject access and deletion requests are not being complied with,” she cautioned.

Understand and Monitor Data Flows

To comply with privacy laws, especially where there is sensitive or high-risk processing, it is important to “have a good understanding of the data that you collect,” said Bruno. “That’s step one: knowing the data.”

To gain that data knowledge, “companies should conduct a data inventory or audit of the data that they collect, store and use. The inventory should consider the purpose for the use, the amount of data that is required to fulfill that purpose, and the length of time the data is needed to fulfill that purpose, as well as the source for the data,” advised Bruno. The inventory also should “include the third parties that have access to the data and reference the contracts that may guide that access.”

From there, Bruno continued, “a data map can be built so that the organization can identify the flows of data across entities, teams and third parties (as well as the jurisdictions involved). The systems involved in the use of the data could also be identified.”

Understanding data flows can be time-consuming and challenging and “where a lot of companies get stuck,” Bruno observed, especially for bigger organizations with a global footprint that tend to have a lot of inputs and outputs of data. The process “will involve input from stakeholders across the organization (marketing, legal, HR, IT, information security and procurement, as well as R&D and product developers).” The knowledge gained, however, “ultimately will help in conducting effective DPIAs and impact assessments,” she said.

Particularly with the CTDPA and the OCPA now requiring companies to provide a list of third parties, understanding data flows is “a really important step,” Bruno added.

Review Protections for Children and Teens

With various state data privacy laws imposing a wide array of enhanced protections for children and teens, companies have to navigate numerous consent obligations, limitations on targeted advertising and restrictions on the sale of minors’ personal data.

“One way to navigate the patchwork of state rules is to build a single child-first approach that universally applies the strictest common requirements, such as treating anyone under 18 as needing heightened protection, limiting data uses to what is strictly necessary, and avoiding any sale or targeted advertising of children’s or teen’s data without a clear opt-in for parents,” Orrick partner Shannon Yavorsky proposed. Additional mitigating practices, she told the Cybersecurity Law Report, “would set high privacy defaults for known minor users, such as disabling tracking, profiling, precise geolocation and targeted ads by default.”

Regarding compliance with the new provisions of the Connecticut and Oregon data privacy laws, in particular, companies also should identify where the data of children and teens is collected, stored and shared, Yavorsky advised. Practical steps to accomplish that, she elaborated, include:

  • mapping data flows, which “entails cataloging all touchpoints – registration forms, cookies, SDKs and APIs – where minors’ data enters your systems”;
  • enabling age flags, “which can be done by incorporating birthdate or demographic fields and ensuring systems tag and track youth data” (see the sketch after this list);
  • running system audits, which can be done using “automated scanning tools to flag data storage locations and classifications involving minors”; and
  • conducting vendor due diligence, which “enables the company to survey all third-party partners to confirm if they receive or process youth data and how they handle and protect such data.”
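
To make the “age flags” step concrete, a sketch of intake tagging follows. It is hypothetical: the function, flag names and cutoffs (13, 16 and 18, echoing the thresholds discussed in this article) are illustrative assumptions, not statutory terms.

from datetime import date

def age_flag(birthdate: date, today: date | None = None) -> str:
    # Tag a record at intake so downstream systems can apply
    # minor-specific handling. Cutoffs are illustrative only: 13
    # (parental-consent regime), 16 (e.g., the amended OCPA sale ban),
    # 18 (heightened protection under a child-first policy).
    today = today or date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    if age < 13:
        return "child"
    if age < 16:
        return "teen_u16"
    if age < 18:
        return "teen"
    return "adult"

# Example intake record carrying the flag:
record = {"user_id": "u-123", "age_flag": age_flag(date(2011, 4, 2))}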

Additionally, “regular exercises that walk through a child’s data journey from collection to deletion can help reveal gaps before they become enforcement problems,” Yavorsky said.

The lack of age information of end users can be a challenge for companies, noted Yavorsky. “To the extent that a company is targeting minor users in its products and services, one solution could be to add age checkpoints or inference methods to appropriately categorize and segment minor users,” she suggested. Furthermore, “there may be blind spots where youth data is processed unknowingly, particularly by third-party vendors. Companies should ensure that they are enforcing vendor compliance through annual questionnaires and audits and updating vendor contracts, as needed, to expressly address requirements related to minors’ data,” she added.

See “The Practical and Legal Complexities of Online Age Verification” (Jun. 21, 2023).

Revisit and Revise Privacy Notices

Privacy notices should be revisited to ensure they align with all the new prescriptive requirements that controllers must now include in compliance with the CTDPA. “Privacy policies might need to be updated to clearly explain new consumer rights, data uses, and protections for children and sensitive data,” Yavorsky said.

In particular, updates may be needed to address opt-in mechanisms, and “an explanation of age-based restrictions will be helpful, such as indicating that the company does not engage in profiling, advertising or location tracking for underage users,” Yavorsky suggested. Further updates, she added, “could include disclosure of rights to delete youth data, request assessment outcomes, and revoke consent in addition to clarifying sensitive data classifications (especially for teens’ health or neural data).”

With respect to how to word relevant privacy policy provisions, “complex legal language may confuse teens, so it is important to simplify and use age-appropriate plain-language phrasing,” advised Yavorsky. Companies should be clear in their policy about what special considerations or processing limitations may apply to the processing of minors’ personal data. Lastly, if the company is knowingly processing minors’ data in a manner that requires consent, the policy should clarify the party from whom consent must be received (i.e., parent/guardian or teen) and what constitutes “verifiable parental consent,” if applicable.

The privacy policy should clearly state why the company collects youth data (e.g., account creation, safety features) and limit uses strictly to those purposes. Additionally, it may be helpful to explicitly state the uses for which minors’ data will not be processed (e.g., marketing) unless separate consent is obtained.

Adjust Vendor Contracts

In some cases, vendor contracts may need to be revised. Companies should “update contracts with vendors and partners to require compliance with new privacy laws, including obligations in relation to data sales, profiling and children’s data,” Yavorsky advised. Updates should “include explicit provisions that address the processing, sale, and sharing of personal data, and specifically define personal data as mandated by new state privacy laws,” she added.

For example, Yavorsky elaborated, “agreements should explicitly prohibit vendors from selling or targeting minors’ data for profiling or advertising. Contracts should also clearly define the permitted purposes for data use, prohibit unauthorized sales or sharing, restrict profiling activities, [outline] consent requirements and include obligations to comply with consumer rights, especially when complying with applicable state youth privacy laws.” Certain contracts “could require vendors to implement their own age verification systems, and permit companies to audit the vendor’s data privacy and sharing practices to ensure compliance with the company’s policy and relevant laws,” she said.

See “Expedia and Lululemon Privacy Pros Discuss Scaling Vendor Contracting for New Privacy Laws” (Apr. 19, 2023).

Create an Impact Assessment Checkpoint

To comply with the new Connecticut and Oregon regimes, “companies should consider implementing a DPIA checkpoint, ensuring that assessments specifically address risks to children and sensitive data, and document any mitigation steps that they have taken to address these risks,” advised Yavorsky. “As part of this process, companies should implement procedures to prevent minors’ data from being sold or used for targeted advertising purposes, and to prevent the collection (and subsequent sale) of precise geolocation data from minors.”

Mitigate Location-Based Advertising Risk

In general, “companies should obtain explicit, informed consent from users before collecting or processing precise geolocation data,” and they should “take special care if they serve states whose laws prohibit selling precise geolocation data or if their user audience includes children,” advised Yavorsky.

“To the extent a company knowingly collects minors’ personal data, companies should ensure that there are processes in place to prevent it from collecting the precise geolocation data of such minors to comply with the amendments to the Connecticut law, and to prevent the inadvertent inclusion of such data in any location-based advertising segments,” Yavorsky instructed. In particular, “companies should regularly audit all location data usage in account features, marketing and advertising, which means ensuring that the legal team is part of marketing decisions related to the processing of such data. Additionally, companies should conduct audits to ensure that minors’ data (particularly location data) is not being collected inadvertently,” she said.

As a mitigation measure, Yavorsky continued, “companies should implement an opt-in process for precise geolocation data collection and use, depending on the specific service or feature and, for minor-specific features, prohibit the practice of precise geolocation data collection entirely.” They also can “limit the IP addresses included in location-based advertising segments to exclude those states where the sale of location data is expressly prohibited (e.g., Oregon),” she added.

Moreover, as noted above, “companies should clearly disclose their use of precise geolocation data in their privacy policies, explicitly detail the purposes of processing such data and provide a clear means by which users can withdraw their consent to such processing,” Yavorsky said.

See “How to Adjust to the FTC’s Crackdown on Sensitive Location Data” (Jan. 8, 2025).

Chief Compliance Officer

Compliance Analytics Can Provide Strategic Insights for the Whole Company


Building a meaningful data analytics system for a company can be a multi-stage process with numerous challenges along the way. However, a well-functioning system can provide strategic insights that help the whole company to thrive.

This article distills insights from compliance team leaders at American Express Global Business Travel (GBT), shared during a session on data analytics for compliance programs at the 2025 Society of Corporate Compliance and Ethics Conference, about how the team built an analytics program whose insights created value for the whole company.

See “A Step-by-Step Approach to Upleveling Compliance Analytics” (Jul. 9, 2025).

Getting the Data in Order

The GBT compliance team’s process of building a data analytics program revealed lessons for any company doing the same.

Starting Small Is Alright

Because companies have different levels of scope and sophistication when it comes to data analytics, compliance teams should not be afraid to start small and slowly build up.

“Data-driven programs come in all shapes and sizes,” McKenzee Huber, regional compliance manager at GBT, stressed. What is important is that compliance teams should be doing something with the data they have on hand, “even if it is something simple,” she said.

“Things are changing so fast in this space that getting started, building out data collection and starting the report muscle is going to be very important,” James Griffin, GBT’s vice president for risk and compliance Americas and global compliance operations, agreed.

Assessing Risks

No matter the sophistication of a compliance program, risk assessment is a good place to begin coming to grips with compliance data.

The most common compliance risk areas, according to Huber, include meals, gifts and entertainment; conflicts of interest; and third-party investigations. These are typically hot topics in risk assessment, she said.

Beyond those, companies may have industry- and business-specific risks. For instance, some sectors have regulatory risks that need to be addressed, while companies that handle customer data need to be up to date with GDPR guidelines and cybersecurity risks, Huber advised.

Risk assessments can take many forms and can include both broad ranging surveys and targeted conversations with managers on the ground to get a real sense of their concerns, Huber suggested. Once risk areas are identified, a company should align its compliance data strategy with its risk profile to prioritize its highest risks, she said.

See “Unifying Risk Assessments: Breaking Silos to Enhance Efficiency and Manage Risk” (Jan. 29, 2025).

Mapping the Data Ecosystem

To understand where compliance-related data dwells inside and outside of the organization, it is helpful to generate a data map that shows which data resides in which systems.

Data sources inside the organization include training management, finance and payment systems, Griffin explained. External data sources could include third-party partners that provide alerts such as regulatory change insights, he said.

Compliance officers may find that the process of data mapping highlights places where the company does not yet have good data coverage, according to Griffin. They can then make sure that, later, “the data is in a better state,” he commented. If there are areas identified without any data, the team can “put in place a plan around potentially reviewing that again in the future as a possible data source,” he observed.
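
A first-pass data map of the kind Griffin describes can be as simple as a table of risk areas and their data sources, with empty entries flagged as gaps to revisit. The sketch below is hypothetical; the system and risk-area names are invented for illustration.

# Hypothetical minimal data map: which compliance risk areas have an
# identified data source, and which are coverage gaps to revisit.
DATA_MAP = {
    "training completion": ["LMS exports"],
    "gifts & entertainment": ["expense system"],
    "third-party risk": ["vendor onboarding tool", "regulatory alerts feed"],
    "conflicts of interest": [],  # no source yet: a gap
}

def coverage_gaps(data_map: dict[str, list[str]]) -> list[str]:
    # Risk areas with no identified data source, per Griffin's point
    # that mapping highlights missing coverage.
    return [area for area, sources in data_map.items() if not sources]

print(coverage_gaps(DATA_MAP))  # ['conflicts of interest']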

See “California AG Opinion Hands Companies New Tasks for AI, Data Maps, Marketing” (Apr. 13, 2022).

Filling Data Gaps

The compliance team often encounters hurdles when it seeks data from elsewhere in the company, Griffin noted. Compliance officers can take this as an opportunity to open a cross-functional dialogue with the experts who own that data. “Going out to the data owners and starting to have conversations about how they use the data, how they keep the data,” he said, will help “build towards an understandable ecosystem.”

Compliance teams should articulate specifically the data they are seeking from colleagues elsewhere in the company. Conveying those specific needs can require giving the data owner clear explanations of the concerns the data might reveal. “Emphasizing the regulatory importance” of such data is helpful, as is going “up the chain” to garner the CCO’s backing in seeking the data, Huber said.

Compliance professionals should be specific about what they are asking for, Griffin advised. In the process of articulating why particular data is relevant, they can come to a clearer realization of what the data indicates, according to Huber.

When pulling data from a part of the company where the compliance team is not managing the data, compliance professionals may find the data is inconsistent and hard to process, Huber noted. It is then left to compliance – not the department that maintains the data – to “cleanse” it to ensure consistent and comprehensible usage, she said.

See “Thoughts From DOJ Experts on Using Data Analytics to Strengthen Compliance Programs” (Jul. 17, 2024).

Organizing the Data

Once data is mapped, the next step is organizing the most relevant elements into a data reporting system or dashboard that enables a compliance team to provide real-time insights for the overall organization’s risk position.

Using Dashboards to Present Data

Companies have different levels of sophistication in developing a dashboard, and an all-embracing technical solution is an elusive goal. Nobody should feel inadequate for not having a compliance dashboard that automatically combines all of a company’s systems and data insights, Huber reassured the audience.

It is important to have a centralized location for presenting data to the board or CCO, but data will arrive in different forms requiring different types of manipulation by the compliance team, Huber said. Only a portion of the data may arrive in an automated manner, so compliance professionals can find themselves managing some of it manually, she emphasized.

Different data will be relevant for different operational purposes, Griffin pointed out. “What a compliance operations manager may be interested in, with respect to investigations, may look very different from what a chief risk or compliance officer is interested in from a holistic program perspective,” he offered as an example.

Off-the-Shelf Options

In recent years, impressive third-party compliance technology packages have become available that come with dashboards, Griffin noted. However, these pre-built dashboards are usually aimed at frontline compliance people. “Those dashboards do not necessarily have the same type of reporting capability that is material to the next level up or the level above that,” he warned.

Business intelligence products that offer data visualization and analysis, such as Power BI and Tableau, are useful to compliance staffers, according to Griffin. These products offer helpful technology with which compliance professionals can “think outside the box” on consolidating data points. Compliance specialists can “use the tools that are being used across the enterprise” to produce some of the compliance data analytics and reporting capabilities, he commented.

Many out-of-the-box compliance tools focus on the reporting end, but are best used in conjunction with another tool, such as Power BI, to produce good dashboarding, according to Huber. She dubbed this strategy a “consolidation” of tools, saying the common compliance tools should not be seen as “the one stop shop when it comes to dashboarding.”

See “Compliance Survey Finds Data Management Challenges, Rising Costs and Increasing Uptake of RegTech” (Sep. 14, 2022).

Setting Thresholds and Indicators

As a compliance team sets up its compliance dashboard, thresholds that trigger warning indicators are an important feature.

Qualitative and Quantitative Measures

The team should set both quantitative and qualitative benchmarks for compliance performance. As an example of the balance between quantitative and qualitative metrics, Griffin contrasted the numbers of vendors onboarded with the risk assessments related to those vendors.

See “Compliance Program Implementation: Compliance Calendars and Testing” (Jul. 24, 2024).

Leading and Lagging Indicators

Distinguishing between leading and lagging indicators is important as thresholds are established for the points at which an indicator starts to show a material issue that requires investigation.

Leading indicators are ones that help predict future success, Griffin explained, and include training completion rates or numbers of onboarded vendors. These give insight into how the business is likely to perform in the future.

Lagging indicators are backward-looking clues about performance, he added. These include any reports that have emerged from investigations.

Establishing the Thresholds

Different thresholds should determine the point at which an issue should come to the attention of the chief risk or compliance officer, an enterprise risk committee, a C‑suite level committee or, ultimately, the company board, according to Griffin. Thresholds are likely to need adjusting over time, not only as a business grows, but also as regulations change. It is a “good, healthy practice” to decide upon the regularity with which to review thresholds, he said.

Thresholds set by regulators are a good guideline for where a company should establish its own thresholds in a respective area, Huber pointed out. A company could choose to set its own threshold below the regulators’ threshold triggering a mandatory disclosure, she suggested. That would let management proactively mitigate the risk in question, she noted.
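
Huber’s suggestion of an internal threshold set below the regulatory trigger can be expressed as a small escalation rule. The sketch below is hypothetical; the metric, levels and escalation tiers are invented for illustration.

# Hypothetical escalation sketch: an internal threshold sits below the
# level at which a regulatory disclosure would be triggered, so
# management can mitigate proactively. Numbers are invented.
from dataclasses import dataclass

@dataclass
class Threshold:
    metric: str
    internal_level: float    # triggers internal escalation first
    regulatory_level: float  # level at which mandatory disclosure would apply

    def assess(self, value: float) -> str:
        if value >= self.regulatory_level:
            return "escalate: board / mandatory disclosure analysis"
        if value >= self.internal_level:
            return "escalate: risk committee for proactive mitigation"
        return "monitor"

t = Threshold("investigations open past 90 days", internal_level=5, regulatory_level=10)
print(t.assess(6))  # escalate: risk committee for proactive mitigation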

An Iterative Process

Thresholds may need to change based on the sheer volume of data that a program collects as time passes, Griffin shared. As the compliance program gets more sophisticated and its data feeds and analytic capabilities grow, the team is likely to develop “a more fine-tuned ability to see where a particular item becomes more material,” he said. Thresholds can thus be adjusted as the team gains deeper insight into “how different metrics play in and compare to one another.”

This process of setting thresholds should be repeated at clearly established intervals of time, Griffin said. “All of this needs to be iterative” because the development of the data analysis “is going to drive more and more nuanced insight,” he explained.

See “How eBay and PayPal Use Key Performance Indicators to Evaluate and Improve Privacy Programs” (Jan. 8, 2020).

Producing Executive Insights

With data organized and accessible, the compliance team can identify insights to help the company as a whole manage risks, beyond compliance. With a refined system, it can identify patterns, trends and emerging risks.

In this way, compliance programs can shift from a reactive position to managing risks proactively. “As compliance leaders, we need to be able to drive action in our businesses and within our functions where there are material risks,” Griffin said.

Making Sense of Patterns

The goal of assembling data into a dashboard is to allow the compliance team to identify patterns and then interpret those patterns for company leaders. This requires a broader understanding of what is going on with the compliance program.

In some cases, a deteriorating trend in indicators may actually suggest that the company is doing better, Huber noted. For example, if a company has been pursuing a speak-up campaign, resulting in an influx of reporting and a greater awareness of hotlines, the monthly number of investigations may rise to a brief peak and eventually settle at a level higher than before the campaign. Thus, context shows that a trend that may appear negative in fact reflects improvements in the company.

Seeking Insights From Business Managers

With the importance of context in mind, compliance professionals should remember that business leaders may be able to provide insights into data trends.

Compliance professionals should see their relationship with the business leadership as a two-way street, with business leaders both helping to explain issues and then driving change to course correct if necessary, according to Huber. “Getting those business insights is going to be really key to having those conversations about how to mitigate the risk.”

If a data point shows a noticeably worrying tendency, a business leader may have no problem explaining the odd trend based on their knowledge of the business, Huber said.

The compliance function must be “talking to business leaders across the enterprise about how they are using data,” including any new tools they are using, Griffin stressed. When such conversations happen, compliance professionals sometimes “find out about new tools that the business is using from a data perspective that [they] may not have known about.” The compliance team can then incorporate and leverage those insights, he noted.

Sharing Insights Back to Departments

Company departments that provide data to the compliance function are often gratified to learn that the data proved helpful for company-wide goals. Thus, it is a good idea for the compliance team to “share those results back to the function providing the data, to help them see the ‘why’ behind the request for the data,” Griffin said.

However, Griffin cautioned, what can be shared with other departments has its limits. Discretion is required if the data includes “confidential information that leads to detection of a violation or some other type of potentially confidential or privileged case,” he said. However, simply telling a company function that data it provided gave the C‑suite more insight will show that function it is “a part of the overall compliance and risk management effort of the enterprise,” he maintained.

When reporting findings to senior leaders, the compliance team should highlight other departments within the company that have helped with data, Griffin suggested. “Those things can be incredibly helpful for building the overall data ecosystem and helping everyone to see value in what is being produced,” he explained.

Celebrating wins is also helpful. For example, if the volume of negative incidents has gone down in a department following a training campaign, that data trend should be reported back to the department with a commendation for a job well done, Huber said.

See “How Combining Approaches to Data Analytics Can Yield Powerful Insights” (Mar. 16, 2022).