Artificial Intelligence

AI Agent Security: What CISOs and GCs Need to Know to Defend the Enterprise


Companies’ adoption of AI agents adds autonomous workers throughout their systems. The agents read, write, schedule, log into accounts, hop through workflows and call on a list of tools in cloud platforms to finish their tasks. These powers drive the convenience – and the risk. Agents are “really enthusiastic interns” with “very little context, a desire to do a great job, and an infinite amount of energy,” Barndoor AI CEO Oren Michels told the Cybersecurity Law Report. “Without guardrails, they will make assumptions,” and, absent precise restrictions, “they will make it up.”

Like human employees, these unpredictable AI workers handle internal company data and interact with external websites. Their adoption by a reported 80 percent of companies has already produced incidents. At Amazon, an internal coding agent decided to dismantle and rebuild a deployment environment, forcing parts of the AWS cloud service offline for 13 hours. At another company, a basic “email and calendar” agent inserted sensitive inbox contents into a reply to an outside sender’s calendar invite, recalled Liv Porter, head of solution engineering at Gray Swan. The agent stuffed the contents of 10 emails – including credit card numbers and an offer letter – into the invite’s agenda field, mishandling a simple instruction, she told the Cybersecurity Law Report.

Beyond AI agents’ unpredictability, they introduce other threats distinct from those arising with traditional software. Agents use a new communication protocol (the Model Context Protocol, or MCP) that hackers may exploit and organizations’ engineers must learn to assess for security. Attackers could try to hijack agents’ activities or poison their memory. In multiple ways, agents expand the attack surface that organizations must defend.

This article, the second in a two-part series on real-world security for AI agents, provides an action plan for CISOs and lawyers to strengthen security and reduce risks around agentic AI, with expert perspectives from agent security specialists at Barndoor, Gravitee, Gray Swan, Skyflow and ZwillGen. It lays out practical steps and considerations for inventorying and tracking agents, protecting data, implementing human oversight and adjusting other cybersecurity controls for agents. Part one discussed three survey-based studies that showed widespread adoption of agents, immature controls, frequent incidents, and limited monitoring and tracking by companies.

See “Restricting Super Users and Zombie IDs to Increase Cloud Security” (Jul. 31, 2024).

Newfangled Threats, New Agenda for CISOs

AI agents pose several security challenges that CISOs must address – some are novel and unfamiliar, and others are variations on existing cybersecurity tasks. Security and legal teams have good justification for experiencing “fear, uncertainty and doubt” over agent threats throughout 2026. Informed explanations and analysis of agent security issues have only begun to spread to CISOs, while security tool vendors are advertising that companies need a new security stack for AI agents.

For CISOs and GCs to effectively set an agenda for securing and governing agentic AI, their teams should grasp upfront three key changes that agents bring along with their promise of automated productivity: the MCP, continuous monitoring and cyber tool adjustments.

See our two-part series on securing emerging technologies without hampering innovation: “Private Sector Challenges” (Mar. 9, 2022), and “Government Initiatives and How Companies Can Adapt” (Mar. 16, 2022).

MCP Servers, the New Communications Protocol

Anthropic created the MCP in late 2024 to provide a standardized method for AI agents to access tools and other external locations. Agents build their operational power by accessing the world of tools for executing tasks, e.g., Salesforce for customer relationship management, Asana for project oversight and HubSpot for marketing. In completing tasks, agents’ reach likely will extend to many application programming interface (API) gateways, plugin features, databases, cloud resources and messaging services, many of which are already difficult to secure.

The development of a new standard mechanism for AI agents’ interactions, after decades of web protocols enabling communication between parties, has helped engineers embrace AI agents and greatly accelerated enterprises’ adoption of them. The AI Agentic Foundation now runs the MCP’s development as an open-source project supported by several large technology companies.

The MCP standard makes tool discovery and use easy for agents. An MCP server is a control layer wrapping an API that sets which elements of the API the AI model can access. It addresses the problem that AI agents and models lack software features to safely interact with APIs. Companies “just point the agent at a server and it is supposed to be able to read and understand which tools are available,” explained Jey Kumarasamy, a legal director in the AI division at ZwillGen.

One risk with MCPs is that some are unsanctioned, Michels noted. “There are a gazillion MCPs out there, some are available on GitHub for you to download. Most of them have malware” that might hijack the agent, he cautioned. “A Salesforce MCP might have 40 or 50 different tools that the agent can use” because “the Salesforce API is very, very powerful,” offering users many capabilities, he highlighted. Other MCPs, by contrast, are read-only.

Another risk is that many MCPs use a default posture of universal access, “here are all the tools,” not “here are these few tools, this way, with this data,” Michels said.
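To make the contrast concrete, the illustrative Python sketch below shows the difference between a server that advertises every API capability by default and a control layer that publishes only an approved allow list. The tool names and API surface are hypothetical.

  # Illustrative sketch only; the tool names and underlying API are hypothetical.
  # An MCP-style server advertises tools to the agent. A permissive server
  # exposes everything the API offers; a restrictive one publishes an allow list.

  FULL_API_SURFACE = {
      "read_opportunity": "Read a CRM opportunity record",
      "update_opportunity": "Modify fields on an opportunity",
      "delete_account": "Permanently remove an account",
      "export_all_contacts": "Bulk-export every contact record",
  }

  # Default posture: "here are all the tools."
  def discover_tools_permissive() -> dict:
      return FULL_API_SURFACE

  # Safer posture: "here are these few tools, this way."
  ALLOW_LIST = {"read_opportunity"}

  def discover_tools_restricted() -> dict:
      return {name: desc for name, desc in FULL_API_SURFACE.items()
              if name in ALLOW_LIST}

  print("Permissive server advertises:", sorted(discover_tools_permissive()))
  print("Restricted server advertises:", sorted(discover_tools_restricted()))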

A structural pitfall with MCPs, Kumarasamy warned, is that “the servers could change at any time, and when it does change, the agent will automatically start using the new tools that are available before” the company is aware, which creates a persistent trust and diligence concern.
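One structural mitigation that follows from this warning is to pin a fingerprint of the tool manifest the agent was reviewed against and alert when the server’s offering drifts. A minimal Python sketch under that assumption, with hypothetical tool names:

  import hashlib
  import json

  # Illustrative sketch: pin a hash of the reviewed tool manifest and alert
  # when the MCP server starts advertising tools nobody has reviewed.

  def manifest_fingerprint(tools: list[str]) -> str:
      canonical = json.dumps(sorted(tools)).encode()
      return hashlib.sha256(canonical).hexdigest()

  approved_tools = ["read_opportunity"]            # reviewed and approved
  pinned = manifest_fingerprint(approved_tools)

  # Later, at runtime, the server advertises an extra tool.
  live_tools = ["read_opportunity", "delete_account"]

  if manifest_fingerprint(live_tools) != pinned:
      added = set(live_tools) - set(approved_tools)
      print(f"ALERT: MCP server changed; unreviewed tools appeared: {added}")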

See “FTC Settlement Spotlights Security of APIs Proliferating Across the Internet” (Mar. 5, 2025).

Runtime Monitoring

Because of agents’ inherent unpredictability and structural trust issues like those with MCP servers, a top goal should be upgrading to continuous “runtime” monitoring to track and log agent activities. Ideally, an organization deploying dynamic autonomous AI agents would go beyond monitoring and have active controls providing alerts or circuit-breakers to stop unwanted practices by agents.

This oversight imperative is both practical and legal. To adhere to the E.U. AI Act’s record-keeping obligations under Article 12, companies may need to log all autonomous decisions in agent deployments for auditing.

While important, monitoring can be challenging. For one, some agent platforms lack native logging. Most companies probably are not yet using tools sufficient to track agents continuously. A survey by vendor Gravitee on agent-related incidents found that 70 percent of its respondents discovered incidents using a retrospective review of records, not live monitoring, the company’s chief product officer, Linus Håkansson, told the Cybersecurity Law Report.

Volume is another challenge for monitoring. AI agents, as unceasing operators, reportedly produce 10 to 20 times a human employee’s log events during active hours. Cybersecurity scanning tools on the market tout that they log every application running in a company’s systems – every file touched, command entered and network connection made. CISOs will need to research whether their endpoint detection and response service can genuinely capture AI agents’ chain of steps, from model prompts to cloud APIs to chats, documents and download sites.
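For that chain of steps to be auditable, each step needs to land in the log as a structured, linkable event. The Python sketch below illustrates one such record; the field names are hypothetical rather than drawn from any particular product.

  import json
  import time
  import uuid

  # Illustrative sketch: one structured record per agent step, so a prompt,
  # the resulting tool call and its destination can be tied together later.

  def log_agent_event(agent_id: str, trace_id: str, step: str, detail: dict) -> None:
      event = {
          "event_id": str(uuid.uuid4()),
          "agent_id": agent_id,
          "trace_id": trace_id,   # links all steps of one task
          "step": step,           # e.g., "prompt", "tool_call", "external_fetch"
          "detail": detail,
          "timestamp": time.time(),
      }
      print(json.dumps(event))    # in production, ship to the SIEM instead

  trace = str(uuid.uuid4())
  log_agent_event("crm-agent-01", trace, "prompt", {"text_len": 412})
  log_agent_event("crm-agent-01", trace, "tool_call", {"tool": "read_opportunity"})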

See “Applying AI in Information Security” (May 15, 2024).

An Upgraded Security Stack

Do agentic systems need their own security stack? Many vendors will argue yes, as they are selling “AI” cyber tools, with terms like “AI Detection and Response.” These highlight, for example, that an agent’s detection surface goes beyond the traditional ports and processes to prompts, plans, tool calls, external links and function signatures.

Likely, some of CISOs’ typical cyber tools can help control autonomous AI. Securing agents will involve several longstanding cybersecurity measures, such as monitoring and restricted access to confidential data. Yet adjustments also will be needed to companies’ static rules to protect against agent risks.

One key area in which upgrades may be necessary is scanning AI agent outputs. “If your guardrails only detect potentially malicious actors coming in, then you’re going to miss a lot of the problems that your agent will exhibit,” Porter cautioned. Seemingly “internal” agents still may copy invites, tickets, documents or data into APIs or other external places, which data loss protection tools may not capture, she observed.
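Porter’s caution implies inspecting outbound content, not just inbound prompts. The Python sketch below illustrates a bare-bones egress check; the regular expressions are simplistic placeholders, and real data loss prevention tooling applies far richer detection.

  import re

  # Illustrative sketch: scan agent *output* before it leaves the boundary.
  # These patterns are toy placeholders, not production-grade detection.

  PII_PATTERNS = {
      "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
      "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
  }

  def egress_check(payload: str) -> list[str]:
      """Return the names of PII patterns found in outbound agent content."""
      return [name for name, pat in PII_PATTERNS.items() if pat.search(payload)]

  outbound = "Agenda: card 4111 1111 1111 1111, offer letter attached"
  hits = egress_check(outbound)
  if hits:
      print(f"BLOCK: outbound agent content matched {hits}")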

See “How to Select the Latest Cloud Security Tools and Platforms” (Aug. 21, 2024).

Action Plan for Defending and Harnessing AI Agents

GCs can help shape each company’s cyber agenda for agents by looking at liabilities. AI agents are already causing real incidents that expose the company to privacy and cybersecurity liabilities, including data leakage and unauthorized access to regulated or sensitive information. Weak visibility and identity management for agents undermine legal accountability and responses to regulators, particularly if organizations cannot fully audit agent actions or track credential use. Legal and compliance teams that lack insight into the agents’ sharing of data cannot establish proper protection for regulated data.

The important steps set forth below for governing and securing agents reflect the following key themes:

  • Treat agents as narrowly empowered contractors/interns with their own identities and permissions.
  • Keep sensitive data out of agents’ reach by default.
  • Control agents with explicit and testable instructions, and shape instructions to each agent’s tool permissions and contexts.
  • Monitor agents’ every action, both attempted and completed.
  • Build decisional boundaries for humans to approve riskier actions.

1) Inventory and Classify

Companies should establish a single repository for information about agents. “Start by cataloging what you have – AI agents, MCP servers, LLMs, etc. – both manually and automatically, using a system that monitors and captures their use,” Håkansson recommended. Map the tools they can call, their API connections and data sources that the agents touch, he added.

Companies’ AI agent use is bound to start in decentralized fashion. “Technology has a habit of entering in the back door,” Michels observed. “People who need to get their jobs done and need to compete and want to make their bonus and want to get promoted and want to have their companies be successful are going to figure out how to use these agents” before the IT department has a chance to be involved, he said.

For each agent, the inventory should include the following entries (a minimal record schema is sketched after the list):

  • owners and users;
  • purpose;
  • identity designations;
  • data it can access;
  • level and duration of privileges;
  • who approves privileges;
  • plugins/APIs accessed;
  • downstream agent interactions or integrations; and
  • risk tier.
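One way to make those entries concrete is a typed record per agent in the central repository. The following minimal Python sketch mirrors the checklist above; the field names and sample values are illustrative assumptions, not a standard schema.

  from dataclasses import dataclass, field

  # Illustrative sketch of one inventory record per agent.

  @dataclass
  class AgentRecord:
      agent_id: str
      owners: list[str]
      purpose: str
      identity: str                    # the agent's own non-human identity
      data_access: list[str]
      privilege_level: str
      privilege_expiry: str
      privilege_approver: str
      plugins_apis: list[str]
      downstream_agents: list[str] = field(default_factory=list)
      risk_tier: str = "unassessed"

  record = AgentRecord(
      agent_id="crm-agent-01",
      owners=["sales-ops"],
      purpose="Draft CRM opportunity updates for human review",
      identity="svc-crm-agent-01",
      data_access=["salesforce:opportunities:read"],
      privilege_level="read-only",
      privilege_expiry="2026-12-31",
      privilege_approver="security-team",
      plugins_apis=["salesforce-mcp"],
      risk_tier="medium",
  )
  print(record)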

Companies should ensure central reviews of agents are collaborative and include the people who administer the various enterprise tools, Michels urged. “The CISO doesn’t know what ‘an opportunity’ is in Salesforce, let alone how to regulate who is allowed to update and change them,” he said.

Company teams can consider dividing up assessment of agents and policies concerning them. Security might approve identities, privileges, guardrails and monitoring, for example. Business or data owners might attest to the purpose, necessity and outcomes of the agents and accept the risks. Legal might approve regulated data permissions and logging policies.

See “Cybersecurity and Privacy Teams Join to Create Data Governance Councils” (May 4, 2022).

2) Establish a Policy for Each Agent

Companies should create policies tied to each agent’s identity that articulate and encode behavioral contracts for the agent, Porter urged. Use plain-language rules tied to the agent’s specific tools, data and paths, which will govern “what an agent can do. The more specific, the better,” she said.

Companies should choose more restrained tasks for agents until safeguards ramp up, which might take a year, Kumarasamy advised. For example, productive cybersecurity tasks for agents could include testing, security scans and patch proposals, he said.

Moreover, by default, agent actions and data access should be framed as “allow lists,” not blacklists, Skyflow product lead Joe McCarron recommended.

Companies also should only grant agents “read-only” rights to production systems and should not allow them to “write” or update their code, Kumarasamy advised.

Agents are excellent at calling on tools yet brittle at boundaries, so companies should block their ability to create a new account, Michels suggested. For example, he elaborated, an agent working on an employee user’s customer management tasks might reason, “You had a call with IBM. I don’t see an account for IBM. I’ll make another one.”

When setting an agent’s policy, overseers can define “never events,” such as forbidding the agent to act without seeing a confirmatory email, Porter noted. “If the agent sees that it can’t behave in certain ways, in certain contexts, with certain tools, and without certain information, it will comply,” she said.
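Taken together, this step’s advice amounts to an explicit, testable policy object per agent: an allow list of tools, a read-only default and named “never events.” A minimal Python sketch, with hypothetical field names and values:

  # Illustrative sketch: encode one agent's behavioral contract as data that
  # a policy check can test before every tool call.

  POLICY = {
      "agent_id": "crm-agent-01",
      "allowed_tools": {"read_opportunity"},    # allow list, not a deny list
      "write_access": False,                    # read-only toward production
      "never_events": {"act_without_confirmation_email"},
  }

  def permit(action: str, context: set[str]) -> bool:
      if action not in POLICY["allowed_tools"]:
          return False                          # e.g., creating a new account
      if "writes_data" in context and not POLICY["write_access"]:
          return False
      if POLICY["never_events"] & context:
          return False
      return True

  print(permit("read_opportunity", set()))       # True
  print(permit("create_account", set()))         # False: not on the allow list
  print(permit("read_opportunity", {"act_without_confirmation_email"}))  # False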

See “Strategies for Managing the Intersection of Cybersecurity and New Technologies” (Dec. 9, 2020).

3) Distinguish Agents in Identity Management

In many enterprises, agents inherit the user’s full identity and credentials. Then oversight teams decide they “want the user and the agent to have different levels of access,” McCarron told the Cybersecurity Law Report. To avoid problems, organizations should issue specific identities to agents in their systems. If an agent acts with a user’s full rights, it might bypass the company’s privilege restrictions, he noted. When the agent has its own identity, traceability improves.

Companies should consider using short-lived and tightly scoped credentials for agents, Kumarasamy suggested. They can also contain “permission sprawl” by rotating credentials so that an agent’s identity does not linger and by regularly scheduling the decommissioning of non-human identities. These steps can help contain incidents.
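In practice, “short-lived and tightly scoped” can mean a credential that carries its own scope and expiry, so a lingering non-human identity goes stale on its own. A minimal Python sketch under those assumptions:

  import secrets
  import time

  # Illustrative sketch: mint agent credentials that expire on their own
  # and carry an explicit scope.

  def mint_credential(agent_id: str, scopes: set[str], ttl_seconds: int) -> dict:
      return {
          "agent_id": agent_id,
          "token": secrets.token_urlsafe(32),
          "scopes": scopes,
          "expires_at": time.time() + ttl_seconds,
      }

  def is_valid(cred: dict, needed_scope: str) -> bool:
      return time.time() < cred["expires_at"] and needed_scope in cred["scopes"]

  cred = mint_credential("crm-agent-01", {"salesforce:read"}, ttl_seconds=900)
  print(is_valid(cred, "salesforce:read"))    # True within the 15-minute window
  print(is_valid(cred, "salesforce:write"))   # False: outside the granted scope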

See “Checklist for Building an Identity-Centric Cybersecurity Framework” (Nov. 3, 2021).

4) Restrict Access to Data

“The number one concern that we have been hearing about from clients is the risk of data exfiltration – how do you prevent an agentic AI tool from accessing something and then uploading it to some random website,” Kumarasamy reported. “There aren’t really great ways to fix this unless one builds from the ground up very hard constraints on the agent,” he said.

To help manage the risk of exfiltration, companies should “expose as little data as possible to the AI,” McCarron urged. “Stop proliferating data, stop duplicating PII, stop spreading PII throughout your system,” he advised. Some tools can replace raw PII/PHI with non‑sensitive tokens, leaving agents with visibility only to placeholders. Using placeholders, however, might reduce agents’ utility in many cases, Kumarasamy pointed out.

Another approach to implementing data restrictions is just-in-time approval of PII use, though it could be difficult to implement while meeting users’ expectations for ease and speed, Kumarasamy said. For example, McCarron shared, companies can set tools to reveal the actual sensitive values only at the moment of a legitimate action, such as when the agent submits a credit card payment.
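The pattern McCarron describes can be sketched with a toy in-memory vault standing in for a real tokenization service: the agent only ever handles a placeholder, and the raw value is resolved solely at the moment of a sanctioned action.

  import uuid

  # Illustrative sketch: VAULT stands in for a real tokenization service;
  # the agent sees only tokens, never the raw PII.

  VAULT: dict[str, str] = {}
  SANCTIONED_ACTIONS = {"submit_payment"}

  def tokenize(raw_value: str) -> str:
      token = f"tok_{uuid.uuid4().hex}"
      VAULT[token] = raw_value
      return token

  def detokenize(token: str, action: str) -> str:
      if action not in SANCTIONED_ACTIONS:
          raise PermissionError(f"{action!r} may not reveal raw values")
      return VAULT[token]

  card_token = tokenize("4111111111111111")
  print("Agent sees only:", card_token)
  print("At payment time:", detokenize(card_token, "submit_payment"))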

The bottom line: sensitive data should be revealed to agents only under strict policies and, ideally, after considering the context.

See “Getting Used to Zero Trust? Meet Zero Copy” (Mar. 1, 2023).

5) Set Human Decision Checkpoints to Halt the Agents

Companies should evaluate where to program in human-in-the-loop gates to halt agents. High-impact actions such as financial transactions, configuration changes or data exports should require additional validation from a human.

The MCP gained an “Elicitation” feature in June 2025, which can pause the AI agent and request explicit user approval before its workflow continues – a significant development for AI governance, Kumarasamy highlighted. However, MCP server creators must declare the capability to make elicitation requests to humans during the server’s setup, not later, so widespread adoption of the feature is vital.

Requiring full user approval before every action can be unwieldy and limit agents’ usefulness, McCarron cautioned, so determining which data or actions require specific approval will be a regular discussion point for overseers.
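Whatever mechanism implements it (MCP elicitation or otherwise), the gating pattern itself is simple to express. A generic Python sketch, with a console prompt standing in for a real approval workflow and a hypothetical list of high-impact actions:

  # Illustrative sketch of a human-in-the-loop gate; the console prompt stands
  # in for a real approval workflow (ticketing, chat approval or elicitation).

  HIGH_IMPACT = {"wire_transfer", "config_change", "bulk_export"}

  def run_action(action: str, execute) -> None:
      if action in HIGH_IMPACT:
          answer = input(f"Agent requests '{action}'. Approve? [y/N] ")
          if answer.strip().lower() != "y":
              print(f"DENIED: {action} halted pending human review")
              return
      execute()

  run_action("bulk_export", lambda: print("export executed"))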

6) Increase Logging and Observability

Companies should log agent attempts as well as actions, Michels recommended. “Safety starts with visibility,” he said, noting audit trails should show when agents tried actions, but policy or privilege blocked them. Understanding why an agent reasoned that an action was appropriate helps the security team work to adjust and ultimately harden AI security.

Companies monitoring for anomalous behavior also should see if they can program circuit breakers, rate limits or kill switches to shut down any unwanted behavior that is detected.
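A circuit breaker can be as simple as a counter that trips when blocked attempts spike within a short window. An illustrative Python sketch; the threshold and window are arbitrary placeholders:

  import collections
  import time

  # Illustrative sketch: suspend an agent when its blocked attempts spike.

  WINDOW_SECONDS = 60
  TRIP_THRESHOLD = 5
  blocked_attempts = collections.defaultdict(list)
  suspended_agents: set[str] = set()

  def record_blocked_attempt(agent_id: str) -> None:
      now = time.time()
      recent = [t for t in blocked_attempts[agent_id] if now - t < WINDOW_SECONDS]
      recent.append(now)
      blocked_attempts[agent_id] = recent
      if len(recent) >= TRIP_THRESHOLD:
          suspended_agents.add(agent_id)
          print(f"KILL SWITCH: {agent_id} suspended after {len(recent)} blocks")

  for _ in range(5):
      record_blocked_attempt("crm-agent-01")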

The hope for 2026 is that security programs will evolve to include runtime oversight tools that evaluate agents’ intent, context and risk before they execute their actions. Current deployments run a handful of agents under human supervision, but adoption of hundreds is expected before 2028, which means that “humans are not going to be able to watch this closely,” Michels cautioned. “The management of all this has to be run by agents themselves,” he said.

See “Using RegTech to Enhance Compliance” (Jun. 30, 2021).

7) Adjust Procurement for the Frenetic Market Around Agents

Many platforms are advertising that companies need tools for building, managing, securing, integrating and streamlining agents, and easing their interactions with other agents.

Amid much hype about agents, buyers at companies must beware of “one prompt, job done” promises. As a tester of AI agents headed to market or into production in enterprises, Porter has replicated sensitive-data exfiltration flaws across multiple agentic AI tools. But when some startups were alerted to the discovered leakage, they did not patch their products before continuing to sell them, she lamented.

See “Contracting With Vendors to Mitigate Third-Party AI Risk” (Feb. 18, 2026).

Artificial Intelligence

Connected Cars: The Legal Landscape


Connected cars are governed by a mosaic of laws, regulations and guidelines that present complex compliance challenges. At the federal level, original equipment manufacturers (OEMs) need to look to FTC enforcement actions and agency pronouncements for guidance. At the state level, OEMs must navigate a patchwork of laws that can overlap but also vary in important respects. Meanwhile, the E.U. has its own regulations and guidelines that can impose differing requirements.

This second article in a four-part series on connected cars presents an overview of the applicable legal framework. With supplemental context from the Cybersecurity Law Report, it distills insights shared by Morrison Foerster partners Marian Waldmann Agarwal, Alex van der Wolk and of counsel Jonathan Newmark during a firm program.

Part one of the series covered FTC enforcement activity related to connected vehicles. Part three will provide practical advice for navigating privacy issues raised by connected cars, including notice and consent; and part four will discuss the cybersecurity issues that connected cars present.

See “Examining Newly Released Privacy and Security Guidance for the Fast-Driving Development of Autonomous Cars” (Oct. 5, 2016).

Data That Connected Cars Collect

Connected cars collect a wide array of data. Telematics systems can track vehicle speed, acceleration, braking, mechanical diagnostics and geolocation, Newmark noted. Infotainment systems, in turn, can capture synchronized phone data, including call logs, contact information, text messages and app usage, he added. In addition, in-cabin systems using cameras, microphones and biometric sensors can record gaze, facial expressions and voice commands. This data can “expose deeply personal behaviors” and be used for targeted advertising, dynamic pricing or law enforcement, he said.

U.S. Laws

The U.S. regulatory framework for connected cars includes both federal and state law.

The FTC Act

The “primary federal consumer protection law in the U.S., including in the privacy space,” is the FTC Act, which “prohibits generally unfair or deceptive acts or practices in commerce,” Newmark noted.

In a post on its Technology Blog in May 2024, “the FTC highlighted three key data collection practices and concerns specific to connected vehicles,” Newmark highlighted. “First was the collection of geolocation data. The FTC noted that the collection, use and disclosure of location can be an unfair practice in certain instances,” he said. “Second is the disclosure of sensitive information beyond specified purposes,” he noted. Companies must use sensitive information to which they have legitimate access “only for the reasons they collected that information,” and the “surreptitious disclosure” of such information “can be an unfair practice,” the FTC said in its blog. “Third, the FTC highlighted the use of sensitive data for automated decisions,” Newmark added. Specifically, he elaborated, the FTC said that using sensitive data for automated decision-making can be “unlawful” and noted that “companies that feed consumer data into algorithms may be liable for harmful automated decisions.”

Designed to align with FTC guidance, Newmark continued, leading automakers adopted their own principles for vehicle technology and services. Led by the Alliance for Automotive Innovation, these Consumer Privacy Protection Principles (Privacy Principles) promote “transparency, choice, respect for context, data minimization, de‑identification, data security, data integrity [and] accountability,” Newmark said. They also apply to information that is linked or linkable to a specific vehicle in addition to the owner, he noted. Furthermore, he added, they state that geolocation information, biometric data and driver behavior information all warrant “enhanced protections.”

State Laws

There is a patchwork of state laws that govern the privacy obligations of car companies.

Consumer Privacy Laws

There are a “growing number of state privacy laws in the U.S. that relate to the collection, use and disclosure of consumer personal data,” Waldmann Agarwal observed.

These laws generally define “personal information” broadly. The CCPA, for example, defines “personal information” as information that “identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” Given this broad definition, much of the information collected by connected cars can be considered personal information if it can identify a person or reasonably be linked with them.

The consumer privacy laws implicate several obligations to consider with respect to the collection of data by connected car companies and the use of that data, Waldmann Agarwal noted, including:

  • what notices to provide and when, such as at collection;
  • how to handle personal data and sensitive data, such as geolocation and biometric data; and
  • individual rights, including opt-ins and opt-outs for the collection and use of consumer data and access rights.

State laws generally distinguish personal data from sensitive personal data and impose heightened obligations on companies with respect to the latter. The two are not always defined uniformly across laws. The CCPA’s definition of “sensitive personal information,” for example, is broader than that of most other U.S. state laws, encompassing more than precise geolocation and biometric information. It also includes information related to account log-ins; financial accounts; debit or credit card numbers in combination with any required security or access codes, passwords or credentials allowing access to an account; government IDs (e.g., Social Security, driver’s license, state ID or passport numbers); philosophical beliefs; and union membership.

U.S. states also differ on how sensitive personal data must be handled, requiring either strict opt-in consent (permission before processing) or more flexible opt-out rights (notice only, unless a user objects). Most states require opt-in, while California, Utah and others instead offer consumers the right to “limit the use” of their sensitive data or to opt out.

See “Compliance Takeaways From the CPPA’s Enforcement Action Against Honda” (Apr. 30, 2025).

AI and Automated Decision-Making Laws

Connected car data also implicates a growing set of state AI laws that govern “high-risk” AI use cases, as well as the state consumer privacy laws that address profiling and automated decision-making, Waldmann Agarwal noted.

Colorado’s Consumer Protections for Artificial Intelligence (Colorado AI Act), for example, which is scheduled to enter into effect in June 2026, will impose requirements on developers of high-risk AI systems, Waldmann Agarwal said. High-risk systems are those that “make or are a substantial factor in making a consequential decision that has a material, legal or similarly significant effect on the provision or denial to a consumer, or costs or terms” of products, including financial or insurance products, she explained. Connected car technologies fit squarely within this definition because modern vehicles rely heavily on automated analytics and machine learning models that directly influence consumer-facing decisions.

In addition to those AI-specific obligations, some connected car data practices trigger the state consumer privacy laws governing profiling, Waldmann Agarwal continued. Profiling is generally defined as “the automated processing of personal data to evaluate or predict the person’s economic situation, their health, their preferences, their behavior, location or movements” and offer consumers the right to opt out of some profiling, she explained. State privacy laws require data controllers to conduct data protection assessments where profiling presents a “reasonable, foreseeable risk of harm to consumers,” namely “any kind of unfair treatment or potentially physical injury,” she elaborated. State law profiling requirements can apply when a connected vehicle is used to generate driver scoring or risk profiles, predict behaviors or routines, or determine insurance rates.

California’s forthcoming regulations on automated decision-making technology (ADMT) also have major implications for connected car manufacturers, telematics providers, insurers and in-vehicle service platforms. The ADMT regulations define ADMT broadly to include “any technology that processes personal information and uses computation to replace or substantially replace human decision making,” Waldmann Agarwal explained. This directly captures many analytics uses in connected vehicles. Under the new regulations, which are scheduled to go into effect on January 1, 2027, businesses will have to notify consumers of ADMT prior to use and may have to offer them an opportunity to opt out of such use, as well as “provide information on the logic and the outcome of these decisions,” she said.

In use cases that are high risk or involve profiling, “the company may need to make decisions to ensure that there’s a risk assessment, a human in the loop, if appropriate, that there’s testing for things like bias and accuracy, that they’re looking at data minimization and that they’re able to explain the outcomes to the user,” Waldmann Agarwal advised.

“Key state-level expectations would include mitigating any bias in a product, providing the consumer with notices and explanations, making sure that the use of the AI is accurate and fair, and having a governance program to show that you’re handling in-house and third-party AI systems that are used,” Waldmann Agarwal emphasized.

See “Updating Compliance Programs to Address the CPPA’s Regulations on ADMT and Risk Assessments” (Sep. 17, 2025).

Biometric Privacy Laws

A “growing body” of laws, including the Illinois Biometric Information Privacy Act, regulates the use of facial recognition, iris scans, fingerprints and other biometric identifiers, Newmark said. These laws “typically require notice to individuals before collecting biometric data,” he noted.

The use of “in-cabin monitoring, dashcams, driver facing cameras and audio recording” by connected car companies “raises questions about consent, expectations of privacy, and who owns or controls the data, especially when it comes to shared vehicles, ride hailing or rental fleets,” Newmark maintained.

See our two-part series on legal and ethical issues in the use of biometrics: “Modality Selection, Implementation and State Laws” (Feb. 21, 2024), and “FIDO, Identity-Proofing and Other Options” (Feb. 28, 2024).

Anti-Stalker Laws

Some states, including California, have passed anti-stalker laws aimed at protecting survivors of domestic violence from being harassed by another person through their access to the victim’s connected vehicle service, Waldmann Agarwal stated. Senate Bill 1394, which became effective July 1, 2025, requires car manufacturers to allow victims to request the disconnection of access and terminate access within two days of any such request, she noted. Starting in January 2028, vehicles must have the ability to indicate whether someone outside the vehicle has accessed the connected service and permit the disabling of location access from within the vehicle, she explained.

In addition to the state laws, the FCC has issued a notice of proposed rulemaking to protect domestic violence victims, Newmark noted. The state laws “really represent a shift in privacy, not just as a matter of consent, data minimization or control over personal data, but as a tool for safety and autonomy,” he stressed.

State Consumer Health Privacy Laws

State consumer health privacy laws can apply to connected cars when the data they collect meets the statutory definition of consumer health data. For instance, the laws “apply to biometric data and geolocation data,” Newmark stated. In Washington and Nevada, for example, these laws require companies to issue a “dedicated privacy notice” for any health-related information so that consumers understand how their data is collected, used and shared. Furthermore, explicit consent is required for selling health data and using it for targeted advertising, he said.

Another law, California AB 45, prohibits the collection, use, sale, sharing or retention of the location information of persons near family planning centers (unless the activity is done to provide goods or services requested by the individual) and restricts geofencing (the creation of virtual boundaries for purposes of data collection) near any healthcare facility, Newmark explained. This would apply to connected vehicles traveling near such healthcare facilities, he noted.

European Regulations

There is also an interwoven network of laws and guidelines that govern connected cars in the E.U.

GDPR

The GDPR is the E.U.’s principal privacy regulation. It defines “personal data” as “any information relating to an identified or identifiable natural person,” that is, anyone who can be identified by name, identification number, location data, an online identifier or any factor “specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that person.” The GDPR also prohibits the processing of certain “special categories” of data, including health data and biometric data used uniquely for identifying a person, unless the consumer gives their consent or another enumerated exception applies.

Under the GDPR, a car company needs a legal basis for collecting and using personal data, which could be either the company’s legitimate interest or to perform under a contract with the consumer, but the EDPB’s main concern appears to be consent, van der Wolk said.

EDPB Connected Vehicle Guidelines

In 2021, the European Data Protection Board (EDPB) issued guidelines for connected vehicles and personal information, van der Wolk said. The guidelines characterize the use of connected car personal data in different ways, including data that is processed in the vehicle, data that is exchanged between the vehicle and connected devices, and data shared with other parties, he explained.

Two of the major issues highlighted by the guidelines are consumer control of data and information asymmetry between the company and the consumer, van der Wolk stated.

The guidelines are “pretty expansive” in their definition of personal data and consider data about engine wear and tear, [oil] pressure and oil status to be personal information “in most cases,” van der Wolk said.

The EDPB guidelines also recognize the GDPR’s requirement for consent to process special categories of data. “Biometric data . . . is a special category of data under GDPR,” van der Wolk noted, thus requiring consent prior to collection. Generally, the EDPB prefers that companies provide non-biometric alternatives to collecting biometric data, he said. In addition, it prefers biometric information to be stored locally on the device so that the company does not obtain or hold it, he stated.

Geolocation, on the other hand, is not a special category of data under the GDPR and is not subject to the same consent requirements, van der Wolk explained.

E.U. Data Act

The debate over what information is personal or non-personal “may have become moot” to a certain extent because of the E.U. Data Act, which went into effect in September 2025 and creates powerful new rights for users of any connected product or related service to access and control the data those devices generate, van der Wolk opined. The definition of data under the E.U. Data Act is “intentionally broad” and encompasses both personal and non-personal data, including data generated as byproducts (such as diagnostic data) or data generated by non-use, he said.

Under the E.U. Data Act, consumers have a right to request a copy of all data that is generated by their connected product or related service, van der Wolk noted.

The act “makes the privacy argument somewhat moot” with respect to whether wear and tear or [oil] pressure data is personal information because consumers have the right to all of this information in a machine-readable format, he remarked. One potential limitation of the E.U. Data Act, however, is that it concerns raw data only and does not extend to inferred or derived data, he stated.

E.U. AI Act

The E.U. AI Act takes a “risk-based approach” to regulation, van der Wolk explained. “Most companies will focus on the high-risk category because that’s the category that’s not prohibited but that’s where the majority of the substantive obligations under the act apply to now.”

The E.U. AI Act very specifically provides that if an AI system is part of a safety component in a motor vehicle, it is considered to be high risk, van der Wolk said. However, the act also provides that if an AI system is included in a safety component, it is subject to specific product legislation that provides the substantive requirements, he explained. So “the [E.U.] AI Act pushes it to a different body of legislation for further regulation,” he noted. However, using an AI system for purposes other than as part of a safety component “will still fall fully under the E.U. AI Act,” he explained. One such high-risk item is biometric identification, he added.

See “Recent Developments and Upcoming Obligations Under the E.U. AI Act” (Feb. 4, 2026).

Global Enforcement

India Releases Rules for Its Digital Personal Data Protection Act


To operationalize the 2023 Digital Personal Data Protection Act (DPDPA), India’s first comprehensive data protection law, India’s Ministry of Electronics and Information Technology issued the law’s regulations in November 2025 (Rules) and announced the effective dates for them. The Rules detail requirements on consent and notice, breach reporting, special protections for vulnerable groups (including children and persons with disabilities), cross‑border data transfers, and the structure and authority of the Data Protection Board of India (the Board), which adjudicates disputes over alleged misuse of personal data. This article unpacks the DPDPA as well as the Rules, and includes practical compliance guidance for companies as discussed by experts during a TrustArc panel.

See “Update on Digital Governance in India and China” (May 21, 2025).

History

In 2017, the Supreme Court of India recognized privacy as a fundamental right, which led to India’s central government passing the DPDPA in 2023. The law replaced the Information Technology Act, 2000, which was previously the main Indian law governing e‑commerce and addressing cybercrime concerns.

The DPDPA left several rights, obligations and operational questions to be resolved later through subordinate legislation. Accordingly, in late 2025 the Ministry issued the Rules, although some of the Rules do not go into effect until November 2026 or May 2027.

The Rules that came into force in November 2025 mainly deal with the institutional setup of the Board, Bilal Mohamed, policy analyst at The Future of Privacy Forum, explained. Those that will go into effect in November 2026 deal with the registration and obligations of consent managers (as defined below). The rest of the Rules, which will come into force in May 2027, mainly deal with substantive compliance issues, including notice for consent, reasonable security safeguards, the rights of data principals (as defined below), breach reporting and disputes.

Scope

The DPDPA applies exclusively to digital personal data. It does not cover non-digital formats unless those records are subsequently digitized, Mohamed clarified.

The DPDPA also imposes extraterritorial jurisdiction, covering data processing that takes place outside of India if it involves the offering of goods or services to individuals in India. It does not apply to data processing of non-Indian residents occurring in India if such processing takes place pursuant to a contract with an entity outside of India (e.g., for a call center).

Unlike most other data protection frameworks internationally, the DPDPA does not apply to personal data made publicly available voluntarily by the person to whom the data relates or by another entity under a legal obligation to publish the data. For example, it does not apply to an employer that shares data pursuant to an Indian law or a government that shares data as part of providing subsidies, issuing licenses or responding to an emergency such as a natural disaster or a public health crisis.

See “An Analysis of the Liberal and Strict Provisions in India’s New Privacy Law” (Sep. 6, 2023).

Key Definitions

The following are definitions of key terms in the DPDPA and the Rules.

Data Fiduciaries

Any person – which, under the DPDPA, includes both individuals and corporations – who determines the purposes and means of processing personal data is a “data fiduciary.” The meaning of this term is similar to “data controllers” under the GDPR, Mohamed explained.

Significant Data Fiduciaries

“Significant data fiduciaries” (SDFs) are designated by the Indian government, based on considerations including the volume and sensitivity of the data processed, potential impacts on India’s sovereignty, public order and other factors. Companies in the telecom, banking, finance and insurance sectors will likely be designated as SDFs, said Mohamed. Hospitals will likely be considered to be SDFs too, added Suresh Vijayaraghavan, CTO of The Hindu Group.

Processors

Any person who processes personal data on behalf of a data fiduciary is a “processor.” Data fiduciaries, not processors, remain liable for any breaches, Mohamed cautioned.

Data Principals

“Data principals” include any individual to whom personal data relates; or their parent or lawful guardian, in the case of an individual under 18; or their lawful guardian, in the case of a person with a disability. The meaning of this term is similar to “data subjects” under the GDPR, Mohamed explained.

Consent Managers

“Consent managers” are registered entities that allow data principals to give, manage, review and withdraw consent through a platform. They are not mandatory.

DPDPA Overview

The DPDPA generally sets out duties and rights in vague, general terms, while the actionable specifics of such duties and rights are generally set out in the Rules. One exception to this is the issue of verifiable parental consent – the DPDPA specifies those duties and rights directly. Key Rules and the DPDPA’s parental consent obligations are discussed below.

Rule 3: Notice for Consent Requirements

Under Rule 3, every request for consent must include a notice in a standalone document that can be understood independently of any other information. The notice must contain:

  • an itemized description of the personal data being collected;
  • the purpose of the data processing and a description of the goods, services or uses enabled by such processing; and
  • a link enabling the withdrawal of consent, the exercise of rights and complaints to the Board.

Rule 6: Reasonable Security Safeguards

Data fiduciaries must implement data security, access controls, monitoring, backup systems, contractual safeguards, and other technical and organizational safeguard measures.

Rule 7: Data Breach Notification

After any personal data breach, regardless of its materiality or impact, data fiduciaries must, without delay, inform all affected data principals and the Board of the nature of the breach – including its extent, timing and location – as well as its likely consequences, measures taken by the data fiduciary to mitigate the risks and any protective measures that the data principal can take. Within 72 hours, the data fiduciary must provide additional details to the Board, including an updated description of the breach, the circumstances leading up to it, risk mitigation measures, information about the person who caused the breach, remedial measures taken to prevent recurrence and a report on the notices sent to affected people.

Having to report all breaches, even minor ones, “could result in the [Board] being inundated with breach reporting and data principals having a sense of panic when they see multiple breach notifications,” Mohamed predicted. “Maybe down the line the regulator might provide guidance on how to determine what is worth reporting and what is not,” he posited.

The requisite notice must be provided in English and all 22 languages recognized by India’s constitution. “It is not clear what this would look like from a user interface design – whether all languages must be made available immediately or if it should be a toggleable interface,” Mohamed observed. “We have received verbal indications from the Ministry that this depends on how the data fiduciary wishes to do it. . . . But this is not clarified in the Rules or in the act.”

The DPDPA’s notice requirement overlaps with a pre-existing obligation to report breaches within six hours to India’s Computer Emergency Response Team, a national incident response center for cybersecurity incidents, Mohamed noted.

Rule 8: Log Retention

Data fiduciaries must retain logs related to the provisioning and removal of consent and logs related to role-based access for one year.

Section 9 of the DPDPA: Verifiable Parental Consent

Before processing personal data of a child or a person with a disability who has a lawful guardian, a data fiduciary must obtain verifiable consent from the parent or lawful guardian. In addition, data fiduciaries are banned from processing the personal data of children in a way likely to cause harm. In particular, they may not engage in tracking children, behavioral monitoring of children or targeted advertising aimed at children. There are exceptions to this, such as for schools and childcare centers using tracking or behavioral monitoring for specified purposes. It is not clear, however, if such an exemption would apply to education-focused tech companies, Mohamed noted.

Rules 10, 11 and 12: Children’s Data and Parental Consent

Data fiduciaries must adopt appropriate technical and organizational verification measures when obtaining parental consent – the Rules do not require proof of the parent-child relationship, only that the person claiming to be a parent is over 18, Mohamed pointed out. Parental consent can be verified by the following methods:

  • reliable identity and age information already held by the data fiduciary, if the parent is already a user of the service or platform;
  • identity or age details voluntarily provided by the individual (e.g., by a government identification card or an electronic version); or
  • a virtual token mapped to identity details and issued by an authorized entity, such as a digital locker service provider recognized by the government of India.

See “How Companies Can Meet Growing Regulatory Scrutiny Around Sharing Children’s Data” (Feb. 11, 2026).

Rule 13: Additional Obligations for Significant Data Fiduciaries

SDFs must have a data protection officer based in India who represents the organization under the DPDPA, reports to the board of directors or equivalent, and serves as the contact point for grievances. SDFs also must have an independent data auditor to assess compliance with the DPDPA.

Moreover, SDFs must take certain measures, including periodic data protection impact assessments and audits, with any significant findings that could affect the privacy of users being reported to the Board.

Further, SDFs must use due diligence to ensure that “algorithmic software” is not likely to pose a risk to the rights of data principals. “It is unclear what algorithmic software means . . . and whether the rule refers to rights under the Data Protection Act or rights recognized under the Constitution,” Mohamed noted. “A little bit of clarity is probably required from regulators.”

Additionally, India’s central government can impose restrictions on SDFs that transfer certain kinds of personal and traffic data out of India. These restrictions are in addition to the sectoral restrictions on data transfers already authorized by the DPDPA, Mohamed explained.

Rule 14: Restrictions on Sharing Data With Foreign Governments

A committee created by India’s central government can restrict cross-border data flows for personal data requested by a foreign government. The composition of the committee and the requirements for restrictions will be specified at a later time, Mohamed said.

Rules 16 to 21: Data Protection Board of India

The Board is to consist of a chair and four other members. It shall function as a “digital office” and may adopt “techno-legal” measures to conduct proceedings. The Board is to complete inquiries within six months. Appeals from the Board are to be filed with the Telecom Disputes Settlement and Appellate Tribunal.

The DPDPA gives the Board authority to adjudicate disputes, impose penalties and ensure the registration of consent managers. Unlike most digital privacy authorities around the world, the Board has no rulemaking or guidance-issuing powers, Mohamed observed.

It is unclear where data fiduciaries should direct their interpretive questions – the Ministry or the Board, Mohamed noted.

How to Comply

The DPDPA and the Rules tell organizations what to do but not how best to do it, Vijayaraghavan pointed out. The biggest problem for organizations is knowing what actions to take to demonstrate compliance with the act, he observed. The answer, he suggested, can be found in International Organization for Standardization (ISO) 27701, which provides guidance on information management systems and relevant controls for both data fiduciaries and processors.

Implementing ISO 27701 Controls

In particular, Vijayaraghavan explained, ISO 27701 helps organizations to:

  • establish a privacy information management system;
  • define clear roles for PII controllers, joint controllers and PII processors;
  • implement processes for data minimization, consent, purpose limitation, data subject rights and breach management;
  • provide audit-ready evidence of privacy governance;
  • establish internal controls for data subject rights management (i.e., processes to change data based on requests from data principals);
  • manage consent and consent withdrawal;
  • handle data from children and persons with disabilities; and
  • establish proper access controls, encryption, secure development practices, logging and monitoring.

Implementing the controls within ISO 27701 is a powerful tool for demonstrating compliance with the DPDPA, as well as with the GDPR, Vijayaraghavan said.

It is important to keep in mind that certain sectors in India – specifically, banking, payments and telecommunications – have stricter data localization requirements under sectoral regulations, advised Joanne Furtsch, vice president at TrustArc.

Other Measures

There are several additional steps that companies should take to comply with the DPDPA, Furtsch recommended, including the following:

  • Data Mapping and Inventory: Identify and map all digital personal data.
  • Consent Management: Make sure that consent is specific, clear and informed. Provide easy consent withdrawal mechanisms.
  • Privacy Notices and Transparency: Update privacy notices to include disclosures required by DPDPA.
  • Security Infrastructure: Align with industry standards such as ISO 27001.
  • Breach Notification Procedures: Include in the breach response plan a two-step notification process: first, immediate notification to data principals and the Board; second, a more detailed notice to the Board within 72 hours. Be aware that the lack of a materiality threshold for breaches will likely mean a high notification volume.
  • Data Retention and Deletion: Set up procedures to retain personal data and logs for at least one year after purpose fulfillment, then securely delete unless retention is required by another law or rule.
  • Cross-Border Transfer Compliance: Monitor government notifications for blacklisted countries and check for sector-specific data localization requirements.
  • Children’s Data Protection: Collect verifiable parental consent for data principals under the age of 18. Targeted advertising and behavioral monitoring should be banned, unless an exception applies.
  • Governance and Accountability: Designate a contact person, maintain records of data processing actions and implement privacy governance measures. Additional requirements apply to SDFs.
  • Vendor and Processor Management: Review contracts to make sure that processors meet security standards, process data only as instructed, maintain appropriate safeguards and notify of breaches immediately.
  • Training and Awareness: Train teams on DPDPA requirements.

Key Differences Between the DPDPA and the GDPR

Companies experienced in dealing with the GDPR – or the CCPA – have a head start in terms of complying with the DPDPA, Furtsch said, but will still need to make India-specific adaptations. There are several ways in which the DPDPA and the GDPR differ, she noted.

Scope and Application

The DPDPA covers digital personal data while the GDPR applies to both digital and non-digital personal data.

Legal Basis for Processing

The DPDPA requires data processing to be based on either consent or “limited legitimate purposes,” while the GDPR permits consent, performance of a contract, legitimate interests or other forms of authorization.

Data Localization and Cross-Border Transfers

The DPDPA uses a negative “blacklist” approach while the GDPR uses a positive list model based on either adequacy or appropriate safeguards.

Sensitive Data

The DPDPA does not create separate sensitivity classifications and treats all personal data uniformly, unlike the GDPR’s special categories.

Consent Managers

Unlike the GDPR, the DPDPA provides for consent managers – who act only on behalf of data principals, not data fiduciaries – to help data principals give and withdraw consent.

Future Developments

The Ministry is unlikely to issue new Rules, Mohamed predicted. However, it is likely to issue FAQs, which are not as legally binding as the Rules but will provide an indication of how the Ministry interprets the DPDPA and the Rules. FAQs “provide a little bit more operational clarity on how data fiduciaries can comply,” he said.

There exists a “very serious constitutional question” as to whether state governments in India can develop their own data privacy laws or guidance, Mohamed continued. IT regulation broadly falls at the federal level, and Indian states typically have not regulated personal data in the ways seen in the U.S., with its patchwork of state privacy laws, he noted. Indian states have created rules for how they can share their own government datasets with the public. “Seven to eight states in India have done that in the last decade where they set down the manner in which open data sharing can happen,” he explained. But those policies do not impose privacy obligations on companies, and “I do not expect states in India to be regulating personal data like how we have in the U.S.”

See “Tips From Big Tech Leaders on Navigating Global Privacy Regulations” (Dec. 3, 2025).

People Moves

Goodwin Welcomes Three Cybersecurity, Privacy and Technology Partners to Launch Orange County Office


Richard Grabowski, John Vogt and Ryan Ball have joined Goodwin as partners to establish its new office in Newport Beach, California. Arriving from Jones Day, the trio is part of the firm’s complex litigation & dispute resolution group, with litigation practices focused on cybersecurity, privacy, technology, trade secrets and consumer financial services.

Grabowski has served as lead counsel on more than 100 federal and state consumer class actions in a variety of jurisdictions. He has represented clients in a range of industries on a variety of matters, including Fair Credit Reporting Act (FCRA), Telephone Consumer Protection Act (TCPA), Credit Repair Organizations Act and Unfair Competition Law claims. He has also led responses to parallel government investigations, including by the FTC, Consumer Financial Protection Bureau (CFPB) and multi-state AGs, with a focus on complex consumer and data matters.

Vogt represents clients in investigations by state AGs and federal regulators arising out of cyber incidents and advises them on their data privacy obligations. He has defended hundreds of class actions involving data breaches, tracking technologies, biometric and dark-pattern claims. Additionally, he regularly litigates claims under state and federal wiretap laws involving the use of chat bots, session-replay technology, pixels and web beacons; and defends class actions under state and federal privacy and consumer protection statutes, including the Stored Communications Act, Wiretap Act, FCRA, TCPA and Electronic Funds Transfer Act (EFTA).

Ball represents public and private companies in complex litigation and government investigations. His practice focuses on large-scale cybersecurity attacks, data collection and tracking technologies, dark patterns, unfair and deceptive practices, and other consumer protection issues. He regularly advises clients on emerging technologies, data collection and analytics, fintech, and related legal and regulatory risks. He has successfully led the defense of numerous nationwide class actions and mass arbitrations under myriad state and federal privacy statutes, including the Electronic Communications Privacy Act, Computer Fraud and Abuse Act, FCRA, TCPA, EFTA and California Invasion of Privacy Act. Additionally, he has represented clients in responding to dozens of government investigations, including from the CFPB, the FTC, the California Privacy Protection Agency, the New York Department of Financial Services and state AGs.

For insights from Goodwin, see “What to Know About the Sleeping Giant That Is the SEC’s Amended Reg S‑P” (Dec. 10, 2025); and “Unpacking the AI Risks Disclosed in 2025 SEC Filings” (Sep. 10, 2025).