Online Advertising

Navigating an Increasingly Risky Health Data Landscape: Why the NAI’s Factor Analysis Matters


Information that reveals or relates to a consumer’s health has become one of the most sensitive and closely scrutinized categories of PI in the modern privacy landscape. Once regulated primarily within traditional healthcare and insurance settings under HIPAA, the concept of “health data” has expanded, and now encompasses a wide range of information from consumer‑facing products and services, mobile applications, wearable devices, digital advertising, data analytics and digital content platforms. To date, dozens of laws have been passed across the United States seeking to regulate the collection and use of this broad category of information. However, fragmented approaches to defining and protecting it have left many companies confused, frustrated and potentially vulnerable to enforcement risks.

It is against this backdrop that the Network Advertising Initiative (NAI) released its Factor Analysis for Health‑Related Sensitive Personal Information (Factor Analysis) in early 2026 – providing a structured, contextual framework for classifying non-HIPAA health data based on five factors, and offering organizations a disciplined guidepost to help reason through difficult classification decisions in an increasingly risky environment.

This article explores the regulatory evolution that has made health data compliance more challenging, explains why context‑driven frameworks like the NAI’s Factor Analysis are increasingly valuable and highlights additional best practices that can make navigating the health data landscape more manageable.

See “Addressing the Operational Complexities of Complying With the Washington My Health My Data Act” (Apr. 3, 2024).

The U.S. Health Data Framework

Over the last 30 years, the regulation of health data in the United States has undergone dramatic changes.

Evolution of Data in the HIPAA Context

In the late 1990s, HIPAA established protections for a narrow set of “protected health information” (PHI) created, collected, used or disclosed in the course of healthcare treatment or payment. Notably, HIPAA’s obligations only extend to “covered entities” such as hospitals, healthcare providers, counselors, pharmacies and health plans, and “business associates” that process PHI on behalf of covered entities. HIPAA limits the circumstances under which PHI can be disclosed to other types of entities, but once that occurs, HIPAA does not apply.

When consumer use of the internet and connected devices took off in the early and mid‑2000s, so too did the amount of data collected about consumers. This included information related to a consumer’s health, such as browsing history about a specific disease or diagnosis, data collected from wearable health trackers and purchase data relating to certain medications. This data generally falls outside of HIPAA and had received no specific regulatory protection – until, that is, state legislatures began passing their own comprehensive privacy laws, starting with the CCPA in 2018.

See “Lessons From the Continued Uptick in HIPAA Enforcements” (Feb. 8, 2017).

Heightened Protections Under U.S. State Consumer Privacy Laws

As of April 2026, more than 20 U.S. states have passed comprehensive privacy laws addressing the way consumer PI, including online data, is collected and used. These laws generally impose heightened protections on “sensitive” categories of personal data, including information and “inferences” made about a consumer’s health. A smaller number of states have passed health-specific privacy laws that impose even stronger restrictions on a broader set of data and prohibit the use of such data for advertising outright.

U.S. state privacy laws do not adopt a uniform definition of what constitutes health data. For example, Washington State’s My Health My Data Act defines “consumer health data” as any PI that identifies the consumer’s past, present, or future physical or mental health status, including data that identifies a consumer seeking any service to assess, measure, improve, or learn about their mental or physical health. Other states’ comprehensive laws focus on data that “reveals” a mental or physical health condition. The CCPA definition of health data includes any PI “collected and analyzed concerning a consumer’s health.”

These fragmented definitions have left companies that deal in consumer data paralyzed. Does browsing history related to aspirin reveal a consumer’s health condition? Does the fact that a consumer read an article about breast cancer indicate a medical diagnosis? Does purchase data for running shoes identify an individual seeking to improve their health? Many of the most challenging compliance questions arise at the margins, where certain data points may not obviously be health-related, but could be combined, analyzed or used in a way that triggers heightened legal scrutiny.

A well-known example is the Target pregnancy score case, where an investigative journalist revealed the retail giant assigned pregnancy prediction scores to consumers based on their otherwise non-sensitive purchase history and demographic data. With very little regulatory guidance to rely on, and facing significant financial, organizational and reputational risks, many companies are creating internal processes to classify data elements, with little ability to benchmark their efforts against their peers.

See “Examining the Washington Attorney General’s FAQs on the My Health My Data Act” (Sep. 13, 2023).

Market Impacts

The growing complexity of health data privacy regulations has practical consequences for organizations attempting to operate responsibly and compliantly.

Faced with uncertainty, some organizations take a cautious approach by treating broad categories of health‑adjacent data as sensitive. For example, some companies choose to remove third-party data collection technologies from any web pages that contain health and wellness products, like supplements or skin creams, to avoid any implication that they collect or sell PI “relating” to a consumer’s health. While this approach may reduce regulatory exposure, it can also create operational burdens, disproportionately limit data use and reduce the availability of information that consumers find valuable. Other organizations, however, underestimate the risk and classify data too narrowly, exposing themselves to enforcement actions, litigation and reputational harm.

The absence of clear guidelines to define the scope of non-HIPAA health data makes it increasingly difficult for companies to calibrate their risks. Various entities could reasonably reach different conclusions about the same data depending on the nuances of a specific analysis, adding more complexity to an already fraught landscape and leaving many companies unsure as to how to move forward.

See our two-part series on Washington’s aggressive health privacy law: “Right to Sue and Onerous Consent Obligations” (May 3, 2023), and “Ten Compliance Priorities” (May 10, 2023).

The NAI’s Response to Regulatory Uncertainty: A Compass, Not a Road Map

The NAI has long served as a self‑regulatory organization for the digital advertising industry, translating evolving legal requirements into practical standards for responsible data use. In continuing this work, the NAI’s Factor Analysis serves as a framework to guide determinations about when data is likely to be considered “health data” and when it is not.

Importantly, given the breadth and variation among statutory health data definitions and the diversity of data use cases, there is no single solution to defining health data. The NAI’s Factor Analysis does not provide binary answers. Instead, it provides a helpful framework for pointing companies in the correct direction rather than a prescriptive solution for coming to a definitive answer on data classification.

The NAI’s Five Factors

The NAI establishes five factors to assist businesses in evaluating whether a specific piece of information may be considered “health-related sensitive personal information,” taking into account the varying definitions across U.S. jurisdictions. These factors are described below.

Factor 1: Source

The first factor examines where the data originates. Source matters because it shapes expectations and risk. Data that originates in a context explicitly tied to healthcare or medical decision‑making may carry additional risk and sensitivity compared to data collected in a broader consumer context, particularly where attenuated from medical services or diagnoses. However, source alone is insufficient to identify health data. Consumer‑generated data, browsing data or purchase data could still be considered health data depending on other factors, including how it is combined or used. The source inquiry therefore serves as a starting point, not a safe harbor.

Factor 2: Content

The second factor focuses on what the data actually reveals, either explicitly or implicitly. Certain information such as diagnoses, treatment details, prescription data or identifiable medical conditions may clearly point toward health status. Other information may only become health‑related through additional inference.

This factor acknowledges that sensitivity falls along a spectrum. Data that directly identifies a specific health condition is generally more sensitive than data that merely correlates with health outcomes or interests. For example, the difference between buying low-sugar foods and buying diabetes medication is significant. Importantly, the Factor Analysis acknowledges that inferred health information can be just as sensitive as observed data, particularly where inferences are individualized, accurate and precise. However, by emphasizing content rather than labels, the analysis under Factor 2 encourages organizations to look beyond how data is categorized internally and instead evaluate what it would reasonably communicate about an individual if used or disclosed.

Factor 3: Use

The third factor acknowledges that even relatively benign data can take on heightened sensitivity if it is used for health‑related targeting, personalization or decision‑making. In the Target pregnancy score case, for example, common purchase trends, when combined and analyzed, were used to identify and direct ads to apparently pregnant shoppers. Taken individually, however, those same data points lead to a far less clear outcome. This use‑based perspective reflects how regulators have increasingly framed their analyses. Data that might otherwise appear innocuous can raise concerns when deployed in ways that meaningfully affect individuals’ access to health information, products or services. Conversely, certain uses may reduce sensitivity, such as when individually sensitive data is aggregated in applications that do not meaningfully impact a specific person.

See “Takeaways From FTC’s Orders Targeting Digital Health Companies” (May 8, 2024).

Factor 4: Consumer Expectations

The fourth factor centers on the consumer perspective, asking what a reasonable consumer would understand about the collection and use of the data based on the environment in which the data was collected, disclosures made to the consumer and certain norms associated with the product or service. In instances where a consumer would reasonably expect a heightened level of privacy, such as in relation to certain vulnerable classifications like reproductive health or pregnancy, data is more likely to be viewed as sensitive. However, it is important to note that consumer expectations are dynamic and highly fact-dependent, meaning this factor could weigh in favor of data being considered health-related in some circumstances but not in others.

Factor 5: Harm

The final factor examines the potential for harm that could result from the collection, use or disclosure of the information. Harm may be tangible or intangible and can include financial loss, discrimination, stigma or emotional distress. Even in instances where the risk under the previous four factors is low, the fifth factor is a reminder that sensitivity is ultimately about impact and optics.

None of the NAI’s factors is dispositive on its own. Rather, organizations should evaluate how the factors interact when applied to their specific facts, taking into account their own risk tolerance.

Conducting an analysis under the NAI Factor Analysis may not provide black-and-white answers in many cases. Instead, organizations might find it more useful to classify data as low, medium or high risk to guide business decisions and risk mitigation strategies. This holistic analysis fits with how many organizations manage real‑world compliance risks and helps them avoid being underinclusive or overinclusive in their data classification strategy.
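For privacy teams that operationalize this kind of tiering in a data inventory tool, the approach can be sketched in code. The sketch below is purely illustrative and is not part of the NAI Factor Analysis: the 0–2 scoring scale, the thresholds and the example scores are all hypothetical assumptions, and any real implementation would need to reflect an organization’s own risk tolerance and counsel’s judgment.

```python
# Illustrative sketch only: tallying the five factors discussed above
# (source, content, use, consumer expectations, harm) into a low/medium/high
# risk tier. The 0-2 per-factor scale and the tier thresholds are
# hypothetical assumptions, not part of the NAI framework itself.

FACTORS = ("source", "content", "use", "expectations", "harm")

def classify_risk(scores: dict) -> str:
    """Map per-factor scores (0 = low concern, 2 = high concern) to a tier."""
    if set(scores) != set(FACTORS):
        raise ValueError(f"expected scores for all factors: {FACTORS}")
    total = sum(scores.values())  # maximum possible total is 10
    if total >= 7:
        return "high"
    if total >= 4:
        return "medium"
    return "low"

# Hypothetical example: browsing data about running shoes,
# used only for generic (non-health) retargeting.
example = {"source": 0, "content": 1, "use": 1, "expectations": 0, "harm": 1}
print(classify_risk(example))  # -> low
```

A simple additive score like this cannot capture how the factors interact – for instance, a single high harm score might warrant a "high" tier regardless of the total – but it can give reviewers a consistent starting point and a record of how each classification was reached.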

Practical Steps for Mitigating Health Data Risks

Just as no singular NAI factor stands alone, a factors-based analysis is not, on its own, a health data privacy program. Integrating the NAI Factor Analysis into existing privacy programs and processes will help make the framework more useful to business clients and provide evidence to demonstrate compliance with health data privacy regulations. To this end, below are some suggestions for incorporating the NAI Factor Analysis into a broader privacy program.

Update Data Review Policies

Organizations that decide to utilize the NAI Factor Analysis will also need to decide who will be responsible for applying it, which data will be analyzed and when. These details will look much different for a publisher that is managing the placement of advertising pixels on its website than they will for a data vendor that is classifying thousands of variables. Updating relevant policies will help clarify roles and responsibilities and demonstrate a systematic approach toward determining what is and is not health data.

Conduct Training

Applying the NAI Factor Analysis in practice involves a significant amount of judgment and discretion. If multiple employees are responsible for analyzing an organization’s data, it is possible that they will reach divergent results on similar data. Providing training can help establish consistency from the outset.

Repeat Reviews

Sensitive data reviews – whether or not they use the NAI Factor Analysis – are not one-time efforts. Statutory definitions of health data, industry practices and consumer expectations will continue to change. These are all ingredients in the NAI Factor Analysis, and the outcomes of such an analysis will likely change over time. Setting a cadence for review will depend on several considerations, including how much data is at issue (e.g., a simple mobile app versus a dataset containing thousands of variables), how rapidly an organization’s data changes, and what proportion of the organization’s data sits in a middle category of risk and is most susceptible to changes in industry and consumer norms.

See “Healthline’s Record-Setting CCPA Settlement Offers Lessons on Transparency and Opt-Outs” (Aug. 6, 2025).

 

Aaron Burstein is a partner at Kelley Drye. He advises clients on complex privacy, data security and consumer protection issues, drawing on deep legal knowledge and extensive government experience. He provides practical guidance on compliance with federal and state privacy, information security and marketing laws, helping companies adapt business practices to manage risk amid evolving regulatory requirements, including under the CCPA. He also counsels on privacy and security issues arising in transactions and emerging technologies such as connected vehicles and drones. Before entering private practice, Burstein served in the FTC’s Division of Privacy and Identity Protection and as senior legal advisor to Commissioner Julie Brill, where he helped shape U.S. and international privacy policy, rulemaking and enforcement.

Meaghan Donahue is an associate at Kelley Drye. She advises clients on privacy law with a particular focus on adtech, marketing practices, and compliance with state privacy laws and federal regulatory frameworks. She assists with privacy policies, contracts and disclosures, and has developed practical resources and guidance addressing sensitive and health data issues. Before joining Kelley Drye, Donahue worked at the NAI, where she developed self‑regulatory standards for online advertising, focusing on compliance and policy initiatives.

Judicial Decisions

Perspectives From Judges on Privacy Protections and Impact of AI in Federal Courts


What privacy issues keep federal judges up at night? What efforts is the judiciary taking to understand and integrate AI while addressing issues like AI-generated errors and deepfakes? To answer these questions and more, Michael Sussmann, a partner at Fenwick, spoke to Chief Judge James Boasberg of the U.S. District Court for the District of Columbia and Judge Allison Burroughs of the U.S. District Court for the District of Massachusetts at the IAPP Global Summit 2026. The judges shared their perspectives on balancing litigants’ concerns over privacy and confidentiality with the public’s right to open judicial proceedings; issues around government surveillance; how courts approach the complex technological issues they must address; how AI, other technological changes and cybersecurity concerns are affecting litigation and courts; and issues around class action and multidistrict privacy litigation. This article synthesizes their insights.

See “Litigation Landscape for Cookie and Tracking Technology Claims Brought Under Federal Privacy Statutes” (Jan. 21, 2026).

Balancing Privacy and Open Courtrooms

Evidence presented in the courtroom may contain sensitive medical or financial records or communications from protected relationships, said Sussmann. Courts must balance the desire to maintain litigants’ privacy with “the public’s right to open proceedings,” said Boasberg.

Sealing Records

When all parties to a proceeding agree to seal a document or evidence, it is temptingly easy for a judge to go along – but judges need to step back and determine whether it is appropriate to do so. “Just because [parties] want it sealed doesn’t mean it should be sealed,” noted Burroughs. In some instances, it may be possible to seal only a portion of a document.

Boasberg concurred, even though sealing a document when there is agreement among parties may result in one less dispute for a judge to deal with. The media may seek disclosures and information in high-profile cases, which tends to counterbalance the instinct to seal, he reasoned.

One example of this tension arose in the 2025 Meta antitrust case, which involved many tech companies that sought to protect what they deemed to be confidential business information, recounted Boasberg. It was necessary to clear spectators from the courtroom at many points throughout the trial to discuss those concerns. It was difficult balancing the parties’ desire to protect confidential information with strong public interest in the matter. The media were pressing for access, “even though both sides would have been happy to seal,” he observed.

In the average case, however, generally no one is clamoring for open proceedings, Boasberg added.

In the grand jury context, in the District of Columbia, where all applications to unseal go to the chief judge, explained Boasberg, there is a presumption that the investigations are sealed. However, there have been many cases, such as the Jack Smith investigation of Donald Trump, where the media sought access, he noted.

“The media really can be a gigantic pain in the rear end,” remarked Burroughs. However, they are usually only interested in high-profile matters. “And in those cases, they actually play a really useful role in balancing what the parties want kept private but the public has a right to see.”

Releasing Sealed Records

Courts often receive motions from the media to release sealed materials in matters of current interest, said Boasberg. However, the issue of whether and when to open old sealed cases needs to be addressed. One approach could be for judges to revisit sealed matters after a specified time. Another would be to require counsel to unseal cases or explain why they should remain under seal.

Government Surveillance

Judges routinely review applications for warrants, wiretaps and other forms of government surveillance, said Sussmann. In those cases, the judge is the only potential check on the government.

Statutory Safeguards

“The statutes that regulate things like wiretaps recognize that listening in on somebody’s private communications is one of the most intrusive things you can do,” noted Burroughs. There are many safeguards around the process. For one, the government must make a clear showing of need in a rigorous judicial process before gaining access. Consequently, “I think that, to the extent you guys are worrying about people listening in on your phone conversations, it’s probably not happening,” she remarked. Additionally, before the government can use what it learns from a wiretap, the parties that have been intercepted have the right to seek to suppress the evidence.

For use of wiretap statutes in privacy litigation, see “Lessons From the Trenches: Winning Strategies for Defeating Pen Register Lawsuits” (Jun. 12, 2024); and “Google’s Wiretap Cases Highlight Evolving Privacy Transparency Standards” (Jan. 24, 2024).

Foreign Intelligence Surveillance Court

Judge Boasberg served as chief judge of the Foreign Intelligence Surveillance Court (FISC), which is charged with approving applications for surveillance and other activities in national security matters, noted Sussmann. FISC is composed of 11 judges, all of whom are appointed by Chief Justice Roberts for a seven-year term. Chief Justice Roberts also appoints FISC’s chief judge. Judges on the court are not relieved of any of their other normal duties and do not receive any additional compensation. Each judge is on duty in D.C. for about five weeks each year.

Although FISC is not secret, the work that it does is secret. FISC “has a website” where people can learn what judges are on the court and find some opinions, Boasberg emphasized.

The FISC judge on duty handles all government applications for search or surveillance warrants “seeking foreign intelligence information from someone who is an agent of a foreign power or an agent of international terrorism,” Boasberg explained. The warrants authorize the government to “search or surveil anybody in the United States or any U.S. person anywhere in the world.” Surveillance may include, for example, listening to phone calls, monitoring email, searching a home or installing a device in a car or phone. Permission is typically granted for 90 to 180 days. No permission is needed to surveil someone like Vladimir Putin, who is not in the U.S. and is not a U.S. citizen, he noted.

Although applications are ex parte, the court may appoint a lawyer as amicus to provide an outside view on the propriety of the application, continued Boasberg. There are also amici who assist the court with technical matters. In his experience, the hard calls did not involve whether probable cause existed – they involved determining “what on earth does this [surveillance] technology do and how does it even function?” During his tenure, the primary criticism of FISC was that it was a rubber stamp. Historically, however, FISC has required the government to modify about 20 percent of its applications prior to approval.

People understand why FISC does not reveal targets and what the government is looking for, according to Sussmann. Increased public visibility might be expected where the court makes a legal determination about how the law applies to a new platform or technology. When Boasberg presided, he pressed for more transparency on court decisions. Still, there often was little interest in publishing, noted Boasberg. “If the government comes to me and says, ‘we want to surveil this guy because we think he’s a spy and they submit all the materials,’ I’m just signing off. I’m not writing an opinion explaining, ‘yes, there’s probable cause,’” he said. Opinions are more likely issued in matters involving programmatic surveillance under Section 702 of the Foreign Intelligence Surveillance Act.

See “Emerging Cyber Threats and Defenses” (Jan. 24, 2024).

Technology and the Law   

The Law Will Always Lag Technology

Many statutes are old and it takes time to update them, said Burroughs. For example, an existing law that controls pen registers for monitoring traditional telephone dialing does not address cell site locators. Consequently, the government and judges have extrapolated from the statute. Some judges would like more specific statutory guidance, while others are content with the present state of the law, she observed.

Similarly, no statute addresses placing a locator on a car – and whether that constitutes a “search,” continued Burroughs. But even if it is not a search, “it is a huge invasion of somebody’s privacy to see where their car goes,” she opined. A person may not care if she is tracked going to the grocery store – but certainly might if going to an abortion clinic, for example.

The law “never has kept up and it never will keep up” with changes in technology, said Burroughs. The rules for search and surveillance are derived from the Fourth Amendment, and “you know when that [was instituted],” Boasberg remarked. The explosion in the amount of electronically stored data and the broad implementation of AI have both increased the gap between the law and technology, according to Burroughs. “If you do a search of somebody’s laptop, you’re going to get way more data now than you used to get,” she noted. Additionally, there are so many different types of technology with access to data.

Educating Courts About Privacy-Related Technology

Judges often rely on the parties to educate them on technology, noted Sussmann. Judges in the First Circuit typically do not bring in experts, but nothing prohibits them from doing so, explained Burroughs. The parties will bring in experts when needed. “[I]f you want me to understand [a particular technology that I am not familiar with], you should bring in an expert, and you should talk very slowly, and you should treat me like I’m a very smart second grader,” she remarked.

The mark of a secure judge is the ability to admit not understanding something, Boasberg opined. In Meta, both sides provided him – in open court – with a two-hour tutorial on the app technology involved in the suit.

Counsel should walk the court slowly through the technology, advised Burroughs. They should not presume the court has a scientific background. That is particularly true when presenting to a jury. Judges are “used to being bored,” but juries are not. “You have to remember that there’s an aspect of theater to it and you’re presenting to people. You need to be engaging,” she recommended. Presentations should be interesting and understandable.

Juries are amazing, continued Burroughs. “They really, really, really try to get it right.” Counsel should strive to give juries the tools they need to make good decisions. In civil trials, she permits juries to ask questions. Those questions confirm that attorneys often present material that is beyond the jurors’ comprehension. Even if attorneys are engaging, the presentation must still be very basic.

Judges often deal with subject matter they have had no experience with, said Boasberg. Some, for example, have never been involved in a criminal case. In some cases, a judge may pick up a brief and have no idea what counsel is talking about or what a particular regulation is intended to do. “And so what do I do? I’ll put that brief down and go to the other brief, which is the worst thing you as a lawyer ever want to have happen,” he quipped. “Put your arguments in context and make the technical simpler.”

See “Federal Judge Offers Advice on Litigating Data Privacy, Security Breach and TCPA Class Action Suits” (Apr. 27, 2016).

AI and Future of Litigation

The judiciary is talking about AI “all the time, at every level,” said Burroughs. “The private sector is way ahead of the courts on AI.” For example, private sector versions of Westlaw and Lexis have AI embedded in them. The version provided to courts does not. Six judges in the First Circuit are only now testing trial versions of AI-enabled Westlaw. “We’re talking about it a lot and we have no answers at this point,” concurred Boasberg.

Hallucinations

Burroughs encountered a brief with a hallucinated case citation. She cut the filer some slack because the filer was an overworked immigration attorney. Boasberg had two cases in which “not terribly overworked lawyers” filed briefs filled with hallucinated cites. In one of those cases, he required the attorney to explain how it had happened and how the attorney would prevent it from happening again, and he awarded attorneys’ fees to opposing counsel, who had been forced to respond to the brief.

Notably, two federal judges are known to have issued opinions dealing with hallucinated cites.

It is hard to imagine what rules regarding AI use in the legal system would look like, said Burroughs. A rule might require disclosure of AI use, or a certification that the attorney had confirmed that every citation is real, she proposed.

See “Tool or Third Party? Courts Differ on AI’s Role in Privilege and Work-Product Protections” (Mar. 18, 2026).

AI in Chambers

Going forward, judges are likely to use AI in writing opinions to supplement drafts provided by law clerks, opined Sussmann. AI could lighten workloads and free judges to focus their energies on the matters that need their attention most.

AI may be most useful for repetitive processes, according to Burroughs. For example, for many types of rulings, judges simply cut and paste text to explain a standard of review or the basic law under the Fourth Amendment. On the other hand, AI is not going to help with the matters for which “they’re paying judges the big bucks” – the exercise of judgment. There will be more AI in the judiciary, but it will not take over opinion-writing.

AI might also be helpful in repetitive administrative matters, such as social security and workers’ compensation appeals, where there is a huge backlog of cases, offered Boasberg. If AI were 95‑percent accurate, a litigant might say, “I’d rather take AI at 95 percent than wait three years for a judge.”

Concerns About Evidence

“Photographs and video are incredibly powerful pieces of evidence for juries,” noted Sussmann. “And week by week, the AI is just getting better and better.”

Concern about deepfakes goes beyond photos and videos and includes even documents, said Boasberg. But “it’s not up to the judges to look at those and say, ‘I think this might be a fake photo’ – that’s what you have the adversary system for.” Although a judge may ultimately have to decide whether a piece of evidence is legitimate, it is the adversary system that must bring it to the court’s attention.

See “Examining the Deepfake Landscape and Measures for Combatting Scams” (Sep. 3, 2025).

Data Breaches

In August 2025, the Administrative Office of the U.S. Courts acknowledged that its electronic case management system had been hacked. The Administrative Office is keenly aware of data security issues and working to ensure it protects all court data, said Boasberg. The court system is spending significant money and time to update systems and remediate vulnerabilities, added Burroughs, who spent six years on the IT committee of the judicial conference. “The trouble with government, and the courts in particular, is it’s all about money – and no one’s being funded at 100 percent of what they’re asking for,” she remarked.      

After the Administrative Office breach, the courts established special protocols for highly sensitive documents, which are now held in different places, explained Burroughs. After spending years getting rid of paper, the courts are now accumulating it again for specially sealed documents. But “nobody wants warehouses full of paper again, so we’re trying to deal with it,” she said. Some controls are already in place. For example, wiretaps are automatically destroyed after a specified number of years.

Privacy Litigation

Burroughs has supervised a massive multidistrict privacy litigation involving 300 different defendants. A central challenge in these cases is alleging concrete harm. Consumers “often suffered no harm at all, but are worried about the potential for future harm . . . and you have lawyers who are making a fortune [off of litigating that issue] because you’re talking about literally hundreds of cases,” she noted. Against that backdrop, she said she will not approve any class action settlement that includes a “reverter,” which provides that everyone who joins the class receives a specific amount and any unclaimed funds go back to the defendant. That stance, however, can delay settlement, she acknowledged, as parties may wait to see how a few bellwether cases play out.

More broadly, privacy class actions can arise from a range of alleged conduct, and the underlying facts and harms can vary significantly. For instance, there are different types of data intrusion cases, continued Burroughs. “There’s a difference between a company that is actually engaged in serious malfeasance and a company [that] has lost track of the technology, or maybe been a little negligent about [patching vulnerabilities],” she observed. Additionally, plaintiffs are not all similarly situated. Some may have suffered material losses – but many others have not. Consequently, class actions may not be the most efficient way to resolve privacy issues, and class action reform may be needed.

See “Navigating Evolving Data Breach Litigation and Regulatory Risks” (Aug. 2, 2023).

SEC Enforcement

Small Firms Must Be Ready to Comply With Amended Regulation S‑P

Regulation S‑P (Reg S‑P) requires investment advisers, broker-dealers and other covered firms to safeguard customer information. In June 2024, the SEC amended Reg S‑P to require covered firms to establish written policies and procedures for responding to unauthorized access to or use of customer information, including procedures for providing timely notice to affected customers. The amendments also broaden the scope of information covered and require firms to document their compliance with the updated rules.

The amended Reg S‑P took effect for larger firms on December 3, 2025, and becomes effective for smaller managers – those with less than $1.5 billion in assets under management – on June 3, 2026, said Seward & Kissel partner Casey J. Jennings in a firm program on preparing for compliance with the amendments. Jennings, along with special counsel Erin Galipeau and attorney Katherine A. Agoglia, discussed the steps covered firms should take to ensure compliance with Reg S‑P, including incident response plans and third-party oversight. This article presents the key takeaways from the presentation.

See “What to Know About the Sleeping Giant That Is the SEC’s Amended Reg S‑P” (Dec. 10, 2025).

Critical Compliance Steps

Reg S‑P covers broker-dealers, investment companies and registered investment advisers – including private fund advisers, explained Agoglia. Exempt reporting advisers are not subject to Reg S‑P but are subject to the Consumer Financial Protection Bureau’s Regulation P and the FTC Safeguards Rule.

See “Complying With the FTC’s Amended Safeguards Rule’s New Reporting Requirement” (Jan. 3, 2024).

Identify Covered Data

Reg S‑P is designed to protect nonpublic PI, said Galipeau. Consequently, firms must identify what PI they hold and where they store it – including information held by service providers. Firms should also map how data flows through their organizations. A firm should assess:

  • what PI it receives about customers;
  • from whom it receives the information;
  • what PI the firm and/or its service providers retain; and
  • whether it receives PI from another financial institution about such institution’s customers.

After identifying relevant data, a firm should determine where the data is stored, what protections exist for such data and whether additional protections may be needed, continued Galipeau. The firm should also maintain a record of any contracts with service providers that address storage or protection of PI. Although Reg S‑P does not require data mapping, the first questions examiners will ask are, “How does the data flow through your work environment? How is it used and how is it stored?” she noted.
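The data-mapping questions above lend themselves to a simple structured inventory. The sketch below is purely illustrative – the field names and example values are assumptions for this article, not terms defined by Reg S‑P:

```python
from dataclasses import dataclass, field

@dataclass
class DataMapEntry:
    """One row of a PI data map: a category of customer information the firm holds."""
    pi_category: str              # what PI is received (e.g., account numbers)
    received_from: str            # from whom the firm receives it
    retained_by: list[str]        # firm systems and/or service providers that retain it
    from_other_institution: bool  # received from another financial institution?
    storage_location: str         # where the data is stored
    protections: list[str] = field(default_factory=list)  # existing safeguards

# Hypothetical entry for illustration only
entry = DataMapEntry(
    pi_category="taxpayer identification numbers",
    received_from="investors (subscription documents)",
    retained_by=["fund administrator", "CRM system"],
    from_other_institution=False,
    storage_location="administrator portal",
    protections=["encryption at rest", "role-based access"],
)
```

A firm that keeps its map in this kind of structure can readily answer an examiner’s questions about how data flows through its environment and where gaps in protection remain.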

Adopt Appropriate Policies and Procedures

Galipeau explained that Reg S‑P requires covered entities to maintain policies and procedures reasonably designed to:

  • ensure the security and confidentiality of customer information;
  • protect against anticipated threats or hazards to the security or integrity of customer information; and
  • prevent unauthorized access to or use of customer information that could result in substantial harm or inconvenience to any customer.

A firm must review those policies and procedures periodically and update them as needed to reflect changes in regulations, business practices and technology, added Galipeau. It should document such reviews.

The SEC Division of Examinations is already examining larger firms’ compliance with the amended regulation, noted Jennings. On an initial compliance examination, an examiner would likely ask about the firm’s policies, whether the person responding understood what was in them and what steps the firm took to implement the policies, noted Galipeau.

Many law firms and compliance consultants have created policies and procedures to address the Reg S‑P amendments, and “a lot of them are largely pretty good,” observed Jennings. Advisers that fail to incorporate any of the required elements into their policies and procedures are at risk of SEC action. “The failure to even try is what could lead to an immediate enforcement action,” he cautioned. For less egregious issues, the SEC is more likely to issue a risk alert or other guidance based on examination findings instead of “going straight to aggressive enforcement,” he said.

For example, last year, the SEC issued a settled enforcement order against M Holdings Securities, Inc., an adviser with many independent branches, recounted Jennings. The adviser’s policy was to delegate Reg S‑P obligations to its branch offices, but the branch offices did nothing – and the head office knew it. The adviser was censured; ordered to cease and desist from violating certain provisions of Reg S‑P and Regulation S‑ID; and fined $325,000.

Develop and Implement an Incident Response Program

Amended Reg S‑P requires covered firms to develop, implement and maintain policies and procedures for an incident response program. This will be the biggest compliance hurdle, said Jennings. The overarching requirement is that an adviser’s program must enable it to “detect, respond to and recover from unauthorized access to or use of customer information,” he added. The requirement is premised on “the presumption that it’s not a matter of if you’re going to have a data incident – it’s when,” he observed. The SEC expects firms to plan for the inevitable.

The best way to build a program is to work backwards from a data incident to see how a particular organization will respond, advised Jennings. To that end, an organization can use tabletop or similar exercises. Each organization’s approach will be unique. For instance, a large retail adviser will have a much different approach than a small adviser with a few employees.

Assess, Contain and Control

In the event of an incident, the first call a firm makes should be to outside counsel to preserve confidentiality, suggested Jennings. The next step is to figure out what happened. Reg S‑P requires a firm to detect and assess the risk posed by a breach. Additionally, the firm must contain and control the attacker. A firm may limit further damage by, for example, changing passwords, resetting systems or moving to backups.

Notify Victims

For incidents involving unauthorized use of or access to “sensitive customer information,” amended Reg S‑P requires covered firms to notify the affected individuals. Such information includes “any component of customer information alone or in conjunction with other information, the compromise of which could create a reasonably likely risk of substantial harm or inconvenience to an individual identified with the information,” explained Jennings. Thus, to trigger notification, an incident must compromise more than just a customer’s name or name and address – for example, a name combined with a taxpayer identification number or an account password.

When required, notice must be delivered within 30 days of discovering the incident, continued Jennings. The notice must:

  • be clear and conspicuous;
  • describe the incident, the types of information accessed, when it happened and how long it persisted;
  • include contact information for the adviser;
  • recommend that the individual review his or her account statements and report suspicious activities; and
  • be transmitted in a way reasonably expected to reach the individual.

Reg S‑P does not require a firm that suffers a breach to offer free credit monitoring – “but it’s not a terrible idea,” Jennings added. Notice is not required if a firm determines, after reasonable investigation, that the information is not likely to be used in a way that would cause substantial harm or inconvenience. The firm should, however, document how it made that decision, he said.
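The 30‑day notice clock described above can be tracked mechanically. A minimal sketch, assuming simple calendar-day counting (firms should confirm the counting convention with counsel):

```python
from datetime import date, timedelta

NOTICE_WINDOW_DAYS = 30  # amended Reg S-P: notice within 30 days of discovery

def notice_deadline(discovery_date: date) -> date:
    """Latest calendar date by which customer notice must be delivered."""
    return discovery_date + timedelta(days=NOTICE_WINDOW_DAYS)

# Example: an incident discovered June 3, 2026 requires notice by July 3, 2026
deadline = notice_deadline(date(2026, 6, 3))
print(deadline)  # 2026-07-03
```

Building the deadline into an incident-response tracker helps ensure the investigation and any no-notice determination are documented well before the window closes.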

Advisers typically retain a specialized incident response firm, noted Jennings. The forensic firm can provide a report confirming that no data was compromised and why. In some instances, counsel may be able to provide such a report. Incidents in which individuals are not at risk are usually fairly straightforward, such as a “fat finger” error in which someone mistakenly forwards an email to the wrong party. In those cases, especially when the recipient is a known counterparty or service provider that agrees to delete the email, a simple memo to file should suffice.

See “What Regulated Companies Need to Know About the SEC’s Final Amendments to Regulation S‑P” (Jul. 24, 2024).

Oversee Service Providers

Conduct Due Diligence and Monitoring

Another challenging new requirement imposed by amended Reg S‑P is the duty to oversee third parties that hold customer information, continued Jennings. It entails:

  • initial due diligence;
  • ongoing monitoring; and
  • having a good faith belief that the service provider will protect the information in its control and notify the adviser in the event of an issue.

Reg S‑P is not very prescriptive, commented Jennings. SEC staff recognized that the facts and circumstances will determine how an adviser approaches data security. Thus, due diligence and monitoring can take many different forms. For example, due diligence could include:

  • completion of a due diligence questionnaire by the provider;
  • review of the provider’s information protection policies; and
  • assessment of its data security practices.

Monitoring could, for example, involve an annual certification process or, for critical service providers, audit rights and onsite visits. Due diligence is not a one-time exercise. “It must be periodic,” Jennings emphasized.

Reg S‑P does not prescribe any contractual requirements, continued Jennings. It is, however, always good practice to have a written agreement with each third party that holds firm data and review those agreements periodically. Although not always possible, it is also best practice to add data protection requirements to third-party contracts.

See our two-part series on privacy and security provisions in vendor agreements: “Assessing the Risks” (Mar. 17, 2021), and “Key Data Processing Considerations” (Mar. 24, 2021).

Ensure Incident Notification by Third Parties

Reg S‑P requires a service provider to notify an adviser within 72 hours after the service provider suffers a data breach, added Jennings. Although Reg S‑P does not require the obligation to be written into a contract, “try as hard as you can to get a 72‑hour agreement in place” and document the efforts to do so, he advised. Of course, it will be very hard to get a written assurance from a huge organization like Microsoft or Amazon. Some advisers that are unable to secure a written assurance use a negative consent letter that sets out the firm’s requirements and says, “If you don’t tell us otherwise, we’re just going to assume that you do all this stuff,” he said. A firm can also rely, to some extent, on the track record and reputation of a company like Microsoft for keeping data secure. The efforts an adviser should make will vary by service provider.

See “Key Terms and Negotiation Issues in Data Processing Agreements” (Sep. 13, 2023).

Maintain Required Records

According to Agoglia, an adviser must maintain records of:

  • its policies and procedures addressing:
    • administrative, technical and physical safeguards for protecting customer information;
    • service provider oversight; and
    • proper disposal of customer information;
  • contracts with relevant service providers; and
  • documentation of any incidents involving unauthorized access to or use of customer information, including:
    • any response and recovery;
    • any notices sent to customers; and
    • any determination that notice was not required.

Advisers must retain the required records for five years, with the records being easily accessible for the first two years, noted Agoglia. Broker-dealers must maintain them in an easily accessible place for three years.
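The adviser retention schedule above can be expressed as two dates per record. A hedged sketch of the adviser periods only (simple year arithmetic; February 29 edge cases and the broker-dealer schedule are not handled):

```python
from datetime import date

def adviser_retention_dates(record_date: date) -> dict:
    """Adviser retention schedule per the periods above: five years total,
    with the record easily accessible for the first two."""
    return {
        "easily_accessible_until": record_date.replace(year=record_date.year + 2),
        "retain_until": record_date.replace(year=record_date.year + 5),
    }

# Example: a record created June 3, 2026
dates = adviser_retention_dates(date(2026, 6, 3))
print(dates["retain_until"])  # 2031-06-03
```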

See “Compliance Challenges in Records Management” (Nov. 1, 2023).

Adhere to Privacy Notice Requirements

Amended Reg S‑P codifies existing guidance on annual privacy notice requirements, noted Jennings. A firm is not required to send an annual privacy notice if it only shares PI with unaffiliated parties in ways that do not require an opt-out and has not changed its privacy policies regarding PI since its most recent privacy notice, said Agoglia. If either of those criteria changes, the firm must resume providing the notice.

See “Disney Settlement Offers a Playbook for CA AG’s Opt-Out Expectations” (Mar. 11, 2026).

Best Practices

To ensure compliance with amended Reg S‑P, firms should, according to Galipeau:

  • conduct employee training;
  • test incident response procedures;
  • conduct risk assessments;
  • ensure service providers understand the new requirements; and
  • take advantage of the SEC’s small entity compliance guide and outreach videos, which are available online.

See “SEC Staff Discuss Regulation S‑P Amendments and Related Examination Processes” (Oct. 15, 2025).