SEC Enforcement

Six Steps to Address the SEC’s Trump-Era Cyber Enforcement Priorities


The SEC has vowed its cyber police will still patrol the markets during President Trump’s second administration. On February 20, 2025, the regulator announced the creation of the Cyber and Emerging Technologies Unit (CETU), which replaces the Crypto Assets and Cyber Unit and will have seven new priorities.

Following the announcement, CETU seems to have handed fresh white hats to its attorneys and fraud specialists. Their mission is “to protect retail investors from bad actors in the emerging technologies space” and “to focus on combatting cyber-related misconduct,” the SEC declared.

Fraud enabled by AI and machine learning tops the list of seven priorities. Giving AI the lead billing fits the current technology moment, observed Debevoise partner Erez Liebermann. Deepfakes and other AI deceptions have turbocharged investment scams. “There is a tremendous ability now with AI to generate messages to mislead investors,” including in connection with pump-and-dump schemes that lure investors into bidding up a stock, he told the Cybersecurity Law Report.

This article discusses cyber and AI securities enforcement on the horizon, and distills six steps that companies can consider to prepare for CETU’s new priorities, with insights from SEC enforcement specialists at A&O Shearman, Davis Polk, Debevoise, Katz Banks Kumin, and Morrison & Foerster.

See “SEC Stresses Cybersecurity, AI and Crypto in Its 2025 Exam Priorities” (Dec. 18, 2024).

Six of Seven Priorities Highlight Fraud

The wording of CETU’s 2025 priorities resembles the Cyber Unit’s 2017 priorities announcement, from the first Trump administration, and CETU will be led by Laura D’Allaird, who served in the unit in 2017.

“The new unit returns to some of the core day-one objectives in 2017 that wound up getting pushed aside by crypto issues in subsequent years,” like stopping hackers from manipulating the markets, observed Davis Polk partner and former SEC Cyber Unit Chief Robert Cohen.

CETU also will focus on other classic issues. “Investment scams that use the promise of the latest or trendiest industry to attract retail investors is a perennial SEC enforcement issue,” Cohen told the Cybersecurity Law Report. Solar energy, crypto investments and cannabis are examples of areas that drew hype during the past three administrations, he noted.

Fraud Committed With Emerging Technologies

When announcing the priorities, Acting Chairman Mark Uyeda said CETU will facilitate capital formation and market efficiency by rooting out those who seek to misuse innovations. To achieve that goal, CETU seems set to target “the types of fraud that would undermine confidence in AI generally and broadly, rather than ramping up an industry-wide enforcement of AI governance and risk management,” A&O Shearman partner Andrew Tannenbaum told the Cybersecurity Law Report.

So far, the SEC has not charged any company for mild puffery about AI; its cases to date have involved outright falsity. In 2024, the SEC settled with Delphia and Global Predictions, each of which falsely claimed to use AI-enhanced tools in its customers’ investment processes when it did not.

In another 2024 AI-washing enforcement action, still in litigation, the SEC charged QZ Asset Management with falsely stating that its AI analytics would bring customers exceptional investment returns, as part of a scheme that allegedly involved multiple other fraudulent and deceptive practices. The case continues in South Dakota federal court, awaiting service on defendants in China.

With pressure rising for companies to display AI prowess, the SEC will likely bring more AI washing cases in coming years. For such a case to move forward, however, Cohen noted, “there would need to be strong evidence of fraud and a real intent to deceive about something that was clearly material to investors.”

See “SEC Enforcement Actions Target ‘AI Washing’” (May 22, 2024).

Fraud Related to Social Media

Social media ventures continue to attract both investors and fraudsters. For example, in 2024, the SEC charged Abraham Shafi, the founder and former CEO of Get Together Inc., the privately held startup behind the social media platform IRL, with fraud for raising $170 million while hiding from investors the company’s extensive spending to pump up the platform.

Hacking to Obtain Material Nonpublic Information and Takeovers of Retail Brokerage Accounts

In 2024, the SEC charged a U.K. hacker with allegedly infiltrating the computer systems of five public companies to grab material nonpublic information and trade the stock ahead of the companies’ public earnings announcements, reaping approximately $3.75 million in profits.

When probing hacks of SEC-regulated entities and public companies, CETU primarily will be “looking through the bad-guy lens, as opposed to looking first at what companies could have done to prevent the hack or fraud,” Liebermann posited. The prior administration at times emphasized the latter concern.

CETU’s focus on the “bad-guy” aspects of “data theft and hacking suggests it wants to prevent interference with deals that might spur growth and innovation,” which aligns with the Trump administration’s goal of boosting U.S. companies’ competitive position in AI and cryptocurrency, Tannenbaum observed.

See “Present and Former SEC Officials Discuss Strategy, Testimony, Proffers and Negotiations” (Mar. 19, 2025).

Fraud Involving Blockchain Technology and Crypto Assets

In 2025, CETU’s priority is to stop blockchain “fraud.” In 2017, it more expansively targeted “violations” involving distributed ledger technology and initial coin offerings.

Commissioner Hester Peirce lamented in March 2025 that many scams are proceeding under crypto’s name. She stressed that the SEC will seek to shut those down to further facilitate a healthy investing market around genuine crypto offerings. In 2023, the SEC brought charges against Terraform Labs and its founder, Do Kwon, for orchestrating a multi-billion-dollar crypto asset securities fraud that incorporated misleading and inadequate statements about an algorithmic stablecoin and other crypto asset securities. In 2022, the SEC charged FTX and its CEO, Samuel Bankman-Fried, with securities fraud for myriad deceptions.

Peirce is chairing a Crypto Task Force that will issue guidelines to enable a robust crypto market with accountability, so most observers expect a narrowly focused SEC effort against crypto bad actors.

See “Protecting Against Crypto Theft” (Aug. 10, 2022).

Compliance With Cyber Regulations

“Regulation S‑P, Regulation S‑ID are still out there,” among many other requirements, Liebermann pointed out. Regulation S‑P, Privacy of Consumer Financial Information and Safeguarding Personal Information, requires broker-dealers, investment companies, registered investment advisers, funding portals and transfer agents to follow certain procedures when handling consumers’ data. Regulation S‑ID, Identity Theft Red Flags, concerns financial institutions’ identity verification efforts. Companies still must keep updated policies and procedures addressing these and other rules, he urged, adding, “there will still be examinations relating to them. They are still in the priorities of the SEC.”

See “What Regulated Companies Need to Know About the SEC’s Final Amendments to Regulation S‑P” (Jul. 24, 2024).

Steps to Take

With CETU formed and its priorities announced, companies can take the following six steps to prepare for enforcement.

Review the Language of AI Disclosures

Amid market pressure for companies to show their AI capabilities, businesses “should not be aspirational in the way that they describe their AI use cases relative to how they actually use AI,” Debevoise partner Kristin Snyder recommended. Investment advisers have begun disclosing their AI use cases more often in Form ADV filings to put investors on notice of risks, she reported.

Disclosures about AI usage, Debevoise partner Charu Chandrasekhar added, will attract the most SEC scrutiny “when the disclosures involve – for instance – an investment adviser’s investment process or, for a public company, when the disclosures are central to the description of the firm’s business model.” For financial institutions, this might encompass AI used for portfolio management, valuation or data extraction, or AI use that is at the core of research on the selection or approval of investments, she said.

“The SEC obviously is very keenly focused on AI-related deception, and this scrutiny will be administration neutral, carrying over into the latest administration,” Snyder highlighted.

To investigate AI disclosure shortcomings, the Commission can deploy technological tools to crawl through filings to find AI buzzwords. “The SEC also has a pretty wide and broad legal toolkit to ground an action if there are false and misleading statements that a firm makes about its AI use cases,” including Section 206 of the Investment Advisers Act and the Marketing Rule, Snyder said.
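
By way of illustration only, the kind of buzzword screen described above can be quite simple. The following minimal sketch, in Python, flags sentences in disclosure text containing AI-related terms for human review; the keyword list and sample text are assumptions for illustration, not a description of the SEC’s actual tooling.

```python
import re

# Hypothetical buzzword list for illustration; any real SEC tooling and
# its search terms are not public.
AI_BUZZWORDS = [
    r"\bartificial intelligence\b",
    r"\bmachine learning\b",
    r"\bdeep learning\b",
    r"\bAI[- ]driven\b",
    r"\bpredictive analytics\b",
]

def flag_ai_claims(filing_text: str) -> list[tuple[str, str]]:
    """Return (pattern, sentence) pairs flagged for human review."""
    hits = []
    # Crude sentence split; production tools would use a real parser.
    for sentence in re.split(r"(?<=[.!?])\s+", filing_text):
        for pattern in AI_BUZZWORDS:
            if re.search(pattern, sentence, re.IGNORECASE):
                hits.append((pattern, sentence.strip()))
    return hits

sample = (
    "Our proprietary AI-driven platform selects every investment. "
    "Advisory fees are described in Item 5."
)
for pattern, sentence in flag_ai_claims(sample):
    print(f"{pattern}: {sentence}")
```

A firm can run the same kind of screen over its own filings and marketing materials to identify statements needing substantiation before the staff asks about them.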

See our two-part series on the SEC charging four companies for misleading cyber incident disclosures: “New Expectations?” (Nov. 20, 2024), and “Lessons on Contents and Procedures” (Dec. 4, 2024).

Review Escalation Policies for AI and Cyber

Once started, SEC investigations often involve reviewing a company’s incident response policies. “Information flow needs to be prioritized” because company decision-makers need to quickly learn when there is a situation with AI or a hack, Morrison & Foerster partner Haima Marlier told the Cybersecurity Law Report. “The most protective compliance measure that the company can take now is to have a robust, well-understood, functioning escalation process to show the SEC,” she stressed.

Cyber incidents carry well-known reputational and operational risks. But the market remains inexperienced with the risks of AI incidents. “There are just so many different technologies out there now,” Marlier noted. Although an AI problem may not ultimately pose a material risk financially or reputationally, companies should consider centralizing AI incident information to help ensure escalation to leaders. “Incident information flow is so important with emerging technology,” she said.

Keep Current With AI Governance Standards and SEC Risk Alerts

The SEC has long expected a company’s compliance program to cover all of its risk factors and be tailored to its business. But AI governance standards are still emerging. “It is good to be mindful or aware of the governance principles in the [National Institute of Standards and Technology] AI Risk Management Framework [NIST AI RMF] because the staff potentially could take those into consideration,” as they have with NIST’s cybersecurity framework, Snyder noted.

The NIST AI RMF, Chandrasekhar observed, “can be a useful guide in designing AI governance, policies and procedures. But, ultimately, companies and firms must design programs that they can implement in practice, as opposed to adopting a blanket set of AI standards without thinking fully through implementation or applicability.”

Important AI governance actions for SEC compliance include:

  1. Having an Inventory of Where AI Is Being Used: “Just knowing where AI is deployed across the business can be pretty challenging for a lot of firms and companies,” Chandrasekhar reported. (A minimal sketch of such an inventory appears after this list.)
  2. Establishing an Approval Process for AI That Works for the Organization and Is Right-Sized for Its Scale of AI Development and Use: One goal for compliance teams, Snyder advised, is sufficiently understanding the organization’s AI use such that its compliance team can “be comfortable with variations of a particular use case without having to go back through a full approval process.”
  3. Reviewing the Use of AI for Meeting Notes: Many employees have embraced AI summary tools to generate meeting notes for non-attendees. However, the AI’s outputs might trigger companies’ regulatory recordkeeping obligations.
  4. Following the Policies As Written: “The SEC has used its existing ruleset, including the Compliance Rule, to ensure that registrants are in compliance with any AI policies and procedures that they have,” Snyder said.
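
On the first item, a use-case inventory need not be elaborate to be useful. The following is a minimal, hypothetical sketch of the kind of structure a compliance team might track; the fields and example entries are assumptions, not a regulatory template.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI use-case inventory."""
    name: str
    business_unit: str
    vendor_or_internal: str
    handles_customer_data: bool
    approved: bool
    approved_variations: list[str] = field(default_factory=list)

# Illustrative entries only.
inventory = [
    AIUseCase("Meeting summarizer", "Firmwide", "Vendor X (assumed)",
              handles_customer_data=False, approved=True,
              approved_variations=["internal calls", "vendor calls"]),
    AIUseCase("Portfolio screening model", "Research", "Internal",
              handles_customer_data=True, approved=False),
]

# Simple compliance queries the inventory enables.
unapproved = [u.name for u in inventory if not u.approved]
customer_data_uses = [u.name for u in inventory if u.handles_customer_data]
print("Awaiting approval:", unapproved)
print("Touches customer data:", customer_data_uses)
```

Tracking approved variations directly in the inventory also supports the second item: compliance can check whether a new deployment falls within an already-approved variation before triggering a full review.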

See our two-part series on emojis and video communications: “The Next Frontier of SEC Scrutiny?” (Oct. 2, 2024), and “Compliance Practices to Overcome Recordkeeping Challenges” (Oct. 9, 2024).

Review AI Treatment of Customer Information

Registrants are required to maintain policies and procedures to safeguard customer information. The Division of Examinations’ most recent priorities list, from December 2024, includes probing how “firms protect against loss or misuse of client information that may result from use of third-party AI models or tools.” Thus, the growing adoption of AI agents to query customer management records requires careful review.
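
One concrete safeguard in this area is masking identifiers before customer records are passed to a third-party model. The sketch below is a minimal, hypothetical illustration of that step; the regex patterns are assumptions, and production-grade PII detection requires dedicated tooling.

```python
import re

# Illustrative patterns only; real PII detection needs dedicated tooling.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{10,16}\b"), "[ACCOUNT]"),
]

def redact(record: str) -> str:
    """Mask obvious identifiers before a record leaves the firm."""
    for pattern, token in REDACTIONS:
        record = pattern.sub(token, record)
    return record

query = "Summarize complaints from jane@example.com, SSN 123-45-6789."
print(redact(query))  # Summarize complaints from [EMAIL], SSN [SSN].
```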

Katz Banks Kumin partner Matthew LaGarde, who represents cybersecurity whistleblowers, told the Cybersecurity Law Report that he is aware of company insiders who reported their companies’ deployment of “AI products without adequate safeguards to ensure that the private information those products ingest is not widely and inappropriately disseminated to other parts of the company.”

Knowledgeable data engineers could report their company’s shortcomings to the SEC even if they contributed to the misconduct, LaGarde added, “because the commission does not prohibit those who participated in wrongdoing from getting a whistleblower award, depending on the nature and extent of their participation.” He urged such insiders to consult both defense and whistleblower attorneys before proceeding.

See “What to Know (and Do) About DOJ’s Efforts to Identify and Prosecute Cybersecurity Fraud Under the False Claims Act” (Oct. 30, 2024).

Use Examinations to Gain Feedback About Governance

The Division of Examinations conducted an “AI sweep” in 2023 to learn more about companies’ uses of AI for financial operations. Examiners “see companies’ practices across the industry and can see what the norms are for firms, and what practices are outliers,” Cohen pointed out.

“There’s a good chance that the Commission’s leaders will see examinations as a collaborative mechanism to help the industry identify areas where there might be room for improvement or enhancements,” Cohen noted. “Companies should try to have some engagement with the examiners about what the examiners are seeing across the industry. The registrant may have an opportunity to adopt those suggestions without the threat of a big penalty or a public enforcement case,” he said.

Companies should also “follow the risk alerts that the SEC puts out periodically” to gain AI guidance, Cohen recommended.

Update Cyber Policies and Procedures

The SEC’s examiners and CETU attorneys have a long history of enforcing cyber regulations, but the administration has made a significant policy change: CETU attorneys now need the commissioners’ approval, granted by vote, to issue most types of enforcement subpoenas. Because of that constraint on CETU, the Division of Examinations may play a more central role. Whether for AI or cyber issues, “compliance is now incredibly important because routine examinations and for-cause examinations can be a referral source to the Division of Enforcement,” Marlier noted.

“The SEC has been very clear over the years that cyber compliance is not a check-the-box approach, that they expect it to be well-tailored” to the organization, Liebermann emphasized. Problems arise when “companies take essentially off-the-shelf programs, change words, and assume that it fits them,” he cautioned. Instead, when reviewing and updating cyber policies and procedures, “take the compliance program from its beginning point and evaluate these rules as they relate to the company,” he advised.

Account takeovers are (again) a rising problem that the SEC is watching, which is why they are called out in the priorities, Liebermann continued. “The fraudsters have figured out how to get around multi-factor authentication. They have figured out social engineering, largely with the help of AI, to do a better job on the phone calls with help desks and others, so account takeovers are coming back, maybe even to an all-time high,” he warned.

Benchmarking

Data Clean Rooms and De-Identified Data Are Among Concerns in Navigating State Privacy Laws


Organizations may incorrectly believe they can avoid compliance with state data privacy laws by de-identifying data or running it through so-called “data clean rooms” (DCRs), according to the speakers at a program dissecting the results of the 2025 State Privacy Law Survey by the Interactive Advertising Bureau (IAB). In addition to DCRs and de-identification, the survey also assessed privacy professionals’ views on, and experiences with, inferring sensitive personal information (SPI), private suits under wiretapping statutes, treatment of minors’ data, data minimization, issues involving consent and disclosures, and third-party due diligence. This article synthesizes the associated survey findings and insights from industry experts at Frankfurt Kurnit Klein & Selz, IAB, Kelley Drye & Warren, and Ketch.

See “Measures for Complying With 19 (and Counting) State Privacy Laws” (Jun. 26, 2024).

Survey Demographics

Respondents included publishers (43%), some of which may also provide adtech services; adtech providers (27%); advertisers, brands and their agencies (17%); and entities that serve other roles, such as law firms and consultants (13%). Most individual respondents were either privacy lawyers or lawyers whose work touches the privacy field, according to Arlene Mu, assistant GC at IAB.

See “Advertising Opt‑Outs Drive New Privacy Strategies in 2025” (Dec. 18, 2024).

Inferring SPI

State privacy laws subject SPI to heightened regulatory review, noted Mu. Seventy-eight percent of respondents are of the view that non-sensitive data remains non-sensitive unless it is used for inferring sensitive information. Most respondents who opined on the issue have the same perspective with respect to browsing data under the Washington My Health My Data Act (MHMDA). Additionally, 78% of respondents believe opt-in consent is needed when inferring SPI from non-SPI.

Until recently, whether inferences drawn from non-sensitive data could be considered sensitive data was an open question, but regulators have now taken a stance, noted Andrew Folks, associate at Frankfurt Kurnit Klein & Selz. The California Privacy Protection Agency (CPPA) highlighted in its settlement with data broker Background Alert that companies can use inferences to target sensitive groups for improper purposes. Under the CCPA regulations, a company is not required to provide notice of the right to limit use of SPI “unless [it is] actually inferring characteristics about that SPI,” Folks added. The proposed American Privacy Rights Act and guidance on browsing data from the U.S. Department of Health and Human Services take a similar approach.

The growing consensus on inferences reflects both regulatory guidance and the results of several recent enforcement actions, said Aaron Burstein, a partner at Kelley Drye. Accordingly, to protect themselves from regulatory scrutiny, organizations should have policies and controls substantiating that they are not drawing inferences from non-sensitive data.

Companies are interested in tools to identify inference-related risks as they decide whether and how to use SPI, noted Ketch co‑founder and head of product Max Anderson.

The survey results on SPI reflect what Burstein has seen in practice. Broad definitions of SPI and health information have been driving companies to take national approaches to opt‑in consent or avoid SPI in connection with advertising. With the exception of MHMDA, there is a definite trend toward adopting a national definition of SPI. Because MHMDA is so broad, many companies are choosing to treat that separately, explained Burstein.

See “California’s Delete Act Enforcement Sweep Takeaways” (Apr. 2, 2025); and “Addressing the Operational Complexities of Complying With the Washington My Health My Data Act” (Apr. 3, 2024).

Data Clean Rooms Not a Panacea

A DCR is a controlled environment into which multiple parties may contribute data for analytics, measurement, audience profile augmentation and retargeting, Mu explained. DCRs may employ encryption, hashing or other privacy-enhancing technologies. However, a majority of respondents acknowledged that contributing data to a DCR does not automatically de-identify that data, she noted.
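
To make the mechanics concrete, the following minimal sketch illustrates one common privacy-enhancing step: salted hashing of match keys before contribution, so parties can join records without exchanging raw emails. The salt and field names are assumptions. As the respondents acknowledged, this pseudonymizes the data but does not by itself de-identify it, since anyone holding the salt and a list of emails can reproduce the keys.

```python
import hashlib

# Hypothetical shared salt agreed between DCR participants so that the
# same email hashes to the same key in both contributions.
SHARED_SALT = b"example-salt-rotate-in-practice"

def match_key(email: str) -> str:
    """Salted SHA-256 of a normalized email, used as a DCR join key."""
    normalized = email.strip().lower().encode("utf-8")
    return hashlib.sha256(SHARED_SALT + normalized).hexdigest()

# Each party contributes hashed keys plus its own attributes.
advertiser_row = {"key": match_key("Jane@Example.com"), "purchased": True}
publisher_row = {"key": match_key("jane@example.com "), "saw_ad": True}
assert advertiser_row["key"] == publisher_row["key"]  # rows join in the DCR
```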

Most respondents consider DCR providers to be service providers. For example, 70% of respondents believe that a DCR is a service provider when used for measurement and analytics. Respondents also treat data contributors as third parties and transfers of data through the DCR as data sales. Those survey results are consistent with Anderson’s experiences. Consequently, his customers that use DCRs seek to ensure that consent signals are delivered to the DCR so they can be operationalized alongside the contributed data.

The survey results also reflect Burstein’s experience. There has been significant refinement in how organizations think about DCRs. They focus on what data goes into a DCR, how it is combined and what goes out to the contributors, he shared.

At first, some companies saw DCRs as “magic laundromats for data,” observed Anderson. They believed that putting data into a DCR reduced or eliminated privacy obligations. Those views have changed.

“There are no silver bullets here,” Burstein cautioned. Like other new technologies, DCRs were first seen as a panacea, added Folks. Many people – including many advertising lawyers – still assume that a DCR solves all concerns over PI. An organization should always consider whether a DCR is acting as a service provider or a third party, as well as the associated obligations.

See “Are ‘Privacy-First’ Clean Rooms Safe From Regulators?” (Mar. 23, 2022).

True De-Identification Generally Not Possible

There is ongoing debate over what satisfies obligations under state law to de-identify data, noted Mu. More than three-quarters of respondents (78%) do not think data can be de-identified in any use cases. About one-fifth think it is possible in measurement and analytics. A low percentage of respondents think de-identification is possible in audience profile augmentation and/or targeting.

“De-identification is hard to achieve,” Burstein said, advising that organizations should focus on the “linkability” of data. “If a use case involves linkage downstream or enrichment of data with a third-party source, no matter what information might be within a specific company’s control to link out to identifiers . . . I think it’s very hard to make the case that that data is truly de-identified,” he cautioned. On the other hand, if data goes into an analytics program and cannot be matched back to something else, that provides the greatest comfort and is most consistent with what regulators expect.

True de-identification is hard to achieve, Folks concurred. Even so, the survey results suggest that organizations may be committing publicly not to re-identify data and requiring recipients not to do so – which may be “good enough for regulators.” They are making the effort using the available – though imperfect – technologies.

More than one-quarter of respondents use or plan to use differential privacy to de-identify data. Roughly one-fifth use or plan to use synthetic data and/or “k-anonymity.” A handful use “perturbation” to de-identify data.

Whether or not a particular de-identification technology satisfies applicable law is less of a technology question and more of a legal question, explained Anderson. In his experience, most companies use differential privacy for purely analytical purposes. In those instances, they are reaching a complete “state of non-re-identifiability . . . and there is no way that that data is being reused for advertising or other purposes.”
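
For readers unfamiliar with the technique, the sketch below is a minimal, hypothetical illustration of differential privacy applied to a single count query: noise calibrated to the query’s sensitivity is added so that no individual’s presence materially changes the output. The epsilon value and data are assumptions for illustration.

```python
import math
import random

def dp_count(values: list[bool], epsilon: float = 1.0) -> float:
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy.
    """
    true_count = sum(values)
    # Draw Laplace(0, 1/epsilon) noise via the inverse CDF.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Illustrative analytics query: how many users in a segment converted?
converted = [random.random() < 0.3 for _ in range(10_000)]
print(f"True count: {sum(converted)}, DP count: {dp_count(converted):.1f}")
```

Smaller epsilon values mean more noise and stronger privacy, which fits the purely analytical uses Anderson described, where exact per-user data is never needed.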

See “Understanding Differential Privacy” (Dec. 15, 2021); and “The Impact of Recent Legislative and Litigation Trends on Commercial Use of De-Identified Data” (Jun. 16, 2021).

Wiretapping Litigation

Some plaintiffs have sued companies under state and federal wiretapping laws for using session cookies, chatbots and other technologies, noted Mu. IAB found that the most common ways respondents mitigate the risk of such litigation are providing clear disclosure that chatbot communications will be recorded for specified purposes (59%); using a cookie banner to request consent for recording and storage of communications for specified purposes (43%); and disclosing in search bars that communications will be recorded and stored for specified purposes (30%).

No method is completely effective, said Burstein. Plaintiffs’ attorneys “do broad scans of the internet and look for appealing targets,” he noted. Additionally, consent banners can create additional risk if they fail to state accurately how cookies or tags are being used or are misleading in other ways. Organizations should scrutinize:

  • the disclosures they make;
  • the consumer choices they offer and how they are phrased; and
  • whether, from a technological perspective, the site functions in a manner consistent with the disclosures and choices.

“You can [definitely] get yourself in more trouble by making misleading or inaccurate claims in your disclosures,” Anderson agreed. It was surprising that just 43% of respondents use a cookie consent banner, which is the most obvious way organizations can avoid becoming easy targets.

Additionally, enforcement officials have indicated that organizations should be running websites and apps through simulations designed to identify potential violations, Anderson advised. “Most of our customers have started implementing these types of pen tests to check for whether the easy and obvious places where they’re going to get dinged are material or observable from the outside,” he reported. When a regulator sees a configuration issue or other obvious problem with a site, the regulator is likely to start wondering what other issues exist with the site or the company’s privacy practices, added Folks.
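
As a crude first pass at that kind of self-testing, a site owner can at least confirm which cookies a page sets before any consent interaction. The sketch below is a hypothetical, minimal example and is not a substitute for the headless-browser testing Anderson described; the URL is a placeholder.

```python
import requests  # third-party: pip install requests

def cookies_before_consent(url: str) -> list[str]:
    """Fetch a page with no prior consent interaction and list cookies set.

    Any non-essential tracking cookie appearing here is a red flag, since
    the user has not yet seen or accepted a consent banner. A real audit
    needs a headless browser that executes JavaScript, which sets most
    tracking cookies; this plain HTTP fetch only catches server-set ones.
    """
    response = requests.get(url, timeout=10)
    return [f"{c.name} (domain: {c.domain})" for c in response.cookies]

if __name__ == "__main__":
    for cookie in cookies_before_consent("https://example.com"):
        print(cookie)
```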

See “Managing Tracking Technologies and Their Privacy Dilemmas in 2025” (Mar. 12, 2025); and “Google’s Wiretap Cases Highlight Evolving Privacy Transparency Standards” (Jan. 24, 2024).

Minors’ Data

Forty percent of respondents said they conduct targeted advertising based on minors’ data, including 32% who take a national approach by meeting the highest applicable age standard. The remaining 8% plan to comply state by state.

The Maryland Online Data Privacy Act (MODPA) prohibits controllers from processing or selling minors’ data, according to IAB. Of the 27% of respondents subject to MODPA, 16% plan to cease targeting minors nationally, while the remaining 11% plan to do so only in Maryland.

Age-gating minors’ data is a hot topic, but there has not been much movement toward implementing it, according to Anderson. Although age-gating is a relatively easy problem to solve, it could have an adverse impact on the user experience. Many companies continue to seek guidance from enforcement agencies and regulators before implementing age-gating. When customers do address age thresholds, they tend to define “minor” on a state-by-state level, rather than a national level, in his experience.

See our three-part series “Children’s Privacy Grows Up”: “Examining New Laws That Now Protect Older Teens” (Jan. 15, 2025), “FTC Amends COPPA Rule and Targets Data Sharing” (Jan. 29, 2025), and “Seven Compliance Areas for Protecting Teens” (Feb. 12, 2025).

Data Minimization

Most state laws have provisions pertaining to data minimization and secondary use of data, noted Mu. Data minimization requires that the data collected and used be reasonably necessary and proportionate to the disclosed purpose for its use. California also requires that the disclosed purpose conform to reasonable consumer expectations.

MODPA is an outlier among state laws, added Mu. It provides that the data collection purpose must be “reasonably necessary and proportionate to provide or maintain specific products or services requested by the consumer,” she said. Organizations are struggling with how to approach MODPA, according to Folks. Publishers are at risk of losing revenue because of its data minimization requirements. An amendment to MODPA, which the Maryland AG has supported, would bring the statute more in line with traditional data minimization standards while retaining restrictions on processing sensitive personal data.

See “Examining Maryland’s Game-Changing Data Minimization Requirements” (Apr. 24, 2024); and “How to Comply With the CPRA’s Data Minimization Standards” (Feb. 8, 2023).

Disclosure and Consent Issues

Importance of Clear Notice

Two-thirds of respondents agreed that, when SPI is not involved, a company still must obtain consumer consent to use PI for digital advertising purposes if its privacy statement does not disclose that use. Nearly one-quarter believe consent is required if consumers would be surprised, even if the privacy statement discloses such use.

The survey results are consistent with how Anderson’s customers are approaching the issue. “Have strong, accurate and thorough notices. Say what you’re going to do and do exactly what you said you were going to do. That seems to be a theme that continues, and certainly the survey reflects that,” he said.

Organizations should consider how regulators and enforcers might perceive their use of PI for digital advertising. Their perspectives are not monolithic. Additionally, “AGs will, by and large, be looking for clear violations of the law,” he said. They are likely to focus first on clear violations of policy statements, failures to disclose data uses, misrepresentations about sensitive data and failures to obtain consent to use sensitive data for advertising. They are not looking for “edge cases.”

If an organization is concerned that consumers may be surprised by a practice, it needs to determine whether to provide additional disclosure within or outside its privacy policy, advised Burstein. “Specificity is also becoming very important to regulators who are scrutinizing privacy policies,” he observed. He is not comfortable with relying solely on a privacy notice. Conspicuous, specific disclosure enables an organization to say, “We really went out of our way to make our users aware of this practice,” he asserted.

See “Preparing for U.S. State Law Privacy Compliance in 2025” (Dec. 11, 2024); and “Practical Strategies for Effective Consent Management” (Sep. 25, 2024).

Dark Patterns and Opt-In Consent

Many state laws prohibit use of so-called “dark patterns” for obtaining consumer consent. A majority of respondents have either already reviewed all their consent-obtaining mechanisms to ensure they are compliant (38%) or are in the process of reviewing them (27%).

Fifty-nine percent of respondents believe the U.S. is moving toward an opt-in consent regime, even where opting out is permissible. “The state wiretapping lawsuits are a tremendous source of pressure in that direction,” posited Burstein. Even so, if the market moves toward opt-in consent banners for cookies or tags, it will not necessarily constitute consent for all data or all uses.

See “How Companies Can Identify and Prevent Unlawful Dark Patterns” (Jan. 10, 2024).

Oregon’s Requirements

The Oregon Consumer Privacy Act requires a company to disclose, in connection with an access request, either (i) the specific third parties with which it has shared the consumer’s PI or (ii) all third parties with which it may share any consumer’s PI, explained Mu. Seventy percent of respondents said they disclose all third parties with which they share any consumer’s PI. Most of those respondents (80%) disclose only the third parties that directly receive PI from the company, not any entities with which the third party might share the PI.

Limiting disclosure to the third parties that directly receive PI makes practical sense, noted Anderson. Trying to identify and disclose potential fourth-party recipients of PI “seems perilous – you don’t know what you don’t know,” he observed.

The majority approach is both practical and consistent with the statute, Burstein concurred. As in the wiretapping scenario, inaccurate disclosure of potential fourth-party recipients could create new problems for the company.

At first, Oregon’s law was an outlier, but that may be changing, said Folks. For example, new rules under the Children’s Online Privacy Protection Act would require similar disclosures. Additionally, the FTC’s settlement with General Motors over its sharing of precise geolocation data and driving behavior information requires clear and conspicuous disclosure of the names and categories of the third parties to which General Motors discloses information. Thus, even if companies are not subject to the Oregon law, the obligation to disclose the third parties with which a company shares information is “going to be the new norm,” he predicted.

It remains to be seen whether regulators will take the position that disclosure of fourth-party recipients of data is also required. Notably, Rhode Island’s law “requires disclosure of the specific third parties [to which] a business may in the future disclose a consumer’s personal information,” noted Folks. “I have no idea how that’s workable.”

See “Analyzing 2023’s New State Privacy Laws: Oregon and Delaware Join the Strictest Tier” (Jul. 12, 2023).

Third-Party Due Diligence

State privacy laws require organizations to conduct due diligence on processors, service providers and/or third parties, observed Mu. Roughly half of respondents said they send tailored due diligence questions to such parties. Nearly one-third use generic questionnaires or engage internal or external assessors.

Notably, 11% said they require additional technical proof from vendors to demonstrate their privacy controls, which shows that some companies are “really kind of digging into the technologies that their partners are using,” observed Burstein. That might be the case when companies are dealing with potentially risky third parties or other high-risk cases.

“I don’t see a path where [due diligence] is not survey based, unfortunately,” said Anderson. In some cases, he has been asked to provide log-level proof that a system actually effectuates users’ opt-out choices. However, it is difficult to provide technical proof for many of the matters covered by due diligence questionnaires.

In addition to using surveys and technical proofs, organizations can leverage monitoring and observation, continued Burstein. For example, if a company is depending on a partner to obtain consent, the company can seek verification that it is doing so. A company with hundreds of partners or data recipients could do some sampling and retain a record of the sampling to show regulators it did not just rely on survey responses.
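
One way to operationalize that sampling, sketched below as a minimal, hypothetical example: randomly select a subset of partners each cycle and record the verification in an auditable file. The partner names, sample size and file format are all assumptions.

```python
import csv
import random
from datetime import date

# Hypothetical partner list; in practice this comes from the vendor inventory.
partners = [f"partner_{i:03d}" for i in range(1, 301)]

# Spot-check roughly 10% each cycle rather than surveying everyone.
sample = random.sample(partners, k=30)

# Retain a dated record of the sampling to show regulators the diligence
# went beyond questionnaire responses.
with open(f"consent-verification-{date.today().isoformat()}.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["partner", "consent_obtained_verified", "reviewer_notes"])
    for partner in sorted(sample):
        writer.writerow([partner, "", ""])  # completed during manual review
```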

“Third-party due diligence is always a risk management question at the end of the day,” said Folks. Enforcers and regulators have provided a significant amount of guidance on their expectations. “You can design the best program, but the regulator also expects you to follow through,” cautioned Mu.

See our two-part series on privacy and security provisions in vendor agreements: “Assessing the Risks” (Mar. 17, 2021), and “Key Data Processing Considerations” (Mar. 24, 2021).

Executive Orders

Reference Guide to 2025 Executive Orders for Compliance Professionals


Since President Donald Trump took office on January 20, 2025, there has been a veritable avalanche of executive orders (EOs) and memoranda touching on almost every aspect of the federal government and much of American life. Though it can be challenging to track all the changes, compliance professionals must keep abreast of the EOs and other updates that directly impact their work. To make staying on top of the developments as painless as possible, this quick guide compiles the EOs most relevant to compliance professionals, arranged in date order with links to germane documents, and distills the critical details of each.

See “Implications of the Trump AI Executive Order” (Mar. 26, 2025).

America First

  • Title: America First Policy Directive to the Secretary of State
  • Date Signed: January 20, 2025
  • Date Published: January 29, 2025
  • EO Number: 14150
  • Key Changes:
    • Articulates a new U.S. foreign policy to “champion core American interests and always put America and American citizens first.”
    • Directs the Secretary of State to issue guidance “bringing the Department of State’s policies, programs, personnel, and operations in line with an America First foreign policy, which puts America and its interests first.”
  • Related Memoranda:

DEI

Cartels and TCOs

DOGE

Equal Opportunity and Affirmative Action

AI

Deregulation

FCPA

Law Firms

People Moves

Former CFTC Director of Enforcement Joins Sidley


Sidley has welcomed Ian McGinley as a partner in its regulatory and enforcement practice group in New York. He most recently served as the Director of Enforcement for the U.S. Commodity Futures Trading Commission (CFTC).

McGinley’s practice focuses on regulatory enforcement and white-collar criminal defense, with a particular emphasis on commodities and securities laws. He advises clients in civil and criminal matters involving digital assets.

As the CFTC’s Director of Enforcement, McGinley served as the head enforcement official in the United States tasked with investigating and prosecuting violations of the Commodity Exchange Act and CFTC regulations. Under his leadership, the CFTC brought cases in traditional and emerging areas, created task forces focused on cybersecurity and emerging technologies, including AI, and reorganized and consolidated the Division of Enforcement’s analytical units to strengthen the agency’s ability to detect misconduct.

Prior to his work at the CFTC, McGinley served as an Assistant U.S. Attorney in the Southern District of New York for over a decade, where he was Co‑Chief of the Complex Frauds and Cybercrime Unit and oversaw a wide variety of matters involving sophisticated financial frauds, cybercrime, cryptocurrency fraud, healthcare fraud, tax fraud and Foreign Corrupt Practices Act violations. He also was a senior member of the Securities and Commodities Fraud Task Force and Chief of the Narcotics Unit.

For commentary from McGinley, see “Binance’s $4.3‑Billion Criminal Resolution Raises Questions on Crypto Guidance” (Feb. 7, 2024); and “Former SDNY Complex Frauds and Cybercrime Unit Co‑Chief Discusses Prosecution Trends and Takeaways” (Dec. 1, 2021). For insights from Sidley, see “Present and Former SEC Officials Discuss Strategy, Testimony, Proffers and Negotiations” (Mar. 19, 2025).