The SEC has vowed its cyber police will still patrol the markets during President Trump’s second administration. On February 20, 2025, the regulator announced the creation of the Cyber and Emerging Technologies Unit (CETU), which replaces the Crypto Assets and Cyber Unit and will have seven new priorities.
Following the announcement, CETU seems to have handed fresh white hats to its attorneys and fraud specialists. Their mission is “to protect retail investors from bad actors in the emerging technologies space” and “to focus on combatting cyber-related misconduct,” the SEC declared.
Fraud enabled by AI and machine learning tops the list of seven priorities. Giving AI the lead billing fits the current technology moment, observed Debevoise partner Erez Liebermann. Deepfakes and other AI deceptions have turbocharged investment scams. “There is a tremendous ability now with AI to generate messages to mislead investors,” including in connection with schemes to cause investors to pump and dump stocks, he told the Cybersecurity Law Report.
This article discusses cyber and AI securities enforcement on the horizon, and distills six steps that companies can consider to prepare for CETU’s new priorities, with insights from SEC enforcement specialists at A&O Shearman, Davis Polk, Debevoise, Katz Banks Kumin, and Morrison & Foerster.
See “SEC Stresses Cybersecurity, AI and Crypto in Its 2025 Exam Priorities” (Dec. 18, 2024).
Six of Seven Priorities Highlight Fraud
The wording of CETU’s 2025 priorities resembles the Cyber Unit’s 2017 priorities announcement, from the first Trump administration, and CETU will be led by Laura D’Allaird, who served in the unit in 2017.
“The new unit returns to some of the core day-one objectives in 2017 that wound up getting pushed aside by crypto issues in subsequent years,” like stopping hackers from manipulating the markets, observed Davis Polk partner and former SEC Cyber Unit Chief Robert Cohen.
CETU also will focus on other classic issues. “Investment scams that use the promise of the latest or trendiest industry to attract retail investors is a perennial SEC enforcement issue,” Cohen told the Cybersecurity Law Report. Solar energy, crypto investments and cannabis are examples of areas that drew hype during the past three administrations, he noted.
Fraud Committed With Emerging Technologies
When announcing the priorities, Acting Chair Commissioner Mark Uyeda said CETU will facilitate capital formation and market efficiency by rooting out those who seek to misuse innovations. To achieve that goal, CETU seems set to target “the types of fraud that would undermine confidence in AI generally and broadly, rather than ramping up an industry-wide enforcement of AI governance and risk management,” A&O Shearman partner Andrew Tannenbaum told the Cybersecurity Law Report.
So far, the SEC has not charged any company over mild puffery about AI; its cases have targeted outright falsehoods. In 2024, the SEC settled with Delphia and Global Predictions after each falsely claimed to use AI-enhanced tools in customers’ investment processes when they did not.
In another 2024 SEC enforcement action involving an AI washing claim, still being litigated, the SEC charged QZ Asset Management with falsely stating that its AI analytics would bring customers exceptional investment returns, as part of a scheme that allegedly involved multiple other fraudulent and deceptive practices. The case continues in South Dakota federal court, awaiting service on defendants in China.
With pressure rising for companies to display AI prowess, the SEC will likely bring more AI washing cases in coming years. However, Cohen noted, for an AI washing case to move forward, “there would need to be strong evidence of fraud and a real intent to deceive about something that was clearly material to investors.”
See “SEC Enforcement Actions Target ‘AI Washing’” (May 22, 2024).
Fraud Related to Social Media
Social media remains fertile ground for investor fraud. In 2024, for example, the SEC charged Abraham Shafi, the founder and former CEO of Get Together Inc., a privately held social media startup known as IRL, with fraudulently raising $170 million while hiding from investors the company’s extensive spending to pump up the social media platform.
Hacking to Obtain Material Nonpublic Information and Takeovers of Retail Brokerage Accounts
In 2024, the SEC charged a U.K. hacker for allegedly infiltrating the computer systems of five public companies to grab material nonpublic information and trade the stock before the companies’ public earnings announcements, gaining approximately $3.75 million in profits.
When probing hacks of SEC-regulated entities and public companies, CETU primarily will be “looking through the bad-guy lens, as opposed to looking first at what companies could have done to prevent the hack or fraud,” Liebermann posited. The prior administration at times emphasized the latter concern.
CETU’s focus on the “bad-guy” aspects of “data theft and hacking suggests it wants to prevent interference with deals that might spur growth and innovation,” which aligns with the Trump administration’s goal of boosting U.S. companies’ competitive position in AI and cryptocurrency, Tannenbaum observed.
See “Present and Former SEC Officials Discuss Strategy, Testimony, Proffers and Negotiations” (Mar. 19, 2025).
Fraud Involving Blockchain Technology and Crypto Assets
In 2025, CETU’s priority is to stop blockchain “fraud.” In 2017, the Cyber Unit more expansively targeted “violations” involving distributed ledger technology and initial coin offerings.
Commissioner Hester Peirce lamented in March 2025 that many scams are proceeding under crypto’s name, and stressed that the SEC will seek to shut them down to facilitate a healthy investing market around genuine crypto offerings. In 2023, the SEC brought charges against Terraform Labs and its founder, Do Kwon, for orchestrating a multi-billion-dollar crypto asset securities fraud involving misleading statements about an algorithmic stablecoin and other crypto asset securities. In 2022, the SEC charged FTX and its CEO, Samuel Bankman-Fried, with securities fraud for myriad deceptions.
Peirce is also chairing a Crypto Task Force charged with issuing guidelines to enable a robust, accountable crypto market, so most observers expect a narrowly focused SEC effort against crypto bad actors.
See “Protecting Against Crypto Theft” (Aug. 10, 2022).
Compliance With Cyber Regulations
“Regulation S‑P, Regulation S‑ID are still out there,” among many other requirements, Liebermann pointed out. Regulation S‑P, Privacy of Consumer Financial Information and Safeguarding Personal Information, requires broker-dealers, investment companies, registered investment advisers, funding portals and transfer agents to follow certain procedures when handling consumers’ data. Regulation S‑ID, Identity Theft Red Flags, concerns financial institutions’ identity verification efforts. Companies still must keep updated policies and procedures addressing these and other rules, he urged, adding, “there will still be examinations relating to them. They are still in the priorities of the SEC.”
See “What Regulated Companies Need to Know About the SEC’s Final Amendments to Regulation S‑P” (Jul. 24, 2024).
Steps to Take
With CETU formed and its priorities announced, companies can take the following steps to prepare for enforcement.
Review the Language of AI Disclosures
Amid market pressure for companies to show their AI capabilities, businesses “should not be aspirational in the way that they describe their AI use cases relative to how they actually use AI,” Debevoise partner Kristin Snyder recommended. Investment advisers have begun disclosing their AI use cases more often in Form ADV filings to put investors on notice of risks, she reported.
Disclosures about AI usage, Debevoise partner Charu Chandrasekhar added, will attract the most SEC scrutiny “when the disclosures involve – for instance – an investment adviser’s investment process or, for a public company, when the disclosures are central to the description of the firm’s business model.” For financial institutions, this might encompass AI used for portfolio management, valuation or data extraction, or AI use that is at the core of research on the selection or approval of investments, she said.
“The SEC obviously is very keenly focused on AI-related deception, and this scrutiny will be administration neutral, carrying over into the latest administration,” Snyder highlighted.
To investigate AI disclosure shortcomings, the Commission can deploy technological tools to crawl through filings to find AI buzzwords. “The SEC also has a pretty wide and broad legal toolkit to ground an action if there are false and misleading statements that a firm makes about its AI use cases,” including Section 206 of the Investment Advisers Act and the Marketing Rule, Snyder said.
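In plain terms, such a sweep can start as a keyword scan over filing text. The sketch below is purely illustrative of that concept; the term list and function are hypothetical assumptions, not the Commission’s actual tooling.

```python
import re

# Hypothetical illustration of a filing sweep for AI buzzwords;
# the term list is an assumption, not the SEC's actual screen.
AI_BUZZWORDS = [
    r"\bartificial intelligence\b",
    r"\bmachine learning\b",
    r"\bdeep learning\b",
    r"\bAI[- ]driven\b",
    r"\bAI[- ]powered\b",
]
PATTERN = re.compile("|".join(AI_BUZZWORDS), re.IGNORECASE)

def flag_ai_claims(filing_text: str) -> list[str]:
    """Return the sentences in a filing that contain AI buzzwords."""
    sentences = re.split(r"(?<=[.!?])\s+", filing_text)
    return [s for s in sentences if PATTERN.search(s)]

filing = (
    "Our proprietary AI-driven platform selects investments. "
    "We also maintain a traditional research team."
)
print(flag_ai_claims(filing))
```

Flagged sentences would then be reviewed by a human against the firm’s actual AI use before any enforcement theory takes shape.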
See our two-part series on the SEC charging four companies for misleading cyber incident disclosures: “New Expectations?” (Nov. 20, 2024), and “Lessons on Contents and Procedures” (Dec. 4, 2024).
Review Escalation Policies for AI and Cyber
Once started, SEC investigations often involve reviewing a company’s incident response policies. “Information flow needs to be prioritized” because company decision-makers need to quickly learn when there is a situation with AI or a hack, Morrison & Foerster partner Haima Marlier told the Cybersecurity Law Report. “The most protective compliance measure that the company can take now is to have a robust, well-understood, functioning escalation process to show the SEC,” she stressed.
Cyber incidents carry well-known reputational and operational risks. But the market remains inexperienced with the risks of AI incidents. “There are just so many different technologies out there now,” Marlier noted. Although an AI problem may not ultimately pose a material risk financially or reputationally, companies should consider centralizing AI incident information to help ensure escalation to leaders. “Incident information flow is so important with emerging technology,” she said.
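The centralized information flow Marlier describes can be pictured as a simple routing rule over a single incident record. The sketch below is a hypothetical illustration; the severity scale, threshold and routing labels are assumptions, not any firm’s actual escalation process.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative incident record with a simple escalation rule; the
# severity scale and routing labels are assumptions.
@dataclass
class Incident:
    kind: str          # e.g., "cyber" or "ai"
    description: str
    severity: int      # 1 (low) .. 5 (critical), illustrative scale
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

ESCALATION_THRESHOLD = 3  # assumed cutoff for leadership notification

def route(incident: Incident) -> str:
    """Decide who learns about the incident first."""
    if incident.severity >= ESCALATION_THRESHOLD:
        return "escalate-to-leadership"
    return "log-for-periodic-review"

print(route(Incident("ai", "model exposed client data in output", 4)))
```

The point is not the code but the design choice it embodies: every AI or cyber incident lands in one place, and a pre-agreed rule, not ad hoc judgment, determines when decision-makers hear about it.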
Keep Current With AI Governance Standards and SEC Risk Alerts
The SEC expects a company’s compliance program to cover all of its risk factors and to be tailored to its business. But AI governance standards are still emerging. “It is good to be mindful or aware of the governance principles in the [National Institute of Standards and Technology] AI Risk Management Framework [NIST AI RMF] because the staff potentially could take those into consideration,” as they have with NIST’s cybersecurity framework, Snyder noted.
The NIST AI RMF, Chandrasekhar observed, “can be a useful guide in designing AI governance, policies and procedures. But, ultimately, companies and firms must design programs that they can implement in practice, as opposed to adopting a blanket set of AI standards without thinking fully through implementation or applicability.”
Important AI governance actions for SEC compliance include:
- Having an Inventory of Where AI Is Being Used: “Just knowing where AI is deployed across the business can be pretty challenging for a lot of firms and companies,” Chandrasekhar reported.
- Establishing an Approval Process for AI That Works for the Organization and Is Right-Sized for Its Scale of AI Development and Use: One goal for compliance teams, Snyder advised, is sufficiently understanding the organization’s AI use such that its compliance team can “be comfortable with variations of a particular use case without having to go back through a full approval process.”
- Reviewing the Use of AI for Meeting Notes: Many employees have embraced AI summary tools to generate meeting notes for non-attendees. However, the AI’s outputs might trigger companies’ regulatory recordkeeping obligations.
- Following the Policies As Written: “The SEC has used its existing ruleset, including the Compliance Rule, to ensure that registrants are in compliance with any AI policies and procedures that they have,” Snyder said.
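The inventory step above can be sketched as a minimal registry that flags tools needing compliance attention. All system names, fields and the review rule below are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch of an AI-use inventory; entries and fields are
# illustrative assumptions, not any firm's actual registry.
AI_INVENTORY = [
    {"system": "meeting-notes-summarizer", "unit": "firmwide",
     "approved": True, "touches_client_data": False},
    {"system": "portfolio-screening-model", "unit": "research",
     "approved": True, "touches_client_data": True},
    {"system": "helpdesk-chatbot-pilot", "unit": "operations",
     "approved": False, "touches_client_data": True},
]

def needs_review(entry: dict) -> bool:
    """Flag unapproved tools and any tool that handles client data."""
    return (not entry["approved"]) or entry["touches_client_data"]

flagged = [e["system"] for e in AI_INVENTORY if needs_review(e)]
print(flagged)
```

Even a registry this simple answers the threshold question Chandrasekhar raises, where AI is actually deployed, and gives compliance a starting list for the approval and recordkeeping steps above.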
See our two-part series on emojis and video communications: “The Next Frontier of SEC Scrutiny?” (Oct. 2, 2024), and “Compliance Practices to Overcome Recordkeeping Challenges” (Oct. 9, 2024).
Review AI Treatment of Customer Information
Registrants are required to maintain policies and procedures to safeguard customer information. The Division of Examinations’ most recent priorities list, from December 2024, includes probing how “firms protect against loss or misuse of client information that may result from use of third-party AI models or tools.” Thus, the growing adoption of AI agents to query customer management records requires careful review.
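As a conceptual illustration of one such safeguard, the sketch below masks obvious client identifiers before a record reaches a third-party tool. The patterns are assumptions and far from a complete control, which would also need to cover names, account numbers and other identifiers.

```python
import re

# Hypothetical scrub of obvious client identifiers before a record is
# sent to a third-party AI tool; patterns are illustrative only.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def scrub(text: str) -> str:
    """Replace SSN- and email-shaped strings with redaction markers."""
    text = SSN.sub("[REDACTED-SSN]", text)
    return EMAIL.sub("[REDACTED-EMAIL]", text)

print(scrub("Client 123-45-6789, reach at jane@example.com"))
```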
Katz Banks Kumin partner Matthew LaGarde, who represents cybersecurity whistleblowers, told the Cybersecurity Law Report that he is aware of company insiders who reported their companies’ deployment of “AI products without adequate safeguards to ensure that the private information those products ingest is not widely and inappropriately disseminated to other parts of the company.”
Knowledgeable data engineers could report their company’s shortcomings to the SEC even if they contributed to the misconduct, LaGarde added, “because the commission does not prohibit those who participated in wrongdoing from getting a whistleblower award, depending on the nature and extent of their participation.” He urged such insiders to consult both defense and whistleblower attorneys before proceeding.
See “What to Know (and Do) About DOJ’s Efforts to Identify and Prosecute Cybersecurity Fraud Under the False Claims Act” (Oct. 30, 2024).
Use Examinations to Gain Feedback About Governance
The Division of Examinations conducted an “AI sweep” in 2023 to learn more about companies’ uses of AI for financial operations. Examiners “see companies’ practices across the industry and can see what the norms are for firms, and what practices are outliers,” Cohen pointed out.
“There’s a good chance that the Commission’s leaders will see examinations as a collaborative mechanism to help the industry identify areas where there might be room for improvement or enhancements,” Cohen noted. “Companies should try to have some engagement with the examiners about what the examiners are seeing across the industry. The registrant may have an opportunity to adopt those suggestions without the threat of a big penalty or a public enforcement case,” he said.
Companies should also “follow the risk alerts that the SEC puts out periodically” to gain AI guidance, Cohen recommended.
Update Cyber Policies and Procedures
The SEC’s examiners and CETU attorneys have a long history of enforcing cyber regulations, but the administration has enacted a significant policy change. To issue most types of enforcement subpoenas, CETU attorneys need the commissioners’ approval, which is granted by a vote. Because of that constraint on CETU, the Division of Examinations may play a more central role. Whether for AI or cyber issues, “compliance is now incredibly important because routine examinations and for-cause examinations can be a referral source to the Division of Enforcement,” Marlier noted.
“The SEC has been very clear over the years that cyber compliance is not a check-the-box approach, that they expect it to be well-tailored” to the organization, Liebermann emphasized. Problems arise when “companies take essentially off-the-shelf programs, change words, and assume that it fits them,” he cautioned. Instead, when reviewing and updating cyber policies and procedures, “take the compliance program from its beginning point and evaluate these rules as they relate to the company,” he advised.
Account takeovers are (again) a rising problem that the SEC is watching, which is why they are called out in the priorities, Liebermann continued. “The fraudsters have figured out how to get around multi-factor authentication. They have figured out social engineering, largely with the help of AI, to do a better job on the phone calls with help desks and others, so account takeovers are coming back, maybe even to an all-time high,” he warned.