SEC Disclosure

Unpacking the AI Risks Disclosed in 2025 SEC Filings


Visibility into AI’s downside is on the upswing in corporations’ annual SEC filings. Seventy-six percent of S&P 500 companies added or expanded descriptions of AI as a material risk in their 2025 annual disclosure filings, according to an Autonomy Institute AI risk disclosure report (Report).

Three years into the generative AI revolution, AI risk disclosures have emerged as a secondary statement that corporations issue to counterbalance their bolder declarations that the organization has embraced AI. “Companies risk being the outlier if not mentioning AI in filings,” Goodwin partner Kaitlin Betancourt told the Cybersecurity Law Report.

The top AI concerns that the 500 companies disclosed were cyber threats, competitive disruption, bias in inputs and outputs, and data leakage. Businesses did not identify all these risks as hypothetical. Some disclosures allude to prior trouble. “Companies specifically mentioning incidents include Salesforce, Gen Digital, Intel and Visa. Those examples include updated language and a more active statement” about AI’s role in attacks in the prior year’s filing, Report author Sean Greaves told the Cybersecurity Law Report.

With insights from Greaves and Betancourt, an SEC regulatory expert, this article examines the risks that companies have disclosed, the language used and pitfalls around AI disclosures. It also highlights recommended actions for companies.

See “Guide to AI Risk Assessments” (Jun. 18, 2025).

Widespread Recognition of Concrete and Broad Liabilities From AI

In annual reports filed through April 2025, 380 of the S&P 500 companies augmented or added mentions of AI to their risk factors. These risk acknowledgments suggest a rising awareness by corporate leaders that AI-enabled opportunities are part of a technological upheaval that carries perils.

Concrete operational challenges from AI that companies highlighted include cyberattacks, regulatory compliance, data governance, third-party dependencies and access to energy. Business challenges cited were competitive declines, unprofitable investment in AI, and disruptions to the companies’ product or service delivery.

One purpose for publishing the Report was to “increase the capability for companies and people to truly assess the level of risk quite clearly and understand how others are experiencing it,” Greaves said.

To prepare the Report, Greaves used three different large language models (LLMs) to filter the disclosures filed for fiscal years 2023 and 2024, then analyzed the resulting dataset of Form 10‑K disclosures. The Autonomy Institute also built a web tool that allows the public to browse and filter the reviewed risk disclosures.
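The Report does not publish its code, but the basic shape of such a pipeline – extract the risk-factors section, pre-filter for AI-related language, then hand candidates to an LLM for classification – can be sketched. Below is a minimal, hypothetical Python pre-filter; the pattern, function name and sample text are illustrative assumptions, not the Autonomy Institute's method.

```python
import re

# Hypothetical illustration only – not the Autonomy Institute's actual
# pipeline. A cheap keyword pre-filter isolates risk-factor paragraphs that
# mention AI before any LLM-based classification pass.
AI_PATTERN = re.compile(
    r"\b(artificial intelligence|machine learning|generative AI|AI|LLM)\b",
    re.IGNORECASE,
)

def flag_ai_risk_paragraphs(risk_factor_text: str) -> list[str]:
    """Return the paragraphs of a risk-factors section that mention AI."""
    paragraphs = [p.strip() for p in risk_factor_text.split("\n\n") if p.strip()]
    return [p for p in paragraphs if AI_PATTERN.search(p)]

sample = (
    "We face intense competition in all of our markets.\n\n"
    "Threat actors may use artificial intelligence to develop new attack "
    "methods that are more difficult to defend against."
)
for paragraph in flag_ai_risk_paragraphs(sample):
    print(paragraph)
```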

The Report’s identification of 11 types of AI risks in Form 10‑Ks spotlights a key pitfall for companies’ disclosures. The frenetic push to use AI throughout companies means their employees may be excitedly speaking about different aspects of AI at conferences and in press releases, possibly discussing a material risk that does not appear in their organization’s more reserved filings. Omissions of a material risk discussed publicly elsewhere have been an SEC enforcement staple around disclosures, Betancourt highlighted.

See “A Framework for Materiality Determinations Under SEC’s Cyber Incident Disclosure Rules” (Jul. 10, 2024).

Four Top Concerns

The risks below are discussed in order of how frequently they are mentioned in SEC disclosures, according to the Report. All disclosure quotes referenced appeared in companies’ Form 10‑Ks for fiscal year 2024.

AI-Aided Cyber Threats

Nearly two in five companies (193) cited adversaries’ use of AI to commit fraud, breach security perimeters or manipulate markets. “Threat actors are using these technologies to create new sophisticated attack methods that are increasingly automated, targeted and coordinated, and more difficult to defend against,” Salesforce noted in its 10‑K.

Across the business world, “the cyber threat is really front and center” for executive teams after more regulations and headlines about hacking, Betancourt observed. “I’ve heard AI described as an arms race, but I see two races. One is the competitive race among businesses. The other race is good versus evil, with nefarious actors absolutely capitalizing on AI” for more ways to attack, she added.

Some disclosures include details on the multiple ways adversaries can leverage AI. Malicious actors may use the tech to “develop new hacking tools and attack vectors, exploit vulnerabilities, obscure their activities, and increase the difficulty of threat attribution,” Accenture said in its filing.

Airbnb highlighted in its disclosure the challenges AI introduces for longstanding cyber hygiene measures. Machine learning might crack its encryption or hurt the company’s ability “to detect, investigate, contain or recover” from attacks. American Express noted that AI could be deployed to disrupt password management.

Other filers, like Analog Devices, included the broad caveat that AI might expand the types of attacks in unforeseen ways.

See “Assessing and Managing AI’s Transformation of Cybersecurity in 2025” (Mar. 19, 2025).

Competitive Disruption

Almost one in three companies (168) stated that AI poses competitive and financial risks. They commonly mentioned AI as one of the disruptive technologies that they could fail to keep up with or that would benefit the market share of faster adopters.

For example, 3M cautioned that demand for its product could be impacted by customers who prefer competitors that take more advantage of AI, “machine learning, block-chain, expanded analytics, and other enhanced learnings from increasing volumes of available data.”

See our two-part series on the SEC charging four companies for misleading cyber incident disclosures: “New Expectations?” (Nov. 20, 2024), and “Lessons on Contents and Procedures” (Dec. 4, 2024).

Bias and Unfairness

The number of companies citing the risk of harmful bias from AI use in their disclosures doubled between fiscal year 2023 and 2024, from 70 to 146. Match Group, for a representative example, noted that training datasets “may be overbroad, insufficient, contain biased information, or infringe third parties’ rights.” Others mentioned “unintended outcomes” and “lowered interpretability” as risks.

Palantir warned in its filing that its employees or users might use “inappropriate or controversial data practices” that might “impair the acceptance of AI solutions.”

“Some broader ethical concerns are baked into companies’ discussions of, or their references to, bias,” Greaves observed.

See “Navigating Ever-Increasing State AI Laws and Regulations” (Jan. 15, 2025).

Data Leakage and IP Risk

One in five (95) companies warned of confidential data or IP exposure from employees using third-party chatbots like those of OpenAI, Anthropic or Microsoft. These providers could use the company’s proprietary data and sensitive prompts to retrain their services, which would expose trade secrets or customer information, many of the filings said.

Some companies pointed out possible impediments to AI use, including the prospect that less data will be available to train the tools in the future due, for example, to transnational restrictions on data transfers.

Greaves acknowledged that his study’s method may have missed some mentions of AI-related data problems, “as we only really looked at the risk factors section of the annual report, not other parts where [AI risks] sometimes can be mentioned.”

See “From CEO Deepfakes to AI Slop, AI Incident Tracking Ramps Up” (Jul. 30, 2025).

Less-Cited Concerns

Regulatory Pressure

Filings mentioning the E.U. AI Act more than tripled, from 21 in 2023 to 67 in 2024, underscoring that multinational legal risk is a common concern among S&P 500 corporations. Stated concerns include penalties of up to seven percent of global revenue.

Most mentions of the E.U. AI Act are high-level, quick acknowledgements, Greaves pointed out, as little enforcement has surfaced outside of sectors like autonomous driving and medical technology. “Some of the mentions seem to be in response to the U.S. legislation developments,” he said.

See our three-part series answering top questions about the E.U. AI Act: “Reach and Unique Requirements” (Apr. 24, 2024), “Risk Tiers and Big-Player Transparency” (May 1, 2024), and “Practical Steps and What’s Next” (May 8, 2024).

Overinvestment, Poor Results and “Disillusionment”

Fifty-seven companies disclosed that their AI programs may not deliver operational benefits or recoup investments. Some cautioned premature deployment might set the business back. The Report groups these risk factors as “disillusionment.”

More statements about poor returns might show up in 2026 disclosures. Media reports in August 2025 prompted weeks of chatter about an MIT study that concluded that 95 percent of companies have received “zero return” on their AI investments, based on interviews with executives and a sample of 52 businesses studied.

Third-Party Dependency

“Rapid advancements in technology could quickly render our existing LLM obsolete, requiring the licensing and training of a replacement LLM at significant cost,” Paycom noted in its filing. One in 10 companies (56) warned of issues from relying on third-party AI model providers. Along with obsolescence, companies mentioned contractual opacity, disruptive model updates and the inability to audit model outputs.

Third-party risks often are “understated because of the concentration of certain key providers. There may not be explainability and transparency,” nor contractual flexibility, Betancourt noted. Interdependencies and connections across the corporate software environment also exacerbate these risks.

Some companies’ digital infrastructure now may be entangled with quick-moving, opaque AI startups, adding volatility risks.

Several companies also mentioned their vendors were targets of cyber threats.

See “Managing Third-Party AI Risk” (Aug. 20, 2025).

Vulnerability to Deepfakes

Mentions of deepfakes more than doubled from 2023 to 2024, jumping from 16 to 40 citations, with many companies expressing concern over impersonation of executives. More media coverage of synthetic media incidents may have driven some statements, but “companies have started to be more specific about attacks that they have faced,” Greaves said, noting that eBay stated in its disclosures that someone tried to impersonate the voice of one of its senior leaders.

Marsh McLennan, which has disclosed AI risks for several years, noted in its filing that “the barrier for entry has gone down significantly” for using GPTs to fake video and voices, Greaves pointed out.

Deepfakes’ possible harm goes beyond vulnerabilities around executive impersonation. Fox Corporation also alerted investors in its filing about “fake news impacting stock prices and manipulated audio/video targeting brand trust or executive credibility.”

See “Examining the Deepfake Landscape and Measures for Combatting Scams” (Sep. 3, 2025).

Spiking Energy Demands

One in three utility companies (10 of 30) referenced the strain that AI-related energy demands place on power grids and long-term infrastructure planning, citing data centers’ electricity draws. They noted that the strain from AI poses operational, regulatory and capital allocation risks.

Job Displacement

Despite extended public debate about AI eliminating jobs for humans, only six companies mentioned labor impacts or workforce transformation. Essex Property Trust warned in its disclosures that widening use of AI to replace workers could depress employment rates and, in turn, its prospects for attracting tenants. Accenture noted that AI-enabled solutions could reduce demand for its consultants.

Companies could be underestimating the impact of one of the most socially visible risks, “indicating a disconnect between public discourse and companies’ risk discussions,” Greaves posited. Not acknowledging AI’s destabilizing labor effects might saddle companies with reputational blowback and draw the attention of regulators, which might count as a material risk. Adobe is an exception, as it cautions in its filing that AI’s potential to “modify workforce needs” could reduce demand for its products, services and solutions, as could “negative publicity about AI.”

See our two-part series on New York City’s law requiring AI bias audits: “What Five Companies Published – and How Others Avoid It” (Sep. 13, 2023), and “A Best Practice Guide, From Choosing an Auditor to Avoiding Enforcement” (Sep. 20, 2023).

Litigation

Only two companies mentioned existing litigation. Cigna stated that it faces “litigation claiming that we improperly used AI in the claims evaluation process.” Ford disclosed that it has responded to inquiries related to regulatory investigations.

Planning Steps for Companies’ AI Risk Disclosures

With companies’ accelerating reliance on AI and the related scrutiny from the SEC and shareholder plaintiffs, the measures below can help companies navigate AI use disclosures and related risks.

Warn Employees About Conflicting Statements

Many companies lack routines and coordination to monitor whether they are making clashing statements about their AI use. “The infrastructure to manage AI adoptions holistically and support disclosure statements can be difficult to put in place,” Betancourt cautioned. Ensuring consistency in disclosures requires painstaking collaboration across departments – but corporate managers face “tremendous competitive pressure now to say that the company is at the forefront of using AI to make processes more efficient and cheaper for consumers,” she observed. This pressure could lead business teams to make a public statement about AI that does not align with the legal team’s perceptions of AI use or its disclosures.

Disclosures should be accounted for throughout a comprehensive AI governance program. “If the proper infrastructure is put around AI usage and there are processes and checks and balances, then issues are more likely to be flagged for a risk factor and fleshed out,” Betancourt explained.

First steps to establishing an underlying AI governance program include identifying the AI uses, determining the company’s risk tolerances and creating an AI committee that receives updates on companywide AI implementations on a regular cadence. Committee participants from the legal and compliance teams then would be responsible for briefing the regulatory disclosure specialists on the company’s AI developments, to ensure specific and accurate descriptions, Betancourt recommended. AI governance roles vary and other arrangements may work better for some companies, she added.

Recruit Leadership Buy-In

Company executives should periodically highlight internally the company’s responsible AI strategy, to counter the ambient pressure to derive benefit from AI. The executives also should vocally support a holistic governance program for responsible AI, Betancourt advised.

Executives should consider going beyond quick praise of the AI governance program and its leaders by stressing that the company has devoted resources to it.

See “Benchmarking AI Governance Practices and Challenges” (May 7, 2025).

Link AI to Enterprise Risk Management

AI governance is in its early stages for many companies. Plenty of companies “do not have their ducks in order,” with a governance process in place that “rolls up to enterprise risk management,” Betancourt observed.

Amid the many distracting novelties AI presents, business managers may not be focused on ensuring that the company’s risk managers are evaluating AI for legal, operational and reputational risks. To make such connections, companies should build internal AI assurance and auditing capabilities. Risk managers can specify that AI scenario testing should occur in the company’s risk framework and include AI in its risk registers.

Uncover AI Vendor Risks

To help ensure AI risk disclosures are accurate, CISOs, procurement teams and CIOs should audit the company’s exposure to third-party AI vendors and data flows, with attention to customer data and confidential IP.

To assist the company in keeping informed about risks that may need to be disclosed, the procurement teams can insist that AI suppliers divulge their efforts to manage their own vendor lock-in and other risks.

See “Pain Points and New Demands in AI Contracts” (Jun. 18, 2025).

Be Realistic About Hypothetical Risks

Companies need to be realistic about anything they present as a hypothetical risk in their disclosures. The SEC has long treated companies negatively when they experience an incident but frame it as a hypothetical risk in a disclosure, Betancourt warned. The lawyers who review the disclosure before filing usually have “an acute awareness that any hypothetical risk really has to be hypothetical,” but incidents may be more difficult to suss out until a company’s AI use is well-governed.

Consider Providing More Details in 2026

Companies’ 2025 disclosures vary in approaches and level of detail, Greaves noted. Some offer a revelatory detail about the company’s AI strategy in one concise sentence. “Other times, there is seemingly a lot of waffle,” he reflected.

A company’s size may shape the risk discussions, Greaves observed. With the S&P 500, “most of the discussion is quite bland. It tends to be with smaller companies where the language can be more expressive or flowery,” he reported, based on his analysis of U.K. filings.

Broad, high-level disclosures are not prudent, but vague wording is understandable. “Companies may be taking a measured approach because they just don’t know what is going to happen with AI,” Betancourt said.

Cyber Crime

Defending Against Faster, Stealthier and More Sophisticated Cyber Adversaries


Attackers are bypassing traditional cybersecurity defenses and exploiting overlooked security weaknesses and vulnerabilities, according to CrowdStrike’s 2025 Threat Hunting Report (Report). They are also working patiently to establish footholds, moving slowly and stealthily over time, making detection more difficult. Many attacks combine stolen credentials with intrusions on unmanaged endpoint devices and/or cloud applications. Moreover, generative AI (GenAI) is helping attackers craft more convincing social engineering ploys and support their attacks in other ways. This article synthesizes the key takeaways from the Report and the additional insights offered by Adam Meyers, CrowdStrike senior vice president of counter adversary operations, in a related webinar.

See “Leading Attack Vectors and Other Key Findings From Verizon 2025 Data Breach Investigations Report” (Jun. 25, 2025).

Methodology

The Report covers threat activity from July 1, 2024, through June 30, 2025. It is based on CrowdStrike’s contacts with adversaries through its OverWatch threat hunting team – which sees about 4.7 trillion events each day, noted Meyers. In 2024, the team looked at 60 billion hunting leads, which resulted in 13 million initial investigations and 27,000 customer escalations. “We’re making a lot of contact with the threat actors out there, and that’s what goes into the [Report],” he said.

The Report breaks down cyber adversaries into three main buckets, based on their motivations, explained Meyers:

  1. nation-state espionage, sabotage, other disruptive activities and, in the case of North Korea, financially motivated attacks;
  2. “eCrime,” which has been moving away from wire and credit card fraud to ransomware and information-based extortion; and
  3. “hacktivism,” which encompasses activism, nationalism and/or terrorism, often using denial of service attacks, website defacement and/or information leaks.

CrowdStrike presently tracks about 265 threat groups. Under CrowdStrike’s naming conventions, threat groups in China are referred to by “Panda,” those in North Korea, “Chollima,” and those in Russia, “Bear.” It uses the “Spider” moniker for eCrime groups and “Jackal” for hacktivists. It also monitors 150 malicious activity clusters whose motivations and affiliations are not yet known.

See “Staying Ahead of Rising Identity-Based and Cloud Intrusions” (Mar. 19, 2025).

Increasing Speed, Stealth and Sophistication

Speed

Intrusions are getting faster, cautioned Meyers. For example, a year ago, the typical time between an initial intrusion by Scattered Spider to its deployment of malware was 72 hours. That has dropped to just 24 hours.

Stealth

Attackers are using tactics to evade defenses, get oriented within a network and operate undetected. They seek “to blend their activity into expected network activity while enabling follow-on activities,” notes the Report. 

Sophistication

Adversaries are becoming more sophisticated, said Meyers. For example, Famous Chollima uses AI in every aspect of its operations. There also has been a huge increase in voice phishing (vishing) attacks to obtain access credentials. Similarly, the number of cloud intrusions in the first half of 2025 was 136% higher than in all of 2024. “The cloud is becoming a hot target for a lot of these threat actors,” said Meyers.

To defend against evolving threats, organizations will need faster detection capabilities and better visibility into threats, noted Meyers. In 2010, organizations often focused their cybersecurity efforts on detecting viruses, which were the most common threats. However, “they were missing the least prevalent attacks, which ended up being the most significant,” he said. Since then, CrowdStrike’s motto has been, “you don’t have a malware problem, you have an adversary problem,” which is even more true today. There is more adversary activity, but less malware.

Interactive Intrusions

In an interactive intrusion, a human attacker works within the target’s network in real time and adapts tactics as needed. Interactive intrusions increased 27% year-over-year, and that number typically rises toward the end of each year. “Turns out a lot of these eCrime adversaries like to take long vacations in January and regroup and come back later in Q1 or Q2,” remarked Meyers.

Nearly three-quarters of interactive intrusions were for eCrime. Just over one-quarter of interactive intrusions were by nation-states. Notably, four-fifths of the interactive intrusions did not involve malware.

Interactive intrusions targeted many sectors of the economy, government and academia. The most frequently targeted sectors include technology (which was the top target for the eighth consecutive year), consulting/professional services, manufacturing, retail, financial services, healthcare, government and telecommunications. Moreover, each of those sectors experienced a year-over-year increase in the frequency of intrusions. Intrusions by nation-state actors in technology, telecommunications, consulting/professional services, government and financial services were up by anywhere from 80% to 185% year-over-year.

Over the past year, there was a 71% increase in interactive intrusions in the government sector. That included a 185% increase in attacks by nation-state actors – primarily Russia, in connection with its ongoing war against Ukraine. Similarly, there was a 53% increase in intrusions in the telecom industry, including a 130% increase by nation-state actors – primarily China. China “has been on an absolute tear, particularly going after telecom, consulting and professional services, government and financial services,” Meyers said. There were also significant spikes in eCrime attacks in the manufacturing (55%) and retail (41%) sectors.

See “Implementing NSA-CISA-FBI Advisory Mitigation Tactics for Vulnerabilities Exploited by Russia” (Apr. 28, 2021).

Weaponization of AI

Publicly available GenAI allows attackers to operate on a more sophisticated level and at greater scale, according to Meyers.

Nefarious AI Uses

Attackers are using such AI for:

  • social engineering, including phishing, generating identities and optimizing content;
  • technical operations, including:
    • enhancing reconnaissance;
    • detecting vulnerabilities;
    • creating and improving malware and ransomware; and
    • supporting attack activities; and
  • information operations, including generating disinformation content and creating websites.

For example, Famous Chollima infiltrated 320 companies in 2024. There were 32 encounters with the group in July, 28 of which involved remote IT workers who used AI to build LinkedIn profiles and resumes, complete tasks during interviews and perform work-related tasks once hired, recounted Meyers. They also used deepfakes to change their appearance so they could apply to the same company more than once.

“The big takeaway from all we’ve observed” is that technically sophisticated adversaries “are going to use generative AI to their advantage to do things at scale, faster and stealthier,” said Meyers. On the other hand, less sophisticated eCriminals may get tripped up using AI. For example, one ransomware program can be defeated because the AI model used to create it did not implement the underlying cryptography correctly.

Targeting AI Tools

Attackers are also using targets’ AI tools as attack vectors. They are attempting to use targets’ AI for persistence, credential access and malware deployment. Organizations’ ongoing integration of AI tools will increase their attack surfaces. Still, “GenAI enhances threat actors’ operations rather than replacing existing attack methodologies,” notes the Report. For example, it is likely to make social engineering attacks and influence campaigns more convincing. On the other hand, it is “not likely to definitively benefit offensive or defensive operations,” it states.

Cross-Domain Attacks

Some attacks target identity systems, unmanaged endpoints and cloud environments, using a smaller footprint in each area, which makes them more difficult to detect. Attackers are also getting better at identifying and targeting unmanaged hosts on target networks.

Instead of trying to defeat endpoint detection and response technology (EDR), attackers try to steal an identity, log in to the cloud as a legitimate user and pivot from there, explained Meyers. Such attacks are becoming the norm. The most active cross-domain attackers have been Scattered Spider, Operator Panda (also known as Salt Typhoon) and Blockade Spider.

Vishing and Social Engineering

Attackers are using vishing and other social engineering attacks to obtain credentials and bypass traditional security measures. Vishing activity increased by 442% in 2024 and is likely to double in 2025.

Focus on Obtaining Credentials

Traditional security tools may not be able to distinguish between a legitimate user and one with stolen credentials. “Identity is absolutely the number one thing that organizations should be worried about right now, in terms of how [attackers] gain access,” stressed Meyers.

In some operations, an attacker calls a helpdesk, pretending to be a user, obtains a password reset and then bypasses multi-factor authentication (MFA), explained Meyers. Alternatively, an attacker may spam an employee’s email account, call the employee pretending to be from the help desk and persuade the employee to click on a (malicious) link.

Exfiltration in Under Five Minutes

These attacks unfold very quickly. For example, in a recent Scattered Spider attack, once the attacker secured a new password from a help desk, the attacker:

  • registered its own device for MFA within 30 seconds;
  • moved into Microsoft 365 applications within one minute;
  • deleted the email notifying the legitimate account holder of the new MFA enrollment within two minutes;
  • searched for credentials and network documentation on SharePoint within three minutes; and
  • exfiltrated bulk data on other users in less than five minutes.
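A timeline this compressed is one reason defenders rely on automated correlation rather than manual review. As a minimal, hypothetical illustration – the event types and schema below are assumptions, not any vendor's API – a detection rule might flag any account where a new MFA enrollment is followed within minutes by deletion of the enrollment notification:

```python
from datetime import datetime, timedelta

# Hypothetical detection sketch – the event names and schema are illustrative
# assumptions, not any vendor's API. Flag accounts where a new MFA device
# enrollment is followed within minutes by deletion of the notification email.
SUSPICIOUS_WINDOW = timedelta(minutes=5)

events = [
    {"user": "jdoe", "type": "mfa_device_enrolled",
     "time": datetime(2025, 6, 1, 14, 0, 30)},
    {"user": "jdoe", "type": "notification_email_deleted",
     "time": datetime(2025, 6, 1, 14, 2, 10)},
]

def find_suspicious_sequences(events):
    """Yield (user, enroll_time, delete_time) tuples inside the window."""
    last_enrollment = {}
    for event in sorted(events, key=lambda e: e["time"]):
        if event["type"] == "mfa_device_enrolled":
            last_enrollment[event["user"]] = event["time"]
        elif event["type"] == "notification_email_deleted":
            enrolled_at = last_enrollment.get(event["user"])
            if enrolled_at and event["time"] - enrolled_at <= SUSPICIOUS_WINDOW:
                yield event["user"], enrolled_at, event["time"]

for user, enrolled_at, deleted_at in find_suspicious_sequences(events):
    print(f"ALERT: {user} enrolled a new MFA device at {enrolled_at} "
          f"and deleted the notification at {deleted_at}")
```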

Cloud Attacks

Adversaries are increasingly targeting cloud environments, reported Meyers. Users’ misconfigurations can create vulnerabilities. China, in particular, uses innovative techniques to exploit the cloud, including harvesting credentials from within the cloud, using the cloud control plane to run commands on other cloud-hosted virtual machines, and leveraging cached credentials on virtual machines to pivot elsewhere.

Between July 2024 and July 2025, there was a 40% increase in cloud intrusions by China. Genesis Panda has been exploiting cloud services for tool deployment, command and control communications and exfiltration. It has displayed an understanding of cloud administration and has the ability to move laterally in cloud environments. In many cases, it gained access to the target’s cloud account, added local users to virtual machines, performed “host-based enumeration,” deployed malware and established persistence, according to the Report. The group also has been acting as an access broker for other China adversaries, noted Meyers.

Murky Panda, also known as Silk Typhoon, was very active in 2025, added Meyers. It exploits internet-facing devices for initial access, exploits zero-day vulnerabilities and leverages trusted relationships for compromises.

See “How to Select the Latest Cloud Security Tools and Platforms” (Aug. 21, 2024).

Endpoint Vulnerabilities and Long Game Adversaries

Attackers are still targeting endpoints. “We’ve seen that ‘long game’ adversaries execute slow and sustained attacks, steal sensitive data and prepare for future operations,” noted Meyers. They focus on unmanaged endpoints and seek to disable EDR. Often, multiple attackers focus on the same endpoint and/or network. They have deep knowledge of telecom companies – often installing backdoors that facilitate undetected entry. Glacial Panda, Operator Panda and Liminal Panda are all focusing on telecom.

Just over half of the vulnerabilities CrowdStrike observed in 2024 involved initial access, especially through internet-exposed applications.

Zero-Day Attacks

Zero-day attacks are often initiated by nation-states, with eCrime attackers following once the vulnerability is revealed, explained Meyers. The attacks focus on devices that do not have modern security tools, including VPN concentrators, firewalls and routers. Organizations must ensure patches are comprehensive or attackers will find workarounds. Attackers also use other tools so, even with a zero-day attack, a target with cross-domain visibility might be able to quickly detect and respond to the attack.

See “Ten Cybersecurity Resolutions for 2024” (Jan. 10, 2024).

Addressing Key Vulnerabilities and Defending Against Attacks

There are both general and threat-specific ways for organizations to address vulnerabilities, according to Meyers and the Report.

General Principles

  • Secure Identities:
    • Deploy identity threat detection and response tools.
    • Use MFA everywhere.
    • Prohibit users from changing MFA enrollments.
    • Avoid SMS-based MFA, which is vulnerable to SIM swapping.
  • Use Adversary-Driven Patching: Focusing on the severity or criticality of a vulnerability may be insufficient because some attackers will chain a low-severity vulnerability with something else, like a local privilege escalation. Organizations should consult the known exploited vulnerabilities list maintained by the Cybersecurity and Infrastructure Security Agency and seek to patch all known vulnerabilities (a minimal triage sketch follows this list).
  • Know the Attackers: Take an intelligence-driven approach, integrating threat intelligence into security workflows.
  • Improve Awareness: Conduct awareness training and tabletop exercises.
  • Leverage AI: Leverage AI-powered security solutions.

See “Strengthening Cyber Defenses in an Ever-Evolving Threat Landscape” (Jun. 4, 2025).

Cloud Attacks

To defend against cloud attacks, organizations should:

  • ensure cloud workload protection and cloud native application protection are turned on;
  • use containerized environments;
  • have unified security posture management and visibility into cloud settings;
  • ensure rigorous and timely patch management;
  • monitor legacy assets and those at the end of their service lives;
  • monitor for malicious modifications to system binaries;
  • monitor SSH connections for anomalous activity (illustrated in the sketch after this list);
  • enforce network access controls for servers; and
  • require and enforce complex passwords.
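On the SSH-monitoring bullet, a simple baseline-and-flag heuristic illustrates the idea. The login records below are a hypothetical simplification; a real deployment would parse auth logs or EDR telemetry.

```python
from collections import defaultdict

# Hypothetical baseline-and-flag heuristic for one bullet above: flag SSH
# logins from source IPs never before seen for that user. The (user, ip)
# tuples are a simplification of what auth logs or EDR telemetry provide.
ssh_logins = [
    ("deploy", "10.0.4.7"),
    ("deploy", "10.0.4.7"),
    ("deploy", "203.0.113.50"),  # never-before-seen source for this user
]

seen_sources = defaultdict(set)
for user, source_ip in ssh_logins:
    if seen_sources[user] and source_ip not in seen_sources[user]:
        print(f"ANOMALY: SSH login for {user} from new source {source_ip}")
    seen_sources[user].add(source_ip)
```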

See “Restricting Super Users and Zombie IDs to Increase Cloud Security” (Jul. 31, 2024); and “Six Steps for Improving Cloud Security From CSRB’s Report on Microsoft Intrusion” (Jun. 12, 2024).

AI-Enhanced Attacks

To defend against AI-enhanced attacks, organizations should:

  • implement enhanced identity verification during hiring;
  • use real-time deepfake challenges;
  • improve remote access security controls;
  • validate all connected peripheral devices;
  • monitor communications for signs of aberrant translation activities and concurrent administration of multiple accounts; and
  • conduct appropriate training.

See “From CEO Deepfakes to AI Slop, AI Incident Tracking Ramps Up” (Jul. 30, 2025); and “Emerging Cyber Threats and Defenses” (Jan. 24, 2024).

Cross-Domain Attacks

Organizations should seek to eliminate cross-domain visibility gaps. They should expand threat hunting to encompass additional data sources, especially devices beyond the scope of traditional endpoint coverage, including routers, switches, VPN devices and firewalls. To that end, security information and event management (SIEM) tools can ingest data from multiple sources around the environment – including identity threat detection and response tools, the cloud control plane and endpoints – and help identify and defend against attacks.

See “Typhoon Threats and the Call From FBI and NSA for Public-Private Collaboration” (Aug. 20, 2025).

Social Engineering

To defend against social engineering attacks, organizations should:

  • enhance identity protection, as indicated above;
  • implement:
    • continuous monitoring for authentication anomalies, administrative changes, unusual network traffic and other suspicious usage; and
    • comprehensive logging and behavioral analytics;
  • enhance infrastructure security; and
  • improve incident response capabilities with backups, incident response exercises, periodic readiness assessments and training.

See “Go Phish: Employee Training Key to Fighting Social Engineering Attacks” (Aug. 9, 2023); and “Checklist for Building an Identity-Centric Cybersecurity Framework” (Nov. 3, 2021).

Endpoint Vulnerabilities

CrowdStrike recommends a “defense in depth” approach to managing endpoint vulnerability threats, including:

  • deploying extended detection and response capabilities to ensure wide network visibility and coverage;
  • maintaining an enterprise-wide asset inventory;
  • conducting ongoing vulnerability scanning and exposure assessments;
  • monitoring externally exposed assets for signs of compromise;
  • maintaining rigorous patch management; and
  • enforcing script execution policies.

See “Assessing and Managing AI’s Transformation of Cybersecurity in 2025” (Mar. 19, 2025); and “Getting Board Buy-In for Edge Cybersecurity Initiatives Post COVID19” (Jul. 8, 2020).

Training

Four Tips for Effective Privacy Training


It is not enough for small and medium-sized businesses and enterprises to simply offer privacy training to employees; the organization’s leaders must also ensure that the training succeeds. This article, synthesizing insights shared in an August 2025 Privacy Ref webinar, details four recommended actions for effective privacy training: (1) setting goals; (2) making content engaging; (3) avoiding jargon; and (4) reviewing and improving.

See our three-part series “Rethinking Click-Through Training”: “The Pluses and Minuses” (Mar. 26, 2025), “Maximize Effectiveness With Customization” (Apr. 16, 2025), and “Integration Into a Comprehensive Training Program” (May 7, 2025).

Set Training Goals

The first step in developing a training program is to establish its goal(s). In most cases, a training program is intended to change the behavior of a person or a group of people, or to inform them of a policy revision or an event of which they should be aware, Privacy Ref senior privacy consultant Ben Siegel said. For example, when training employees not to open a phishing email, the goals might be to inform trainees of the risks posed by malicious emails and to ensure that they know how to identify and report malicious emails.

Goals can be achieved in different ways. Getting trainees to conduct privacy impact assessments when proposing a new product or policy, for instance, can be achieved by having a subject matter expert teach them directly or distributing a document on the topic, Siegel suggested.

See our three-part guide to cybersecurity training: “Program Hallmarks and Whom to Train” (Oct. 16, 2019), “What to Cover and Implementation Strategies” (Oct. 23, 2019), and “Assessing Effectiveness and Avoiding Pitfalls” (Oct. 30, 2019).

Make Training Engaging

Any kind of training is, for the most part, boring, because it is generally meant to be clear and concise, not fun, Siegel observed. To counter the boredom factor, the privacy trainer should work to keep the audience interested and focused on the training. Sending out an anti-phishing email, for example, is “going to be boring and hard to digest, because someone has to sit there, read that email, and understand what’s going on when they could be doing something else,” he explained. An interactive video presentation that is “quick and easy for people to ingest” might garner more engagement.

There are downsides to taking engagement efforts too far, however, and trying to make training fun. For instance, there could be inappropriate messaging and distractions from the original purpose of the training program, Siegel cautioned. Privacy training is like CPR training, he elaborated. While it should be engaging, protecting privacy and saving lives are serious matters, and trying to make them fun could inappropriately distract from the intended result. In the privacy context, employees should understand that if the company loses a customer’s data, the customer could face harm and there would also be potential downstream consequences. Another disadvantage to trying to make training fun is the additional cost in time or money, particularly with interactive training that involves graphics and videos. Engagement is not about having fun, he opined, but rather about getting people involved and interested.

See “How Ericsson Made Compliance Training Must-See TV” (Apr. 23, 2025).

Conduct Demonstrations

“First and foremost,” engagement can be achieved through demonstrations, Siegel suggested. It is similar to teaching someone how to play a board game. Have them play the game and “get them involved and have them demonstrate the things we’re talking about slowly over time,” he added.

Provide Real World Examples

Another way to promote engagement is to use real world examples, Siegel continued. Privacy trainers should not just say what the law is and how it applies, because that is not an effective way to help someone remember and understand the privacy training. A trainer talking about the GDPR and records of processing activities, for example, should not say something like, “Under Article 30, you’re going to include the following information when you are a controller of the data.” Rather, a trainer should say something more practical and easier to understand, such as, “We maintain a record through a spreadsheet where we put down these pieces of information and it’s updated regularly. If a regulator comes in, we can provide them with a spreadsheet. And here’s what that spreadsheet looks like.”

Trainers also should use real world scenarios that the trainees are likely to encounter to make the training more applicable to them, Siegel advised. Trainees are more likely to pay attention if they understand that the topic is relevant to their job. To come up with effective real world examples, trainers should empathize with the trainees and imagine what it is like to be in their shoes, he offered. That will help deliver examples that will resonate with trainees, which will allow them to better remember the information.

See “As Email Scams Surge, Training Lessons From 115 Million Phishing Messages” (Mar. 30, 2022).

Avoid Jargon

Jargon – the industry-based or technical language that is used by professionals or people deeply involved in a certain field – is an enemy of engagement, Siegel said. Using privacy jargon with someone unfamiliar with it, such as an employee working in marketing or customer service, will make them stop paying attention because they are not able to understand what is being said. Trainers should ask themselves whether using a particular jargon term is necessary and if they could describe the concept with simpler and more understandable language. For example, instead of talking about “data subject access requests” and “the right to erasure,” a trainer could say, “when a person calls us and asks us to delete their data.”

Review and Improve Results

The most important aspect of making privacy training effective is to ensure the trainer has some method of reviewing the training to determine if it has achieved the set goals, Siegel said.

Use Metrics

If the goal of a training is to reduce the number of phishing emails being opened, for example, that is what should be measured, Siegel advised. The trainer can look at the percentage of employees opening phishing emails or the percentage of phishing emails being opened.

If a simulated phishing email is opened by too many people, say 60 percent, a goal could be getting that number down by at least 20 percent. A metric lets trainers know what success looks like and the minimum level that needs to be reached to consider the training effective. If the company or enterprise fails to reach the target, the trainer should go back and tweak the program.
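As a trivial worked example of that measurement loop – with made-up numbers, and reading the goal as a 20-percentage-point drop in the open rate:

```python
# Worked example of the metric logic above, with made-up numbers and the goal
# read as a 20-percentage-point drop in the simulated-phish open rate.
baseline_open_rate = 0.60    # share of employees who opened the test phish
post_training_rate = 0.35    # measured after the training cycle

target_reduction = 0.20
actual_reduction = baseline_open_rate - post_training_rate

if actual_reduction >= target_reduction:
    print(f"Training effective: open rate fell {actual_reduction:.0%}")
else:
    print(f"Revisit the program: only a {actual_reduction:.0%} drop")
```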

Gather Feedback

The trainer should also get feedback from people participating in the training, advised Siegel. The following are examples of questions to ask trainees.

  • What was good?
  • What was meaningful?
  • Was it engaging enough?
  • Were there areas that were confusing?
  • Was there jargon that you did not understand, or concepts that need to be explained more fully?
  • What can we do to make it better?

Trainers should keep feedback that is not helpful or actionable – e.g., “it was boring” – “on the back burner,” Siegel suggested. However, they should prioritize responding to feedback indicating that a trainee had not understood a particular concept or, in a phishing training context, that the individual was unsure of the meaning of the expression “hovering over the link.”

See “Cargill Compliance Director Discusses Putting Training Data to Work” (Feb. 24, 2021).

Leverage Cross-Departmental Help

In efforts to improve the outcome, trainers should work with other internal groups whose job it is to make privacy training successful, such as HR or other professional development groups, Siegel advised. Those with experience implementing different training programs “can help you to make changes available quickly,” he said.

See “Go Phish: Employee Training Key to Fighting Social Engineering Attacks” (Aug. 9, 2023).

People Moves

JFrog Welcomes New Assistant GC for Privacy, Cyber and AI


Rick Borden has joined software platform JFrog as senior director and assistant GC for privacy, cybersecurity, AI and patents. He arrives from Frankfurt Kurnit Klein + Selz.

Borden has extensive experience guiding fintech, insurtech, software as a service, cloud computing and other tech-forward companies on technology transactions and privacy and data security issues, including compliance with regulations such as the Gramm-Leach-Bliley Act, New York State Department of Financial Services’ Cybersecurity and AI Regulations, SEC cybersecurity rules, and emerging state privacy laws and associated requirements.

Borden will help JFrog manage legal and security risks associated with providing a software supply chain platform for enterprises. The company offers management and oversight tools for AI and machine learning models, security engineering, IoT implementations and general software development, marketing itself to companies in many industries, including those heavily regulated such as finance, healthcare and gaming.

Prior to joining JFrog, Borden was a partner in the data strategy, privacy and security practice at Frankfurt Kurnit Klein + Selz. He also previously held senior legal roles in cybersecurity, privacy and technology at The Hartford, Bank of America and Depository Trust & Clearing Corporation.

For insights from Borden, see “Fifty-Three Regulators Raise Cyber Expectations With Multistate Breach Settlement” (Jan. 22, 2025); “What Regulated Companies Need to Know About the SEC’s Final Amendments to Regulation S‑P” (Jul. 24, 2024); and “NYDFS Changes Its Cybersecurity Regulation Requirements Through Enforcement – Again” (Jul. 19, 2023).


People Moves

Kessler Joins Klaviyo As Privacy, AI and Regulatory Counsel


Klaviyo, a marketing automation platform, has welcomed Kyle Kessler as privacy, AI and regulatory counsel. She will lead the company’s legal privacy and data strategy.

Kessler will help guide Klaviyo through compliance challenges arising from the expansion of its AI suite to include predictive analytics and its offering of an AI-driven personalized shopping agent. She will counsel on the company’s adherence to a global array of data protection, AI and privacy laws.

Kessler has extensive experience building privacy programs and managing data protection compliance, incident response, consumer protection and emerging regulatory challenges. She has advised multinational corporations and fast-growing startups on complex regulatory matters.

Kessler arrives from Womble Bond Dickinson, where she was a partner in the firm’s privacy, cybersecurity and AI practice. Prior to that, she served as counsel at Orrick, advising on complex privacy, cybersecurity and AI-related challenges.