Visibility into AI’s downside is on the upswing in corporations’ annual SEC filings. Seventy-six percent of S&P 500 companies added or expanded descriptions of AI as a material risk in their 2025 annual disclosure filings, according to an Autonomy Institute AI risk disclosure report (Report).
Three years into the generative AI revolution, AI risk disclosures have emerged as a secondary statement that corporations issue to counterbalance their bolder declarations that they have embraced AI. “Companies risk being the outlier if not mentioning AI in filings,” Goodwin partner Kaitlin Betancourt told the Cybersecurity Law Report.
The top AI concerns that the 500 companies disclosed were cyber threats, competitive disruption, bias in inputs and outputs, and data leakage. Not all of these risks were framed as hypothetical; some disclosures allude to prior trouble. “Companies specifically mentioning incidents include Salesforce, Gen Digital, Intel and Visa. Those examples include updated language and a more active statement” about AI’s role in attacks than in the prior year’s filing, Report author Sean Greaves told the Cybersecurity Law Report.
With insights from Greaves and Betancourt, an SEC regulatory expert, this article examines the risks that companies have disclosed, the language used and pitfalls around AI disclosures. It also highlights recommended actions for companies.
See “Guide to AI Risk Assessments” (Jun. 18, 2025).
Widespread Recognition of Concrete and Broad Liabilities From AI
In annual reports filed through April 2025, 380 of the S&P 500 companies augmented or added mentions of AI to their risk factors. These risk acknowledgments suggest a rising awareness by corporate leaders that AI-enabled opportunities are part of a technological upheaval that carries perils.
Concrete operational challenges from AI that companies highlighted include cyberattacks, regulatory compliance, data governance, third-party dependencies and access to energy. Business challenges cited were competitive declines, unprofitable investment in AI, and disruptions to the companies’ product or service delivery.
One purpose for publishing the Report was to “increase the capability for companies and people to truly assess the level of risk quite clearly and understand how others are experiencing it,” Greaves said.
To prepare the Report, Greaves used three different large language models (LLMs) to filter annual reports filed for fiscal years 2023 and 2024, then analyzed the resulting dataset of Form 10‑K disclosures. The Autonomy Institute also built a web tool that lets the public browse and filter the reviewed risk disclosures.
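For a sense of how such a filtering pass can work – the sketch below is illustrative only, as the Report does not publish its pipeline, and the model name, prompt and category labels are assumptions – a single LLM can be asked to label each risk-factor paragraph, with the Report’s approach of running three different models serving as a cross-check:

```python
# Illustrative sketch only -- NOT the Report's actual pipeline.
# Assumes: risk-factor text has already been extracted from each Form 10-K
# (e.g., Item 1A), and an OpenAI API key is set in the environment.
from openai import OpenAI

client = OpenAI()

CATEGORIES = [  # hypothetical labels loosely mirroring the Report's themes
    "cyber threats", "competitive disruption", "bias", "data leakage",
    "regulation", "disillusionment", "third-party dependency", "deepfakes",
    "energy demand", "job displacement", "litigation", "none",
]

def classify_risk_factor(paragraph: str) -> str:
    """Ask one LLM whether a risk-factor paragraph discusses AI, and which theme."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; the Report's three LLMs are unnamed
        messages=[
            {"role": "system",
             "content": "You label SEC 10-K risk-factor paragraphs. "
                        f"Reply with exactly one label from: {', '.join(CATEGORIES)}."},
            {"role": "user", "content": paragraph},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# Usage: run over every paragraph, keep non-"none" labels, then repeat the
# pass with other models and keep only labels on which the models agree.
```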
The Report’s identification of 11 types of AI risks in Form 10‑Ks spotlights a key pitfall for companies’ disclosures. The frenetic push to use AI throughout companies means their employees may be excitedly speaking about different aspects of AI at conferences and in press releases, possibly discussing a material risk that is absent from their organization’s more reserved filings. Omitting a material risk that has been discussed publicly elsewhere has long been an SEC enforcement staple around disclosures, Betancourt highlighted.
See “A Framework for Materiality Determinations Under SEC’s Cyber Incident Disclosure Rules” (Jul. 10, 2024).
Four Top Concerns
The risks below are discussed in order of how frequently they are mentioned in SEC disclosures, according to the Report. All disclosure quotes referenced appeared in companies’ Form 10‑Ks for fiscal year 2024.
AI-Aided Cyber Threats
Nearly two in five companies (193) cited adversaries’ use of AI to commit fraud, breach security perimeters or manipulate markets. “Threat actors are using these technologies to create new sophisticated attack methods that are increasingly automated, targeted and coordinated, and more difficult to defend against,” Salesforce noted in its 10‑K.
Across the business world, “the cyber threat is really front and center” for executive teams after a wave of new regulations and headlines about hacking, Betancourt observed. “I’ve heard AI described as an arms race, but I see two races. One is the competitive race among businesses. The other race is good versus evil, with nefarious actors absolutely capitalizing on AI” for more ways to attack, she added.
Some disclosures include details on the multiple ways adversaries can leverage AI. Malicious actors may use the tech to “develop new hacking tools and attack vectors, exploit vulnerabilities, obscure their activities, and increase the difficulty of threat attribution,” Accenture said in its filing.
Airbnb highlighted in its disclosure the challenges AI introduces for longstanding cyber hygiene measures. Machine learning might crack its encryption or hurt the company’s ability “to detect, investigate, contain or recover” from attacks. American Express noted that AI could be deployed to disrupt password management.
Other filers, like Analog Devices, included the broad caveat that AI might expand the types of attacks in unforeseen ways.
See “Assessing and Managing AI’s Transformation of Cybersecurity in 2025” (Mar. 19, 2025).
Competitive Disruption
About one in three companies (168) stated that AI poses competitive and financial risks. They commonly mentioned AI as one of the disruptive technologies that they could fail to keep up with or that would benefit the market share of faster adopters.
For example, 3M cautioned that demand for its product could be impacted by customers who prefer competitors that take more advantage of AI, “machine learning, block-chain, expanded analytics, and other enhanced learnings from increasing volumes of available data.”
See our two-part series on the SEC charging four companies for misleading cyber incident disclosures: “New Expectations?” (Nov. 20, 2024), and “Lessons on Contents and Procedures” (Dec. 4, 2024).
Bias and Unfairness
The number of companies citing the risk of harmful bias from AI use in their disclosures more than doubled between fiscal years 2023 and 2024, from 70 to 146. Match Group, for a representative example, noted that training datasets “may be overbroad, insufficient, contain biased information, or infringe third parties’ rights.” Others mentioned “unintended outcomes” and “lowered interpretability” as risks.
Palantir warned in its filing that its employees or users might use “inappropriate or controversial data practices” that might “impair the acceptance of AI solutions.”
“Some broader ethical concerns are baked into companies’ discussions of, or their references to, bias,” Greaves observed.
See “Navigating Ever-Increasing State AI Laws and Regulations” (Jan. 15, 2025).
Data Leakage and IP Risk
One in five (95) companies warned of confidential data or IP exposure from employees using third-party chatbots like those of OpenAI, Anthropic or Microsoft. These providers could use the company’s proprietary data and sensitive prompts to retrain their services, which would expose trade secrets or customer information, many of the filings said.
Some companies also pointed to possible impediments to AI use, such as the prospect that less data will be available to train the tools in the future because of transnational restrictions on data transfers.
Greaves acknowledged that his study’s method may have missed some mentions of AI-related data problems, “as we only really looked at the risk factors section of the annual report, not other parts where [AI risks] sometimes can be mentioned.”
See “From CEO Deepfakes to AI Slop, AI Incident Tracking Ramps Up” (Jul. 30, 2025).
Less-Cited Concerns
Regulatory Pressure
Filings mentioning the E.U. AI Act more than tripled, from 21 in 2023 to 67 in 2024, underscoring that multinational legal risk is a common concern for many S&P 500 corporations. Stated concerns include penalties of up to seven percent of global revenue.
Most mentions of the E.U. AI Act are high-level, quick acknowledgements, Greaves pointed out, as little enforcement has surfaced outside of sectors like autonomous driving and medical technology. “Some of the mentions seem to be in response to the U.S. legislation developments,” he said.
See our three-part series answering top questions about the E.U. AI Act: “Reach and Unique Requirements” (Apr. 24, 2024), “Risk Tiers and Big-Player Transparency” (May 1, 2024), and “Practical Steps and What’s Next” (May 8, 2024).
Overinvestment, Poor Results and “Disillusionment”
Fifty-seven companies disclosed that their AI programs may not deliver operational benefits or recoup investments. Some cautioned that premature deployment might set the business back. The Report groups these risk factors as “disillusionment.”
More statements about poor returns may show up in 2026 disclosures. Media reports in August 2025 prompted weeks of chatter about an MIT study that concluded that 95 percent of companies had received “zero return” on their AI investments, based on interviews with executives and a sample of 52 businesses.
Third-Party Dependency
“Rapid advancements in technology could quickly render our existing LLM obsolete, requiring the licensing and training of a replacement LLM at significant cost,” Paycom noted in its filing. One in 10 companies (56) warned of issues from relying on third-party AI model providers. Along with obsolescence, companies mentioned contractual opacity, disruptive model updates and the inability to audit model outputs.
Third-party risks often are “understated because of the concentration of certain key providers. There may not be explainability and transparency,” nor contractual flexibility, Betancourt noted. Interdependencies and connections across the corporate software environment also exacerbate these risks.
Some companies’ digital infrastructure may now be entangled with quick-moving, opaque AI startups, adding volatility risks.
Several companies also mentioned that their vendors were targets of cyber threats.
See “Managing Third-Party AI Risk” (Aug. 20, 2025).
Vulnerability to Deepfakes
Mentions of deepfakes more than doubled from 2023 to 2024, jumping from 16 to 40 citations, with many companies expressing concern over impersonation of executives. More media coverage of synthetic media incidents may have driven some statements, but “companies have started to be more specific about attacks that they have faced,” Greaves said, noting that eBay stated in its disclosures that someone tried to impersonate the voice of one of its senior leaders.
Marsh McLennan, which has disclosed AI risks for several years, noted in its filing that “the barrier for entry has gone down significantly” for using GPTs to fake video and voices, Greaves pointed out.
Deepfakes’ potential harm extends beyond executive impersonation. Fox Corporation, for example, alerted investors in its filing to “fake news impacting stock prices and manipulated audio/video targeting brand trust or executive credibility.”
See “Examining the Deepfake Landscape and Measures for Combatting Scams” (Sep. 3, 2025).
Spiking Energy Demands
One in three utilities firms (10 of 30) referenced the strain that AI-related energy demands place on power grids and long-term infrastructure planning, citing data centers’ electricity draws. They noted that the strain from AI poses operational, regulatory and capital allocation risks.
Job Displacement
Despite extended public debate about AI eliminating jobs for humans, only six companies mentioned labor impacts or workforce transformation. Essex Property Trust warned in its disclosures that widening use of AI to replace workers could depress employment rates and, in turn, its prospects for attracting tenants. Accenture noted that AI-enabled solutions could reduce demand for its consultants.
Companies could be underestimating the impact of one of the most socially visible risks, “indicating a disconnect between public discourse and companies’ risk discussions,” Greaves posited. Failing to acknowledge AI’s destabilizing labor effects might saddle companies with reputational blowback and draw the attention of regulators, consequences that might themselves count as a material risk. Adobe is an exception, as it cautions in its filing that AI’s potential to “modify workforce needs” could reduce demand for its products, services and solutions, as could “negative publicity about AI.”
See our two-part series on New York City’s law requiring AI bias audits: “What Five Companies Published – and How Others Avoid It” (Sep. 13, 2023), and “A Best Practice Guide, From Choosing an Auditor to Avoiding Enforcement” (Sep. 20, 2023).
Litigation
Only two companies mentioned existing legal proceedings. Cigna stated that it faces “litigation claiming that we improperly used AI in the claims evaluation process.” Ford disclosed that it has responded to inquiries related to regulatory investigations.
Planning Steps for Companies’ AI Risk Disclosures
With companies’ accelerating reliance on AI and the related scrutiny from the SEC and shareholder plaintiffs, the measures below can help companies navigate AI-use disclosures and related risks.
Warn Employees About Conflicting Statements
Many companies lack routines and coordination to monitor whether they are making clashing statements about their AI use. “The infrastructure to manage AI adoptions holistically and support disclosure statements can be difficult to put in place,” Betancourt cautioned. Ensuring consistency in disclosures requires painstaking collaboration across departments – but corporate managers face “tremendous competitive pressure now to say that the company is at the forefront of using AI to make processes more efficient and cheaper for consumers,” she observed. This pressure could lead business teams to make a public statement about AI that does not align with the legal team’s understanding of the company’s AI use or with its disclosures.
Disclosures should be built into a comprehensive AI governance program. “If the proper infrastructure is put around AI usage and there are processes and checks and balances, then issues are more likely to be flagged for a risk factor and fleshed out,” Betancourt explained.
First steps to establishing an underlying AI governance program include identifying the AI uses, determining the company’s risk tolerances and creating an AI committee that receives updates on companywide AI implementations on a regular cadence. Committee participants from the legal and compliance teams then would be responsible for briefing the regulatory disclosure specialists on the company’s AI developments, to ensure specific and accurate descriptions, Betancourt recommended. AI governance roles vary and other arrangements may work better for some companies, she added.
Recruit Leadership Buy-In
Company executives should periodically highlight internally the company’s responsible AI strategy, to counter the ambient pressure to derive benefit from AI. The executives also should vocally support a holistic governance program for responsible AI, Betancourt advised.
Executives should go beyond brief praise of the AI governance program and its leaders by stressing that the company has devoted resources to it.
See “Benchmarking AI Governance Practices and Challenges” (May 7, 2025).
Link AI to Enterprise Risk Management
AI governance is in its early stages at many companies. Plenty of companies “do not have their ducks in order,” lacking a governance process that “rolls up to enterprise risk management,” Betancourt observed.
Amid AI’s many distracting novelties, business managers may not prioritize ensuring that the company’s risk managers are evaluating AI for legal, operational and reputational risks. To make those connections, companies should build internal AI assurance and auditing capabilities. Risk managers can specify that AI scenario testing occur within the company’s risk framework and include AI in the company’s risk registers, as sketched below.
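As a minimal sketch of what that linkage could look like in practice – the field names, owners and scoring scale below are illustrative assumptions, not a standard ERM schema – an AI entry in a risk register can be captured as structured data alongside its scenario tests:

```python
# Illustrative sketch of an AI entry in an enterprise risk register.
# All field names, owners and scoring scales are assumptions for
# demonstration -- adapt to the company's existing ERM schema.
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    risk_id: str
    title: str
    description: str
    category: str                 # e.g., legal / operational / reputational
    likelihood: int               # 1 (rare) .. 5 (almost certain)
    impact: int                   # 1 (minor) .. 5 (severe)
    owner: str                    # accountable function, not an individual
    scenario_tests: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

ai_data_leakage = RiskRegisterEntry(
    risk_id="AI-004",  # hypothetical identifier
    title="Confidential data exposure via third-party AI tools",
    description="Employees paste trade secrets or customer data into "
                "external chatbots whose providers may retain or train on inputs.",
    category="operational",
    likelihood=4,
    impact=4,
    owner="CISO office",
    scenario_tests=[
        "Tabletop: proprietary code pasted into an unapproved chatbot",
        "Red team: prompt injection against a customer-facing AI feature",
    ],
)

if ai_data_leakage.score >= 12:  # example threshold for ERM escalation
    print(f"Escalate {ai_data_leakage.risk_id} to the ERM committee")
```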
Uncover AI Vendor Risks
To help ensure AI risk disclosures are accurate, CISOs, procurement teams and CIOs should audit the company’s exposure to third-party AI vendors and data flows, with attention to customer data and confidential IP.
To assist the company in keeping informed about risks that may need to be disclosed, procurement teams can insist that AI suppliers divulge their efforts to manage their own vendor lock-in and other risks.
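A first pass at such an audit can be as simple as flagging AI suppliers that receive customer data or confidential IP under contracts permitting training on inputs. The sketch below is illustrative only; the inventory fields, vendor names and escalation rule are assumptions, not a standard:

```python
# Illustrative sketch of a first-pass AI vendor exposure audit.
# Field names and flags are assumptions for demonstration; map them to
# the company's actual procurement and data-flow inventories.
from dataclasses import dataclass

@dataclass
class VendorRecord:
    name: str
    uses_ai: bool
    receives_customer_data: bool
    receives_confidential_ip: bool
    contract_permits_training_on_inputs: bool
    lock_in_mitigation_disclosed: bool  # did the supplier explain its own lock-in risks?

def flag_for_review(vendors: list[VendorRecord]) -> list[VendorRecord]:
    """Return AI vendors whose data flows or contract terms warrant disclosure review."""
    return [
        v for v in vendors
        if v.uses_ai and (
            ((v.receives_customer_data or v.receives_confidential_ip)
             and v.contract_permits_training_on_inputs)
            or not v.lock_in_mitigation_disclosed
        )
    ]

inventory = [
    VendorRecord("ChatVendorCo", True, True, False, True, False),  # hypothetical
    VendorRecord("PayrollCo", False, True, False, False, True),    # hypothetical
]
for vendor in flag_for_review(inventory):
    print(f"Escalate to disclosure review: {vendor.name}")
```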
See “Pain Points and New Demands in AI Contracts” (Jun. 18, 2025).
Be Realistic About Hypothetical Risks
Companies need to be realistic about anything they present as a hypothetical risk in their disclosures. The SEC has long penalized companies that experience an incident but frame it as a hypothetical risk in a disclosure, Betancourt warned. The lawyers who review a disclosure before filing usually have “an acute awareness that any hypothetical risk really has to be hypothetical,” but incidents may be difficult to suss out until a company’s AI use is well-governed.
Consider Providing More Details in 2026
Companies’ 2025 disclosures vary in approaches and level of detail, Greaves noted. Some offer a revelatory detail about the company’s AI strategy in one concise sentence. “Other times, there is seemingly a lot of waffle,” he reflected.
A company’s size may shape the risk discussions, Greaves observed. With the S&P 500, “most of the discussion is quite bland. It tends to be with smaller companies where the language can be more expressive or flowery,” he reported, based on his analysis of U.K. filings.
Broad, high-level disclosures are not prudent, but vague wording is understandable. “Companies may be taking a measured approach because they just don’t know what is going to happen with AI,” Betancourt said.
