On December 19, 2025, New York Governor Kathy Hochul signed the Responsible Artificial Intelligence Safety and Education (RAISE) Act into law. She signed the RAISE Act (Signed Act) subject to an agreement with the New York State Legislature to adopt chapter amendments clarifying several of its core requirements. Those amendments would substantially align the RAISE Act with California’s Transparency in Frontier Artificial Intelligence Act (TFAIA). If, as anticipated, the amendments are enacted, two of the largest states in the U.S. will have largely converged on a regulatory framework for frontier developers and models.
In light of the announced amendments, this article analyzes the RAISE Act’s obligations as reflected in the negotiated framework described in the governor’s approval memorandum and in new legislation introduced into the state Assembly (Assembly Bill A9449) and the state Senate (Senate Bill S8828) (collectively, Amended Act), rather than the Signed Act’s unamended text.
With insights from experts at Davis Wright Tremaine, DLA Piper, Eversheds Sutherland, Gibson Dunn and Morrison Foerster, this article discusses the principal features of the RAISE Act, as set forth in the proposed amendment, and some distinctions from the TFAIA. It also offers compliance measures for covered companies to consider.
See our two-part series on California’s landmark AI transparency law: “Covered Entities, Reporting Requirements and Penalties” (Oct. 29, 2025), and “Compliance Considerations” (Nov. 5, 2025).
Principal Features of the Amended Act
The RAISE Act is intended to address public concerns about the potential for catastrophic risk posed by AI, Marian Waldmann Agarwal, a partner at Morrison Foerster, told the Cybersecurity Law Report. The statute aims to promote transparency around safety, as well as practices that prevent the release of harmful AI products. Overall, the statute does a “pretty good job” of balancing innovation and consumer protection, she said.
With the passage of the RAISE Act, for the first time there are “bicoastal pillars for what may become a de facto federal standard for AI” frontier models, DLA Piper partner Danny Tobey told the Cybersecurity Law Report. It could be the “bones of a consensus approach” that may eventually be reflected in federal legislation, he added.
Covered Entities
The Amended Act applies to developers of “frontier models” that are “developed, deployed, or operating in whole or in part” in New York State. “Frontier models” are foundation AI models trained using greater than 10^26 integer or floating-point operations. The law distinguishes between “frontier developers” and “large frontier developers” that have more than $500 million in annual gross revenues, with greater reporting obligations imposed on the latter.
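For context on what the 10^26 threshold implies in practice, the minimal sketch below (in Python) uses the widely cited back-of-the-envelope approximation that training compute is roughly 6 × parameters × training tokens. The model sizes and token counts shown are hypothetical illustrations, not figures drawn from the Act, and the approximation is a heuristic rather than a statutory definition.

```python
# Rough back-of-the-envelope check against the Amended Act's training-compute threshold.
# Assumes the common heuristic: training FLOPs ~ 6 x parameters x training tokens.
# Model sizes and token counts below are hypothetical, for illustration only.

FRONTIER_THRESHOLD_FLOPS = 1e26  # "frontier model" compute threshold under the Amended Act


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense model (6ND heuristic)."""
    return 6 * parameters * training_tokens


examples = {
    "hypothetical 70B-parameter model, 15T tokens": (70e9, 15e12),
    "hypothetical 1T-parameter model, 30T tokens": (1e12, 30e12),
}

for label, (params, tokens) in examples.items():
    flops = estimated_training_flops(params, tokens)
    status = "above" if flops > FRONTIER_THRESHOLD_FLOPS else "below"
    print(f"{label}: ~{flops:.1e} FLOPs -> {status} the 10^26 threshold")
```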
Focus on Catastrophic Risk
The Amended Act is directed at “catastrophic risk.” As in the TFAIA, “catastrophic risk” is defined as a “foreseeable and material” risk that a developer’s “development, storage, use, or deployment” of a frontier model will “materially” contribute to the death of, or serious injury to, more than 50 people. It is also a catastrophic risk if a frontier model could materially contribute to more than $1 billion in property damage in a single incident resulting from the model:
- providing “expert-level assistance” in creating or releasing a chemical, biological, radiological or nuclear weapon;
- engaging in conduct with no “meaningful human oversight, intervention or supervision” that constitutes a cyberattack or would constitute the crime of murder, assault, extortion or theft if committed by a human; or
- evading user control.
The terms “foreseeable” and “material” give companies flexibility and the “ability to eliminate risks” they can plausibly deem immaterial or unforeseeable, emphasized Michael Borgia, a partner at Davis Wright Tremaine. Companies would not have to “catastrophize” about AI scenarios but instead could focus on realistic outcomes based on the facts before them, he stated.
Transparency Requirements for Developers of Frontier Models
Section 1421 of the Amended Act would impose various transparency obligations on developers of frontier models, and the New York Attorney General (AG) would have recourse to sue for violations of those obligations.
Notably, the Signed Act contains obligations beyond reporting, requiring frontier developers to implement safeguards to prevent the unreasonable risk of critical harm and not to deploy any frontier models prior to implementing such safeguards, Michael Atleson, of counsel at DLA Piper, pointed out. Those obligations are “fundamentally different” than transparency requirements that simply entail describing the risks, he explained.
Transparency Report
Under the Amended Act, all frontier developers are required – prior to deploying a frontier model – to prepare and publish on their website a “transparency report” providing:
- the model’s release date;
- the languages and modalities supported by the model;
- its intended uses, including any restrictions or conditions on such uses; and
- a method to contact the developer.
For large frontier developers, the transparency report would also have to include the results of any assessments of “catastrophic risks” conducted under the developer’s frontier AI framework, discussed below, and the extent of any involvement by third parties in such assessments. These requirements track the TFAIA.
Frontier AI Framework
As under the TFAIA, under the Amended Act, a large frontier developer would have to describe “in detail” in the frontier AI framework how it:
- incorporates national and international standards and industry best practices into the framework;
- evaluates the frontier model’s potential for catastrophic risk, including defining and assessing thresholds for determining such risk;
- mitigates catastrophic risk;
- reviews its assessments and the adequacy of its mitigations;
- uses third parties to conduct assessments;
- determines when modifications to the frontier model require an updating of the frontier AI framework;
- applies cybersecurity practices to prevent the unauthorized transfer or modification of model weights;
- identifies and responds to safety incidents; and
- institutes internal governance practices.
The developer would have to review the framework at least annually, and any “material” modifications, along with the justification for such modifications, would have to be published in an updated frontier AI framework within 30 days.
Whereas the transparency report is more geared toward the public, the frontier AI framework is more geared toward risk assessments and cybersecurity, and is likely to be more heavily redacted, Agarwal commented.
Instead of the frontier AI framework and the transparency report, the Signed Act requires large frontier developers to implement and publish a “safety and security protocol” (SSP) whose requirements overlap with, but are not identical to, those of the frontier AI framework. “Conceptually, there are a lot of similarities between the frontier AI framework and the SSP,” Borgia said. However, the former places a greater emphasis on risk assessment and mitigation, he stated.
Disclosure Statement
The Amended Act requires large frontier developers, prior to the development, deployment or operation of a frontier model in New York, to file a disclosure statement with the New York Department of Financial Services (DFS). The disclosure statement must list the identity and contact information of the company, as well as ownership information if the company is privately or closely held. In addition, to defray DFS’ cost of administering the disclosure obligations, large frontier developers will pay a pro rata assessment.
Reporting of Safety Incidents
The Amended Act requires a frontier developer to report a “critical safety incident” to the DFS within 72 hours of determining that an incident has occurred or of learning sufficient facts to establish a “reasonable belief” that an incident has occurred. A “critical safety incident” is defined as:
- any death or bodily injury resulting from “unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model”;
- any harm resulting from the materialization of a catastrophic risk;
- any death or injury resulting from the loss of control of the frontier model; or
- any incident where a frontier model uses “deceptive techniques” to subvert the frontier developer’s monitoring or controls in a manner that demonstrates “materially increased catastrophic risk.”
The Amended Act also requires a frontier developer that discovers a critical safety incident posing an “imminent risk of death or serious physical injury” to disclose the incident within 24 hours to an “authority,” including “any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.” The TFAIA has a similar 24-hour requirement.
The language in the Amended Act provides “a bit more certainty” about what incidents are reportable, Borgia said. The Signed Act, on the other hand, requires the reporting of any incident that provides “demonstrable evidence of an increased risk of critical harm,” and it does not qualify what constitutes an “increase,” he noted. “One could read the [Signed Act] to say that any increase in the risk of significant harm, even if it’s a very small increase, is enough,” which makes reporting more difficult, he said.
No False or Materially Misleading Statements
Like the TFAIA, the Amended Act prohibits large frontier developers from making any “materially false or misleading statement” about catastrophic risk from their frontier models, including their management of such risk and their implementation of, or compliance with, their frontier AI framework. Statements made in “good faith” and that are “reasonable under the circumstances” would be exempted.
Regulation by DFS
The Amended Act vests a new office within DFS with rulemaking authority. The Signed Act, on the other hand, designated the Division of Homeland Security and Emergency Services as the overseeing government agency.
Enforcement oversight by the DFS is “extremely significant” and could make a “big difference” in the level of enforcement activity, Borgia commented. The DFS is “one of the most ambitious and aggressive technology regulators in the country,” he said. It has “broad enforcement tools” and is “willing to engage in a lot of rulemaking and enforcement activity to accomplish its goals,” he elaborated.
The DFS is “traditionally a fairly powerful regulator,” and the transfer to DFS creates a “supervisory mindset” much closer to that of a banking regulator, according to Vivek Mohan, a partner at Gibson Dunn. It is likely to vigorously enforce “black letter” disclosure statement filing and fee requirements, he added. However, it is unclear what enforcement will look like on less clear standards, such as what qualifies as a reportable critical safety incident, he remarked.
Effective Date
As set forth in the Amended Act, the RAISE Act will take effect January 1, 2027. The Signed Act is scheduled to take effect 90 days after becoming law (March 19, 2026), “but I do not think that anybody in New York State government is treating that date as real. The signed version of the law is instead being treated as an interstitial placeholder until [the Amended Act,] the truly agreed-upon version, is approved and signed,” Atleson said.
As of the time of publication, the Amended Act is in committee, moving through the Legislature.
Some Differences Between the Amended Act and the TFAIA
While the Amended Act and the TFAIA mostly align, there are some notable differences. Those differences, however, generally do not create more compliance challenges, Atleson said.
Shorter Reporting Period
The Amended and Signed Acts prescribe a 72‑hour reporting period for critical safety incidents. This is a significantly shorter time period than the analogous 15‑day period under the TFAIA.
“Practically speaking, it may be difficult for companies to comply with the 72‑hour time frame,” Agarwal predicted. “Usually when something goes wrong, there is lots of chaos trying to figure out why and how” the incident happened, she noted. The 30‑day period for updates could similarly be “challenging” for companies, she added.
Disclosure Statement
The Amended Act requires large frontier developers, prior to the development, deployment or operation of a frontier model in New York, to file a disclosure statement with the DFS. The TFAIA does not require the filing of a disclosure statement.
Increased Penalties
The Amended Act imposes penalties on large frontier developers for violations of disclosure and reporting requirements and the failure to comply with the frontier AI framework. The AG could recover up to $1 million for first violations and $3 million for subsequent violations. This is down from $10 million and $30 million under the Signed Act, but higher than the maximum $1‑million penalty available under the TFAIA.
Furthermore, developers that fail to file disclosure statements, submit false information in them or fail to pay required fees would be subject to civil fines starting at $1,000 per day.
No Whistleblower Provision
A big distinction between the TFAIA and the Amended Act is that the latter lacks a whistleblower provision, Agarwal remarked. “Whistleblower protections are a good thing for employees and the public, especially where companies might not be forthcoming with their reporting obligations and [such protections are not] provided for in other labor laws,” she said.
No Preemption
While the TFAIA preempts local laws, there is no such provision in the Amended or Signed Act. The exclusion may be in recognition of the fact that local legislative bodies may act in related ways in the future, whereas California is focused more on a statewide standard, Mohan said.
AI Framework Detail
One difference between the Amended Act and the TFAIA that may have little significance is that only the former would require that a frontier developer describe the AI framework “in detail.”
“I think that may not matter,” Atleson opined. “I think that the frontier AI framework may not look any different if those two words were not there,” he stated.
Compliance Considerations
Given the similarities between the TFAIA and the Amended Act, frontier developers that are compliant with the former should be “most of the way there” in complying with the latter, Atleson said. Nonetheless, covered developers that have not fully addressed compliance with these laws should be taking compliance steps now.
Establish and Maintain Basic Reporting Channels
Regardless of which law applies, a “day one” obligation for frontier developers should be to implement appropriate internal escalation pathways for incidents and harms that could trigger reporting obligations, Mohan advised. These pathways should reach the appropriate level of management to allow for timely reporting, he stated. Companies should also proactively “keep their ear to the ground” regarding what level of disclosure is appropriate and acceptable to regulators, he added.
See our two-part series on the practicalities of AI governance: “AI Governance Gets Real: Tips From a Chat Platform on Building a Program” (Feb. 1, 2023), and “AI Governance Gets Real: Core Compliance Strategies” (Feb. 8, 2023).
Do Not Wait to Develop a Frontier AI Framework
Probably the most imposing obligation in both the TFAIA and Amended Act, and something that covered entities should not wait on, is the establishment of a frontier AI framework. A large frontier developer cannot “wait until a week before it is due and just mash something together,” Atleson cautioned. Creating the framework “requires a lot of advanced planning and thinking,” he emphasized. The developer must have tested and developed its products sufficiently not just to report information, but to tell a good story about that information, he instructed.
The National Institute of Standards and Technology (NIST) AI Risk Management Framework is one helpful source of information for companies developing AI governance, Atleson continued, but many enterprise teams and experts, including lawyers, should be involved in the process. Ideally, companies can establish AI governance frameworks that apply across jurisdictions or that can be adapted, to the extent necessary, for different jurisdictions by “layering on” specific controls, he suggested.
Frontier developers also should document how and why they create the framework. A frontier developer should always maintain an “objectively reasonable narrative” for both regulators and the public demonstrating that it is approaching an issue in a reasonable way, Mohan advised.
Develop a Cross-Jurisdictional Compliance Program
Large frontier developers can choose from different approaches when seeking to comply with laws in different jurisdictions.
The larger companies that make frontier models will have to spend the time and resources gearing everything toward compliance with “the most rigorous regulatory scheme, unless they make the informed decision that they will pull out of certain jurisdictions because of onerous regulatory burdens,” Bradford Newman, a partner at Eversheds Sutherland, said. Compliance also means companies must constantly play a catch-up game with respect to new regulations, he noted.
Generally, a company should try to hold itself to the “highest achievable and reasonable standard,” Mohan agreed.
“Usually, we try to establish a baseline that addresses 90 percent of a company’s obligations under all laws,” Agarwal said. A company can have “jurisdictional additions” for the remaining 10 percent, she continued. Or some companies choose to take a risk and try to “come as close as possible” to compliance with the 90‑percent approach, she explained.
In developing the compliance plan, companies should focus on “core processes” for the common disclosure, risk assessment and mitigation obligations that underlie all the laws, Borgia advised. Regardless of which law wins the day, they will need to leverage those processes, he said.
See “AI Compliance Playbook: Seven Questions to Ask Before Regulators or Reporters Do” (Apr. 21, 2021).
Be Ready to Pivot When the Law Changes
One important component of a good compliance plan is having the ability to adjust quickly as laws change.
The “biggest challenge” for keeping compliant in a shifting regulatory landscape is integrating different obligations into a “pipeline of development, deployment and constant improvement,” Mohan observed. As lawmakers continually enact new laws and amend existing ones, companies and counsel need to “keep their head on a swivel,” he advised. It may be difficult in terms of time and cost, but a company must be able to acknowledge when the “world has changed” and it must pivot, he emphasized.
The alignment of the Amended Act and the TFAIA is helpful. “Given that the TFAIA, already in effect, and the Amended Act, as currently written, are already similar, as a practical matter it is relatively safe for covered developers to keep complying with the TFAIA and keep a close eye on the New York legislative process,” Atleson said. If it suddenly appears that the Amended Act will not be passed, then frontier developers would have to start preparing to comply with the Signed Act, he stated. However, even then, aligning compliance should not be too difficult because the [Signed Act] and the TFAIA are still “not that different,” he explained.
Most companies are not going to want to “go hard” into compliance with the Signed Act because it is likely to be amended, Borgia said.
See “Navigating Ever-Increasing State AI Laws and Regulations” (Jan. 15, 2025).
Revisit Vendor Contracts
The RAISE Act will result in renewed focus on vendor contracts, Newman said. Developers and their customers will be paying attention to representations and warranties about compliance and the safety and security of the frontier model, and how liability will be apportioned for any violations, he predicted. Insurance coverage litigation between frontier developers and their customers can also be expected, he added.
See “Key Legal and Business Issues in AI-Related Contracts” (Aug. 9, 2023).
A Coming Challenge to State Laws?
President Donald Trump’s December 11, 2025, executive order, which states that the federal government will review and potentially challenge state-level AI laws, places the RAISE Act at risk of challenge.
“I think it is almost guaranteed that the RAISE Act will be challenged by the Department of Commerce as violative of the Supremacy Clause,” Newman predicted. One of the “most basic” arguments would be that the law impermissibly burdens interstate commerce and is federally preempted, he opined. Nonetheless, a company still must be ready to comply with the RAISE Act and avoid penalties while any challenge is ongoing, he cautioned.
See “Staying Compliant After Trump AI Executive Order Introduces Regulatory Uncertainty” (Jan. 14, 2026).
Federal Legislation Needed?
The TFAIA and the RAISE Act are “perfect examples of why federal preemption in the AI sphere is necessary and how state efforts to legislate at a hyper-technical detail level are counterproductive,” Newman said. “We cannot have a competitive AI landscape where there are 50 states regulating in 50 different hyper-technical ways with different penalties and different requirements,” he emphasized. A legal landscape with conflicting laws is “burdensome and vexing to innovators, and it is not the optimal way to ensure public health and safety. It is also very sensitive to local politics, and that is not how we want AI to be regulated,” he added.