Overview
New York just told the biggest AI labs: if something goes seriously wrong, you don’t get to bury it. Under the RAISE Act, large “frontier AI” developers must publish a safety approach and report “critical harm” incidents to the state within 72 hours after determining one occurred.
This isn’t just a New York story. It’s the next escalation in a state-led push to set de facto national AI rules—colliding head-on with a White House campaign to preempt state AI laws before Congress passes anything meaningful.
Organizations Involved
DFS is New York’s hard-nosed financial regulator, now drafted to supervise frontier-AI transparency.
OAG is the enforcement backstop that can turn AI transparency failures into penalties.
Washington is trying to prevent a patchwork of state AI laws—by force if needed.
Cal OES became an unusual hub for AI safety reporting after California’s SB 53.
Timeline
- Wall Street Journal frames it as defiance (Media): WSJ spotlights New York's move despite the federal push for preemption.
- Hochul signs the RAISE Act (Legal): New York enacts requirements for frontier-AI safety frameworks and 72-hour incident reporting.
- Parents push Hochul to sign (Public Pressure): A parent-led coalition urges Hochul to enact RAISE without weakening amendments.
- White House tries to freeze the states (Rule Changes): Trump signs an order aimed at blocking restrictive state AI laws.
- RAISE reaches Hochul's desk (Legal): The Legislature delivers the enrolled bill to the governor.
- California sets the template (Legal): Newsom signs SB 53, a frontier-AI transparency and incident-reporting law.
- Albany passes RAISE (Legal): The New York Senate and Assembly pass RAISE after amendments.
- RAISE lands in the Senate (Legal): Senator Andrew Gounardes introduces the Senate companion bill.
- RAISE lands in the Assembly (Legal): Assemblymember Alex Bores introduces the RAISE Act's Assembly bill.
- California veto sparks the "what's feasible" debate (Legal): Newsom vetoes SB 1047, rejecting the toughest frontier-AI safety plan.
Scenarios
“The New Standard”: Big AI labs quietly comply nationwide
Discussed by: The Wall Street Journal; Axios; California and New York officials citing “unified benchmarks”
DFS stands up the new oversight office, companies publish frameworks, and incident reporting becomes routine—because the biggest developers decide it’s cheaper to standardize than to fight. Other tech-heavy states copy the model, and “frontier-AI transparency” becomes the default expectation for top-tier labs even where it’s not legally required.
“Preemption Showdown”: DOJ sues, states counter-sue, courts decide who’s boss
Discussed by: The Washington Post; The Wall Street Journal; state-level officials openly challenging the executive order
The federal government escalates from threats to litigation, arguing state AI laws obstruct national policy. New York and allies respond with federalism arguments and injunction requests. The practical result is limbo: companies prepare compliance plans while waiting to see whether courts uphold state authority or bless federal preemption via executive power.
“RAISE, But Softer”: Reporting happens, but the law becomes mostly a transparency paperwork regime
Discussed by: The American Prospect; industry arguments about feasibility and trade secret exposure
Industry pressure shifts from “kill the bill” to “minimize the impact.” Regulators interpret obligations narrowly, redactions expand, and incident reporting becomes highly standardized and low-detail. New York still gets a reporting pipeline and leverage, but the public learns less than advocates hoped—and the biggest wins move to quieter enforcement settlements.
“Congress Finally Moves”: A federal framework overrides the state patchwork
Discussed by: Major tech-industry lobbying coalitions; national political coverage framing patchwork risk
After enough states adopt divergent AI rules—and enough companies complain—Congress passes a federal framework that partially preempts states while borrowing the core idea: mandatory safety frameworks and incident reporting for frontier developers. New York’s law becomes the prototype that helped write the national statute, even if parts are superseded.
Historical Context
NYDFS Cybersecurity Regulation (23 NYCRR 500)
2017–present
What Happened
New York’s financial regulator imposed detailed cybersecurity obligations and incident reporting requirements on covered entities. Even outside New York, many firms treated it as a de facto baseline because compliance programs don’t scale well state-by-state.
Outcome
Short term: Companies built formal reporting and governance processes to avoid NYDFS penalties.
Long term: New York proved a state regulator can set national compliance norms in practice.
Why It's Relevant
RAISE repeats the same playbook: make reporting mandatory, then make it enforceable.
GDPR and the “Brussels effect” in privacy
2016–2018 (adoption to enforcement)
What Happened
Europe passed a privacy regime with strong disclosure and breach notification rules. Global companies often chose worldwide compliance rather than running separate systems by geography.
Outcome
Short term: Companies rewired privacy operations, contracts, and incident response for GDPR timelines.
Long term: Privacy expectations shifted globally, even in places without identical laws.
Why It's Relevant
New York and California are trying to create an American version of that compliance gravity.
California Consumer Privacy Act (CCPA)
2018–2020 (passage to early enforcement)
What Happened
California passed a sweeping consumer privacy law that forced national brands to update disclosures and data practices. Many firms rolled out CCPA-style controls nationally to simplify operations.
Outcome
Short term: National compliance teams treated California as the design constraint.
Long term: State policy became the launchpad for broader U.S. privacy regulation.
Why It's Relevant
RAISE aims for the same dynamic—one big state sets the rulebook everyone else follows.
