Governor Kathy Hochul signed the RAISE Act into law on December 20, 2025, making New York the second U.S. state after California to enact comprehensive AI safety legislation. The signing came one week after President Trump issued an executive order attempting to override state AI laws, setting up a constitutional showdown. The law requires AI companies generating over $500 million in revenue to publish safety protocols and report incidents within 72 hours, with penalties up to $3 million. For developers at OpenAI, Anthropic, and Google, this creates immediate compliance requirements and regulatory uncertainty.
The State-Federal Showdown
On December 11, Trump signed an executive order directing Attorney General Pam Bondi to create an AI Litigation Task Force whose “sole responsibility shall be to challenge State AI laws.” The order also instructs the Commerce Secretary to study withholding federal broadband funding from states with AI regulations the administration deems excessive.
Legal experts are skeptical. Brad Carson, president of Americans for Responsible Innovation, said the executive order will “hit a brick wall in the courts.” The constitutional issue is straightforward: preempting state law is Congress’s job, and it requires federal legislation, not executive fiat.
New York’s defiance matters because California and New York are the largest and third-largest state economies in the U.S. AI companies cannot afford to ignore regulations in these states. The practical reality is that companies will comply with California and New York rules nationwide rather than maintain separate systems. When Congress fails to act, state regulations become de facto national policy, much as the EU’s GDPR became a de facto global privacy standard.
What the RAISE Act Requires
The law applies to AI companies generating over $500 million in revenue that develop “frontier models”—the most advanced AI systems. This captures OpenAI ($12 billion annual recurring revenue), Anthropic ($7 billion ARR), xAI ($500 million ARR), and the AI divisions of Google, Meta, and Amazon.
Beginning January 1, 2027, these companies must report “safety incidents” to the New York Attorney General and the Department of Homeland Security within 72 hours. Safety incidents include a frontier model acting autonomously without a user request, unauthorized access to model weights, a critical failure of safety controls, and “critical harm,” defined as 100 or more deaths or at least $1 billion in property damage.
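To make those categories concrete, here is a minimal sketch of how a compliance team might model them internally. The class and field names are illustrative, not taken from the statute; only the four incident categories and the 72-hour window come from the law as described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum, auto


class IncidentCategory(Enum):
    """Incident categories as described in the RAISE Act coverage above."""
    AUTONOMOUS_BEHAVIOR = auto()      # model acts without a user request
    WEIGHT_EXFILTRATION = auto()      # unauthorized access to model weights
    SAFETY_CONTROL_FAILURE = auto()   # critical failure of safety controls
    CRITICAL_HARM = auto()            # 100+ deaths or $1 billion in property damage


REPORTING_WINDOW = timedelta(hours=72)  # New York's reporting deadline


@dataclass
class SafetyIncident:
    category: IncidentCategory
    detected_at: datetime  # use timezone-aware timestamps in practice
    description: str

    @property
    def report_by(self) -> datetime:
        """Latest filing time that still satisfies the 72-hour window."""
        return self.detected_at + REPORTING_WINDOW
```

The point of keeping the clock attached to the incident record is that the reporting deadline is fixed at detection time, not at whatever point the incident is triaged.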
Companies must also publish safety protocols publicly, establish comprehensive safety frameworks, and submit to oversight by a new office within the New York Department of Financial Services. Violations carry penalties of $1 million for first offenses and $3 million for subsequent violations.
How Tech Lobbying Weakened the Bill
The RAISE Act passed the New York Legislature in June 2025. Then the tech industry went to work. Andreessen Horowitz, the super PAC Leading the Future (which pledged $100 million to oppose pro-regulation candidates), and major AI companies lobbied throughout the summer and fall.
The results were significant. The original bill applied to any company spending more than $100 million on computational training costs, a threshold that would have captured well-funded AI startups. The final bill swapped this for a $500 million revenue threshold, narrowing the scope dramatically. Penalties were slashed by roughly 90%, from $10 million and $30 million down to $1 million and $3 million for first and subsequent violations. The original prohibition on releasing unsafe AI models became a mere “warning” requirement.
What survived? The 72-hour incident reporting requirement. Governor Hochul wanted 15 days to match California’s law, but legislators held firm. The political compromise: Hochul signed the bill as passed, and lawmakers agreed to amend it in 2026 to incorporate her changes modeled on California’s SB 53.
What Developers Need to Know
If you work at a company that exceeds $500 million in revenue and develops frontier AI models, compliance work starts now. The January 1, 2027 deadline gives teams roughly a year to build the monitoring and reporting pipelines needed to detect safety incidents and report them within 72 hours.
Detecting autonomous AI behavior, unauthorized access to model weights, or critical failures requires infrastructure—logging, monitoring, alerting, and escalation procedures. Companies need documented safety protocols, public transparency reports, and clear chains of command for incident reporting.
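As a rough illustration of what that infrastructure looks like, the sketch below wires two hypothetical detectors into an escalation hook. The event fields (initiator, resource, and so on) are assumptions about what an internal audit log might contain, not any particular company’s schema.

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger("safety_monitoring")


# Hypothetical detector hooks; a real system would wire these into
# inference logs, access-control audit trails, and evaluation pipelines.
def detect_autonomous_action(event: dict) -> bool:
    """Flag model-initiated actions that no user request triggered."""
    return event.get("initiator") == "model" and not event.get("user_request_id")


def detect_weight_access(event: dict) -> bool:
    """Flag reads of model-weight storage that lack an authorization record."""
    return event.get("resource") == "model_weights" and not event.get("authorized", False)


def escalate(event: dict, reason: str) -> None:
    """Open an incident record and start the 72-hour reporting clock."""
    detected_at = datetime.now(timezone.utc)
    logger.critical("safety incident (%s) detected at %s: %s", reason, detected_at, event)
    # In practice this would page the on-call responder and notify legal/compliance.


def process_event(event: dict) -> None:
    if detect_autonomous_action(event):
        escalate(event, "autonomous behavior")
    elif detect_weight_access(event):
        escalate(event, "unauthorized weight access")
```

The detectors themselves are the hard part; the escalation path simply has to be documented, tested, and fast enough that the 72-hour window starts with time to spare.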
Multi-state compliance adds complexity. New York requires 72-hour reporting while California allows 15 days. The practical approach: default to the strictest requirements (New York’s 72 hours) nationwide rather than maintaining separate systems.
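One way to implement that default is to compute a single deadline from the shortest window across jurisdictions, as in this sketch. The window values mirror the comparison above and should be verified against the statutes themselves.

```python
from datetime import datetime, timedelta, timezone

# Per-jurisdiction reporting windows as described above; verify against current law.
REPORTING_WINDOWS = {
    "new_york": timedelta(hours=72),   # RAISE Act
    "california": timedelta(days=15),  # SB 53
}


def nationwide_report_deadline(detected_at: datetime) -> datetime:
    """Apply the strictest (shortest) window to every incident, regardless of state."""
    return detected_at + min(REPORTING_WINDOWS.values())


# Example: an incident detected at 09:00 UTC on March 1, 2027 must be reported
# by 09:00 UTC on March 4 to satisfy New York's 72-hour window.
detected = datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc)
print(nationwide_report_deadline(detected))  # 2027-03-04 09:00:00+00:00
```

Hard-coding the strictest window keeps the pipeline simple; the per-state table only matters if a company later decides to segment its reporting by jurisdiction.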
The larger uncertainty is legal. Trump’s executive order may not survive judicial review, but litigation could take years. Developers face regulatory limbo: comply with state laws that might be overturned, or ignore them and risk penalties if states prevail. Compliance requirements start January 1, 2027, regardless of legal battles. Build the monitoring systems, document the safety protocols, and prepare for 72-hour incident reporting now.











