On January 1, 2026, developers woke up to a new reality: 38 state tech laws took effect across six states, creating the largest wave of AI and privacy regulation in U.S. history. California alone activated six AI transparency laws—including mandatory training dataset disclosure and catastrophic-risk assessments—while Indiana, Kentucky, and Rhode Island added comprehensive privacy frameworks. The twist? President Trump’s December 11 executive order promises to preempt “onerous” state laws, setting up a federal-state collision that leaves developers in legal limbo.
California’s AI Transparency Arsenal: No Trade Secrets Allowed
California’s AB 2013 doesn’t mess around. AI developers must now publicly disclose training datasets on their websites before making generative AI systems available to Californians. The kicker? No exemptions for trade secrets or intellectual property. OpenAI must reveal what data trained GPT-4. Anthropic must disclose Claude’s datasets. Meta and Google face the same requirement for Llama and Gemini.
Required disclosures include dataset sources, number of data points, copyright status, whether personal information was used, and whether synthetic data made the cut. The law is enforceable through California’s Unfair Competition Law, enabling both public and private lawsuits. Translation: Trial lawyers just got a new revenue stream.
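AB 2013 doesn’t prescribe a file format, so what a compliant disclosure looks like in practice is still an open question. Here’s a minimal sketch, assuming a JSON record whose fields simply mirror the statute’s categories (every field name below is hypothetical, not from the law):
```python
from dataclasses import dataclass, asdict
import json

# Hypothetical machine-readable disclosure mirroring the categories AB 2013
# names; the statute does not prescribe a schema, so these names are
# illustrative only.
@dataclass
class TrainingDataDisclosure:
    model_name: str
    dataset_sources: list[str]           # provenance of each dataset
    approx_data_points: int              # number of data points
    includes_copyrighted_material: bool  # copyright status
    includes_personal_information: bool  # personal information used?
    includes_synthetic_data: bool        # synthetic data used?

disclosure = TrainingDataDisclosure(
    model_name="example-model-v1",
    dataset_sources=["Common Crawl (filtered)", "licensed news archive"],
    approx_data_points=2_000_000_000,
    includes_copyrighted_material=True,
    includes_personal_information=False,
    includes_synthetic_data=True,
)

# Publish the record on the developer's public website before release.
print(json.dumps(asdict(disclosure), indent=2))
```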
California SB 53 targets frontier AI models—those trained on more than 10^26 floating-point operations. Large developers with over $500 million in annual revenue must publish catastrophic-risk assessments, report critical safety incidents within 15 days to California’s Office of Emergency Services, and provide whistleblower protections with anonymous reporting platforms. Penalties reach $1 million per violation. This isn’t regulatory theater.
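For engineers trying to work out whether SB 53 even applies, the math is at least tractable. A back-of-the-envelope sketch, using the common ~6 × parameters × tokens approximation for dense-transformer training compute (the approximation and helper names are assumptions; the statutory definitions control):
```python
# Rough SB 53 scoping check, a sketch only: the statute's definitions,
# not this arithmetic, decide coverage. Uses the common rule of thumb
# that dense-transformer training costs ~6 * parameters * tokens FLOPs.
FRONTIER_FLOP_THRESHOLD = 1e26         # training-compute trigger
LARGE_DEVELOPER_REVENUE = 500_000_000  # USD annual-revenue trigger

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

def sb53_obligations(params: float, tokens: float, annual_revenue: float) -> dict:
    flops = estimated_training_flops(params, tokens)
    frontier = flops > FRONTIER_FLOP_THRESHOLD
    return {
        "estimated_flops": flops,
        "frontier_model": frontier,
        # Risk assessments, 15-day incident reports, and whistleblower
        # channels attach to large developers of frontier models.
        "large_developer_duties": frontier and annual_revenue > LARGE_DEVELOPER_REVENUE,
    }

# A 1-trillion-parameter model trained on 20 trillion tokens lands at
# ~1.2e26 FLOPs, over the threshold.
print(sb53_obligations(params=1e12, tokens=20e12, annual_revenue=1e9))
```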
AB 316 closes the AI autonomy loophole. Companies can’t use “the AI did it” as a legal defense. If your AI causes damages, you’re liable. Period.
Federal Preemption Battle: Trump vs. States
Three weeks before California’s laws took effect, Trump signed an executive order to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework.” The mechanisms? A Justice Department AI Litigation Task Force to sue states over AI laws, an FTC policy statement due around March 11 explaining when state laws are preempted, and Commerce Department studies on withholding rural broadband funding from states with “unfavorable” AI laws.
Here’s the legal reality: The executive order cannot overturn state laws. Only Congress or the courts can do that. Preemption by executive decree has no established legal footing, and courts are reluctant to find preemption in agency regulations rather than statutes. Constitutional challenges are inevitable.
For developers, this creates impossible choices. Comply with state laws now and risk wasted effort if they’re preempted? Or violate them and face penalties up to $1 million while waiting for federal courts to rule? There’s no good answer.
Privacy Law Expansion: Indiana, Kentucky, Rhode Island Join the Fray
Three new comprehensive state privacy laws took effect January 1. Indiana’s and Kentucky’s apply to companies processing data from at least 100,000 residents, or from at least 25,000 residents if the company derives over 50 percent of its revenue from data sales. Rhode Island sets a lower bar: 35,000 residents, or 10,000 residents with over 20 percent of revenue from data sales.
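Mapping those thresholds into code makes the divergence concrete. A rough sketch, assuming the figures above and a single company-wide data-sale revenue share (the function and its inputs are illustrative, not legal advice):
```python
# Illustrative applicability check using the thresholds quoted above.
# Real applicability turns on the statutory text.
def applicable_laws(residents_by_state: dict[str, int],
                    data_sale_revenue_share: float) -> list[str]:
    laws = []
    for state in ("Indiana", "Kentucky"):
        n = residents_by_state.get(state, 0)
        if n >= 100_000 or (n >= 25_000 and data_sale_revenue_share > 0.50):
            laws.append(state)
    ri = residents_by_state.get("Rhode Island", 0)
    if ri >= 35_000 or (ri >= 10_000 and data_sale_revenue_share > 0.20):
        laws.append("Rhode Island")
    return laws

# 40,000 Indiana users and 12,000 Rhode Island users, with 25% of revenue
# from data sales: Rhode Island's law applies, Indiana's does not.
print(applicable_laws({"Indiana": 40_000, "Rhode Island": 12_000}, 0.25))
```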
All three mandate consumer rights to access, correct, delete, and obtain copies of personal data, plus opt-out rights for targeted advertising, data sales, and certain profiling. Opt-in consent is required for sensitive data including health conditions, race, religion, sexual orientation, biometric data, and precise geolocation.
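In practice, that splits processing into two consent regimes: opt-in for sensitive data, opt-out everywhere else. A minimal sketch of that gate, with hypothetical category names:
```python
# Two consent regimes in one gate; category names are hypothetical.
SENSITIVE_CATEGORIES = {
    "health_condition", "race", "religion", "sexual_orientation",
    "biometric", "precise_geolocation",
}

def may_process(category: str, opted_in: bool, opted_out: bool) -> bool:
    # Sensitive data requires affirmative opt-in; other processing for
    # targeted ads, data sales, or covered profiling must honor opt-outs.
    if category in SENSITIVE_CATEGORIES:
        return opted_in
    return not opted_out

assert not may_process("precise_geolocation", opted_in=False, opted_out=False)
assert may_process("purchase_history", opted_in=False, opted_out=False)
```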
Rhode Island is stricter: $10,000 per violation with no cure period. Indiana and Kentucky offer 30-day cure periods but still hit $7,500 per violation. Rhode Island also defines “sale” more broadly to include analytics and advertising services, not just monetary exchanges.
Texas App Store Age Verification Blocked by Federal Court
Texas SB 2420 would have required age verification for all app downloads, forcing users to submit driver’s licenses even to check the weather. On December 23, 2025, U.S. District Judge Robert Pitman blocked it as “unconstitutionally vague” and “exceedingly overbroad,” finding it violated the First Amendment by restricting access to protected speech.
Apple opposed the law for forcing “collection of sensitive, personally identifiable information to download any app.” Privacy advocates argued it created identity theft risks through massive data collection. The preliminary injunction keeps the law blocked while litigation continues.
The Compliance Nightmare Developers Face
Microsoft has already struggled to harmonize AI tools across state lines. Developers must now track multiple state thresholds, navigate different definitions of “sale,” and manage varying penalties from $7,500 to $1 million per violation. Some states offer 30-day cure periods. Rhode Island offers none.
California AB 2013 forces disclosure of training datasets with no trade secret protection. Federal preemption threatens to invalidate compliance investments. There’s no federal standard. No single framework. Just a 50-state patchwork with legal uncertainty baked in.
Companies are hiring dedicated privacy officers and investing in secure data architectures. Big tech is lobbying for federal override. Meanwhile, developers building AI systems, mobile apps, and data-driven products are caught in the middle, facing immediate compliance requirements while federal courts decide whether those requirements will survive constitutional scrutiny.
The January 1 effective date wasn’t a starting gun. It was the sound of regulatory chaos hitting production systems. Welcome to 2026.