The Trump administration just proposed eliminating federal transparency requirements for healthcare AI tools. On December 22, 2025, the Department of Health and Human Services released a proposed rule scrapping Biden-era AI “model cards”—transparency documents showing how AI systems make medical decisions. When AI recommends cancer treatment or calculates medication dosages, providers would no longer see documentation on how it works, its risks, or where it fails.
The move is framed as cutting regulatory burden to accelerate innovation. But what would actually be removed are the specification sheets for algorithms that influence patient care.
What Model Cards Actually Do
AI model cards aren’t regulatory red tape; they’re product documentation. When hospitals evaluate healthcare AI, model cards disclose which patient populations the AI was trained on, how it performs, what bias patterns it exhibits, and how it fails. They’re specification sheets showing where AI works and where it breaks.
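To make that concrete, here is a minimal, hypothetical sketch of the kind of information a model card captures. The field names and values below are purely illustrative, not the attributes the ONC rule actually specifies:

```python
# Hypothetical sketch of model-card contents; field names are illustrative,
# not the ONC-mandated attributes.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str                    # e.g., "Pneumonia triage model"
    intended_use: str            # the clinical decision the tool supports
    training_population: dict    # demographic breakdown of the training data
    performance: dict            # metrics overall and by subgroup
    known_failure_modes: list = field(default_factory=list)  # where accuracy degrades
    last_validated: str = ""     # date of most recent external validation

# An example card a hospital could check against its own patient mix.
card = ModelCard(
    name="Pneumonia triage model (hypothetical)",
    intended_use="Flag chest X-rays for expedited radiologist review",
    training_population={"urban": 0.91, "rural": 0.09},
    performance={"sensitivity_overall": 0.88, "sensitivity_rural": 0.71},
    known_failure_modes=["Higher false-negative rate for rural patients"],
)
```

A hospital serving a largely rural population could read a card like this and see immediately that the tool was trained almost entirely on urban patients and performs worse on the population it would actually serve.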
The Biden administration’s Office of the National Coordinator for Health IT established these requirements in January 2024, calling them “nutrition labels” for algorithms. Vendors of health IT decision support tools had to document how their AI systems were developed and tested and what risks they pose. Compliance began in January 2025.
Now the Trump administration’s proposed HTI-5 rule would eliminate that requirement, in the name of removing barriers to AI deployment in healthcare.
Why Transparency Matters: The Evidence
AI healthcare failures are documented, systematic, and dangerous. Research published in Nature and other peer-reviewed journals shows the scope of AI bias in medical settings:
- Pneumonia detection AI: 23% higher false-negative rates for rural patients underrepresented in training data
- Melanoma detection: More errors on dark-skinned patients due to dataset imbalances
- Emergency room AI: Nine programs tested on 1,000 cases changed recommendations based on demographic labels (Black, unhoused, LGBTQIA+) rather than clinical need
- Genomic AI: 80% of training data comes from people of European descent, perpetuating health disparities
The consequences aren’t theoretical. Studies show providers follow erroneous AI advice 6% of the time, with severe outcomes. Diagnostic error rates reach 11% when AI isn’t trained on diverse datasets. Research shows clinicians perform worse with biased AI: their accuracy improves with unbiased predictions but degrades with systematically biased models.
The Innovation Argument Doesn’t Hold Up
The administration argues transparency requirements slow beneficial AI deployment. Some vendors report FDA pathways are unclear and time-intensive. These concerns aren’t unreasonable.
But removing all transparency creates a worse problem. When hospitals deploy AI they can’t evaluate, liability risk skyrockets. If an ER AI sends a stroke patient home and the hospital can’t explain the algorithm’s decision, lawsuits won’t accelerate adoption. Widespread AI failures erode trust, killing adoption faster than any regulatory requirement.
Model cards don’t prevent deployment—they document what AI does. If vendors can’t explain how systems work, what training data they used, and where they fail, those systems aren’t ready for medical deployment.
The UK Patient Safety Commissioner stated it clearly: “The future of medical regulation should not be framed as a choice between innovation and safety.” Transparency enables both. Opacity guarantees neither.
What Comes Next
The proposed HTI-5 rule is open for a 60-day comment period that runs through February 21, 2026. Patient safety advocates will likely oppose the change. Healthcare AI vendors may support it.
But the federal government isn’t the only player. Twenty-one states passed AI healthcare laws in 2025, and 47 states introduced more than 250 bills. If federal requirements disappear, state regulations may fill the gap, creating a patchwork that slows national deployment.
Industry self-regulation might continue through the Coalition for Health AI’s voluntary model card registry. But voluntary transparency is selective—vendors document systems that look good and stay quiet about the rest.
The Real Test
Three days before proposing to eliminate model cards, HHS released a request seeking input on “accelerating AI adoption in clinical care” with a “forward-leaning, industry-supportive approach.” That framing reveals the assumption: transparency slows things down.
But transparency makes adoption sustainable. When providers trust AI because they understand how it works, adoption accelerates. When patients know AI was trained on populations like theirs with documented accuracy, they accept AI-assisted care. When hospitals evaluate AI against their patient demographics, they deploy the right systems instead of gambling.
Removing model cards isn’t innovation policy. It’s a bet that healthcare AI will self-correct without oversight. The documented evidence of bias, errors, and patient harm suggests otherwise.
If we want healthcare AI deployed at scale, we need more transparency, not less. The real innovation-killer isn’t documentation requirements—it’s deploying systems we can’t explain, don’t understand, and can’t fix when they fail.
Do we want responsible AI deployment that builds trust, or do we want to move fast and break things when the things we’re breaking are human lives?