NIST released the Cybersecurity Framework Profile for Artificial Intelligence (NISTIR 8596) on December 17, 2025—the first official guidance for securing AI systems using CSF 2.0. The preliminary draft addresses three critical areas: securing AI systems, using AI for cyber defense, and preventing AI-enabled attacks. Organizations have until January 30, 2026, to submit feedback before NIST finalizes the guidance.
The Three-Pillar Framework
NISTIR 8596 structures AI security around three overlapping pillars: secure, defend, and thwart. Each maps to CSF 2.0’s core functions but adds AI-specific considerations that traditional cybersecurity misses.
Secure addresses protecting AI systems from attacks. This means defending against data poisoning—where attackers corrupt training datasets—model theft via repeated query exploitation, and adversarial machine learning that fools systems with imperceptible input changes. In 2025 alone, multiple cloud providers suffered model theft attacks, and data poisoning hit repositories, search results, and third-party tools across the entire LLM lifecycle.
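The adversarial-input idea can be shown in miniature. The sketch below is a toy FGSM-style attack on a hypothetical linear classifier (not anything from the profile itself): nudging each input feature by a small, bounded amount in the direction that lowers the model's score flips the predicted class, even though the input barely changes.

```python
# Toy sketch of an adversarial perturbation against a linear classifier.
# Weights, inputs, and the epsilon budget are all illustrative assumptions.

def classify(weights, x, bias=0.0):
    """Linear classifier: 1 if w·x + b > 0, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def perturb(weights, x, epsilon):
    """Shift each feature by at most epsilon in the direction that
    lowers the score (opposite the sign of its weight)."""
    return [xi - epsilon * (1 if w > 0 else -1)
            for w, xi in zip(weights, x)]

weights = [0.6, -0.4, 0.2]   # hypothetical trained weights
x = [0.5, 0.1, 0.3]          # input the model classifies as positive
adv = perturb(weights, x, epsilon=0.3)

print(classify(weights, x))    # 1: original decision
print(classify(weights, adv))  # 0: small bounded shift flips it
```

Real attacks on image classifiers work the same way in far higher dimensions, which is why the per-pixel changes can stay below human perception while still crossing the decision boundary.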
Defend flips the script: using AI to strengthen cybersecurity operations. Automated threat detection, anomaly identification in network traffic, predictive security analytics, and vulnerability scanning all fall here. But NIST doesn’t ignore the risks—deploying AI in defensive roles creates new attack surfaces that the profile explicitly addresses.
Thwart builds resilience against AI-enabled cyberattacks. Attackers are using machine learning to discover vulnerabilities at scale, generate sophisticated phishing campaigns, create deepfakes for authentication bypass, and produce polymorphic malware that evades traditional detection. The World Economic Forum flagged AI-powered cybercrime as a top concern for 2025, and the data backs it up: supply chain-related breaches increased 40% from 2023 to 2025.
Why This Matters Now
Eighty-four percent of developers use AI tools, yet only 29% trust their outputs—a gap that screams unresolved security concerns. By late 2024, 41% of enterprises had reported AI security incidents. The numbers get worse: 95% of organizations use AI for software development, but only 24% conduct comprehensive security evaluations of AI-generated code. That leaves 76% exposing their supply chain to risk.
The threats aren’t hypothetical. Data poisoning attacks in 2025 reached across pre-training, fine-tuning, retrieval-augmented generation, and agent tooling. Adversarial attacks can alter a stop sign image so AI classifies it as a yield sign—invisible to humans, catastrophic for autonomous systems. Supply chain compromises propagate through external models, weights, tokenizers, and configuration files that most organizations can’t trace or verify.
Traditional cybersecurity frameworks don’t address these AI-specific vulnerabilities. NISTIR 8596 does.
What’s Inside the Profile
The profile maps every CSF 2.0 subcategory to AI-specific recommendations. Intrusion detection gets guidance on adversarial input patterns. Supply chain security covers AI model provenance and third-party component verification. Vulnerability management includes adversarial robustness testing and model manipulation scanning. Access control extends to model API permissions and data governance during training and inference.
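The supply chain guidance in particular lends itself to a concrete sketch. Assuming a workflow where a trusted manifest of SHA-256 digests ships alongside model artifacts (weights, tokenizer, config), provenance verification reduces to comparing digests; the file names here are hypothetical, not from the profile.

```python
# Sketch of model artifact provenance verification via a hash manifest.
# Assumes the manifest itself is obtained over a trusted channel.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifacts(artifacts: dict, manifest: dict) -> list:
    """Return names of artifacts whose digest doesn't match the manifest."""
    return [name for name, data in artifacts.items()
            if manifest.get(name) != sha256_of(data)]

artifacts = {
    "model.weights":  b"...serialized weights...",
    "tokenizer.json": b"{}",
}
manifest = {name: sha256_of(data) for name, data in artifacts.items()}

print(verify_artifacts(artifacts, manifest))   # []: everything matches

artifacts["tokenizer.json"] = b'{"swapped": true}'   # simulated tampering
print(verify_artifacts(artifacts, manifest))   # ['tokenizer.json']
```

A digest mismatch on any component, including tokenizers and config files, is exactly the class of compromise the profile says most organizations currently can't trace or verify.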
NIST also links the Cyber AI Profile to the AI Risk Management Framework (AI RMF). While AI RMF tackles broader concerns like bias, explainability, and model drift, the Cyber AI Profile focuses on cybersecurity risks. The integration point is the GOVERN function—new to CSF 2.0 in 2024—which emphasizes leadership-level collaboration on risk management. Together, the frameworks provide comprehensive coverage without duplication.
Implementation examples accompany the guidance. Organizations learn how to cryptographically sign models at every stage, version-control AI components, scan production deployments for manipulation, and build incident response playbooks for adversarial attacks. This isn’t abstract theory—it’s actionable security engineering.
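As a minimal sketch of the stage-signing idea, the example below binds a signature to both the artifact bytes and a pipeline stage label, so a model signed at one stage can't be replayed as another. It uses a symmetric HMAC for brevity; production deployments would use asymmetric signatures, and the key and stage names here are assumptions, not the profile's prescribed scheme.

```python
# Hedged sketch: stage-bound signing of a model artifact with an HMAC.
# SIGNING_KEY is a placeholder for a key held in a secrets manager.
import hmac
import hashlib

SIGNING_KEY = b"example-key-from-a-secrets-manager"  # hypothetical key

def sign_stage(model_bytes: bytes, stage: str) -> str:
    """Signature over stage label + artifact, so a fine-tuned model
    can't masquerade as one signed at the validation stage."""
    message = stage.encode() + b"\x00" + model_bytes
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def verify_stage(model_bytes: bytes, stage: str, signature: str) -> bool:
    return hmac.compare_digest(sign_stage(model_bytes, stage), signature)

weights = b"...serialized model weights..."
sig = sign_stage(weights, "training")

print(verify_stage(weights, "training", sig))        # True
print(verify_stage(weights, "fine-tuning", sig))     # False: wrong stage
print(verify_stage(weights + b"x", "training", sig)) # False: tampered bytes
```

Version-controlling the signatures alongside the artifacts then gives incident responders a verifiable trail of which stage produced which bytes.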
Next Steps for Organizations
NIST developed NISTIR 8596 over a year with input from 6,500+ contributors. The preliminary draft is open for public comment until January 30, 2026, with a workshop scheduled for January 14, 2026. Organizations can review the draft at NIST’s website, submit feedback via email to cyberaiprofile@nist.gov, and attend the workshop to shape the final guidance before the 2026 release.
This matters because NIST standards drive industry adoption. CSF 2.0 is already widely implemented across sectors. Adding AI-specific profiles creates a structured path for organizations to extend existing cybersecurity programs to AI systems—no need to reinvent infrastructure. As AI adoption outpaces security practices, NISTIR 8596 offers a way to close the gap before incidents become breaches and breaches become disasters.
The comment period is short. If your organization deploys AI or plans to, January 30 is the deadline to influence the standard.