
Okta AI Agent Authorization Gap: 91% at Risk in Workspaces


Okta has publicly identified a critical authorization gap affecting AI agents in shared workspaces, putting sensitive data at risk in the 91% of organizations now deploying agentic AI. The identity provider’s security researchers found that while agents authenticate with one user’s credentials, they post outputs to channels where recipients have mixed access levels, creating an authorization bypass that has already been exploited in four critical vulnerabilities during 2025. With only 10% of organizations having a governance strategy and EU AI Act enforcement beginning August 2, 2026, this gap is an urgent security risk for enterprises racing to deploy autonomous agents.

The Authorization Gap Explained

The core vulnerability is deceptively simple. OAuth was designed for “one user, one app, one set of permissions,” but AI agents shatter that model. Here’s the scenario: a CFO deploys an agent in a Slack channel authenticated with their credentials. When a junior analyst asks about Q3 compensation, the agent retrieves executive salary data—permitted for the CFO—and posts it publicly to the channel. Sensitive information just leaked to unauthorized recipients.

The pattern is consistent: authorization gets checked at data retrieval, not at output destination. According to Okta’s detailed analysis, McKinsey found that 80% of organizations have already encountered risky behaviors from AI agents, including improper data exposure and unauthorized system access. This isn’t a theoretical problem waiting to happen—it’s happening now.
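To make the failure mode concrete, here is a minimal, self-contained sketch of the CFO scenario (all data and helper names are illustrative, not any vendor’s API): retrieval is authorized against the installing user, and the fix adds the missing check against the channel’s audience.

```python
# Sketch of the gap: authorization is checked at retrieval time against the
# installing user's credentials, but never against the audience that will
# see the output. Data and helpers here are illustrative only.

DOC_ACL = {"q3_exec_comp.xlsx": {"cfo"}}          # doc -> users allowed to read it
CHANNEL_MEMBERS = {"#finance": {"cfo", "junior_analyst"}}

def retrieve(doc: str, as_user: str) -> str:
    """Retrieval-time check: evaluated against the agent's installer."""
    if as_user not in DOC_ACL[doc]:
        raise PermissionError(f"{as_user} may not read {doc}")
    return f"<contents of {doc}>"

def vulnerable_agent(channel: str, doc: str, installer: str) -> str:
    # Passes: the CFO installed the agent, so retrieval succeeds...
    contents = retrieve(doc, as_user=installer)
    # ...and the output is posted with no check on who is in the channel.
    return f"Posting to {channel}: {contents}"

def scoped_agent(channel: str, doc: str, installer: str) -> str:
    contents = retrieve(doc, as_user=installer)
    # The missing step: every recipient must be allowed to read every
    # source document before the answer leaves the trust boundary.
    if not CHANNEL_MEMBERS[channel] <= DOC_ACL[doc]:
        return f"Posting to {channel}: [withheld: audience lacks access]"
    return f"Posting to {channel}: {contents}"

print(vulnerable_agent("#finance", "q3_exec_comp.xlsx", "cfo"))  # leaks
print(scoped_agent("#finance", "q3_exec_comp.xlsx", "cfo"))      # withholds
```

The point of the second function is where the check runs: at the output boundary, against the full audience, rather than once at retrieval against a single privileged identity.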

Four Critical 2025 CVEs Prove It’s Real

Four critical vulnerabilities in 2025—all rated 9.3-9.4 CVSS—exploited this exact authorization gap:

  • Anthropic Slack MCP (CVE-2025-34072): Admin data sent to attacker’s server
  • Microsoft 365 Copilot (CVE-2025-32711): M365 data leaked to attacker-controlled URL
  • ServiceNow BodySnatcher: Impersonated admin access enabled attacker queries
  • Salesforce ForcedLeak: Employee permissions exploited via purchased domain

The most damaging breach involved stolen OAuth tokens in Drift’s Salesforce integration, exposing over 700 organizations. Attackers accessed sensitive data from Salesforce, Cloudflare, Palo Alto Networks, and Zscaler for six months because dormant credentials remained valid months after they were no longer needed. Research shows credentials stay active an average of 47 days beyond necessity—a security gap wide enough to drive a truck through.
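A dormant-credential sweep is straightforward to sketch. The inventory format and the revoke() stub below are assumptions; in practice you would call your provider’s token revocation endpoint (RFC 7009).

```python
# Sketch of the mitigation the Drift incident suggests: sweep OAuth grants
# and revoke any token idle beyond a hard TTL, instead of letting dormant
# credentials stay valid for months. Inventory format is an assumption.

from datetime import datetime, timedelta, timezone

MAX_IDLE = timedelta(days=30)  # policy choice; the reported average overrun is 47 days

tokens = [
    {"id": "tok-drift-sfdc", "last_used": datetime(2025, 1, 5, tzinfo=timezone.utc)},
    {"id": "tok-ci-deploy",  "last_used": datetime.now(timezone.utc)},
]

def revoke(token_id: str) -> None:
    # In practice: call the provider's token revocation endpoint (RFC 7009).
    print(f"revoked {token_id}")

now = datetime.now(timezone.utc)
for tok in tokens:
    if now - tok["last_used"] > MAX_IDLE:
        revoke(tok["id"])
```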

The Over-Provisioning Amplifier

The gap is amplified by systemic over-provisioning. According to Entro Security’s 2025 report, 97% of non-human identities have excessive privileges. And non-human identities now outnumber humans 144 to 1 in some enterprises. AI agents are typically deployed quickly with broad “service account” permissions to avoid breaking workflows. Teams grant this access out of uncertainty, not negligence—but when hijacked, these agents become capable of high-velocity damage across entire ecosystems.

The mechanism is subtle but devastating. IAM systems enforce permissions based on who the user is, but when actions are executed by an agent, authorization is evaluated against the agent’s identity—not the requester’s. User-level restrictions simply disappear. We’re at the tipping point where agents outnumber humans in enterprise environments, and most organizations haven’t adjusted their security models to account for this shift.
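One hedged illustration of the fix: propagate the requester’s identity alongside the agent’s, and treat the effective permission as the intersection of the two. The policy table and names below are illustrative, not a specific IAM product’s model.

```python
# Sketch of requester-identity propagation: authorization is evaluated
# against BOTH the agent's service account and the human it acts for, so
# user-level restrictions survive the hop through the agent.

POLICY = {
    "agent-svc": {"payroll_db", "crm", "wiki"},   # over-provisioned service account
    "junior_analyst": {"wiki"},                    # what the requester may actually see
}

def authorize(identity: str, resource: str) -> bool:
    return resource in POLICY.get(identity, set())

def run_agent_action(resource: str, agent_id: str, on_behalf_of: str) -> str:
    # Broken model: check only the agent (would succeed here).
    # Correct model: effective permission is the intersection of both.
    if authorize(agent_id, resource) and authorize(on_behalf_of, resource):
        return f"fetched {resource}"
    return f"denied: {on_behalf_of} may not access {resource}"

print(run_agent_action("payroll_db", "agent-svc", on_behalf_of="junior_analyst"))
# -> denied: junior_analyst may not access payroll_db
```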

RAG Security Compounds the Risk

For RAG-powered agents, the risk compounds dramatically. A 2024 PoisonedRAG study showed that just 5 malicious documents in a corpus of millions could manipulate an AI to return attacker-controlled answers 90% of the time. RAG systems can leak sensitive data even when the LLM has strong prompt filtering, because the vulnerability lies in the retrieval pipeline—exposed vector stores, weak retriever logic, or embedded documents with sensitive content.

Vector databases themselves can be vulnerable to data reconstruction attacks, where attackers potentially reverse-engineer embeddings to retrieve original data. Traditional Data Loss Prevention catches sensitive data after it appears in responses. Fine-grained authorization prevents retrieval before the problem occurs. As Okta frames it: “DLP is the seatbelt. Scoped retrieval is not driving into the wall.”
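The difference is easy to show on a toy retriever. In this sketch (store layout and scoring are simplified assumptions), the ACL filter runs inside the retrieval step, so unauthorized chunks never enter the candidate set and no downstream DLP filter has to catch them.

```python
# Scoped retrieval on a toy vector store: filter by ACL metadata BEFORE
# ranking, so out-of-scope chunks are never fetched and cannot leak into
# a prompt or response. Scoring is a stand-in for embedding similarity.

CORPUS = [
    {"text": "Q3 executive compensation ...", "allowed": {"cfo"}},
    {"text": "Company holiday schedule ...",  "allowed": {"cfo", "junior_analyst"}},
]

def score(query: str, text: str) -> int:
    # Stand-in for embedding similarity.
    return sum(1 for w in query.lower().split() if w in text.lower())

def scoped_retrieve(query: str, requester: str, k: int = 1) -> list[str]:
    # Filter first: chunks the requester can't read never become candidates.
    candidates = [c for c in CORPUS if requester in c["allowed"]]
    candidates.sort(key=lambda c: score(query, c["text"]), reverse=True)
    return [c["text"] for c in candidates[:k]]

print(scoped_retrieve("executive compensation", "junior_analyst"))
# -> ['Company holiday schedule ...']  (the sensitive chunk was never retrieved)
```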

Okta’s Three-Component Solution

Okta’s proposed solution involves three architectural components. Fine-Grained Authorization (FGA) models permissions as relationships and computes intersections across audience members in milliseconds, handling billions of relationships. A Token Vault issues scoped credentials based on those permission intersections, managing OAuth lifecycle across SaaS applications. Identity Governance maintains accurate permission graphs through continuous review and remediation.
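A rough sketch of the FGA idea, with relationship tuples loosely modeled on systems like OpenFGA (the data is illustrative): the effective scope for a shared channel is the intersection of what every member can view.

```python
# Permissions as relationship tuples; the agent's effective scope for a
# shared channel is the intersection across all audience members.

TUPLES = {
    ("cfo", "viewer", "doc:exec_comp"),
    ("cfo", "viewer", "doc:holiday_schedule"),
    ("junior_analyst", "viewer", "doc:holiday_schedule"),
}

def viewable(user: str) -> set[str]:
    return {obj for (u, rel, obj) in TUPLES if u == user and rel == "viewer"}

def audience_scope(members: list[str]) -> set[str]:
    # The agent may only surface objects every member can view.
    scopes = [viewable(m) for m in members]
    return set.intersection(*scopes) if scopes else set()

print(audience_scope(["cfo", "junior_analyst"]))
# -> {'doc:holiday_schedule'}
```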

The key principle: rather than filtering data post-retrieval, scoped retrieval prevents sensitive data from being fetched initially. If the agent’s token can’t access CEO salary data, retrieval never occurs. Okta is also promoting Cross-App Access (XAA), an open protocol extending OAuth to centralize visibility and control of agent-to-app connections within the identity provider.
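Combining the pieces, a hypothetical vault might mint a short-lived token scoped to that audience intersection, with the data layer honoring only the token, so out-of-scope data is simply never fetched. Everything below (names, token format, TTL) is an assumption for illustration, not Okta’s implementation.

```python
# Sketch of scoped retrieval via a token vault: the token carries only the
# audience-intersection scopes, so if exec salary data isn't in scope,
# retrieval never occurs. Token format and TTL are assumptions.

import secrets
from datetime import datetime, timedelta, timezone

VAULT: dict[str, dict] = {}

def mint_scoped_token(scopes: set[str], ttl_minutes: int = 15) -> str:
    token = secrets.token_urlsafe(16)
    VAULT[token] = {
        "scopes": scopes,
        "expires": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }
    return token

def fetch(resource: str, token: str) -> str:
    grant = VAULT.get(token)
    if grant is None or datetime.now(timezone.utc) > grant["expires"]:
        raise PermissionError("token expired or unknown")
    if resource not in grant["scopes"]:
        raise PermissionError(f"token not scoped for {resource}")
    return f"<contents of {resource}>"

tok = mint_scoped_token({"doc:holiday_schedule"})  # audience intersection
print(fetch("doc:holiday_schedule", tok))          # allowed
# fetch("doc:exec_comp", tok) would raise PermissionError: never retrieved
```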

What Developers Should Do Now

Here is where developers should start:

  • Audit your AI agent permissions immediately (a minimal starting point is sketched after this list).
  • Implement data-level authorization, not just application-level IAM.
  • Adopt agent-specific identities separate from user credentials.
  • Plan for continuous authorization validation, where access adjusts automatically with task context.
  • Prepare for EU AI Act Article 14, whose enforcement begins August 2, 2026, and which requires proof that “every AI-driven action was authorized at the time it occurred”. Violations can cost up to €35 million or 7% of global revenue.
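As a minimal starting point for the audit item above, here is a sketch that walks an exported inventory of agent OAuth grants (the JSON format is assumed; adapt it to whatever your identity provider exports) and flags over-broad scopes.

```python
# Walk an exported inventory of agent OAuth grants and flag wildcard or
# admin scopes that exceed what the agent's task needs. Inventory format
# is an assumption; adapt to your IdP's export.

import json

RISKY_MARKERS = ("admin", "*", "full_access", "write:all")

inventory = json.loads("""[
  {"agent": "slack-summarizer", "scopes": ["channels:read", "chat:write"]},
  {"agent": "crm-assistant",    "scopes": ["full_access"]}
]""")

for grant in inventory:
    flagged = [s for s in grant["scopes"]
               if any(marker in s for marker in RISKY_MARKERS)]
    if flagged:
        print(f"REVIEW {grant['agent']}: over-broad scopes {flagged}")
```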

The 91% deployment versus 10% governance gap is reckless. GenAI figured in 70% of 2025’s AI security incidents, but agentic AI caused the most dangerous failures. The authorization gap isn’t a future problem; it’s a present crisis that just received public acknowledgment from a major identity provider. Fix it before regulators or attackers force the conversation.
