An Okta maintainer used GitHub Copilot to commit a security researcher’s one-line vulnerability fix, and the AI hallucinated the attribution, crediting the patch to a person who doesn’t appear to exist. The incident exposes troubling gaps in how enterprise authentication libraries are being maintained.
What Happened
Security researcher Joshua Rogers discovered two vulnerabilities in Okta’s nextjs-auth0 library in October 2025. One was an OAuth parameter injection that could leak tokens, hijack redirects, and enable account takeover. Rogers submitted a simple fix via pull request.
Three weeks later, the maintainer closed Rogers’ PR and created a new one. The reason? They used AI to “rebase” the commit to ensure it was signed. The result was absurd: the AI attributed the fix to “Simen A. W. Olsen” with an email address that doesn’t appear to exist anywhere online.
Rogers pushed back. The maintainer’s response? Another AI-generated message, complete with the telltale “You are absolutely correct” phrasing. The apology was later deleted.
The Vulnerability
The actual security fix was embarrassingly simple – a single function call:
// Before (vulnerable): returnTo is interpolated into the query string as-is
`?returnTo=${returnTo}`
// After (fixed): the value is percent-encoded, so it can’t smuggle extra parameters
`?returnTo=${encodeURIComponent(returnTo)}`
Without URL encoding, attackers could inject malicious OAuth parameters, redirecting authentication flows to steal tokens or access unauthorized scopes. This is Auth 101 stuff, yet it sat unfixed in a library used by thousands of production applications.
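To make the injection concrete, here is a minimal sketch. The `buildLoginUrl` function and the `/api/auth/login` endpoint are illustrative stand-ins, not the library’s actual code; only the `encodeURIComponent` fix comes from the patch itself.

```javascript
// Hypothetical sketch of the injection; function name and endpoint are invented.
// Vulnerable: the attacker-controlled value is interpolated unencoded.
function buildLoginUrl(returnTo) {
  return `/api/auth/login?returnTo=${returnTo}`;
}

// Fixed: percent-encoding keeps the whole value inside one query parameter.
function buildLoginUrlFixed(returnTo) {
  return `/api/auth/login?returnTo=${encodeURIComponent(returnTo)}`;
}

// An attacker can smuggle extra OAuth parameters through the "&" characters:
const malicious = "https://evil.example&scope=offline_access&redirect_uri=https://evil.example/cb";

console.log(buildLoginUrl(malicious));
// The "&" splits the string into additional query parameters (scope, redirect_uri).

console.log(buildLoginUrlFixed(malicious));
// Every "&" becomes "%26", so returnTo stays a single, inert parameter.
```

The fix is one function call precisely because the bug is one missing function call: any untrusted value placed into a URL must be encoded at the point of interpolation.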
The Response Problem
When Rogers asked Okta to fix the attribution in the commit history, they refused. The maintainer claimed they “cannot change it.” That’s not technically true – git history can be rewritten. They simply chose not to.
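For the record, correcting authorship on the most recent commit is a one-liner. A minimal demo in a throwaway repo (all names and emails here are placeholders, not the real addresses):

```shell
# Demo that git authorship is not immutable. Repo, names, and emails are placeholders.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q

# Commit with the wrong author identity:
git -c user.name="Wrong Name" -c user.email="wrong@example.com" \
    commit -q --allow-empty -m "fix: encode returnTo"

# Rewrite the latest commit's author in place:
git -c user.name="Maintainer" -c user.email="maint@example.com" \
    commit -q --amend --no-edit --allow-empty --author="Joshua Rogers <jr@example.com>"

git log -1 --format='%an'   # prints: Joshua Rogers
```

Rewriting already-pushed history on a shared branch is disruptive, so “we’d rather not force-push” would have been a defensible answer; “cannot” was not.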
Okta’s security team made things worse. Despite acknowledging the vulnerability and shipping a fix, they told Rogers: “Unless you create a video abusing this vulnerability, we aren’t going to accept this as a security issue.”
Read that again. They fixed the bug. They know it enables account takeover. But they won’t credit it as a security issue without a video demonstration.
Why This Matters
Okta acquired Auth0 for $6.5 billion in 2021. Their authentication libraries protect countless applications. And yet:
- A critical auth vulnerability sat open for three weeks
- Maintainers are using AI to process security patches
- That AI hallucinated contributor information
- The response was another AI-generated message
This isn’t an isolated incident. Rogers noted similar issues with another project. The pattern is clear: developers are outsourcing security-critical decisions to AI tools that aren’t ready for this responsibility.
The Uncomfortable Truth
AI coding assistants are useful. They can autocomplete boilerplate, suggest refactors, and speed up routine tasks. But they’re not ready to handle security patches unsupervised. They hallucinate. They make mistakes. They invent fake people.
When the code in question protects authentication flows for enterprise applications, “move fast and let AI figure it out” is not an acceptable approach. Someone at Okta needs to review their AI tooling policies before the next vulnerability lands.