A security researcher discovered OAuth vulnerabilities in Okta’s Auth0 nextjs-auth0 library last month. When he submitted a one-line fix, an Okta maintainer used AI to commit it – and the AI hallucinated a fictitious contributor, stealing the researcher’s attribution. The incident offers a damning look at how AI slop has infiltrated enterprise security workflows.
The Hallucinated Contributor
Security researcher Joshua Rogers reported two vulnerabilities in October: an OAuth parameter injection that allows token manipulation and redirect hijacking, and an account hijacking bug. The fix was trivial – adding encodeURIComponent() to a single line.
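To see why a single missing `encodeURIComponent()` matters, here is a hedged sketch of the bug class, not the actual nextjs-auth0 code: the function and parameter names (`buildAuthorizeUrl`, `returnTo`) are illustrative assumptions.

```javascript
// Sketch of the vulnerability class (NOT the actual Auth0 code):
// interpolating a user-supplied value into an OAuth URL without encoding.
function buildAuthorizeUrl(baseUrl, returnTo) {
  // Vulnerable: special characters in returnTo (&, =, #) pass through
  // verbatim, so an attacker can inject extra OAuth query parameters
  // or redirect the flow somewhere else.
  return `${baseUrl}/authorize?returnTo=${returnTo}`;
}

function buildAuthorizeUrlSafe(baseUrl, returnTo) {
  // The one-line fix: percent-encode the value so it stays a single
  // opaque parameter instead of being parsed as additional parameters.
  return `${baseUrl}/authorize?returnTo=${encodeURIComponent(returnTo)}`;
}

// A crafted value that smuggles in an extra redirect_uri parameter:
const evil = "/app&redirect_uri=https://attacker.example";
console.log(buildAuthorizeUrl("https://tenant.example.com", evil));
console.log(buildAuthorizeUrlSafe("https://tenant.example.com", evil));
```

In the unsafe version the `&` splits the query string and the injected `redirect_uri` becomes a real parameter; in the safe version the whole payload survives only as one percent-encoded value.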
Rogers submitted a PR. Three weeks later, the maintainer closed it and created a new PR with the identical code. But here’s where it gets absurd: the maintainer used Copilot to create the commit, and the AI hallucinated an author – “Simen A. W. Olsen” with email “my@simen.io.”
This person doesn’t exist. The email has zero hits online. The AI fabricated a contributor out of thin air and attributed Rogers’ security fix to them.
When Rogers called this out and asked for a force-push to correct the attribution, he was told the history couldn’t be changed. The fictitious “Simen Olsen” is now permanently enshrined in the Auth0 repository as a contributor to a security fix they never wrote.
The AI-Generated Apology
It gets worse. When confronted about the attribution theft, the maintainer responded with an obviously AI-generated apology – complete with the classic ChatGPT opener “You are absolutely correct.”
The maintainer later admitted they had to delete an AI-generated comment. So the workflow here is: use AI to commit the code (wrong), use AI to generate the attribution (wrong), use AI to apologize for using AI (pathetic).
“Create a Video” Security Theater
The account hijacking vulnerability? Okta’s security team fixed it in code but refused to acknowledge it as a security issue. Their response to the researcher: “unless you create a video abusing this vulnerability, we aren’t going to accept this as a security issue.”
A security company asking for a proof-of-concept video for an obvious vulnerability they already fixed. The absurdity writes itself.
Developer Community Reacts
The Hacker News discussion pulled no punches. Comments ranged from “I was pretty appalled to see such a basic mistake from a security company, but then again it is Okta” to “amateur-hour all the way down” to “90% of resources go to sales/marketing, engineering remains minimum viable.”
One developer working security at an S&P-listed company was blunter: “Okta sucks balls.”
The community also questioned why anyone would hire someone who “delegates their core job to a slop generator.” Several suggested alternatives like Keycloak, Authentik, and FusionAuth.
The Real Problem
This incident isn’t just about attribution theft or a single maintainer’s bad judgment. It’s about enterprise security companies using lowest-common-denominator AI workflows for security-critical code.
If a one-line security fix gets mangled by AI hallucinations and lazy workflows, what else in enterprise security products is AI slop? What other “Simen Olsens” are haunting codebases with their fabricated identities?
For developers relying on Auth0 in their stack, this should raise questions. Not about whether to use AI tools – that ship has sailed – but about whether the companies handling your authentication take security engineering seriously or just check boxes for procurement teams.
The answer, based on this incident, is not encouraging.