
Forty-two percent of all code written today is AI-generated or AI-assisted. A meaningful share of that code is also insecure — Veracode tested over a hundred large language models and found that 45% of AI-generated code samples introduce OWASP Top 10 vulnerabilities. Vercel’s answer, open-sourced on May 5, is deepsec: an AI-powered security harness that sends coding agents through your codebase to find what the AI that wrote your code left behind. AI created the security debt. Now AI is being asked to audit it.
Not Your Usual Security Scanner
Deepsec is not Semgrep with a better marketing page. Traditional static analysis tools — Semgrep, CodeQL, SonarQube — pattern-match against fixed rule libraries. They are fast and cheap, but they miss context. They cannot reason about what your code is actually doing.
Deepsec takes a different approach. It dispatches Claude Opus 4.7 (at max effort) and GPT-5.5 (at xhigh reasoning) as coding agents to investigate security-sensitive files, trace data flows across your codebase, and flag what looks exploitable. The practical difference is significant: research found LLM-augmented static analysis detected 69 of 120 real-world CVEs, versus 27 for the best traditional SAST tool. And where legacy tools can hit false positive rates of 78% (per NIST), deepsec runs at roughly 10 to 20%.
It is also fully open source, MIT licensed, and runs on your own infrastructure. Your code never leaves your machine.
How It Works: Five Stages
Deepsec runs a five-stage pipeline:
- Scan — A regex-only pass using roughly 110 matchers. No AI, no cost. Takes about 15 seconds on a 2,000-file project. Identifies security-sensitive candidates for further investigation.
- Investigate — LLM agents examine each flagged file, tracing data flows and checking for mitigations. Findings are rated by severity.
- Revalidate — A second agent pass validates findings and removes false positives before you ever see them.
- Enrich — Git metadata attributes findings to the contributors best positioned to fix them.
- Export — Results are formatted as actionable instructions for ticketing systems or directly as tasks for coding agents.
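The Scan stage is worth demystifying: it is plain pattern matching, conceptually close to a grep pass that flags files mentioning security-sensitive constructs so the later AI stages have somewhere to look. A rough sketch of the idea — the three patterns below are illustrative stand-ins, not deepsec’s actual matcher set:

```shell
# Hypothetical stage-1-style scan: flag files containing security-sensitive
# markers (hardcoded secrets, eval, string-built SQL) for AI investigation.
# The patterns are illustrative examples, not deepsec's real matchers.
mkdir -p /tmp/scan-demo && cd /tmp/scan-demo

# A toy file with two issues a regex pass would catch:
cat > handler.js <<'EOF'
const API_KEY = "sk-live-123";
db.query("SELECT * FROM users WHERE id = " + req.params.id);
EOF

# Recursive regex pass over the project; each hit is a candidate file.
grep -rEn 'API_KEY[[:space:]]*=[[:space:]]*"|eval\(|query\(".*"[[:space:]]*\+' .
```

Running this prints two flagged lines from `handler.js` — the hardcoded key and the string-concatenated query. Deepsec’s real matcher library (~110 patterns) works the same way, just broader; the point is that this stage costs nothing and only selects candidates, it does not judge exploitability.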
To get started:
npx deepsec init
cd .deepsec && pnpm install
pnpm deepsec scan # ~15 seconds, no AI cost
pnpm deepsec process # AI investigation
pnpm deepsec revalidate # remove false positives
pnpm deepsec export --format md-dir --out ./findings
Authentication uses your existing Claude or OpenAI API keys — or you can route through Vercel’s AI Gateway. The community has already extended it to run locally with Ollama if you want to avoid cloud API costs entirely.
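In practice that means exporting the providers’ standard SDK environment variables before running the pipeline. The variable names below are the Anthropic and OpenAI conventions; check deepsec’s README for the exact names it reads, and the gateway setup if you route through Vercel instead:

```shell
# Provider credentials for the agent stages. These are the providers'
# standard SDK env vars; the key values shown are placeholders.
export ANTHROPIC_API_KEY="sk-ant-your-key-here"   # Claude agents
export OPENAI_API_KEY="sk-your-key-here"          # GPT agents
```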
The Cost Question Nobody Wants to Skip
Deepsec is configured to run the best models at maximum thinking levels. That is the good news and the bad news in the same sentence.
For small-to-medium repos, expect costs in the hundreds to low thousands of dollars per full scan. For enterprise monorepos, scans can reach tens of thousands of dollars — Vercel scales to 1,000+ concurrent sandboxes for large codebases. This is not a tool you run on every pull request.
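A back-of-envelope estimate makes the range concrete. Every input below is an assumption for illustration — flagged-file counts, tokens per agent run, and blended per-token prices all vary by repo and model:

```shell
# Hypothetical cost estimate for one full investigation pass.
# All three inputs are illustrative assumptions, not measured deepsec numbers.
FILES=500                # files flagged by the scan stage
TOKENS_PER_FILE=60000    # input + reasoning + output per agent run
PRICE_PER_MTOK=15        # blended $ per million tokens, frontier-model ballpark

awk -v f=$FILES -v t=$TOKENS_PER_FILE -v p=$PRICE_PER_MTOK \
  'BEGIN { printf "~$%.0f per full scan\n", f * t / 1e6 * p }'
# → ~$450 per full scan
```

Double the file count or the reasoning depth and you double the bill, which is how a monorepo with thousands of flagged files climbs into five figures.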
The framing that makes sense: deepsec is a quarterly deep audit, or a pre-major-release gate. Steven Tey, founder of dub.co, put it well after running deepsec on his codebase: “deepsec is the first tool that’s surfaced the kind of issues we’d actually want a security engineer to flag, and it runs on infrastructure we control.”
For everyday CI coverage, Semgrep still runs in 10 seconds and catches the obvious stuff. Deepsec finds what Semgrep misses — but at a price that means you schedule it, not automate it on every commit.
Why This Matters Now
The security context around deepsec is not a background detail. The Cloud Security Alliance found 62% of AI-generated code in their 2026 study contained vulnerabilities. Aikido Security’s data shows AI-generated code now causes 1 in 5 enterprise security breaches. In March 2026 alone, 35 new CVE entries were traced directly to AI-generated code — up from 6 in January.
Vercel is also the first major player to open-source an AI security scanner. Symbiotic Security, Wiz, and Aikido each shipped proprietary AI-security products in the weeks before deepsec launched. Open-sourcing it is a deliberate move: developers trust tools they can audit.
Deepsec is not a complete answer. It does not catch business logic flaws, authorization edge cases, or race conditions that only show up at runtime. The practical recommendation: layer it with Semgrep in CI for fast PR feedback, CodeQL in nightly builds for semantic depth, and deepsec on a schedule for the AI-agent-level investigation your other tools cannot do.
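One way to put that layering on a calendar is a crontab for the scheduled tiers, with Semgrep left to per-PR CI. The entries below are illustrative only — the paths, the `nightly-codeql.sh` wrapper script, and the cadences are placeholders to adapt, not a recommended configuration:

```shell
# Illustrative crontab (crontab -e): cheap checks run often, deepsec rarely.
# Paths, script names, and cadences below are example placeholders.

# Nightly semantic pass with CodeQL at 02:00:
0 2 * * *         cd /srv/repo && ./scripts/nightly-codeql.sh

# Quarterly deepsec audit, 1st of Jan/Apr/Jul/Oct at 03:00:
0 3 1 1,4,7,10 *  cd /srv/repo/.deepsec && pnpm deepsec scan && pnpm deepsec process
```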
The tool is available now at github.com/vercel-labs/deepsec. Full details in the Vercel announcement post.