A critical security vulnerability has been identified in LangChain’s core library that could allow attackers to extract sensitive system data from artificial intelligence applications. The flaw, tracked as CVE-2025-68664, affects how the framework processes and reconstructs internal data, creating serious risks for organizations relying on AI-driven workflows.
LangChain is a widely adopted framework used to build applications powered by large language models, including chatbots, automation tools, and AI agents. Due to its extensive use across the AI ecosystem, security weaknesses within its core components can have widespread consequences.
The issue stems from how LangChain handles serialization and deserialization. These processes convert data into a transferable format and then rebuild it for use by the application. In this case, two core functions failed to properly safeguard user-controlled data that included a reserved internal marker used by LangChain to identify trusted objects. As a result, untrusted input could be mistakenly treated as legitimate system data.
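For illustration, the sketch below contrasts the shape of a genuinely serialized object with an attacker-supplied dictionary that mimics it. The reserved key name ("lc") and the surrounding fields mirror LangChain's publicly documented serialization format, but the snippet is a simplified, library-free approximation rather than the framework's actual data.

```python
# Library-free illustration: the reserved marker key ("lc") mirrors LangChain's
# serialization format; the exact field layout here is an approximation.

# What a legitimately serialized, trusted object roughly looks like:
trusted = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "prompts", "PromptTemplate"],
    "kwargs": {"template": "Answer: {question}"},
}

# What an attacker can place into a model output or metadata field:
forged = {
    "lc": 1,
    "type": "constructor",
    "id": ["some", "attacker", "chosen", "Class"],
    "kwargs": {},
}

# A loader that keys only on the reserved marker cannot tell them apart:
print(set(trusted) == set(forged))  # True -- identical structure, different intent
```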
This weakness becomes particularly dangerous when AI-generated outputs or manipulated prompts influence metadata fields used during logging, event streaming, or caching. When such data passes through repeated serialization and deserialization cycles, the system may unknowingly reconstruct malicious objects. The flaw falls under the well-known class of unsafe-deserialization weaknesses and has been rated critical, with a CVSS severity score of 9.3.
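A round trip through a log or cache might then look like the sketch below. The helper names are hypothetical and the final step is a toy stand-in for the framework's deserializer; the point is that data which was inert when written out becomes a "live" reconstructed object when read back in.

```python
import json

RESERVED_MARKER = "lc"  # mirrors LangChain's reserved serialization key

def write_event(event: dict) -> str:
    """First half of the cycle: serialize an event for logging or caching
    (hypothetical helper)."""
    return json.dumps(event)

def read_event(raw: str) -> dict:
    """Second half: deserialize and 'revive' anything carrying the reserved
    marker (toy stand-in for the framework's deserializer)."""
    event = json.loads(raw)
    return {
        key: ("RECONSTRUCTED", value["id"])
        if isinstance(value, dict) and value.get(RESERVED_MARKER) == 1
        else value
        for key, value in event.items()
    }

# Attacker-steered model output lands in a metadata field:
event = {
    "run_id": "abc123",
    "metadata": {"lc": 1, "type": "constructor", "id": ["attacker", "Class"], "kwargs": {}},
}
print(read_event(write_event(event))["metadata"])  # ('RECONSTRUCTED', ['attacker', 'Class'])
```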
In practical terms, attackers could craft inputs that cause AI agents to leak environment variables, which often store highly sensitive information such as access tokens, API keys, and internal configuration secrets. In more advanced scenarios, components on the framework's list of approved, trusted classes could be abused to transmit this data outward, including through unauthorized network requests. Certain templating features may further increase the risk if they are invoked after unsafe deserialization, potentially opening a path to code execution.
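The environment-variable leak can be modeled the same way. LangChain's format represents secrets as marker-tagged entries that name an environment variable, and a loader that resolves those entries from the process environment by default will materialize whatever variable an attacker names. The function below is a simplified stand-in for that pre-patch default, not the library's actual code.

```python
import os

RESERVED_MARKER = "lc"

def resolve_secret(value, secrets_map=None):
    """Toy model of secret resolution: marker-tagged 'secret' entries are looked
    up in an explicit map, then -- the dangerous pre-patch default -- in the
    process environment. Illustration only."""
    secrets_map = secrets_map or {}
    if (
        isinstance(value, dict)
        and value.get(RESERVED_MARKER) == 1
        and value.get("type") == "secret"
    ):
        name = value["id"][0]
        return secrets_map.get(name, os.environ.get(name))
    return value

# An attacker-influenced field naming a sensitive variable:
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-not-a-real-key")
payload = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
print(resolve_secret(payload))  # the secret value, now sitting in ordinary output
```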
The vulnerability was discovered during security reviews focused on AI trust boundaries, where the researcher traced how untrusted data moved through internal processing paths. After responsible disclosure in early December 2025, the LangChain team acknowledged the issue and released security updates later that month.
The patched versions introduce stricter handling of internal object markers and disable automatic resolution of secrets from environment variables by default, a behavior that was previously enabled and contributed to the exposure risk. Developers are strongly advised to upgrade immediately and to review related dependencies that interact with langchain-core.
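As a quick first step, teams can confirm which langchain-core build is installed and compare it with the patched releases listed in the official advisory (exact version numbers are not reproduced here). One minimal way to check:

```python
from importlib.metadata import PackageNotFoundError, version

# Print the installed releases; compare them against the patched versions
# listed in the advisory for CVE-2025-68664 before deciding how to upgrade.
for package in ("langchain-core", "langchain"):
    try:
        print(package, version(package))
    except PackageNotFoundError:
        print(package, "not installed in this environment")
```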
Security experts stress that AI outputs should always be treated as untrusted input. Organizations are urged to audit logging, streaming, and caching mechanisms, limit deserialization wherever possible, and avoid exposing secrets unless inputs are fully validated. A similar vulnerability identified in LangChain's JavaScript ecosystem underscores the broader security challenges that arise as AI frameworks become more interconnected.
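As a concrete illustration of treating AI output as untrusted, one defensive pattern is to strip or reject the reserved serialization marker from any model- or user-influenced field before it reaches logging, streaming, or caching code. The helper below is a sketch of that idea under the assumption that the marker key is "lc"; it supplements, and does not replace, upgrading to a patched release.

```python
RESERVED_MARKER = "lc"  # assumed reserved key, mirroring LangChain's format

def strip_reserved_marker(value):
    """Recursively drop the reserved serialization marker from untrusted data
    so it can never be mistaken for a trusted serialized object downstream.
    Defensive sketch only; patching langchain-core remains the primary fix."""
    if isinstance(value, dict):
        return {
            key: strip_reserved_marker(inner)
            for key, inner in value.items()
            if key != RESERVED_MARKER
        }
    if isinstance(value, list):
        return [strip_reserved_marker(inner) for inner in value]
    return value

untrusted = {"answer": "ok", "metadata": {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}}
print(strip_reserved_marker(untrusted))
# {'answer': 'ok', 'metadata': {'type': 'secret', 'id': ['OPENAI_API_KEY']}}
```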
As AI adoption accelerates, maintaining strict data boundaries and secure design practices is essential to protecting both systems and users from emerging threats.