Cybersecurity researchers have revealed a newly identified attack technique that shows how artificial intelligence chatbots can be manipulated to leak sensitive information with minimal user involvement. The method, known as Reprompt, demonstrates how attackers could extract data from AI assistants such as Microsoft Copilot through a single click on a legitimate-looking link, while bypassing standard enterprise security protections.
According to researchers, the attack requires no malicious software, plugins, or continued interaction. Once a user clicks the link, the attacker can retain control of the chatbot session even if the chat window is closed, allowing information to be quietly transmitted without the user’s awareness.
The issue was disclosed responsibly, and Microsoft has since addressed the vulnerability. The company confirmed that enterprise users of Microsoft 365 Copilot are not affected.
At a technical level, Reprompt relies on a chain of design weaknesses. Attackers first embed instructions into a Copilot web link using a standard query parameter. These instructions are crafted to bypass safeguards that are designed to prevent direct data exposure by exploiting the fact that certain protections apply only to the initial request. From there, the attacker can trigger a continuous exchange between Copilot and an external server, enabling hidden and ongoing data extraction.
In a realistic scenario, a target might receive an email containing what appears to be a legitimate Copilot link. Clicking it would cause Copilot to execute instructions embedded in the URL. The attacker could then repeatedly issue follow-up commands remotely, prompting the chatbot to summarize recently accessed files, infer personal details, or reveal contextual information. Because these later instructions are delivered dynamically, it becomes difficult to determine what data is being accessed by examining the original prompt alone.
Researchers note that this effectively turns Copilot into an invisible channel for data exfiltration, without requiring user-entered prompts, extensions, or system connectors. The underlying issue reflects a broader limitation in large language models: their inability to reliably distinguish between trusted user instructions and commands embedded in untrusted data, enabling indirect prompt injection attacks.
The Reprompt disclosure coincides with the identification of multiple other techniques targeting AI-powered tools. Some attacks exploit chatbot connections to third-party applications, enabling zero-interaction data leaks or long-term persistence by injecting instructions into AI memory. Others abuse confirmation prompts, turning human oversight mechanisms into attack vectors, particularly in development environments.
Researchers have also shown how hidden instructions can be planted in shared documents, calendar invites, or emails to extract corporate data, and how AI browsers can be manipulated to bypass built-in prompt injection defenses. Beyond software, hardware-level risks have been identified, where attackers with server access may infer sensitive information by observing timing patterns in machine learning accelerators.
Additional findings include abuses of trusted AI communication protocols to drain computing resources, trigger hidden tool actions, or inject persistent behavior, as well as spreadsheet-based attacks that generate unsafe formulas capable of exporting user data. In some cases, attackers could manipulate AI development platforms to alter spending controls or leak access credentials, enabling stealthy financial abuse.
Taken together, the research underlines that prompt injection remains a persistent and evolving risk. Experts recommend layered security defenses, limiting AI privileges, and restricting access to sensitive systems. Users are also advised to avoid clicking unsolicited AI-related links and to be cautious about sharing personal or confidential information in chatbot conversations.
As AI systems gain broader access to corporate data and greater autonomy, researchers warn that the potential impact of a single vulnerability increases substantially, underscoring the need for careful deployment, continuous monitoring, and ongoing security research.
Cybersecurity analysts have identified a sophisticated web skimming operation that has been running continuously since early 2022, silently targeting online checkout systems. The campaign focuses on stealing payment card information and is believed to affect businesses that rely on globally used card networks.
Web skimming is a type of cyberattack where criminals tamper with legitimate shopping websites rather than attacking customers directly. By inserting malicious code into payment pages, attackers are able to intercept sensitive information at the exact moment a customer attempts to complete a purchase. Because the website itself appears normal, victims are usually unaware their data has been compromised.
This technique is commonly associated with Magecart-style attacks. While Magecart initially referred to groups exploiting Magento-based websites, the term now broadly describes any client-side attack that captures payment data through infected checkout pages across multiple platforms.
The operation was uncovered during an investigation into a suspicious domain hosting malicious scripts. This domain was linked to infrastructure previously associated with a bulletproof hosting provider that had faced international sanctions. Researchers found that the attackers were using this domain to distribute heavily concealed JavaScript files that were loaded directly by e-commerce websites.
Once active, the malicious script continuously monitors user activity on the payment page. It is programmed to detect whether a website administrator is currently logged in by checking for specific indicators commonly found on WordPress sites. If such indicators are present, the script automatically deletes itself, reducing the risk of detection during maintenance or inspection.
The attack becomes particularly deceptive when certain payment options are selected. In these cases, the malicious code creates a fake payment form that visually replaces the legitimate one. Customers unknowingly enter their card number, expiration date, and security code into this fraudulent interface. After the information is captured, the website displays a generic payment error, making it appear as though the transaction failed due to a simple mistake.
In addition to financial data, the attackers collect personal details such as names, contact numbers, email addresses, and delivery information. This data is sent to an external server controlled by the attackers using standard web communication methods. Once the transfer is complete, the fake form is removed, the real payment form is restored, and the script marks the victim as already compromised to avoid repeating the attack.
Researchers noted that the operation reflects an advanced understanding of website behavior, especially within WordPress-based environments. By exploiting both technical features and user trust, the attackers have managed to sustain this campaign for years without drawing widespread attention.
This discovery reinforces the importance of continuous website monitoring and script validation for businesses, as well as cautious online shopping practices for consumers.
The breach happened in EEOC's Public Portal system where unauthorized access of agency data may have disclosed personal data in logs given to agency by the public. “Staff employed by the contractor, who had privileged access to EEOC systems, were able to handle data in an unauthorized (UA) and prohibited manner in early 2025,” reads the EEOC email notification sent by data security office.
The email said that the review suggested personally identifiable information (PII) may have been leaked, depending on the individual. The exposed information may contain names, contact and other data. The review of is still ongoing while EOCC works with the law enforcement.
EOCC has asked individuals to review their financial accounts for any malicious activity and has also asked portal users to reset their passwords.
Contracting data indicates that EEOC had a contract with Opexus, a company that provides case management software solutions to the federal government.
Open spokesperson confirmed this and said EEOC and Opex “took immediate action when we learned of this activity, and we continue to support investigative and law enforcement efforts into these individuals’ conduct, which is under active prosecution in the Federal Court of the Eastern District of Virginia.”
Talking about the role of employees in the breach, the spokesperson added that “While the individuals responsible met applicable seven-year background check requirements consistent with prevailing government and industry standards at the time of hire, this incident made clear that personnel screening alone is not sufficient."
The second Trump administration's efforts to prevent claimed “illegal discrimination” driven by diversity, equity, and inclusion programs, which over the past year have been examined and demolished at almost every level of the federal government, centre on the EEOC.
Large private companies all throughout the nation have been affected by the developments. In an X post this month, EEOC chairwoman Andrea Lucas asked white men if they had experienced racial or sexual discrimination at work and urged them to report their experiences to the organization "as soon as possible.”
A critical security vulnerability has been identified in LangChain’s core library that could allow attackers to extract sensitive system data from artificial intelligence applications. The flaw, tracked as CVE-2025-68664, affects how the framework processes and reconstructs internal data, creating serious risks for organizations relying on AI-driven workflows.
LangChain is a widely adopted framework used to build applications powered by large language models, including chatbots, automation tools, and AI agents. Due to its extensive use across the AI ecosystem, security weaknesses within its core components can have widespread consequences.
The issue stems from how LangChain handles serialization and deserialization. These processes convert data into a transferable format and then rebuild it for use by the application. In this case, two core functions failed to properly safeguard user-controlled data that included a reserved internal marker used by LangChain to identify trusted objects. As a result, untrusted input could be mistakenly treated as legitimate system data.
This weakness becomes particularly dangerous when AI-generated outputs or manipulated prompts influence metadata fields used during logging, event streaming, or caching. When such data passes through repeated serialization and deserialization cycles, the system may unknowingly reconstruct malicious objects. This behavior falls under a known security category involving unsafe deserialization and has been rated critical, with a severity score of 9.3.
In practical terms, attackers could craft inputs that cause AI agents to leak environment variables, which often store highly sensitive information such as access tokens, API keys, and internal configuration secrets. In more advanced scenarios, specific approved components could be abused to transmit this data outward, including through unauthorized network requests. Certain templating features may further increase risk if invoked after unsafe deserialization, potentially opening paths toward code execution.
The vulnerability was discovered during security reviews focused on AI trust boundaries, where the researcher traced how untrusted data moved through internal processing paths. After responsible disclosure in early December 2025, the LangChain team acknowledged the issue and released security updates later that month.
The patched versions introduce stricter handling of internal object markers and disable automatic resolution of environment secrets by default, a feature that was previously enabled and contributed to the exposure risk. Developers are strongly advised to upgrade immediately and review related dependencies that interact with LangChain-core.
Security experts stress that AI outputs should always be treated as untrusted input. Organizations are urged to audit logging, streaming, and caching mechanisms, limit deserialization wherever possible, and avoid exposing secrets unless inputs are fully validated. A similar vulnerability identified in LangChain’s JavaScript ecosystem accentuates broader security challenges as AI frameworks become more interconnected.
As AI adoption accelerates, maintaining strict data boundaries and secure design practices is essential to protecting both systems and users from newly developing threats.