Cybersecurity analysts are raising concerns about a growing trend in which corporate cloud-based file-sharing platforms are being leveraged to extract sensitive organizational data. A cybercrime actor known online as “Zestix” has recently been observed advertising stolen corporate information that allegedly originates from enterprise deployments of widely used cloud file-sharing solutions.
Findings shared by cyber threat intelligence firm Hudson Rock suggest that the initial compromise may not stem from vulnerabilities in the platforms themselves, but rather from infected employee devices. In several cases examined by researchers, login credentials linked to corporate cloud accounts were traced back to information-stealing malware operating on users’ systems.
These malware strains are typically delivered through deceptive online tactics, including malicious advertising and fake system prompts designed to trick users into interacting with harmful content. Once active, such malware can silently harvest stored browser data, saved passwords, personal details, and financial information, creating long-term access risks.
When attackers obtain valid credentials and the associated cloud service account does not enforce multi-factor authentication, unauthorized access becomes significantly easier. Without this added layer of verification, threat actors can enter corporate environments using legitimate login details without immediately triggering security alarms.
Hudson Rock also reported that some of the compromised credentials identified during its investigation had been present in criminal repositories for extended periods. This suggests lapses in routine password management practices, such as timely credential rotation or session invalidation after suspected exposure.
Researchers describe Zestix as operating in the role of an initial access broker, meaning the actor focuses on selling entry points into corporate systems rather than directly exploiting them. The access being offered reportedly involves cloud file-sharing environments used across a range of industries, including transportation, healthcare, utilities, telecommunications, legal services, and public-sector operations.
To validate its findings, Hudson Rock analyzed malware-derived credential logs and correlated them with publicly accessible metadata and open-source intelligence. Through this process, the firm identified multiple instances where employee credentials associated with cloud file-sharing platforms appeared in confirmed malware records. However, the researchers emphasized that these findings do not constitute public confirmation of data breaches, as affected organizations have not formally disclosed incidents linked to the activity.
The data allegedly being marketed spans a wide spectrum of corporate and operational material, including technical documentation, internal business files, customer information, infrastructure layouts, and contractual records. Exposure of such data could lead to regulatory consequences, reputational harm, and increased risks related to privacy, security, and competitive intelligence.
Beyond the specific cases examined, researchers warn that this activity reflects a broader structural issue. Threat intelligence data indicates that credential-stealing infections remain widespread across corporate environments, reinforcing the need for stronger endpoint security, consistent use of multi-factor authentication, and proactive credential hygiene.
Hudson Rock stated that relevant cloud service providers have been informed of the verified exposures to enable appropriate mitigation measures.
Although advanced spyware attacks do not affect most smartphone users, cybersecurity researchers stress that awareness is essential as these tools continue to spread globally. Even individuals who are not public figures are advised to remain cautious.
In December, hundreds of iPhone and Android users received official threat alerts stating that their devices had been targeted by spyware. Shortly after these notifications, Apple and Google released security patches addressing vulnerabilities that experts believe were exploited to install the malware on a small number of phones.
Spyware poses an extreme risk because it allows attackers to monitor nearly every activity on a smartphone. This includes access to calls, messages, keystrokes, screenshots, notifications, and even encrypted platforms such as WhatsApp and Signal. Despite its intrusive capabilities, spyware is usually deployed in targeted operations against journalists, political figures, activists, and business leaders in sensitive industries.
High-profile cases have demonstrated the seriousness of these attacks. Former Amazon chief executive Jeff Bezos and Hanan Elatr, the wife of murdered Saudi dissident Jamal Khashoggi, were both compromised through Pegasus spyware developed by the NSO Group. These incidents illustrate how personal data can be accessed without user awareness.
Spyware activity remains concentrated within these circles, but researchers suggest its reach may be expanding. In early December, Google issued threat notifications and disclosed findings showing that an exploit chain had been used to silently install Predator spyware. Around the same time, the U.S. Cybersecurity and Infrastructure Security Agency warned that attackers were actively exploiting mobile messaging applications using commercial surveillance tools.
One of the most dangerous techniques involved is known as a zero-click attack. In such cases, a device can be infected without the user clicking a link, opening a message, or downloading a file. According to Malwarebytes researcher Pieter Arntz, once infected, attackers can read messages, track keystrokes, capture screenshots, monitor notifications, and access banking applications. Rocky Cole of iVerify adds that spyware can also extract emails and texts, steal credentials, send messages, and access cloud accounts.
Spyware may also spread through malicious links, fake applications, infected images, browser vulnerabilities, or harmful browser extensions. Recorded Future’s Richard LaTulip notes that recent research into malicious extensions shows how tools that appear harmless can function as surveillance mechanisms. These methods, often associated with nation-state actors, are designed to remain hidden and persistent.
Governments and spyware vendors frequently claim such tools are used only for law enforcement or national security. However, Amnesty International researcher Rebecca White states that journalists, activists, and others have been unlawfully targeted worldwide, using spyware as a method of repression. Thai activist Niraphorn Onnkhaow was targeted multiple times during pro-democracy protests between 2020 and 2021, eventually withdrawing from activism due to fears her data could be misused.
Detecting spyware is challenging. Devices may show subtle signs such as overheating, performance issues, or unexpected camera or microphone activation. Official threat alerts from Apple, Google, or Meta should be treated seriously. Leaked private information can also indicate compromise.
To reduce risk, Apple offers Lockdown Mode, which limits certain functions to reduce attack surfaces. Apple security executive Ivan Krstić states that widespread iPhone malware has not been observed outside mercenary spyware campaigns. Apple has also introduced Memory Integrity Enforcement, an always-on protection designed to block memory-based exploits.
Google provides Advanced Protection for Android, enhanced in Android 16 with intrusion logging, USB safeguards, and network restrictions.
Experts recommend avoiding unknown links, limiting app installations, keeping devices updated, avoiding sideloading, and restarting phones periodically. However, confirmed infections often require replacing the device entirely. Organizations such as Amnesty International, Access Now, and Reporters Without Borders offer assistance to individuals who believe they have been targeted.
Security specialists advise staying cautious without allowing fear to disrupt normal device use.
The approach to cybersecurity in 2026 will be shaped not only by technological innovation but also by how deeply digital systems are embedded in everyday life. As cloud services, artificial intelligence tools, connected devices, and online communication platforms become routine, they also expand the surface area for cyber exploitation.
Cyber threats are no longer limited to technical breaches behind the scenes. They increasingly influence what people believe, how they behave online, and which systems they trust. While some risks are still emerging, others are already circulating quietly through commonly used apps, services, and platforms, often without users realizing it.
One major concern is the growing concentration of internet infrastructure. A substantial portion of websites and digital services now depend on a limited number of cloud providers, content delivery systems, and workplace tools. This level of uniformity makes the internet more efficient but also more fragile. When many platforms rely on the same backbone, a single disruption, vulnerability, or attack can trigger widespread consequences across millions of users at once. What was once a diverse digital ecosystem has gradually shifted toward standardization, making large-scale failures easier to exploit.
Another escalating risk is the spread of misleading narratives about online safety. Across social media platforms, discussion forums, and live-streaming environments, basic cybersecurity practices are increasingly mocked or dismissed. Advice related to privacy protection, secure passwords, or cautious digital behavior is often portrayed as unnecessary or exaggerated. This cultural shift creates ideal conditions for cybercrime. When users are encouraged to ignore protective habits, attackers face less resistance. In some cases, misleading content is actively promoted to weaken public awareness and normalize risky behavior.
Artificial intelligence is further accelerating cyber threats. AI-driven tools now allow attackers to automate tasks that once required advanced expertise, including scanning for vulnerabilities and crafting convincing phishing messages. At the same time, many users store sensitive conversations and information within browsers or AI-powered tools, often unaware that this data may be accessible to malware. As automated systems evolve, cyberattacks are becoming faster, more adaptive, and more difficult to detect or interrupt.
Trust itself has become a central target. Technologies such as voice cloning, deepfake media, and synthetic digital identities enable criminals to impersonate real individuals or create believable fake personas. These identities can bypass verification systems, open accounts, and commit fraud over long periods before being detected. As a result, confidence in digital interactions, platforms, and identity checks continues to decline.
Future computing capabilities are already influencing present-day cyber strategies. Even though advanced quantum-based attacks are not yet practical, some threat actors are collecting encrypted data now with the intention of decrypting it later. This approach puts long-term personal, financial, and institutional data at risk and underlines the need for stronger, future-ready security planning.
As digital and physical systems become increasingly interconnected, cybersecurity in 2026 will extend beyond software and hardware defenses. It will require stronger digital awareness, better judgment, and a broader understanding of how technology shapes risk in everyday life.
Cybersecurity researchers have revealed a newly identified attack technique that shows how artificial intelligence chatbots can be manipulated to leak sensitive information with minimal user involvement. The method, known as Reprompt, demonstrates how attackers could extract data from AI assistants such as Microsoft Copilot through a single click on a legitimate-looking link, while bypassing standard enterprise security protections.
According to researchers, the attack requires no malicious software, plugins, or continued interaction. Once a user clicks the link, the attacker can retain control of the chatbot session even if the chat window is closed, allowing information to be quietly transmitted without the user’s awareness.
The issue was disclosed responsibly, and Microsoft has since addressed the vulnerability. The company confirmed that enterprise users of Microsoft 365 Copilot are not affected.
At a technical level, Reprompt relies on a chain of design weaknesses. Attackers first embed instructions into a Copilot web link using a standard query parameter. These instructions are crafted to bypass safeguards that are designed to prevent direct data exposure by exploiting the fact that certain protections apply only to the initial request. From there, the attacker can trigger a continuous exchange between Copilot and an external server, enabling hidden and ongoing data extraction.
In a realistic scenario, a target might receive an email containing what appears to be a legitimate Copilot link. Clicking it would cause Copilot to execute instructions embedded in the URL. The attacker could then repeatedly issue follow-up commands remotely, prompting the chatbot to summarize recently accessed files, infer personal details, or reveal contextual information. Because these later instructions are delivered dynamically, it becomes difficult to determine what data is being accessed by examining the original prompt alone.
Researchers note that this effectively turns Copilot into an invisible channel for data exfiltration, without requiring user-entered prompts, extensions, or system connectors. The underlying issue reflects a broader limitation in large language models: their inability to reliably distinguish between trusted user instructions and commands embedded in untrusted data, enabling indirect prompt injection attacks.
The Reprompt disclosure coincides with the identification of multiple other techniques targeting AI-powered tools. Some attacks exploit chatbot connections to third-party applications, enabling zero-interaction data leaks or long-term persistence by injecting instructions into AI memory. Others abuse confirmation prompts, turning human oversight mechanisms into attack vectors, particularly in development environments.
Researchers have also shown how hidden instructions can be planted in shared documents, calendar invites, or emails to extract corporate data, and how AI browsers can be manipulated to bypass built-in prompt injection defenses. Beyond software, hardware-level risks have been identified, where attackers with server access may infer sensitive information by observing timing patterns in machine learning accelerators.
Additional findings include abuses of trusted AI communication protocols to drain computing resources, trigger hidden tool actions, or inject persistent behavior, as well as spreadsheet-based attacks that generate unsafe formulas capable of exporting user data. In some cases, attackers could manipulate AI development platforms to alter spending controls or leak access credentials, enabling stealthy financial abuse.
Taken together, the research underlines that prompt injection remains a persistent and evolving risk. Experts recommend layered security defenses, limiting AI privileges, and restricting access to sensitive systems. Users are also advised to avoid clicking unsolicited AI-related links and to be cautious about sharing personal or confidential information in chatbot conversations.
As AI systems gain broader access to corporate data and greater autonomy, researchers warn that the potential impact of a single vulnerability increases substantially, underscoring the need for careful deployment, continuous monitoring, and ongoing security research.
Cybersecurity analysts have identified a sophisticated web skimming operation that has been running continuously since early 2022, silently targeting online checkout systems. The campaign focuses on stealing payment card information and is believed to affect businesses that rely on globally used card networks.
Web skimming is a type of cyberattack where criminals tamper with legitimate shopping websites rather than attacking customers directly. By inserting malicious code into payment pages, attackers are able to intercept sensitive information at the exact moment a customer attempts to complete a purchase. Because the website itself appears normal, victims are usually unaware their data has been compromised.
This technique is commonly associated with Magecart-style attacks. While Magecart initially referred to groups exploiting Magento-based websites, the term now broadly describes any client-side attack that captures payment data through infected checkout pages across multiple platforms.
The operation was uncovered during an investigation into a suspicious domain hosting malicious scripts. This domain was linked to infrastructure previously associated with a bulletproof hosting provider that had faced international sanctions. Researchers found that the attackers were using this domain to distribute heavily concealed JavaScript files that were loaded directly by e-commerce websites.
Once active, the malicious script continuously monitors user activity on the payment page. It is programmed to detect whether a website administrator is currently logged in by checking for specific indicators commonly found on WordPress sites. If such indicators are present, the script automatically deletes itself, reducing the risk of detection during maintenance or inspection.
The attack becomes particularly deceptive when certain payment options are selected. In these cases, the malicious code creates a fake payment form that visually replaces the legitimate one. Customers unknowingly enter their card number, expiration date, and security code into this fraudulent interface. After the information is captured, the website displays a generic payment error, making it appear as though the transaction failed due to a simple mistake.
In addition to financial data, the attackers collect personal details such as names, contact numbers, email addresses, and delivery information. This data is sent to an external server controlled by the attackers using standard web communication methods. Once the transfer is complete, the fake form is removed, the real payment form is restored, and the script marks the victim as already compromised to avoid repeating the attack.
Researchers noted that the operation reflects an advanced understanding of website behavior, especially within WordPress-based environments. By exploiting both technical features and user trust, the attackers have managed to sustain this campaign for years without drawing widespread attention.
This discovery reinforces the importance of continuous website monitoring and script validation for businesses, as well as cautious online shopping practices for consumers.
The breach happened in EEOC's Public Portal system where unauthorized access of agency data may have disclosed personal data in logs given to agency by the public. “Staff employed by the contractor, who had privileged access to EEOC systems, were able to handle data in an unauthorized (UA) and prohibited manner in early 2025,” reads the EEOC email notification sent by data security office.
The email said that the review suggested personally identifiable information (PII) may have been leaked, depending on the individual. The exposed information may contain names, contact and other data. The review of is still ongoing while EOCC works with the law enforcement.
EOCC has asked individuals to review their financial accounts for any malicious activity and has also asked portal users to reset their passwords.
Contracting data indicates that EEOC had a contract with Opexus, a company that provides case management software solutions to the federal government.
Open spokesperson confirmed this and said EEOC and Opex “took immediate action when we learned of this activity, and we continue to support investigative and law enforcement efforts into these individuals’ conduct, which is under active prosecution in the Federal Court of the Eastern District of Virginia.”
Talking about the role of employees in the breach, the spokesperson added that “While the individuals responsible met applicable seven-year background check requirements consistent with prevailing government and industry standards at the time of hire, this incident made clear that personnel screening alone is not sufficient."
The second Trump administration's efforts to prevent claimed “illegal discrimination” driven by diversity, equity, and inclusion programs, which over the past year have been examined and demolished at almost every level of the federal government, centre on the EEOC.
Large private companies all throughout the nation have been affected by the developments. In an X post this month, EEOC chairwoman Andrea Lucas asked white men if they had experienced racial or sexual discrimination at work and urged them to report their experiences to the organization "as soon as possible.”
A critical security vulnerability has been identified in LangChain’s core library that could allow attackers to extract sensitive system data from artificial intelligence applications. The flaw, tracked as CVE-2025-68664, affects how the framework processes and reconstructs internal data, creating serious risks for organizations relying on AI-driven workflows.
LangChain is a widely adopted framework used to build applications powered by large language models, including chatbots, automation tools, and AI agents. Due to its extensive use across the AI ecosystem, security weaknesses within its core components can have widespread consequences.
The issue stems from how LangChain handles serialization and deserialization. These processes convert data into a transferable format and then rebuild it for use by the application. In this case, two core functions failed to properly safeguard user-controlled data that included a reserved internal marker used by LangChain to identify trusted objects. As a result, untrusted input could be mistakenly treated as legitimate system data.
This weakness becomes particularly dangerous when AI-generated outputs or manipulated prompts influence metadata fields used during logging, event streaming, or caching. When such data passes through repeated serialization and deserialization cycles, the system may unknowingly reconstruct malicious objects. This behavior falls under a known security category involving unsafe deserialization and has been rated critical, with a severity score of 9.3.
In practical terms, attackers could craft inputs that cause AI agents to leak environment variables, which often store highly sensitive information such as access tokens, API keys, and internal configuration secrets. In more advanced scenarios, specific approved components could be abused to transmit this data outward, including through unauthorized network requests. Certain templating features may further increase risk if invoked after unsafe deserialization, potentially opening paths toward code execution.
The vulnerability was discovered during security reviews focused on AI trust boundaries, where the researcher traced how untrusted data moved through internal processing paths. After responsible disclosure in early December 2025, the LangChain team acknowledged the issue and released security updates later that month.
The patched versions introduce stricter handling of internal object markers and disable automatic resolution of environment secrets by default, a feature that was previously enabled and contributed to the exposure risk. Developers are strongly advised to upgrade immediately and review related dependencies that interact with LangChain-core.
Security experts stress that AI outputs should always be treated as untrusted input. Organizations are urged to audit logging, streaming, and caching mechanisms, limit deserialization wherever possible, and avoid exposing secrets unless inputs are fully validated. A similar vulnerability identified in LangChain’s JavaScript ecosystem accentuates broader security challenges as AI frameworks become more interconnected.
As AI adoption accelerates, maintaining strict data boundaries and secure design practices is essential to protecting both systems and users from newly developing threats.