Cybersecurity researchers have uncovered another wave of the ongoing GlassWorm campaign, this time using a new Zig-based dropper built to silently compromise every integrated development environment (IDE) on a developer's system.
The tactic was found in an Open VSX extension called “specstudio.code-wakatime-activity-tracker”, which masqueraded as WakaTime, a popular tool that tracks how much time programmers spend in their IDE. The extension has since been taken down and can no longer be downloaded.
GlassWorm has shipped natively compiled code in extensions before. Here, rather than serving as the payload itself, the binary acts as a covert indirection layer for the actual GlassWorm dropper, silently compromising every other IDE that may be present on the device. The newly discovered Microsoft Visual Studio Code (VS Code) extension is a near-identical replica of the legitimate one.
The extension installs a universal Mach-O binary called "mac.node" on Apple macOS systems and a binary called "win.node" on Windows machines. Both are Node.js native addons: compiled shared libraries written in Zig that load straight into Node's runtime and execute outside the JavaScript sandbox with full operating-system-level access.
Once loaded, the binary's main objective is to find every IDE on the system that supports VS Code extensions. This includes not only Microsoft VS Code and VS Code Insiders but also forks such as VSCodium and Positron, as well as AI-powered editors like Cursor and Windsurf.
Once this is achieved, the binary fetches an infected VS Code extension (.VSIX) from an attacker-owned GitHub account. The extension, named “floktokbok.autoimport”, imitates “steoates.autoimport”, a legitimate extension with over 5 million downloads on the official Visual Studio Marketplace.
The downloaded .VSIX file is then written to a secondary path and silently installed into each IDE via the editor's command-line installer.
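The discovery-and-install flow described above can be sketched in a few lines. This is an illustrative reconstruction in Python, not the dropper's actual code; the CLI names are the standard launchers for each editor mentioned, and the install command uses VS Code's documented "--install-extension" flag.

```python
import shutil
import subprocess

# CLI launchers for VS Code-compatible editors (illustrative list)
IDE_CLIS = ["code", "code-insiders", "codium", "positron", "cursor", "windsurf"]

def find_vsix_compatible_ides(candidates=IDE_CLIS):
    """Return the editor CLIs actually present on this system's PATH."""
    return [cli for cli in candidates if shutil.which(cli) is not None]

def install_vsix_everywhere(vsix_path, dry_run=True):
    """Install a .vsix into every detected editor via its CLI installer."""
    commands = []
    for cli in find_vsix_compatible_ides():
        cmd = [cli, "--install-extension", vsix_path]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=False)
    return commands

# A defender can run the same enumeration to audit which editors
# on a host would be reachable by this technique.
print(find_vsix_compatible_ides())
```

The same enumeration is useful defensively: knowing which VSIX-capable editors sit on a host tells you the blast radius of a malicious extension.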
The second-stage VS Code extension works as a dropper that skips execution on Russian devices, interacts with the Solana blockchain, harvests personal data, and deploys a remote access trojan (RAT). In the final stage, the RAT installs a data-stealing Google Chrome extension.
“The campaign has expanded repeatedly since then, compromising hundreds of projects across GitHub, npm, and VS Code, and most recently delivering a persistent RAT through a fake Chrome extension that logged keystrokes and dumped session cookies. The group keeps iterating, and they just made a meaningful jump,” cybersecurity firm Aikido reported.
New research suggests the cryptocurrency industry may have less time than anticipated to prepare for the risks posed by quantum computing, with potential implications for Bitcoin, Ethereum, and other major digital assets.
A whitepaper released on March 31 by researchers at Google indicates that breaking the cryptographic systems securing these networks may require fewer than 500,000 physical qubits on a superconducting quantum computer. This marks a sharp reduction from earlier estimates, which placed the requirement in the millions.
The study brings together contributors from both academia and industry, including Justin Drake of the Ethereum Foundation and Dan Boneh, alongside Google Quantum AI researchers led by Ryan Babbush and Hartmut Neven. The research was also shared with U.S. government agencies prior to publication, with input from organizations such as Coinbase and the Ethereum Foundation.
At present, no quantum system is capable of carrying out such an attack. Google’s most advanced processor, Willow, operates with 105 qubits. However, researchers warn that the gap between current hardware and attack-capable machines is narrowing. Drake has estimated at least a 10% probability that a quantum computer could extract a private key from a public key by 2032.
The concern centers on how cryptocurrencies are secured. Bitcoin relies on a mathematical problem known as the Elliptic Curve Discrete Logarithm Problem, which is considered practically unsolvable using classical computers. However, Peter Shor demonstrated that quantum algorithms could solve this problem far more efficiently, potentially allowing attackers to recover private keys, forge signatures, and access funds.
Importantly, this threat does not extend to Bitcoin mining, which relies on the SHA-256 algorithm. Experts suggest that using quantum computing to meaningfully disrupt mining remains decades away. Instead, the vulnerability lies in signature schemes such as ECDSA and Schnorr, both based on the secp256k1 curve.
The research outlines three potential attack scenarios. “On-spend” attacks target transactions in progress, where an attacker could intercept a transaction, derive the private key, and submit a fraudulent replacement before confirmation. With Bitcoin’s average block time of 10 minutes, the study estimates such an attack could be executed in roughly nine minutes using optimized quantum systems, with parallel processing increasing success rates. Faster blockchains such as Ethereum and Solana offer narrower windows but are not entirely immune.
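The on-spend race comes down to simple arithmetic: the quantum key-derivation time must fit inside the confirmation window. A quick check using the figures above:

```python
# Figures from the study: ~10-minute average Bitcoin block time,
# ~9-minute estimated quantum key-derivation time.
block_time_s = 10 * 60
attack_time_s = 9 * 60

# Average slack the attacker has to broadcast the fraudulent replacement
margin_s = block_time_s - attack_time_s
print(margin_s)  # 60 seconds

# A chain with a ~12-second block time leaves no room for a 9-minute
# attack, which is why faster blockchains offer narrower (but, per the
# study, not zero) exposure windows.
fast_block_time_s = 12
print(attack_time_s < fast_block_time_s)  # False
```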
“At-rest” attacks focus on wallets with already exposed public keys, such as reused or inactive addresses, where attackers have significantly more time. A third category, “on-setup” attacks, involves exploiting protocol-level parameters. While Bitcoin appears resistant to this method, certain Ethereum features and privacy tools like Tornado Cash may face higher exposure.
Technically, the researchers developed quantum circuits requiring fewer than 1,500 logical qubits and tens of millions of computational operations, translating to under 500,000 physical qubits under current assumptions. This is a substantial improvement over earlier estimates, such as a 2023 study that suggested around 9 million qubits would be needed. More optimistic models could reduce this further, though they depend on hardware capabilities not yet demonstrated.
In an unusual move, the team did not publish the full attack design. Instead, they used a zero-knowledge proof generated through the SP1 zero-knowledge virtual machine to validate their findings without exposing sensitive details. This approach, rarely used in quantum research, allows independent verification while limiting misuse.
The findings arrive as both industry and governments begin preparing for a post-quantum future. The National Security Agency has called for quantum-resistant systems by 2030, while Google has set a 2029 target for transitioning its own infrastructure. Ethereum has been actively working toward similar goals, aiming for a full migration within the same timeframe. Bitcoin, however, faces slower progress due to its decentralized governance model, where major upgrades can take years to implement.
Early mitigation efforts are underway. A recent Bitcoin proposal introduces new address formats designed to obscure public keys and support future quantum-resistant signatures. However, a full transition away from current cryptographic systems has not yet been finalized.
For now, users are advised to take precautionary steps. Moving funds to new addresses, avoiding address reuse, and monitoring updates from wallet providers can reduce exposure, particularly for long-term holdings. While the threat is not immediate, researchers emphasize that preparation must begin well in advance, as advances in quantum computing continue to accelerate.
Privacy has long been a concern for users and businesses alike, and the rapid adoption of AI is raising the stakes. DuckDuckGo’s Duck.ai chatbot is benefiting from this.
The latest report from Similarweb revealed that traffic to Duck.ai surged last month, reaching 11.1 million visits in February 2026, a 300% increase over January.
Those numbers look small next to the most popular chatbots such as ChatGPT, Claude, and Gemini.
Similarweb estimates that ChatGPT recorded 5.4 billion visits in February 2026, and Google’s Gemini recorded 2.1 billion, whereas Claude recorded 290.3 million.
For DuckDuckGo, however, the figures are encouraging: the bot only launched in beta in 2025 and has since shown a sharp rise in visits.
The DuckDuckGo browser is known for its privacy, and the company aims to apply the same principle to its AI bot. Duck.ai doesn't run a bespoke LLM; it uses frontier models from Meta, Anthropic, and OpenAI, but it doesn't expose your IP address or personal data.
Duck.ai's privacy policy reads: "In addition, we have agreements in place with all model providers that further limit how they can use data from these anonymous requests, including not using Prompts and Outputs to develop or improve their models, as well as deleting all information received once it is no longer necessary to provide Outputs (at most within 30 days, with limited exceptions for safety and legal compliance).”
What is behind the sudden surge? The bot has two advantages over individual commercial bots like ChatGPT and Gemini: an option to toggle between multiple models, and stronger privacy protections. The privacy aspect is what sets it apart. Users on Reddit have praised Duck.ai, with one person noting "it's way better than Google's," meaning Gemini.
In March, Anthropic rejected several requested uses of its technology for mass surveillance and weapons by the Department of Defense. The DoD retaliated by terminating the contract, and soon after, OpenAI stepped in. The incident stirred controversy over privacy and the ethical use of AI, which helps explain why users may prefer chatbots like Duck.ai that shield user data from both governments and big tech.
A China-based threat actor is targeting European government and diplomatic entities in a campaign that began in mid-2025, after a two-year lull in the region. The activity has been attributed to TA416, which overlaps with clusters tracked as DarkPeony, Red Lich, RedDelta, SmugX, Vertigo Panda, and UNC6384.
According to Proofpoint, “This TA416 activity included multiple waves of web bug and malware delivery campaigns against diplomatic missions to the European Union and NATO across a range of European countries. Throughout this period, TA416 regularly altered its infection chain, including abusing Cloudflare Turnstile challenge pages, abusing OAuth redirects, and using C# project files, as well as frequently updating its custom PlugX payload."
Additionally, TA416 ran multiple campaigns against government and diplomatic organizations in the Middle East following the US-Iran conflict in February 2026, aiming to gather regional intelligence about the conflict.
TA416 also has a history of technical overlaps with a separate group, Mustang Panda (also tracked as UNK_SteadySplit, CerenaKeeper, and Red Ishtar). Collectively, the two clusters appear under names including Hive0154, Twill Typhoon, Earth Preta, Temp.HEX, Stately Taurus, and HoneyMyte.
TA416’s attacks rely on PlugX variants, while Mustang Panda has repeatedly deployed tools such as COOLCLIENT, TONESHELL, and PUBLOAD. One trait the groups share is the use of DLL side-loading to install malware.
TA416’s latest campaigns against European entities combine web bug reconnaissance with malware delivery: the threat actors use freemail sender accounts for espionage and install the PlugX backdoor through malicious archives hosted on Google Drive, Microsoft Azure Blob Storage, and compromised SharePoint instances. Related PlugX campaigns were documented by Arctic Wolf and StrikeReady as recently as October 2025.
According to Proofpoint, “A web bug (or tracking pixel) is a tiny invisible object embedded in an email that triggers an HTTP request to a remote server when opened, revealing the recipient's IP address, user agent, and time of access, allowing the threat actor to assess whether the email was opened by the intended target.”
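Concretely, a web bug is a single invisible image whose URL carries a per-recipient token. The sketch below uses hypothetical names and infrastructure (not TA416's actual domains) to show how the token in the request tells the server exactly who opened the message:

```python
import uuid

def make_tracking_pixel(c2_base, recipient_id):
    """Build an HTML <img> web bug with a per-recipient token."""
    token = uuid.uuid5(uuid.NAMESPACE_URL, recipient_id)  # stable per recipient
    url = f"{c2_base}/pixel.gif?t={token}"
    # width/height of 1 and empty alt text keep the image invisible
    return f'<img src="{url}" width="1" height="1" alt="">'

html = make_tracking_pixel("https://tracker.example", "target@mission.eu")
print(html)
# When the mail client fetches pixel.gif, the server's access log records
# the token alongside the requester's IP address, User-Agent, and timestamp.
```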
The TA416 attacks in December last year leveraged third-party Microsoft Entra ID cloud apps to redirect victims to downloads of malicious archives. Phishing emails in this campaign linked to Microsoft’s legitimate OAuth authorization endpoint which, once opened, forwarded the user to an attacker-controlled domain that delivered PlugX.
Describing the C# project file technique, experts explained, "When the MSBuild executable is run, it searches the current directory for a project file and automatically builds it."
A massive credential-harvesting campaign has been found abusing the React2Shell flaw as an initial infection vector to steal database credentials, shell command history, Amazon Web Services (AWS) secrets, and GitHub and Stripe API keys.
Cisco Talos has linked the campaign to a threat cluster tracked as UAT-10608. At least 766 hosts across multiple geographic regions and cloud providers have been exploited as part of the operation.
According to experts, “Post-compromise, UAT-10608 leverages automated scripts for extracting and exfiltrating credentials from a variety of applications, which are then posted to its command-and-control (C2). The C2 hosts a web-based graphical user interface (GUI) titled 'NEXUS Listener' that can be used to view stolen information and gain analytical insights using precompiled statistics on credentials harvested and hosts compromised.”
The campaign targets Next.js instances vulnerable to CVE-2025-55182 (CVSS score: 10.0), a severe flaw in React Server Components and the Next.js App Router that can enable remote code execution. The attackers use it for initial access and then deploy the NEXUS Listener collection framework.
This is achieved via a dropper that runs a multi-phase harvesting script, which collects various details from the victim system:
SSH private keys and authorized_keys
JSON-parsed keys and authorized_keys
Kubernetes service account tokens
Environment variables
API keys
Docker container configurations
Running processes
IAM role-associated temporary credentials
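Defenders can mirror the harvester's environment-variable sweep to audit their own hosts before an attacker does. A minimal Python sketch; the name patterns are illustrative, not taken from the UAT-10608 tooling:

```python
import os
import re

# Name patterns that commonly indicate secrets in environment variables
SECRET_PATTERNS = [
    r"AWS_SECRET", r"AWS_ACCESS_KEY", r"STRIPE", r"GITHUB_TOKEN",
    r"API_KEY", r"_SECRET$", r"_TOKEN$", r"DATABASE_URL",
]

def find_secret_env_vars(env=None):
    """Return env var names that look like they hold credentials."""
    env = os.environ if env is None else env
    rx = re.compile("|".join(SECRET_PATTERNS), re.IGNORECASE)
    return sorted(name for name in env if rx.search(name))

# Example: a host leaking the kinds of material NEXUS Listener collects
sample = {"AWS_SECRET_ACCESS_KEY": "x", "STRIPE_API_KEY": "x",
          "PATH": "/usr/bin", "GITHUB_TOKEN": "x"}
print(find_secret_env_vars(sample))
# ['AWS_SECRET_ACCESS_KEY', 'GITHUB_TOKEN', 'STRIPE_API_KEY']
```

Anything this audit flags on a production host is a candidate for moving into a proper secrets manager rather than the process environment.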
The range of victims and the indiscriminate targeting pattern are consistent with automated scanning. The core of the framework is a password-protected application that exposes all stolen data to its operator through a graphical user interface with search functions for browsing the information. The current NEXUS Listener version is V3, indicating the tool has gone through significant iteration.
Talos managed to obtain data from one NEXUS Listener instance. It held API keys for Stripe, AI platforms such as Anthropic, OpenAI, and NVIDIA NIM, and communication services such as Brevo and SendGrid, as well as webhook secrets, Telegram bot tokens, GitLab and GitHub tokens, app secrets, and database connection strings.
But the problem of unwanted email has not disappeared, and users still run into it from time to time. One way to address the issue is to use email aliases.
An email alias is an alternative email address that lets you receive mail without sharing your real address. The alias reroutes all incoming mail to your primary account. Common types include:
Plus addressing: To organize mail efficiently, you append a + symbol and a category tag to your address; you can also add rules to your mail client to filter messages by source.
Provider aliases: Mainly used by organizations to give each department its own address while all mail lands in the same inbox.
Masked/forwarding aliases: These are aimed at privacy. Instead of giving out their real email, users generate a random address that forwards messages to their real inbox. This feature is available with services like Proton Mail.
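Plus addressing works purely on the local part of the address: mail providers that support it deliver "user+anything@example.com" straight to "user@example.com", and filtering rules can then key off the tag. A small Python sketch of the mechanics:

```python
def parse_plus_address(address):
    """Split a plus-addressed email into (base address, tag)."""
    local, _, domain = address.partition("@")
    base, _, tag = local.partition("+")
    return f"{base}@{domain}", tag or None

print(parse_plus_address("alice+newsletters@example.com"))
# ('alice@example.com', 'newsletters')
print(parse_plus_address("alice@example.com"))
# ('alice@example.com', None)
```

The recovered tag is what mail rules match on: anything tagged "newsletters" goes to one folder, and a tag that starts receiving spam reveals which site leaked the address.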
Email aliases are helpful for organizing your inbox and can be effective for business contacts, but the main benefit is protecting your privacy.
There are several strategies to accomplish this, but the primary one is to minimize how long your real email is exposed online. An alias can be removed at any moment, even if the address itself remains visible where it was shared. The more aliases you use, the harder it is for anyone to identify your real core email address.
Because aliases keep your real address hidden from spammers, marketers, and phishing attempts, you gain more privacy. They also make it simpler to determine who has misused your data: by handing out a distinct alias in each context, you can spot exactly where an address was abused. Instead of wading through a flood of spam, you can delete the offending alias as soon as you discover the abuse and start over.
Aliases can help with privacy, but they are not a foolproof way to stay safe online: they do not automatically encrypt emails, nor do they stop tracking cookies.
Court filings have revealed that Apple's Hide My Email, a feature intended to protect genuine email addresses, does not keep users anonymous from law enforcement, raising new privacy concerns.
The feature, available to iCloud+ subscribers, lets users create random email aliases so that websites and applications never see their primary address. Apple claims it doesn't read messages; they are simply forwarded. However, recent US cases show a clear limit: Apple was able to connect those anonymous aliases to identifiable accounts in response to legitimate court demands.
Oasis Security found the issue and informed OpenClaw; a fix was then released in version 2026.2.26 on February 26.
OpenClaw is a self-hosted AI tool that recently gained popularity for allowing AI agents to autonomously execute commands, send texts, and handle tasks across multiple platforms. Oasis Security said the flaw stems from the OpenClaw gateway service binding to localhost and exposing a WebSocket interface.
Because cross-origin browser policies do not block WebSocket connections to localhost, a malicious website opened by an OpenClaw user can use JavaScript to silently open a connection to the local gateway and attempt authentication without raising any alarms.
To deter such attacks, OpenClaw includes rate limiting, but the loopback address (127.0.0.1) is exempted by default so that local CLI sessions are not accidentally locked out.
The researchers found they could brute-force the OpenClaw management password at hundreds of attempts per second without any failed attempts being logged. Once the correct password is guessed, the attacker can silently register as a verified device, because the gateway automatically approves device pairings from localhost without requiring user confirmation.
“In our lab testing, we achieved a sustained rate of hundreds of password guesses per second from browser JavaScript alone. At that speed, a list of common passwords is exhausted in under a second, and a large dictionary would take only minutes. A human-chosen password doesn't stand a chance,” Oasis said.
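The flaw combines an unthrottled loopback path with a guessable password. A minimal Python sketch of the flawed pattern (illustrative, not OpenClaw's actual code), together with the timing arithmetic Oasis describes:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Windowed limiter that, like the flawed gateway, exempts loopback."""
    def __init__(self, max_per_minute=10):
        self.max_per_minute = max_per_minute
        self.attempts = defaultdict(list)

    def allow(self, client_ip, now=None):
        if client_ip == "127.0.0.1":   # the fatal exemption: browser JS
            return True                # also arrives via loopback
        now = time.monotonic() if now is None else now
        window = [t for t in self.attempts[client_ip] if now - t < 60]
        self.attempts[client_ip] = window
        if len(window) >= self.max_per_minute:
            return False
        window.append(now)
        return True

limiter = RateLimiter()
# Every guess from the local browser sails through unthrottled:
print(all(limiter.allow("127.0.0.1") for _ in range(100_000)))  # True

# Oasis's arithmetic: at ~300 guesses/second, a 10,000-word list
# of common passwords falls in roughly half a minute.
print(10_000 / 300)  # ~33 seconds
```

The fix is not a better password policy but removing the exemption (or requiring pairing confirmation), since any throttle that trusts loopback trusts every browser tab on the machine.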
With an authenticated session and admin access, the attacker can then interact directly with the AI platform: enumerating connected nodes, dumping credentials, and reading application logs.
According to Oasis, this might enable an attacker to give the agent instructions to perform arbitrary shell commands on paired nodes, exfiltrate files from linked devices, or scan chat history for important information. This would essentially result in a complete workstation compromise that is initiated from a browser tab.
Oasis provided an example of this attack, demonstrating how the OpenClaw vulnerability could be exploited to steal confidential information. The problem was resolved within a day of Oasis reporting it to OpenClaw, along with technical information and proof-of-concept code.
Cybersecurity experts have disclosed details of a suspected AI-generated malware named “Slopoly”, used by the threat actor Hive0163 for financial gain.
IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” according to the Hacker News.
Hive0163's attacks are motivated by extortion via large-scale data theft and ransomware. The gang is linked with various malicious tools like Interlock RAT, NodeSnake, Interlock ransomware, and Junk fiction loader.
In a ransomware incident observed in early 2026, the gang was seen installing Slopoly during the post-exploitation phase to gain persistent access to the compromised server.
Slopoly’s infection can be traced back to a PowerShell script, possibly generated by a builder, installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder. Persistence is established via a scheduled task named “Runtime Broker”.
There are signs that the malware was developed with the help of an as-yet-undetermined large language model (LLM): it contains extensive comments, logging, error handling, and accurately named variables.
The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework.
According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”
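The build-time "polymorphism" Mühr describes amounts to randomizing identifiers and configuration values per build while leaving the logic untouched. A hypothetical sketch of that builder pattern:

```python
import random
import string

# Template for a generated client; only the blanks change between builds
TEMPLATE = """function {fn_beacon} {{
    # send host info to {c2}; sleep {interval}s between beacons
}}"""

def build_client(c2):
    """Emit a 'new' client: identical logic, randomized name and config."""
    fn_name = "".join(random.choices(string.ascii_letters, k=12))
    return TEMPLATE.format(fn_beacon=fn_name, c2=c2,
                           interval=random.randint(30, 300))

a = build_client("https://c2.example")
b = build_client("https://c2.example")
# Each build looks different to signature-based detection,
# yet nothing changes at execution time.
print(a != b)
```

This is why Mühr's distinction matters: the script never rewrites itself at runtime, so behavioral detection sees the same program every time, whatever the names say.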
The PowerShell script works as a backdoor, transmitting system details to a C2 server. AI-assisted malware has been on the rise recently: Slopoly, PromptSpy, and VoidLink all show how attackers are using such tools to speed up malware creation and scale their operations.
IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”
Agentic web browsers, which use AI to autonomously perform tasks across websites on a user's behalf, can be trained against and fooled into phishing attacks. Attackers exploit the AI browsers' tendency to narrate their actions and turn that output against the very same model to defeat its security checks.
According to security expert Shaked Chen, “The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!,” the Hacker News reported. This “agentic blabbering” means the AI browser continually reveals what it sees, thinks, and plans to do next, and what it deems safe or a threat.
By intercepting the traffic between the browser and the AI services on the vendor's servers, and feeding it as input to a Generative Adversarial Network (GAN), the researchers made Perplexity's Comet AI browser fall prey to a phishing attack within four minutes.
The research is based on established tactics such as Scamlexity and VibeScamming, which revealed that vibe-coding platforms and AI browsers can be coerced into generating scam pages and performing malicious tasks via prompt injection.
The attack surface shifts because the AI agent manages tasks without frequent human oversight: a scammer no longer has to trick the user, but instead seeks to deceive the AI model itself.
Chen said, “If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal.” Chen added that the “scam evolves until the AI Browser reliably walks into the trap another AI set for it."
The aim is to build a “scamming machine” that iteratively improves and regenerates a phishing page until the agentic browser accepts it and carries out the attacker’s instructions, such as entering the victim’s passwords into a malicious web page built for refund scams.
Guardio is concerned about the development, saying that, “This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact.”
Microsoft’s new Threat Intelligence report reveals that threat actors are using genAI tools for various tasks, such as phishing, surveillance, malware building, infrastructure development, and post-hack activity.
In various incidents, AI helps to create phishing emails, summarize stolen information, debug malware, translate content, and configure infrastructure. “Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure,” the report said.
"For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions," warns Microsoft.
Microsoft found different hacking gangs using AI in their cyberattacks, such as North Korean hackers known as Coral Sleet (Storm-1877) and Jasper Sleet (Storm-0287), who use the AI in their remote IT worker scams.
The AI helps create realistic identities, communications, and resumes to land jobs at Western companies and gain access once hired. Microsoft also explained how AI is being exploited for malware development and infrastructure creation: threat actors use AI coding tools to create and refine malicious code, fix errors, and port malware components to different programming languages.
A few malware samples showed traces of AI-enabled components that generate scripts or configure behaviour at runtime. Microsoft found Coral Sleet using AI to build fake company sites, manage infrastructure, and troubleshoot its installations.
As defenders work to block the use of AI in these attacks, Microsoft says hackers are turning to jailbreaking techniques to trick models into producing malicious code or content.
Beyond generative AI, the report revealed that hackers are experimenting with agentic AI to carry out tasks autonomously, though for now the AI is mainly used to support decision-making. Because IT worker campaigns depend on the exploitation of legitimate access, experts advise organizations to treat these attacks as insider risks.
The operation starts with an email sent from an address hosted on ukr[.]net, a popular Ukrainian provider previously abused by the Russia-based hacking group APT28 in older campaigns.
Researchers at ClearSky have named the malware “BadPaw.” The campaign begins when a recipient opens a link purporting to host a ZIP archive. Instead of starting a direct download, the target is redirected to a domain that loads a tracking pixel, allowing the threat actor to verify engagement. A further redirect then delivers the ZIP file.
The archive appears to contain a standard HTML file, but ClearSky researchers revealed it is actually a disguised HTA application. When executed, the file displays a decoy document related to a Ukrainian government border-crossing request while malicious processes launch in the background.
Before proceeding, the malware checks a Windows Registry key to determine the system's installation date. If the operating system was installed fewer than ten days earlier, execution stops, a tactic that evades the freshly provisioned sandbox environments used by threat analysts.
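Install-date gates of this kind typically refuse to run when the OS installation is very recent, since analysis sandboxes are often freshly provisioned. A Python sketch of the decision logic; the real malware reads the date from the Windows Registry, and the threshold and structure here are illustrative:

```python
from datetime import datetime, timedelta, timezone

def should_run(install_time, now=None, min_age_days=10):
    """Anti-sandbox gate: refuse to run on freshly installed systems,
    since analysis sandboxes are often imaged shortly before use."""
    now = now or datetime.now(timezone.utc)
    return (now - install_time) >= timedelta(days=min_age_days)

# A machine imaged yesterday looks like a sandbox; a year-old one does not.
now = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(should_run(now - timedelta(days=1), now))    # False: abort
print(should_run(now - timedelta(days=365), now))  # True: proceed
```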
If all conditions are met, the malware locates the original ZIP file and retrieves extra components. It establishes persistence via a scheduled task that runs a VBS script, which uses steganography to extract hidden executable code from an image file.
Only nine antivirus engines could spot the payload at the time of study.
Once activated under the required parameters, BadPaw connects to a C2 server.
The following process happens:
Getting a numeric result from the /getcalendar endpoint.
Gaining access to a landing page called "Telemetry UP!” through /eventmanager.
Downloading ASCII-encoded payload data embedded within HTML.
Finally, the decoded data launches a backdoor called "MeowMeowProgram[.]exe," which provides file system control and remote shell access.
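As a hedged illustration of the final retrieval step, the snippet below decodes ASCII character codes embedded in an HTML page back into bytes. The comment delimiter and comma-separated format are assumptions for the example, not ClearSky's documented encoding:

```python
import re

def extract_ascii_payload(html, marker="payload"):
    """Pull a comma-separated list of ASCII codes out of an HTML comment
    and decode it back into bytes. The format here is hypothetical."""
    m = re.search(rf"<!--{marker}:([\d,]+)-->", html)
    if not m:
        return None
    return bytes(int(c) for c in m.group(1).split(","))

page = "<html><body>Telemetry UP!<!--payload:77,90,144--></body></html>"
print(extract_ascii_payload(page))  # b'MZ\x90' (a PE header prefix)
```

Hiding a binary as printable numbers inside otherwise-benign HTML is a common way to slip payloads past content filters that only inspect file types.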
The MeowMeow backdoor includes four protective layers: runtime parameter constraints, .NET Reactor obfuscation, sandbox detection, and monitoring for forensic tools like Wireshark, Procmon, OllyDbg, and Fiddler.
Incorrect execution results in a benign graphical user interface showing a picture of a cat; clicking the "MeowMeow" button merely displays a harmless message.
OpenClaw has addressed a high-severity security flaw that could have allowed a malicious site to connect to a locally running AI agent and take control of it. According to the Oasis Security report, “Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented.”
The researchers codenamed the flaw ClawJacked. Tracked as CVE-2026-25253, it could have anchored a severe vulnerability chain allowing any site to hijack a person’s AI agent. The vulnerability existed in the software's main gateway: because OpenClaw is built to trust connections from the user’s own system, it could have given attackers easy access.
On a developer's laptop, OpenClaw is installed and operational. Its gateway, a local WebSocket server, is password-protected and connected to localhost. When the developer visits a website that is controlled by the attacker via social engineering or another method, the attack begins. According to the Oasis Report, “Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn't block these cross-origin connections. So while you're browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing.”
The research highlights a clever trick involving WebSockets. Generally, your browser actively prevents websites from meddling with local services. But WebSockets are an exception: they are designed to stay “always-on” so data can flow in both directions continuously.
The OpenClaw gateway assumed a connection must be safe because it comes from the user's own computer (localhost). That assumption is dangerous: if a developer running OpenClaw visits a malicious website, a hidden script embedded in the page can connect via WebSocket and interact directly with the AI tool in the background, and the user will be none the wiser.
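The missing safeguard is an Origin check: every browser-initiated WebSocket handshake carries the page's Origin header, so a localhost service can reject connections that did not come from itself. A minimal Python sketch of the validation a fix would add (illustrative, not OpenClaw's actual code):

```python
from urllib.parse import urlsplit

ALLOWED_ORIGINS = {"http://localhost", "http://127.0.0.1"}

def origin_allowed(origin_header):
    """Accept a WebSocket upgrade only from trusted origins.
    Non-browser clients (e.g. a local CLI) send no Origin header."""
    if origin_header is None:
        return True  # native clients: rely on authentication, not Origin
    parts = urlsplit(origin_header)
    return f"{parts.scheme}://{parts.hostname}" in ALLOWED_ORIGINS

print(origin_allowed("https://evil.example"))   # False: browser attack blocked
print(origin_allowed("http://localhost:3000"))  # True
print(origin_allowed(None))                     # True: local CLI
```

Because a malicious page cannot forge its own Origin header, this one check closes the cross-origin path even if the service must keep listening on localhost.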