A newly identified infostealer called Storm emerged on underground cybercrime forums in early 2026, signalling a change in how attackers steal and use credentials. Priced at under $1,000 per month for standard access, the malware collects browser-stored data such as login credentials, session cookies, and cryptocurrency wallet information, then covertly transfers the data to attacker-controlled servers, where it is decrypted outside the victim’s system.
This change becomes clearer when compared to earlier techniques. Traditionally, infostealers decrypted browser credentials directly on infected machines by loading SQLite libraries and accessing local credential databases. Because of this, endpoint security tools learned to treat such database access as one of the strongest indicators of malicious activity.
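The on-device pattern described above is straightforward to illustrate. The sketch below, in Python, mimics the copy-then-query access to Chromium's `Login Data` SQLite file that endpoint tools learned to flag; it deliberately stops short of decryption, since the local DPAPI decryption step is exactly what detection logic keyed on (paths and schema follow Chromium's publicly documented layout; this is an illustrative sketch, not any specific stealer's code):

```python
import os
import shutil
import sqlite3
import tempfile

def read_saved_logins(login_data_path: str):
    """Copy the (normally locked) 'Login Data' SQLite file and query it.

    This copy-plus-query behaviour is the classic on-device access pattern
    endpoint tools treat as a strong malicious indicator. password_value
    stays an encrypted blob here; pre-v127 stealers decrypted it locally,
    which is the step App-Bound Encryption later broke.
    """
    tmp_copy = os.path.join(tempfile.mkdtemp(), "LoginData.copy")
    shutil.copy2(login_data_path, tmp_copy)  # copy first: the browser holds a lock
    con = sqlite3.connect(tmp_copy)
    try:
        return con.execute(
            "SELECT origin_url, username_value, password_value FROM logins"
        ).fetchall()
    finally:
        con.close()
```

Because this pattern is so well fingerprinted, skipping it — as Storm does — removes one of the most reliable behavioural signals defenders had.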
The approach began to break down after Google Chrome introduced App-Bound Encryption in version 127 in July 2024. This mechanism tied encryption keys to the browser environment itself, making local decryption substantially more difficult. Initial bypass attempts relied on injecting into browser processes or exploiting debugging protocols, but these techniques still generated detectable traces.
Storm avoids this entirely by skipping local decryption. Instead, it extracts encrypted browser files and quietly sends them to attacker infrastructure, removing the behavioural signals that endpoint tools typically rely on. It extends this model by supporting both Chromium-based browsers and Gecko-based browsers such as Firefox, Waterfox, and Pale Moon, whereas tools like StealC V2 still handle Firefox data locally.
The data collected includes saved passwords, session cookies, autofill entries, Google account tokens, payment card details, and browsing history. This combination gives attackers everything required to rebuild authenticated sessions remotely. In practice, a single compromised employee browser can provide direct access to SaaS platforms, internal systems, and cloud environments without triggering any password-based alerts.
Storm also automates session hijacking. Once decrypted, credentials and cookies appear in the attacker’s control panel. By supplying a valid Google refresh token along with a geographically matched SOCKS5 proxy, the platform can silently recreate the victim’s active session.
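Even behind a geographically matched proxy, a replayed cookie leaves residue defenders can hunt for. The sketch below flags any session identifier suddenly used from a country/ASN pair never before seen for that session — one of the access-pattern checks the article later recommends. The event fields are illustrative assumptions, not drawn from any specific product:

```python
from collections import defaultdict

def flag_session_anomalies(events):
    """events: iterable of (session_id, country, asn) tuples in time order.

    Flags each use of a session ID from a (country, ASN) pair not previously
    seen for that session. A country-matched SOCKS5 proxy defeats naive
    geo checks, but the hosting-provider ASN often still differs from the
    victim's residential ISP.
    """
    seen = defaultdict(set)
    alerts = []
    for sid, country, asn in events:
        key = (country, asn)
        if seen[sid] and key not in seen[sid]:
            alerts.append((sid, country, asn))
        seen[sid].add(key)
    return alerts
```

In practice this logic would run over identity-provider sign-in logs, where ASN changes within a live session are a common hijack indicator.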
This technique aligns with earlier research by Varonis Threat Labs. Its Cookie-Bite study showed that stolen Azure Entra ID session cookies can bypass multi-factor authentication, granting persistent access to Microsoft 365. Similarly, its SessionShark analysis demonstrated how phishing kits intercept session tokens in real time to defeat MFA protections. Storm packages these methods into a commercial subscription service.
Beyond credentials, the malware collects files from user directories, extracts session data from applications like Telegram, Signal, and Discord, and targets cryptocurrency wallets through browser extensions and desktop applications. It also gathers system information and captures screenshots across multiple monitors. Most operations run in memory, reducing the likelihood of detection.
Its infrastructure design adds resilience. Operators connect their own virtual private servers to Storm’s central system, routing stolen data through infrastructure they control. This setup limits the impact of takedowns, as enforcement actions are more likely to affect individual operator nodes rather than the core service.
Storm supports multi-user operations, allowing teams to divide responsibilities such as log access, malware build generation, and session restoration. It also automatically categorises stolen credentials by service, with visible rules for platforms including Google, Facebook, Twitter/X, and cPanel, helping attackers prioritise targets.
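Service-based categorisation of this kind amounts to simple rule matching on each credential's origin URL. A minimal sketch, assuming hostname rules for the platforms listed above and cPanel's default TLS port — the panel's actual rules are not public, so every pattern here is illustrative:

```python
from urllib.parse import urlparse

# Illustrative domain rules; Storm's real panel rules are not public.
SERVICE_RULES = {
    "google":    ("accounts.google.com", "google.com"),
    "facebook":  ("facebook.com",),
    "twitter/x": ("twitter.com", "x.com"),
}

def categorise(origin_url: str) -> str:
    """Bucket a stolen credential by service. cPanel is recognised here by
    its default TLS port (2083) rather than by domain."""
    parsed = urlparse(origin_url.lower())
    host = parsed.hostname or ""
    if parsed.port == 2083:
        return "cpanel"
    for service, domains in SERVICE_RULES.items():
        if any(host == d or host.endswith("." + d) for d in domains):
            return service
    return "other"
```

Defenders can reuse the same idea in reverse: bucketing leaked-credential dumps by service to prioritise forced resets.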
At the time of analysis, the control panel displayed 1,715 log entries linked to locations including India, the United States, Brazil, Indonesia, Ecuador, and Vietnam. While it is unclear whether all entries represent real victims or test data, variations in IP addresses, internet service providers, and data volumes suggest ongoing campaigns.
The logs include credentials associated with platforms such as Google, Facebook, Twitter/X, Coinbase, Binance, Blockchain.com, and Crypto.com. Such information often feeds into underground credential marketplaces, enabling account takeovers, fraud, and more targeted intrusions.
Storm is offered through a tiered pricing model: $300 for a seven-day trial, $900 per month for standard access, and $1,800 per month for a team licence supporting up to 100 operators and 200 builds. Use of an additional crypter is required. Notably, once deployed, malware builds continue operating even after a subscription expires, allowing ongoing data collection.
Security researchers view Storm as part of a broader evolution in credential theft. By shifting decryption to remote servers, attackers avoid detection mechanisms designed to identify on-device activity. At the same time, session cookie theft is increasingly replacing password theft as the primary objective.
The data collected by such tools often marks the beginning of further attacks, which typically surface as logins from unusual locations, lateral movement within networks, and other anomalous access patterns.
Indicators of compromise include:
Alias: StormStealer
Forum ID: 221756
Registration date: December 12, 2025
Current version: v0.0.2.0 (Gunnar)
Build details: Developed in C++ (MSVC/msbuild), approximately 460 KB in size, targeting Windows systems
The emergence of Storm underlines how cybercriminal tools are becoming more advanced, automated, and difficult to detect, requiring organisations to strengthen monitoring of sessions, user behaviour, and access patterns rather than relying solely on traditional credential protection methods.
Artificial intelligence is not only improving everyday technology but also amplifying both traditional and emerging scam techniques. As a result, avoiding fraud now requires greater awareness of how these schemes are taking new shapes.
Being able to identify scams is an essential skill for everyone, regardless of age. This is especially important as AI tools continue to advance rapidly, contributing to a noticeable increase in reported fraud cases. According to the Federal Bureau of Investigation’s 2025 Internet Crime Report, complaints linked to cryptocurrency and artificial intelligence ranked among the most financially damaging cybercrimes, with total losses approaching $21 billion. The agency also highlighted that, for the first time in its history, its Internet Crime Complaint Center included a dedicated section on artificial intelligence, documenting 22,364 cases that resulted in losses of nearly $893 million.
These scams are increasingly convincing. AI can generate realistic emails and replicate human voices through audio deepfakes, making fraudulent communication difficult to distinguish from legitimate interactions. Because of this, such threats should be treated as ongoing and persistent risks.
Protecting yourself, your family, and your finances requires both instinct and awareness. By sharpening your attention to detail and your ability to listen carefully, you can better identify suspicious activity. Below are seven warning signs that can help you recognize AI-driven scams and avoid serious consequences.
1. Messages that feel unusually personalized
AI can gather publicly available details, including your job, interests, or recent purchases, to create messages that appear tailored specifically to you. While these messages may seem accurate, they can still contain subtle errors or incorrect assumptions about your life, which should raise concern.
2. Requests that create urgency
Scammers often attempt to rush you with statements such as warnings that your account will be locked, demands for immediate payment, or requests for login credentials to restore access. This pressure is designed to force quick decisions without careful thinking.
3. Messages that appear overly polished
Unlike older scams filled with spelling or grammar mistakes, AI-generated messages are often clear and well-written. However, phrases like “confirm your information to avoid cancellation” or “we noticed unusual activity” should still be treated cautiously, especially if accompanied by suspicious visuals or a lack of supporting detail.
4. Audio that sounds slightly unnatural
Voice-cloning technology can imitate people you know, making phone-based scams more believable. Still, these voices may reveal themselves through unnatural pacing, limited emotional variation, or requests that seem out of character for the person being impersonated.
5. Deepfake videos that seem real but contain flaws
AI can also generate convincing videos of colleagues, family members, or even public figures. These may appear during video calls, workplace interactions, or through compromised social media accounts. Warning signs include inconsistent lighting, unusual shadows, or subtle distortions in facial movement.
6. Attempts to move conversations across platforms
Scammers may begin communication through email or professional platforms and then attempt to shift the interaction to messaging apps, payment platforms, or other channels. This tactic, often supported by chatbot-driven conversations, is used to appear credible while avoiding detection.
7. Unusual or suspicious payment requests
Requests for payment through gift cards, wire transfers, or cryptocurrency remain a major red flag. These methods are difficult to trace and are frequently used in fraudulent schemes, regardless of how legitimate the request may initially appear.
Why awareness matters
While AI has not changed the underlying tactics of scams, it has made them far more refined and scalable. Techniques such as impersonation, urgency, and trust-building are now enhanced through automation and data-driven personalization.
As these technologies become ever more pervasive and continue to develop, the risk will grow accordingly. Staying cautious, verifying unexpected requests, and sharing this knowledge with friends and family are critical steps in reducing exposure.
In a digital environment where scams increasingly resemble genuine communication, recognizing these warning signs remains one of the most effective ways to stay protected.
A newly tracked threat cluster identified as UNC6692 has been observed carrying out targeted intrusions by abusing Microsoft Teams, relying heavily on social engineering to deliver a sophisticated and multi-stage malware framework.
According to findings from Mandiant, the attackers impersonate internal IT help desk personnel and persuade employees to accept chat requests originating from accounts outside their organization. This method allows them to bypass traditional email-based phishing defenses by exploiting trust in workplace collaboration tools.
The attack typically begins with a deliberate email bombing campaign, where the victim’s inbox is flooded with large volumes of spam messages. This is designed to create confusion and urgency. Shortly after, the attacker initiates contact through Microsoft Teams, posing as technical support and offering assistance to resolve the email issue.
This combined tactic of inbox flooding followed by help desk impersonation is not entirely new. It has previously been linked to affiliates of the Black Basta ransomware group. Although that group ceased operations, the continued use of this playbook demonstrates how effective intrusion techniques often persist beyond the lifespan of the original actors.
Separate research published by ReliaQuest shows that these campaigns are increasingly focused on senior personnel. Between March 1 and April 1, 2026, 77% of observed incidents targeted executives and high-level employees, a notable increase from 59% earlier in the year. In some cases, attackers initiated multiple chat attempts within seconds, intensifying pressure on the victim to respond.
In many similar attacks, victims are convinced to install legitimate remote monitoring and management tools such as Quick Assist or Supremo Remote Desktop, which are then misused to gain direct system control. However, UNC6692 introduces a variation in execution.
Instead of deploying remote access software immediately, the attackers send a phishing link through Teams. The message claims that the link will install a patch to fix the email flooding problem. When clicked, the link directs the victim to download an AutoHotkey script hosted on an attacker-controlled Amazon S3 bucket. The phishing interface is presented as a tool named “Mailbox Repair and Sync Utility v2.1.5,” making it appear legitimate.
Once executed, the script performs initial reconnaissance to gather system information. It then installs a malicious browser extension called SNOWBELT on Microsoft Edge. This is achieved by launching the browser in headless mode and using command-line parameters to load the extension without user visibility.
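This launch pattern is itself detectable. `--headless` and `--load-extension` are real Chromium command-line switches, and their combination in an `msedge` process is rare in normal use; the heuristic below is an illustrative triage rule, not a vendor signature:

```python
def is_suspicious_edge_launch(cmdline: str) -> bool:
    """Flag Edge launched headless while force-loading an unpacked extension,
    the SNOWBELT installation pattern described above. --headless and
    --load-extension are genuine Chromium switches; combining them as a
    suspicion signal is an illustrative heuristic that would need tuning
    against an environment's baseline (e.g. browser automation tooling).
    """
    c = cmdline.lower()
    return (
        "msedge" in c
        and "--headless" in c
        and ("--load-extension" in c or "--disable-extensions-except" in c)
    )
```

In deployment, the check would run over process-creation telemetry (e.g. EDR command-line logs) rather than ad hoc strings.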
To reduce the risk of detection, the attackers use a filtering mechanism known as a gatekeeper script. This ensures that only intended victims receive the full payload, helping evade automated security analysis environments. The script also verifies whether the victim is using Microsoft Edge. If not, the phishing page displays a persistent warning overlay, guiding the user to switch browsers.
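Conceptually, such a gatekeeper is just a server-side request filter, which explains why sandboxes and scanners only ever see a benign page. The sketch below conveys the logic; the header names and campaign-ID mechanism are assumptions for illustration, since the real kit's checks are not public:

```python
def gatekeeper(headers: dict, expected_ids: set) -> str:
    """Conceptual sketch of victim filtering as described above.

    Unexpected visitors (including automated analysis environments) receive
    a harmless decoy; intended victims on a non-Edge browser get the
    persistent 'switch browser' overlay; only intended victims on Edge
    receive the payload. Header and ID names are illustrative assumptions.
    """
    if headers.get("X-Campaign-Id") not in expected_ids:
        return "decoy-page"  # sandboxes never observe anything malicious
    if "Edg/" not in headers.get("User-Agent", ""):  # Edge's UA token
        return "switch-browser-overlay"
    return "payload"
```

Understanding this gating explains a common analyst frustration: revisiting a reported phishing URL from a scanner often yields nothing suspicious.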
After installation, SNOWBELT enables the download of additional malicious components, including SNOWGLAZE, SNOWBASIN, further AutoHotkey scripts, and a compressed archive containing a portable Python runtime with required libraries.
The phishing page also includes a fake configuration panel with a “Health Check” option. When users interact with it, they are prompted to enter their mailbox credentials under the pretence of an authentication step. In reality, this information is captured and transmitted to another attacker-controlled S3 storage location.
The SNOW malware framework operates as a coordinated system. SNOWBELT functions as a JavaScript-based backdoor that receives instructions from the attacker and forwards them for execution. SNOWGLAZE acts as a tunneling component written in Python, establishing a secure WebSocket connection between the compromised machine and the attacker’s command-and-control infrastructure. SNOWBASIN provides persistent remote access, allowing command execution through system shells, capturing screenshots, transferring files, and even removing itself when needed. It operates by running a local HTTP server on ports 8000, 8001, or 8002.
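Because SNOWBASIN binds a local HTTP server to a small, fixed set of ports, a quick loopback sweep can serve as a triage check. A minimal sketch — a connect attempt only shows that *something* is listening, and ports 8000–8002 are also common for development servers, so a hit is a lead to investigate, not proof of compromise:

```python
import socket

SNOWBASIN_PORTS = (8000, 8001, 8002)  # local HTTP server ports reported above

def local_listeners(ports=SNOWBASIN_PORTS, host="127.0.0.1", timeout=0.2):
    """Return the subset of ports with an active listener on loopback.

    Uses connect_ex so a refused connection returns an error code instead
    of raising. Any hit should be followed up by identifying the owning
    process (e.g. via netstat/EDR telemetry) before drawing conclusions.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```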
Once inside the network, the attackers expand their control through a series of post-exploitation activities. They scan for commonly used network ports such as 135, 445, and 3389 to identify opportunities for lateral movement. Using the SNOWGLAZE tunnel, they establish remote sessions through tools like PsExec and Remote Desktop.
Privilege escalation is achieved by extracting sensitive credential data from the system’s LSASS process, a critical Windows component responsible for storing authentication information. Attackers then use the Pass-the-Hash technique, which allows them to authenticate across systems using stolen password hashes without needing the actual passwords.
To extract valuable data, they deploy tools such as FTK Imager to capture sensitive files, including Active Directory databases. These files are staged locally before being exfiltrated using file transfer utilities like LimeWire.
Mandiant researchers note that this campaign reflects an evolution in attack strategy by combining social engineering, custom malware, and browser-based persistence mechanisms. A key element is the abuse of trusted cloud platforms for hosting malicious payloads and managing command-and-control operations. Because these services are widely used and trusted, malicious traffic can blend in with legitimate activity, making detection more difficult.
A related campaign reported by Cato Networks describes similar tactics, with attackers using voice-based phishing within Teams to guide victims into executing a PowerShell script that deploys a WebSocket-based backdoor known as PhantomBackdoor.
Security experts emphasize that collaboration platforms must now be treated as primary attack surfaces. Controls such as verifying help desk communications, restricting external access, limiting screen sharing, and securing PowerShell execution are becoming essential defenses.
Microsoft has also warned that attackers are exploiting cross-organization communication within Teams to establish remote access using legitimate support tools. After initial compromise, they conduct reconnaissance, deploy additional payloads, and establish encrypted connections to their infrastructure.
To maintain persistence, attackers may deploy fallback remote management tools such as Level RMM. Data exfiltration is often carried out using synchronization tools like Rclone. They may also use built-in administrative protocols such as Windows Remote Management to move laterally toward high-value systems, including domain controllers.
These intrusion chains rely heavily on legitimate software and standard administrative processes, allowing attackers to remain hidden within normal enterprise activity across multiple stages of the attack lifecycle.