
Fake Tax Emails Used to Target Indian Users in New Malware Campaign

 


A newly identified cyberattack campaign is actively exploiting trust in India’s tax system to infect computers with advanced malware designed for long-term surveillance and data theft. The operation relies on carefully crafted phishing emails that impersonate official tax communications and has been assessed as potentially espionage-driven, though no specific hacking group has been confirmed.

The attack begins with emails that appear to originate from the Income Tax Department of India. These messages typically warn recipients about penalties, compliance issues, or document verification, creating urgency and fear. Victims are instructed to open an attached compressed file, believing it to be an official notice.

Once opened, the attachment initiates a hidden infection process. Although the archive contains several components, only one file is visible to the user. This file is disguised as a legitimate inspection or review document. When executed, it quietly loads a concealed malicious system file that operates without the user’s awareness.

This hidden component performs checks to ensure it is not being examined by security analysts and then connects to an external server to download additional malicious code. The next stage exploits a Windows system mechanism to gain administrative privileges without triggering standard security prompts, allowing the attackers deeper control over the system.

To further avoid detection, the malware alters how it identifies itself within the operating system, making it appear as a normal Windows process. This camouflage helps it blend into everyday system activity.

The attackers then deploy another installer that adapts its behavior based on the victim’s security setup. If a widely used antivirus program is detected, the malware does not shut it down. Instead, it simulates user actions, such as mouse movements, to quietly instruct the antivirus to ignore specific malicious files. This allows the attack to proceed while the security software remains active, reducing suspicion.

At the core of the operation is a modified banking-focused malware strain known for targeting organizations across multiple countries. Alongside it, attackers install a legitimate enterprise management tool originally designed for system administration. In this campaign, the software is misused to remotely control infected machines, monitor user behavior, and manage stolen data centrally.

Supporting files are also deployed to strengthen control. These include automated scripts that change folder permissions, adjust user access rights, clean traces of activity, and enable detailed logging. A coordinating program manages these functions to ensure the attackers maintain persistent access.

Researchers note that the campaign combines deception, privilege escalation, stealth execution, and abuse of trusted software, reflecting a high level of technical sophistication and clear intent to maintain prolonged visibility into compromised systems.

WhatsApp-Based Astaroth Banking Trojan Targets Brazilian Users in New Malware Campaign

 

A fresh look at digital threats shows malicious software using WhatsApp to spread the Astaroth banking trojan, mainly affecting people in Brazil. Though messaging apps are common tools for connection, they now serve attackers aiming to steal financial data. This method - named Boto Cor-de-Rosa by analysts at Acronis Threat Research - stands out because it leans on social trust within widely used platforms. Instead of relying on email or fake websites, hackers piggyback on real conversations, slipping malware through shared links. 
While such tactics aren’t entirely new, their adaptation to local habits makes them harder to spot. In areas where nearly everyone uses WhatsApp daily, blending in becomes easier for cybercriminals. Researchers stress that ordinary messages can now carry hidden risks when sent from compromised accounts. Unlike older campaigns, this one avoids flashy tricks, favoring quiet infiltration over noise. As behavior shifts online, so do attack strategies - quietly, persistently adapting. 

Acronis reports that the malware targets WhatsApp contact lists, sending harmful messages automatically - spreading fast with no need for constant hacker input. Notably, even though the main Astaroth component sticks with Delphi and the setup script remains in Visual Basic, analysts spotted a new worm-style propagation module built entirely in Python. The mix of languages shows how attackers now build adaptable tools by combining code types for distinct jobs, and that variety supports stealthier, more responsive attack systems.

Astaroth - sometimes called Guildma - has operated nonstop since 2015, focusing mostly on Brazil within Latin America. Stealing login details and enabling money scams sits at the core of its activity. By 2024, several hacking collectives, such as PINEAPPLE and Water Makara, began spreading it through deceptive email messages. This newest push moves away from that method, turning instead to WhatsApp; because so many people there rely on the app daily, fake requests feel far more believable. 

Although tactics shift, the aim stays unchanged. Exploiting WhatsApp to spread banking trojans is not entirely new, but it has gained speed lately. Earlier, Trend Micro spotted the Water Saci group using comparable methods to push financial malware like Maverick and a version of Casbaneiro. Messaging apps now appear more appealing to attackers than classic email phishing. Sophos later disclosed details of an evolving attack series labeled STAC3150, closely tied to these earlier patterns: an operation focused heavily on individuals in Brazil, distributing the Astaroth malware over WhatsApp through deceptive channels.

Nearly all infected machines - over 95 percent - were situated within Brazilian territory, though isolated instances appeared across the U.S. and Austria. Running uninterrupted from early autumn 2025, the method leaned on compressed archives paired with installer files, triggering script-based downloads meant to quietly embed the malicious software. What Acronis has uncovered fits well with past reports. Messages on WhatsApp now carry harmful ZIP files sent straight to users. Opening one reveals what seems like a safe document - but it is actually a Visual Basic Script. Once executed, the script pulls down further tools from remote servers. 
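As a rough illustration of how a mail or messaging gateway could triage such attachments, the short Python sketch below flags ZIP entries that pose as documents but actually carry script payloads. The extension lists and the file name are illustrative assumptions, not Acronis's detection logic.

import zipfile

# Extensions commonly abused in this style of campaign: a "document"
# name that actually ends in a script type, or a double extension
# such as "nota.pdf.vbs". The lists are illustrative, not exhaustive.
SCRIPT_EXTS = {".vbs", ".js", ".lnk", ".ps1", ".hta", ".cmd", ".bat"}
DECOY_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".txt"}

def suspicious_entries(path: str) -> list[str]:
    """Return archive members that look like disguised scripts."""
    flagged = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            parts = name.lower().rsplit("/", 1)[-1].split(".")
            if len(parts) < 2:
                continue
            if "." + parts[-1] in SCRIPT_EXTS:
                double = len(parts) > 2 and "." + parts[-2] in DECOY_EXTS
                reason = "double extension" if double else "script payload"
                flagged.append(f"{name} ({reason})")
    return flagged

if __name__ == "__main__":
    # "attachment.zip" is a placeholder for a received archive.
    for finding in suspicious_entries("attachment.zip"):
        print("Suspicious:", finding)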

This step kicks off the full infection sequence. After activation, this malware splits its actions into two distinct functions. While one part spreads outward by pulling contact data from WhatsApp and distributing infected files without user input, the second runs hidden, observing online behavior - especially targeting visits to financial sites - to capture login details. 

It turns out the software logs performance constantly, feeding back live updates on how many messages succeed or fail, along with transmission speed. Attackers gain a constant stream of operational insight thanks to embedded reporting tools spotted by Acronis.

Looking Beyond the Hype Around AI-Built Browser Projects


Cursor, the company behind an artificial intelligence-integrated development environment, recently drew industry attention after suggesting that it had developed a fully functional browser using its own AI agents. In a series of public statements, Cursor chief executive Michael Truell claimed the browser was built with GPT-5.2 running inside the Cursor platform. 


According to Truell, the project comprises approximately three million lines of code spread across thousands of files, including a custom rendering engine written from scratch in Rust. 

Moreover, he explained that the system supports the main features of a browser, including HTML parsing, CSS cascading and layout, text shaping, painting, and a custom-built JavaScript virtual machine. 

Although the statements never explicitly claimed that humans played no substantial role in the browser's creation, they have sparked a heated debate within the software development community about how much of the work can truly be attributed to autonomous AI systems, and how such claims should be interpreted given the growing popularity of AI-based software development in recent years. 

The episode unfolds against a backdrop of intensifying optimism about generative AI, an optimism that has inspired unprecedented investment in companies across a variety of industries. Despite that optimism, a more sobering reality is beginning to emerge. 

A McKinsey study indicates that although roughly 80 percent of companies report having adopted advanced AI tools, a similar percentage has seen little to no improvement in either revenue growth or profitability. 

General-purpose AI applications can improve individual productivity, but they have rarely translated those incremental time savings into tangible financial results, while higher-value, domain-specific applications tend to stall in the experimental or pilot stage. Analysts increasingly describe this disconnect as the generative AI value paradox. 

That tension has sharpened with the advent of so-called agentic artificial intelligence: autonomous systems capable of planning, deciding, and acting independently to achieve predefined objectives. 

Such systems promise benefits well beyond those of assistive tools, but they also raise the stakes for credibility and transparency. In the case of Cursor's browser project, the decision to make the code publicly available proved crucial. 

Developers who examined the repository found that, despite the enthusiastic headlines, the software frequently failed to compile, rarely ran as advertised, and fell short of the capabilities implied by the announcement. 

Close inspection and testing of the underlying code make it evident that the marketing claims do not match what was actually built. Ironically, most developers found the accompanying technical document - which detailed the project's limitations and partial successes - more convincing than the original announcement. 

Cursor admits that, over a period of about a week, it deployed hundreds of GPT-5.2-style agents that generated about three million lines of code, assembling what on the surface amounted to a partially functional browser prototype. 

Perplexity, an AI-driven search and analysis platform, estimates that the experiment could have consumed between 10 and 20 trillion tokens, which at prevailing prices for frontier AI models would translate into a cost of several million dollars. 
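To put that estimate in perspective, the back-of-the-envelope arithmetic looks roughly like this. The per-million-token rate below is an assumed placeholder for illustration, not a published price.

# Rough cost estimate for the reported 10-20 trillion tokens.
# The price per million tokens is an assumption; actual frontier-model
# pricing varies widely by model and by input versus output tokens.
low_tokens, high_tokens = 10e12, 20e12
assumed_price_per_million = 0.30  # USD, hypothetical blended rate

low_cost = low_tokens / 1e6 * assumed_price_per_million
high_cost = high_tokens / 1e6 * assumed_price_per_million
print(f"${low_cost / 1e6:.1f}M - ${high_cost / 1e6:.1f}M")  # $3.0M - $6.0M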

Such figures demonstrate the ambition of the effort, but they also underline the skepticism now circulating in the industry: scale alone does not equate to sustained value or technical maturity. Several converging forces help explain why AI companies are increasingly targeting the web browser itself, rather than plug-ins or standalone applications.

For decades, browsers have served as the most valuable source of behavioral data - and, by extension, an excellent source of ad revenue. They capture search queries, clicks, and browsing patterns, which have paved the way for highly profitable ad targeting systems.

Google built its position as the world's most powerful search engine largely on this model. The browser gives AI providers direct access to this stream of data exhaust, reducing their dependency on third-party platforms and securing a privileged position in the advertising value chain. 

A number of analysts note that controlling the browser can also anchor a company's search product and the commercial benefits that follow from it. OpenAI's upcoming browser is reportedly intended to collect first-party information on users' web behavior, a strategy aimed at challenging Google's ad-driven ecosystem. 

Insiders cited in the report suggest the company was motivated to build a browser, rather than an extension for Chrome or Edge, because it wanted more control over its data. Beyond advertising, the continuous feedback loop created by user activity provides another advantage: each scroll, click, and query can be used to refine and personalize AI models, which in turn strengthens the product over time.

In the meantime, advertising remains one of the few scalable monetization paths for consumer-facing artificial intelligence, and both OpenAI and Perplexity appear to be positioning their browsers accordingly, as highlighted by recent hirings and the quiet development of ad-based services. 

Meanwhile, AI companies argue that browsers offer the chance to fundamentally rethink the user experience of the web. Traditional browsing, which relies heavily on tabs, links, and manual comparison, is increasingly viewed as an inefficient and cognitively fragmented activity. 

By replacing navigation-heavy workflows with conversational, context-aware interactions, AI-first browsers aim to create a new type of browsing. Perplexity's Comet browser, positioned as an “intelligent interface”, is meant to be available to the user at any moment, letting the AI research, summarize, and synthesize information in real time. 

Rather than requiring users to click through multiple pages, complex tasks are condensed into seamless interactions that maintain context across every step. OpenAI's planned browser is expected to take a similar approach, integrating a ChatGPT-like assistant directly into the browsing environment so users can act on information without leaving the page. 

The browser is framed as a constant co-pilot, one able to draft messages, summarise content, or perform transactions on the user's behalf rather than just run searches. Some have described this as a shift from search to cognition. 

Companies that integrate artificial intelligence deeply into everyday browsing hope that, in addition to improving convenience, they will keep users engaged in their ecosystems for longer, strengthening brand recognition and habitual usage. A proprietary browser also enables the integration of AI services and agent-based systems that are difficult to deliver through third-party platforms. 

Control over the browser's architecture lets companies embed language models, plugins, and autonomous agents at a foundational level. OpenAI's browser, for instance, is expected to be integrated directly with the company's emerging agent platform, enabling software capable of navigating websites, completing forms, and performing multi-step actions on its own.

Further ambitions are evident elsewhere too: 
The Browser Company's Dia features an AI assistant right in the address bar, combining search, chat, and task automation while maintaining awareness of the user's context across multiple tabs. Such browsers point to a broader trend of building browsers around artificial intelligence rather than adding AI features to existing ones. 

With this approach, a company's AI services become the default experience whenever users search or interact with the web, rather than an optional enhancement.

Last but not least, competitive pressure is a serious factor. Google's dominance in search and in the browser has long been mutually reinforcing, channeling data and traffic through Chrome into the company's advertising empire and consolidating its position.

AI-first browsers, which aim to divert users away from traditional search and towards AI-mediated discovery, pose a direct threat to this structure. 

The browser that Perplexity is creating is part of a broader effort to compete with Google in search, and Reuters reports that OpenAI is intensifying that rivalry by moving into browsers. Controlling the browser allows AI companies to intercept user intent at an earlier stage, reducing their dependence on existing platforms and insulating them from future changes to default settings and access rules. 

Smaller AI players also face defensive pressure, as Google, Microsoft, and others rapidly integrate artificial intelligence into their own browsers.

With browsers remaining a crucial part of everyday life and work, the race to integrate artificial intelligence into these interfaces is intensifying, and many observers already describe the contest as the beginning of a new, AI-driven era for the browser.

Taken together, the Cursor episode and the trend toward AI-first browsers offer a cautionary note for an industry rushing ahead of its own trial and error. Whatever the public claims of autonomy and scale, open repositories and independent scrutiny remain the ultimate arbiters of technical reality. 

It is becoming increasingly apparent that a number of companies are repositioning the browser as a strategic battleground, promising efficiency, personalization, and control - and that developers, enterprises, and users are being urged to separate ambition from what has actually been implemented. 

Analysts do not expect AI-powered browsers to fail outright; rather, their impact will depend less on headline-grabbing demonstrations than on evidence-based reliability, transparent attribution of human versus machine work, and thoughtful evaluation of security and economic trade-offs. In an industry known for speed and spectacle, that kind of evidence may yet be the scarcest resource of all.

India Cracks Down on Grok's AI Image Misuse

 

The Ministry of Electronics and Information Technology (MeitY) of India has found that the latest restrictions on Grok's image generation tool by X are not adequate to prevent obscene content. The platform, owned by Elon Musk, restricted the controversial feature, known as Grok Imagine, to paid subscribers worldwide, a step intended to stop free users from creating abusive images. Officials, however, argue that allowing such image generation at all violates Indian laws on privacy and dignity, especially regarding women and children. 

Grok Imagine, available on X and as a separate app, has produced a rising volume of pornographic and abusive images, including non-consensual nude depictions of real people, among them children. The so-called Spicy Mode, which generated such images and allowed users to digitally undress pictures of real people - for example depicting women in bikinis - sparked anger in India, the United Kingdom, Türkiye, Malaysia, Brazil, and the European Union, including among members of India's Parliament. 

X's partial fixes fall short 

On 2 January 2026, MeitY ordered X to remove all vulgar images generated on the platform within 72 hours. The order also required X to provide a report on actions taken to comply with the order. The response from X mentioned stricter filters on images. However, officials have argued that X failed to provide adequate technical details on steps taken to prevent such images from being generated. The officials have also stated that the website of Grok allows users to create images for free. 

X now restricts image generation and editing via @Grok replies to premium users, but loopholes persist: the Grok app and website remain open to all, and X's image edit button is accessible platform-wide. Grok stated illegal prompts face the same penalties as uploads, yet regulators demand proactive safeguards. MeitY seeks comprehensive measures to block obscene outputs entirely. 

This clash highlights rising global scrutiny on AI tools lacking robust guardrails against deepfakes and harm. India's IT Rules 2021 mandate swift content removal, with non-compliance risking liability for platforms and executives. As X refines Grok, the case underscores the need for ethical AI design amid tech's rapid evolution, balancing innovation with societal protection.

Raspberry Pi Project Turns Wi-Fi Signals Into Visual Light Displays

 



Wireless communication surrounds people at all times, even though it cannot be seen. Signals from Wi-Fi routers, Bluetooth devices, and mobile networks constantly travel through homes and cities unless blocked by heavy shielding. A France-based digital artist has developed a way to visually represent this invisible activity using light and low-cost computing hardware.

The creator, Théo Champion, who is also known online as Rootkid, designed an installation called Spectrum Slit. The project captures radio activity from commonly used wireless frequency ranges and converts that data into a visual display. The system focuses specifically on the 2.4 GHz and 5 GHz bands, which are widely used for Wi-Fi connections and short-range wireless communication.

The artwork consists of 64 vertical LED filaments arranged in a straight line. Each filament represents a specific portion of the wireless spectrum. As radio signals are detected, their strength and density determine how brightly each filament lights up. Low signal activity results in faint and scattered illumination, while higher levels of wireless usage produce intense and concentrated light patterns.

According to Champion, quiet network conditions create a subtle glow that reflects the constant but minimal background noise present in urban environments. As wireless traffic increases, the LEDs become brighter and more saturated, forming dense visual bands that indicate heavy digital activity.

A video shared on YouTube shows the construction process and the final output of the installation inside Champion’s Paris apartment. The footage demonstrates a noticeable increase in brightness during evening hours, when nearby residents return home and connect phones, laptops, and other devices to their networks.

Champion explained in an interview that his work is driven by a desire to draw attention to technologies people often ignore, despite their significant influence on daily life. By transforming technical systems into physical experiences, he aims to encourage viewers to reflect on the infrastructure shaping modern society and to appreciate the engineering behind it.

The installation required both time and financial investment. Champion built the system using a HackRF One software-defined radio connected to a Raspberry Pi. The radio device captures surrounding wireless signals, while the Raspberry Pi processes the data and controls the lighting behavior. The software was written in Python, but other components, including the metal enclosure and custom circuit boards, had to be professionally manufactured.
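The signal-to-light mapping itself can be pictured with a short sketch. The Python below is not Champion's code: it assumes a block of complex samples has already been read from the software-defined radio and simply converts their spectrum into 64 brightness values, with the LED driving omitted.

import numpy as np

NUM_LEDS = 64  # one brightness value per LED filament

def spectrum_to_brightness(iq_samples: np.ndarray) -> np.ndarray:
    """Map a block of complex SDR samples to 64 LED brightness levels."""
    spectrum = np.fft.fftshift(np.fft.fft(iq_samples))
    power_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

    # Split the spectrum into 64 equal bands, one per filament.
    bands = np.array_split(power_db, NUM_LEDS)
    band_power = np.array([band.mean() for band in bands])

    # Normalize to 0-255 so quiet bands glow faintly and busy ones saturate.
    lo, hi = band_power.min(), band_power.max()
    scaled = (band_power - lo) / (hi - lo + 1e-12)
    return (scaled * 255).astype(np.uint8)

if __name__ == "__main__":
    # Random noise stands in for real samples captured from the radio.
    fake_samples = np.random.randn(4096) + 1j * np.random.randn(4096)
    print(spectrum_to_brightness(fake_samples))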

He estimates that development involved several weeks of experimentation, followed by a dedicated build phase. The total cost of materials and fabrication was approximately $1,000.

Champion has indicated that Spectrum Slit may be publicly exhibited in the future. He is also known for creating other technology-focused artworks, including interactive installations that explore data privacy, artificial intelligence, and digital systems. He has stated that producing additional units of Spectrum Slit could be possible if requested.

Microsoft BitLocker Encryption Raises Privacy Questions After FBI Key Disclosure Case

 


Microsoft’s BitLocker encryption, long viewed as a safeguard for Windows users’ data, is under renewed scrutiny after reports revealed the company provided law enforcement with encryption keys in a criminal investigation.

The case, detailed in a government filing [PDF], alleges that individuals in Guam illegally claimed pandemic-related unemployment benefits. According to Forbes, this marks the first publicly documented instance of Microsoft handing over BitLocker recovery keys to law enforcement.

BitLocker is a built-in Windows security feature designed to encrypt data stored on devices. It operates through two configurations: Device Encryption, which offers a simplified setup, and BitLocker Drive Encryption, a more advanced option with greater control.

In both configurations, Microsoft generally stores BitLocker recovery keys on its servers when encryption is activated using a Microsoft account. As the company explains in its documentation, "If you use a Microsoft account, the BitLocker recovery key is typically attached to it, and you can access the recovery key online."

A similar approach applies to organizational devices. Microsoft notes, "If you're using a device that's managed by your work or school, the BitLocker recovery key is typically backed up and managed by your organization's IT department."

Users are not required to rely on Microsoft for key storage. Alternatives include saving the recovery key to a USB drive, storing it as a local file, or printing it. However, many customers opt for Microsoft’s cloud-based storage because it allows easy recovery if access is lost. This convenience, though, effectively places Microsoft in control of data access and reduces the user’s exclusive ownership of encryption keys.
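Readers who want to see how their own recovery key is protected can query BitLocker directly. The Python sketch below simply wraps the built-in Windows manage-bde utility; it assumes an elevated prompt on a BitLocker-enabled machine, and the drive letter is illustrative.

import subprocess

def bitlocker_protectors(drive: str = "C:") -> str:
    """Return the BitLocker key-protector listing for a drive.

    Wraps the built-in manage-bde command; requires an elevated
    (administrator) prompt on Windows.
    """
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", drive],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    # A "Numerical Password" entry in this output is the recovery key;
    # whether it is also escrowed to a Microsoft account or to an
    # organization's IT systems depends on how encryption was set up.
    print(bitlocker_protectors("C:"))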

Apple provides a comparable encryption solution through FileVault, paired with iCloud. Apple offers two protection levels: Standard Data Protection and Advanced Data Protection for iCloud.

Under Standard Data Protection, Apple retains the encryption keys for most iCloud data, excluding certain sensitive categories such as passwords and keychain data. With Advanced Data Protection enabled, Apple holds keys only for iCloud Mail, Contacts, and Calendar. Both Apple and Microsoft comply with lawful government requests, but neither can disclose encryption keys they do not possess.

Apple explicitly addresses this in its law enforcement guidelines [PDF]: "All iCloud content data stored by Apple is additionally encrypted at the location of the server. For data Apple can decrypt, Apple retains the encryption keys in its US data centers. Apple does not receive or retain encryption keys for [a] customer's end-to-end encrypted data."

This differs from BitLocker’s default behavior, where Microsoft may retain access to a customer’s encryption keys if the user enables cloud backup during setup.

Microsoft states that it does not share its own encryption keys with governments, but it stops short of extending that guarantee to customer-managed keys. In its law enforcement guidance, the company says, "We do not provide any government with our encryption keys or the ability to break our encryption." It further adds, "In most cases, our default is for Microsoft to securely store our customers' encryption keys. Even our largest enterprise customers usually prefer we keep their keys to prevent accidental loss or theft. However, in many circumstances we also offer the option for consumers or enterprises to keep their own keys, in which case Microsoft does not maintain copies."

Microsoft’s latest Government Requests for Customer Data Report, covering July 2024 through December 2024, shows the company received 128 law enforcement requests globally, including 77 from US agencies. Only four requests during that period—three from Brazil and one from Canada—resulted in content disclosure.

After the article was published, a Microsoft spokesperson clarified, “With BitLocker, customers can choose to store their encryption keys locally, in a location inaccessible to Microsoft, or in Microsoft’s cloud. We recognize that some customers prefer Microsoft’s cloud storage so we can help recover their encryption key if needed. While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide whether to use key escrow and how to manage their keys.”

Privacy advocates argue that this design reflects Microsoft’s priorities. As Erica Portnoy, senior staff technologist at the Electronic Frontier Foundation, stated in an email to The Register, "Microsoft is making a tradeoff here between privacy and recoverability. At a guess, I'd say that's because they're more focused on the business use case, where loss of data is much worse than Microsoft or governments getting access to that data. But by making that choice, they make their product less suitable for individuals and organizations with higher privacy needs. It's a clear message to activist organizations and law firms that Microsoft is not building their products for you."

Multi-Stage Phishing Campaign Deploys Amnesia RAT and Ransomware Using Cloud Services

 

One recently uncovered cyberattack is targeting individuals across Russia through a carefully staged deception campaign. Rather than exploiting software vulnerabilities, the operation relies on manipulating user behavior, according to analysis by Cara Lin of Fortinet FortiGuard Labs. The attack delivers two major threats: ransomware that encrypts files for extortion and a remote access trojan known as Amnesia RAT. Legitimate system tools and trusted services are repurposed as weapons, allowing the intrusion to unfold quietly while bypassing traditional defenses. By abusing real cloud platforms, the attackers make detection significantly more difficult, as nothing initially appears out of place. 

The attack begins with documents designed to resemble routine workplace material. On the surface, these files appear harmless, but they conceal code that runs without drawing attention. Visual elements within the documents are deliberately used to keep victims focused, giving the malware time to execute unseen. Fortinet researchers noted that these visuals are not cosmetic but strategic, helping attackers establish deeper access before suspicion arises. 

A defining feature of the campaign is its coordinated use of multiple public cloud services. Instead of relying on a single platform, different components are distributed across GitHub and Dropbox. Scripts are hosted on GitHub, while executable payloads such as ransomware and remote access tools are stored on Dropbox. This fragmented infrastructure improves resilience, as disabling one service does not interrupt the entire attack chain and complicates takedown efforts. 

Phishing emails deliver compressed archives that contain decoy documents alongside malicious Windows shortcut files labeled in Russian. These shortcuts use double file extensions to impersonate ordinary text files. When opened, they trigger a PowerShell command that retrieves additional code from a public GitHub repository, functioning as an initial installer. The process runs silently, modifies system settings to conceal later actions, and opens a legitimate-looking document to maintain the illusion of normal activity. 

After execution, the attackers receive confirmation via the Telegram Bot API. A deliberate delay follows before launching an obfuscated Visual Basic Script, which assembles later-stage payloads directly in memory. This approach minimizes forensic traces and allows attackers to update functionality without altering the broader attack flow. 

The malware then aggressively disables security protections. Microsoft Defender exclusions are configured, protection modules are shut down, and the defendnot utility is used to deceive Windows into disabling antivirus defenses entirely. Registry modifications block administrative tools, repeated prompts seek elevated privileges, and continuous surveillance is established through automated screenshots exfiltrated via Telegram. 
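To spot the kind of exclusions described above, defenders can enumerate the registry locations where Microsoft Defender records them. The Python sketch below is illustrative: it assumes administrative rights on Windows, and on hardened systems reading these keys may be denied.

import winreg

# Standard registry locations where Defender exclusions are recorded.
EXCLUSION_KEYS = [
    r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Paths",
    r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Extensions",
    r"SOFTWARE\Microsoft\Windows Defender\Exclusions\Processes",
]

def list_defender_exclusions() -> dict:
    """Enumerate configured Defender exclusions from the registry."""
    findings = {}
    for subkey in EXCLUSION_KEYS:
        entries = []
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
                value_count = winreg.QueryInfoKey(key)[1]
                for i in range(value_count):
                    name, _, _ = winreg.EnumValue(key, i)
                    entries.append(name)  # the excluded path, extension, or process
        except OSError:
            entries = ["<key missing or access denied>"]
        findings[subkey.rsplit("\\", 1)[-1]] = entries
    return findings

if __name__ == "__main__":
    for category, items in list_defender_exclusions().items():
        print(category, "->", items or ["<none>"])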

Once defenses are neutralized, Amnesia RAT is downloaded from Dropbox. The malware enables extensive data theft from browsers, cryptocurrency wallets, messaging apps, and system metadata, while providing full remote control of infected devices. In parallel, ransomware derived from the Hakuna Matata family encrypts files, manipulates clipboard data to redirect cryptocurrency transactions, and ultimately locks the system using WinLocker. 

Fortinet emphasized that the campaign reflects a broader shift in phishing operations, where attackers increasingly weaponize legitimate tools and psychological manipulation instead of exploiting software flaws. Microsoft advises enabling Tamper Protection and monitoring Defender changes to reduce exposure, as similar attacks are becoming more widespread across Russian organizations.

1Password Launches Pop-Up Alerts to Block Phishing Scams

 

1Password has introduced a new phishing protection feature that displays pop-up warnings when users visit suspicious websites, aiming to reduce the risk of credential theft and account compromise. This enhancement builds on the password manager’s existing safeguards and responds to growing phishing threats fueled by increasingly sophisticated attack techniques.

Traditionally, 1Password protects users by refusing to auto-fill credentials on sites whose URLs do not exactly match those stored in the user’s vault. While this helps block many phishing attempts, it still relies on users noticing that something is wrong when their password manager does not behave as expected, which is not always the case. Some users may assume the tool malfunctioned or that their vault is locked and proceed to type passwords manually, inadvertently handing them to attackers.

The new feature addresses this gap by adding a dedicated pop-up alert that appears when 1Password detects a potential phishing URL, such as a typosquatted or lookalike domain. For example, a domain with an extra character in the name may appear convincing at a glance, especially when the phishing page closely imitates the legitimate site’s design. The pop-up is designed to prompt users to slow down, double-check the URL, and reconsider entering their credentials, effectively adding a behavioral safety net on top of technical controls.
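1Password has not published its detection logic, but a lookalike check of this kind can be sketched with a simple similarity comparison against the domains saved in a vault. The domain list and threshold below are illustrative assumptions.

from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical list of domains stored in a password vault.
VAULT_DOMAINS = ["github.com", "paypal.com", "microsoft.com"]

def lookalike_of(url: str, threshold: float = 0.85):
    """Return the vault domain a URL closely resembles, if any."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    for legit in VAULT_DOMAINS:
        if host == legit or host.endswith("." + legit):
            return None  # exact match: fill credentials as usual
        if SequenceMatcher(None, host, legit).ratio() >= threshold:
            return legit  # close but not equal: warn the user
    return None

if __name__ == "__main__":
    print(lookalike_of("https://githnub.com/login"))  # github.com -> warn
    print(lookalike_of("https://github.com/login"))   # None -> safe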

1Password is rolling out this capability automatically for individual and family subscribers, ensuring broad coverage for consumers without requiring configuration changes. In business environments, administrators can enable the feature for employees through Authentication Policies in the 1Password admin console, integrating it into existing access control strategies. This flexibility allows organizations to align phishing protection with their security policies and training programs.

The company underscores the importance of this enhancement with survey findings from 2,000 U.S. respondents, revealing that 61% had been successfully phished and 75% do not check URLs before clicking links. The survey also shows that one-third of employees reuse passwords on work accounts, nearly half have fallen for phishing at work, and many believe protection is solely the IT department’s responsibility. With 72% admitting to clicking suspicious links and over half choosing to delete rather than report questionable messages, 1Password’s new pop-up warnings aim to counter risky user behavior and strengthen overall phishing defenses.

Sandworm-Associated DynoWiper Malware Targets Polish Power Infrastructure


 

A cyber intrusion targeting Poland's energy infrastructure in late 2025 has been described by security experts as one of the largest cyberattacks the country has faced in years. It underscores the growing vulnerability of critical national systems amid rising geopolitical tensions. 

ESET, the cybersecurity company, has uncovered new data indicating that the operation was carried out by Sandworm, an advanced persistent threat group closely aligned with Russia that has been linked to disruptive attacks on energy and industrial networks for more than a decade. 

ESET researchers found that a deeper analysis of the malware used during the incident revealed operational patterns and code similarities that are consistent with Sandworm's past campaigns, indicating that the attack follows Sandworm's established playbook for damaging cyber activity. 

Investigators say the attackers planned to use a malware strain named DynoWiper, designed to irreversibly destroy files and cripple affected systems, a strategy that could have caused widespread disruption across Poland's electricity sector had it succeeded. 

At the time of publication, the Russian Embassy in Washington had not responded to requests for comment. Sandworm, also tracked as UAC-0113, APT44, or Seashell Blizzard, has been active for more than a decade and is widely regarded as a state-sponsored hacking group, most likely operating on behalf of Russian military intelligence. 

Security researchers have tied the group to Unit 74455 of the Main Intelligence Directorate (GRU), following repeated accusations that it has carried out high-impact cyber-operations intended to disrupt and degrade critical infrastructure. 

Throughout its history, Sandworm has been credited with some of the most significant cyber incidents against energy networks, most notably a devastating attack on Ukraine's power grid nearly a decade ago that used data-wiping malware and left around 230,000 people without power for several hours.

That episode remains a prototypical example of the group's capabilities and intentions, and it continues to shape assessments of its role in more recent attempts to undermine energy systems beyond Ukraine's borders. 

In its recent report, ESET said the operation bore the hallmarks of Sandworm, a threat actor widely linked to Russia's military and intelligence apparatus. 

Investigators identified a previously undocumented data-wiping malware, dubbed DynoWiper and tracked as Win32/KillFiles.NMO, pointing the finger at the group. The wiper was similar in both technical and operational respects to earlier Sandworm wiper campaigns, especially those observed after Russia's invasion of Ukraine in February 2022. 

In a statement published on December 29, 2025, ESET said the malware had been detected during an attempt to disrupt Poland's energy sector, but that there are no indications the attackers succeeded in causing outages or permanently damaging systems. 

In an email sent on December 29, the Polish authorities confirmed activity affecting two combined heat and power plants and a system used to manage electricity generation from renewable sources such as wind and solar. 

In a public statement, Prime Minister Donald Tusk said the attacks were directed by groups “directly linked to Russian services,” and pointed to the government's plans to strengthen national defenses through additional safeguards and cybersecurity legislation imposing more stringent requirements on risk management, information technology and operational technology security, and incident preparedness. Tusk said the legislation is expected to take effect very soon. 

The timing of the incident also drew analysts' attention, as it coincided with the tenth anniversary of Sandworm's 2015 attack on Ukraine's power grid. That attack, which deployed the BlackEnergy and KillDisk malware and caused hours-long blackouts for thousands of people, was cited as part of a years-long pattern of disruption campaigns against critical infrastructure. 

ESET likewise noted that the attempted intrusion coincided with the tenth anniversary of that 2015 grid attack, though it provided only limited technical information beyond identifying the malware involved. 

Researchers point out that the use of a custom-built wiper aligns with a broader pattern of Russian cyber operations in which data-destroying malware serves as a strategic tool; the use of wipers in attacks linked to Moscow has increased significantly since 2022. 

The AcidRain wiper, used to disable roughly 270,000 satellite modems in Ukraine, was one such effort to disrupt the country's communications. Numerous campaigns against universities, critical infrastructure, and other targets have also been attributed to Sandworm, as has the NotPetya outbreak of 2017, a destructive worm that initially targeted Ukrainian organizations but quickly spread worldwide, causing an estimated $10 billion in damage and becoming one of the highest-profile case studies in the history of cybercrime. 

It is not yet clear why DynoWiper failed to trigger power outages in Poland; the investigation has left open the possibility that the operation was strategically calibrated to avoid escalation, or that strong defenses within the country's energy grid prevented it. 

In the aftermath of the incident, governments and operators of critical infrastructure across Europe have been reminded once again that energy systems remain an attractive target for state-sponsored cyber operations, even when those attacks do not result in immediate disruption. 

Security analysts say the attempt to deploy DynoWiper reflects a continued reliance on destructive malware as a strategic tool, and they emphasize the importance of investing in cyber resilience, real-time monitoring, and coordinated incident response across both information technology and operational technology environments. 

Although Polish officials appear to be using the episode as a springboard to strengthen their defenses, experts point out that similar threats will not stop at borders, since geopolitical tensions are unlikely to ease. 

While the attack's failure may offer some reassurance for now, it also underscores a larger reality: adversaries continue to probe energy networks for weaknesses, and preparedness and cooperation will be crucial to avoiding future disruptions and to detecting and neutralizing malware before it causes serious harm.

Dark Web Voice-Phishing Kits Supercharge Social Engineering and Account Takeovers

 

Cybercriminals are finding it easier than ever to run convincing social engineering schemes and identity theft operations, driven by the availability of customized voice-phishing (vishing) kits sold on dark web forums and private messaging channels.

According to a recent Okta Threat Intelligence blog published on Thursday, these phishing kits are being marketed as a service to “a growing number” of threat actors aiming to compromise Google, Microsoft, and Okta user accounts. Beyond fake login pages, the kits also provide real-time support that helps attackers capture login credentials and multi-factor authentication (MFA) codes while victims are actively being manipulated.

“There are at least two kits that implement the novel functionality observed,” Okta Threat Intelligence Vice President Brett Winterford told The Register.

“The phishing kits have been developed to closely mimic the authentication flows of identity providers and other identity systems used by organizations,” he said. “The kits allow the attacker to monitor the phishing page as the targeted user is interacting with it and trigger different custom pages that the target sees. This creates a more compelling pretext for asking the user to share credentials and accept multi-factor authentication challenges.”

Winterford noted that this form of attack has “evolved significantly since late 2025.” Some advertisements promoting these kits even seek to hire native English-speaking callers to make the scams more believable.

“These callers pretend to be from an organization's helpdesk and approach targets using the pretext of resolving a support ticket or performing a mandatory technical update,” Winterford said.

Similar tactics were observed last year when Scattered Spider-style IT support scams enabled attackers to breach dozens of Salesforce environments, resulting in mass data theft and extortion campaigns.

The attacks typically begin with reconnaissance. Threat actors collect details such as employee names, commonly used applications, and IT support contact numbers. This information is often sourced from company websites, LinkedIn profiles, and other publicly accessible platforms. Using chatbots to automate this research further accelerates the process.

Once prepared, attackers deploy the phishing kit to generate a convincing replica of a legitimate login page. Victims are contacted via spoofed company or helpdesk phone numbers and persuaded to visit the fraudulent site under the guise of IT assistance. “The attacks vary from there, depending on the attacker's motivation and their interactions with the user,” Winterford said.

When victims submit their login credentials, the data is instantly relayed to the attacker—often through a Telegram channel—granting access to the real service. While the victim remains on the call, the attacker attempts to log in and observes which MFA methods are triggered, modifying the phishing page in real time to match the experience.

Attackers then instruct victims to approve push notifications, enter one-time passcodes, or complete other MFA challenges. Because the fake site mirrors these requests, the deception becomes harder to detect.

“If presented a push notification (type of MFA challenge), for example, an attacker can verbally tell the user to expect a push notification, and select an option from their [command-and-control] panel that directs their target's browser to a new page that displays a message implying that a push message has been sent, lending plausibility to what would ordinarily be a suspicious request for the user to accept a challenge the user didn't initiate,” the report says.

Okta also warned that these kits can defeat number-matching MFA prompts by simply instructing users which number to enter, effectively neutralizing an added layer of security.

Once MFA is bypassed, attackers gain full control of the compromised account.

This research aligns with The Register’s previous reporting on “impersonation-as-a-service,” where cybercriminals bundle social engineering tools into subscription-based offerings.

“As a bad actor you can subscribe to get tools, training, coaching, scripts, exploits, everything in a box to go out and conduct your infiltration operation that often combine[s] these social engineering attacks with targeted ransomware, almost always with a financial motive,” security firm Nametag CEO Aaron Painter said in an earlier interview.

Attackers Hijack Microsoft Email Accounts to Launch Phishing Campaign Against Energy Firms

 


Cybercriminals have compromised Microsoft email accounts belonging to organizations in the energy sector and used those trusted inboxes to distribute large volumes of phishing emails. In at least one confirmed incident, more than 600 malicious messages were sent from a single hijacked account.

Microsoft security researchers explained that the attackers did not rely on technical exploits or system vulnerabilities. Instead, they gained access by using legitimate login credentials that were likely stolen earlier through unknown means. This allowed them to sign in as real users, making the activity harder to detect.

The attack began with emails that appeared routine and business-related. These messages included Microsoft SharePoint links and subject lines suggesting formal documents, such as proposals or confidentiality agreements. To view the files, recipients were asked to authenticate their accounts.

When users clicked the SharePoint link, they were redirected to a fraudulent website designed to look legitimate. The site prompted them to enter their Microsoft login details. By doing so, victims unknowingly handed over valid usernames and passwords to the attackers.

After collecting credentials, the attackers accessed the compromised email accounts from different IP addresses. They then created inbox rules that automatically deleted incoming emails and marked messages as read. This step helped conceal the intrusion and prevented account owners from noticing unusual activity.
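Defenders can hunt for this kind of hidden inbox rule through the Microsoft Graph API. The Python sketch below assumes an OAuth access token with the appropriate mailbox permissions has already been obtained; it lists a mailbox's inbox rules and flags those that silently delete mail or mark it as read.

import requests

GRAPH_RULES_URL = "https://graph.microsoft.com/v1.0/me/mailFolders/inbox/messageRules"

def suspicious_inbox_rules(access_token: str) -> list[str]:
    """Flag inbox rules that hide incoming mail from the account owner."""
    resp = requests.get(
        GRAPH_RULES_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    flagged = []
    for rule in resp.json().get("value", []):
        actions = rule.get("actions", {})
        # Deleting messages or marking them as read is the pattern
        # described in this campaign and deserves manual review.
        if actions.get("delete") or actions.get("markAsRead"):
            flagged.append(rule.get("displayName", "<unnamed rule>"))
    return flagged

if __name__ == "__main__":
    token = "<access token obtained via MSAL or similar>"
    for name in suspicious_inbox_rules(token):
        print("Review rule:", name)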

Using these compromised inboxes, the attackers launched a second wave of phishing emails. These messages were sent not only to external contacts but also to colleagues and internal distribution lists. Recipients were selected based on recent email conversations found in the victim’s inbox, increasing the likelihood that the messages would appear trustworthy.

In this campaign, the attackers actively monitored inbox responses. They removed automated replies such as out-of-office messages and undeliverable notices. They also read replies from recipients and responded to questions about the legitimacy of the emails. All such exchanges were later deleted to erase evidence.

Any employee within an energy organization who interacted with the malicious links was also targeted for credential theft, allowing the attackers to expand their access further.

Microsoft confirmed that the activity began in January and described it as a short-duration, multi-stage phishing operation that was quickly disrupted. The company did not disclose how many organizations were affected, identify the attackers, or confirm whether the campaign is still active.

Security experts warn that simply resetting passwords may not be enough in these attacks. Because attackers can interfere with multi-factor authentication settings, they may maintain access even after credentials are changed. For example, attackers can register their own device to receive one-time authentication codes.

Despite these risks, multi-factor authentication remains a critical defense against account compromise. Microsoft also recommends using conditional access controls that assess login attempts based on factors such as location, device health, and user role. Suspicious sign-ins can then be blocked automatically.

Additional protection can be achieved by deploying anti-phishing solutions that scan emails and websites for malicious activity. These measures, combined with user awareness, are essential as attackers increasingly rely on stolen identities rather than software flaws.


Cisco Patches ISE XML Flaw with Public Exploit Code

 

Cisco has recently addressed a significant security vulnerability in its Identity Services Engine (ISE) and ISE Passive Identity Connector (ISE-PIC), tracked as CVE-2026-20029. This medium-severity issue, scored at 4.9 out of 10, stems from improper XML parsing in the web-based management interface. Attackers with valid admin credentials could upload malicious XML files, enabling arbitrary file reads from the underlying operating system and exposing sensitive data.

The flaw poses a substantial risk to enterprise networks, where ISE is widely deployed for centralized access control. Enterprises rely on ISE to manage who and what accesses their infrastructure, making it a prime target for cybercriminals seeking to steal credentials or configuration files. Although no in-the-wild exploitation has been confirmed, public proof-of-concept (PoC) exploit code heightens the urgency, echoing patterns from prior ISE vulnerabilities.
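Cisco's advisory does not spell out the parsing flaw, but improper XML parsing that yields arbitrary file reads is the classic signature of XML external entity (XXE) processing. The Python sketch below is a generic illustration of that class of payload and of a hardened parser rejecting it; it is not ISE code, and the file path is just an example.

from defusedxml import EntitiesForbidden
import defusedxml.ElementTree as SafeET  # pip install defusedxml

# Generic XXE-style document: an external entity points at a local file.
XXE_DOC = """<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY secret SYSTEM "file:///etc/passwd">
]>
<data>&secret;</data>"""

# A parser configured to resolve external entities would replace
# &secret; with the referenced file's contents, handing local data
# back to whoever uploaded the document.
try:
    SafeET.fromstring(XXE_DOC)
    print("parsed without objection")
except EntitiesForbidden:
    # defusedxml refuses documents that declare entities at all,
    # which neutralizes this class of payload.
    print("rejected: entity declarations are forbidden")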

Past incidents underscore ISE's appeal to threat actors. In November 2025, sophisticated attackers exploited a maximum-severity zero-day (CVSS 10/10) to deploy custom backdoor malware, bypassing authentication entirely. Similarly, June 2025 patches fixed critical flaws with public PoCs, including arbitrary code execution risks in ISE and related platforms. These events highlight persistent scrutiny on Cisco's network access tools.

Mitigation demands immediate patching, as no workarounds exist. Affected versions require specific updates: migrate pre-3.2 releases to fixed ones; apply Patch 8 for 3.2 and 3.3; use Patch 4 for 3.4; and note that 3.5 is unaffected. Administrators must verify their ISE version and apply the precise patch to prevent data leaks, especially given the admin-credential prerequisite that insiders or compromised accounts could fulfill.

Organizations should prioritize auditing ISE deployments amid rising enterprise-targeted attacks. Regular vulnerability scans, credential hygiene, and monitoring for anomalous XML uploads are essential defenses. As PoC code circulates, patching remains the sole bulwark, reinforcing the need for swift action in securing network identities.

Online Misinformation and AI-Driven Fake Content Raise Concerns for Election Integrity

 

With elections drawing near, unease is spreading about how digital falsehoods might influence voter behavior. False narratives on social platforms may skew perception, according to officials and scholars alike. As artificial intelligence advances, deceptive content grows more convincing, slipping past scrutiny. Trust in core societal structures risks erosion under such pressure. Warnings come not just from academics but also from community leaders watching real-time shifts in public sentiment.  

Fake messages have recently circulated online, pretending to be from the City of York Council. Though they looked real, officials later stated these ads were entirely false. One showed a request for people willing to host asylum seekers; another asked volunteers to take down St George flags. A third offered work fixing road damage across neighborhoods. What made them convincing was their design - complete with official logos, formatting, and contact information typical of genuine notices. 

Without close inspection, someone scrolling quickly might believe them. Despite their authentic appearance, none of the programs mentioned were active or approved by local government. The resemblance to actual council material caused confusion until authorities stepped in to clarify. Blurred logos stood out immediately when BBC Verify examined the pictures. Wrong fonts appeared alongside misspelled words, often pointing toward artificial creation. 

Details such as fingers that looked twisted or incomplete - a frequent issue in computer-made visuals - also gave the images away. One poster included an email tied to a real council employee, though that person had no knowledge of the material. Websites referenced in some flyers simply did not exist online. Even so, plenty of individuals passed the content along without questioning its truth. A single fabricated post managed to spread through networks totaling over 500,000 followers. False appearances held strong appeal despite clear warning signs. 

What spreads fast online isn’t always true - Clare Douglas, head of City of York Council, pointed out how today’s tech amplifies old problems in new ways. False stories once moved slowly; now they race across devices at a pace that overwhelms fact-checking efforts. Trust fades when people see conflicting claims everywhere, especially around health or voting matters. Institutions lose ground not because facts disappear, but because attention scatters too widely. When doubt sticks longer than corrections, participation dips quietly over time.  

Ahead of public meetings, tensions surfaced in various regions. Misinformation targeting asylum seekers and councils emerged online in Barnsley, according to Sir Steve Houghton, its council head. False stories spread further due to influencers who keep sharing them - profit often outweighs correction. Although government outlets issued clarifications, distorted messages continue flooding digital spaces. Their sheer number, combined with how long they linger, threatens trust between groups and raises risks for everyday security. Not everyone checks facts these days, according to Ilya Yablokov from the University of Sheffield’s Disinformation Research Cluster. Because AI makes it easier than ever, faking believable content takes little effort now. 

With just a small setup, someone can flood online spaces fast. What helps spread falsehoods is how busy people are - they skip checking details before passing things along. Instead, gut feelings or existing opinions shape what gets shared. Fabricated stories spreading locally might cost almost nothing to create, yet their impact on democracy can be deep. 

When misleading accounts reach more voters, specialists emphasize skills like questioning sources, checking facts, or understanding media messages - these help preserve confidence in public processes while supporting thoughtful engagement during voting events.

OpenAI Faces Court Order to Disclose 20 Million Anonymized ChatGPT Chats


OpenAI is challenging a sweeping discovery order in the ongoing legal fight over artificial intelligence and intellectual property, a dispute that is testing how courts balance innovation, privacy, and the enforcement of copyright. 

On Wednesday, the company asked a federal judge to overturn a ruling that requires it to disclose 20 million anonymized ChatGPT conversation logs, warning that even de-identified records may reveal sensitive information about users. 

The underlying dispute stems from a lawsuit filed by the New York Times and several other news organizations, which allege that OpenAI infringed their copyrights by using their content to train its large language models. 

On January 5, 2026, a federal district court in New York upheld two discovery orders requiring OpenAI to produce a substantial sample of ChatGPT interactions by the end of the year, a consequential milestone in litigation that sits at the intersection of copyright law, data privacy, and artificial intelligence. 

The decision signals a growing willingness by courts to critically examine the internal data practices of AI developers, even as companies argue that disclosure of this sort could have far-reaching implications for user trust and the confidentiality of their platforms. At the center of the controversy is the plaintiffs' request for ChatGPT conversation logs, which record both user prompts and the system's responses. 

Those logs, they argue, are crucial to evaluating the infringement claims as well as OpenAI's asserted defenses, including fair use. When the plaintiffs moved in July 2025 for production of a 120-million-log sample, OpenAI refused, citing the scale of the request and the privacy concerns involved.

OpenAI, which maintains billions of logs as part of its normal operations, countered by proposing to produce 20 million conversations, stripped of personally identifiable and sensitive information through a proprietary de-identification process. 

The plaintiffs accepted the reduced sample as an interim measure but reserved the right to pursue a broader one if the data proved insufficient. Tensions escalated in October 2025, when OpenAI changed its position, offering instead to run targeted keyword searches across the 20-million-log dataset and produce only the conversations that directly implicated the plaintiffs' works.

In OpenAI's view, limiting disclosure to filtered results would better safeguard user privacy by preventing the exposure of unrelated communications. The plaintiffs swiftly rejected this approach and filed a new motion demanding release of the entire de-identified dataset. 

On November 7, 2025, U.S. Magistrate Judge Ona Wang sided with the plaintiffs, ordering OpenAI to produce the full sample and denying the company's request for reconsideration. The judge ruled that access to both relevant and ostensibly irrelevant logs was necessary for a comprehensive and fair analysis of OpenAI's claims. 

Even conversations that do not directly reference copyrighted material, the reasoning goes, may bear on OpenAI's fair-use defense. On the question of privacy, the court found that reducing the dataset from billions to 20 million records, applying de-identification measures, and enforcing a standing protective order were together adequate to mitigate the risks. 

As the litigation enters a more consequential phase and court-imposed production deadlines approach, OpenAI is represented in the matter by Keker Van Nest, Latham & Watkins, and Morrison & Foerster. 

Legal observers say the order reflects a broader judicial posture toward artificial intelligence disputes: courts are increasingly willing to compel extensive discovery, even of anonymized data, to examine how large language models are trained and whether copyrighted material may be involved.

Crucially, the ruling strengthens the procedural avenues available to publishers and other content owners challenging alleged copyright violations by AI developers. It also highlights the care technology companies must take in stewarding large repositories of user-generated data, and the legal risks of retaining, processing, and releasing such data. 

The dispute has also intensified amid allegations that OpenAI failed to suspend certain data deletion practices after the litigation commenced, potentially destroying evidence relevant to claims that some users bypassed publisher paywalls through OpenAI products. 

The plaintiffs claim the deletions disproportionately affected free and subscription-tier user records, raising concerns about whether evidence-preservation obligations were fully met. Microsoft, named as a co-defendant in the case, has been required to produce more than eight million anonymized Copilot interaction logs and has not faced similar preservation complaints.

Commenting on the ruling to CybersecurityNews, Dr. Ilia Kolochenko, CEO of ImmuniWeb, said that while it represents a significant legal setback for OpenAI, it could also embolden other plaintiffs to pursue similar discovery strategies or press for stronger settlement positions in parallel proceedings. 

The allegations have prompted requests for deeper scrutiny of OpenAI's internal data governance practices, including injunctions to prevent further deletions until it is clear what remains and what can potentially be recovered. Beyond the courtroom, the case has coincided with intensifying investor scrutiny across the artificial intelligence industry. 

With companies such as SpaceX and Anthropic reportedly preparing for possible public offerings at valuations that could reach hundreds of billions of dollars, market confidence increasingly depends on how well companies manage regulatory exposure, rising operational costs, and the competitive pressures of rapid artificial intelligence development. 

Meanwhile, speculation about strategic acquisitions that could reshape the competitive landscape continues to circulate. Reports that OpenAI is exploring Pinterest underscore the strategic value of large volumes of user interaction data for improving product search and growing ad revenue, both increasingly critical as major technology companies compete for real-time consumer engagement and data-driven growth.

The litigation has gained added urgency from the news organizations' detailed allegations that a significant volume of potentially relevant data was destroyed because OpenAI failed to preserve key evidence after the lawsuit was filed. 

A court filing indicates that the plaintiffs learned nearly 11 months ago that large quantities of ChatGPT output logs, covering a considerable number of Free, Pro, and Plus user conversations, had been deleted at a disproportionately high rate after the suit was filed. 

The plaintiffs argue that users trying to circumvent paywalls were more likely to enable chat deletion, making this category of data the most likely to contain infringing material. The filings further assert that OpenAI offered no rationale for deleting approximately one-third of all user conversations after the New York Times' complaint, beyond citing what appeared to be an anomalous drop in usage around the New Year of 2024. 

The news organizations also allege that OpenAI has continued routine deletion practices without implementing litigation holds, even after two additional spikes in mass deletions attributed to technical issues, while selectively retaining outputs relating to the accounts named in the publishers' complaints. 

Citing testimony from OpenAI's associate general counsel, Mike Trinh, the plaintiffs argue that the records OpenAI preserved tend to substantiate its own defenses, while records that could substantiate other parties' claims were not preserved. 

The precise extent of the data loss remains unclear, the plaintiffs say, because OpenAI still refuses to disclose even basic details about what it does and does not erase, an approach they contrast with Microsoft's ability to preserve Copilot logs without similar difficulties.

Because Microsoft has not yet produced searchable Copilot logs, and in light of OpenAI's mass deletions of data, the news organizations are asking the court to order Microsoft to produce those logs as soon as possible. 

They have also asked the court to keep the existing preservation orders in place, preventing further permanent deletions of output data, and to compel OpenAI to account accurately for how much output data has been destroyed across its products and whether any of it can be restored and examined for the litigation.

How Generative AI Is Accelerating Password Attacks on Active Directory

 

Active Directory remains the backbone of identity management for most organizations, which is why it continues to be a prime target for cyberattacks. What has shifted is not the focus on Active Directory itself, but the speed and efficiency with which attackers can now compromise it.

The rise of generative AI has dramatically reduced the cost and complexity of password-based attacks. Tasks that once demanded advanced expertise and substantial computing resources can now be executed far more easily and at scale.

Tools such as PassGAN mark a significant evolution in password-cracking techniques. Instead of relying on static wordlists or random brute-force attempts, these systems use adversarial learning to understand how people actually create passwords. With every iteration, the model refines its predictions based on real-world behavior.

The impact is concerning. Research indicates that PassGAN can crack 51% of commonly used passwords in under one minute and 81% within a month. The pace at which these models improve only increases the risk.

When trained using organization-specific breach data, public social media activity, or information from company websites, AI models can produce highly targeted password guesses that closely mirror employee habits.

How generative AI is reshaping password attack methods

Earlier password attacks followed predictable workflows. Attackers relied on dictionary lists, applied rule-based tweaks—such as replacing letters with symbols or appending numbers—and waited for successful matches. This approach was slow and computationally expensive. Generative AI changes the economics in several ways:
  • Pattern recognition at scale: Machine learning systems identify nuanced behaviors in password creation, including keyboard habits, substitutions, and the use of personal references. Instead of wasting resources on random guesses, attackers concentrate computing power on the most statistically likely passwords.
  • Smart credential variation: When leaked credentials are obtained from external breaches, AI can generate environment-specific variations. If “Summer2024!” worked elsewhere, the model can intelligently test related versions such as “Winter2025!” or “Spring2025!” rather than guessing blindly.
  • Automated intelligence gathering: Large language models can rapidly process publicly available data—press releases, LinkedIn profiles, product names—and weave that context into phishing campaigns and password spray attacks. What once took hours of manual research can now be completed in minutes.
  • Reduced technical barriers: Pre-trained AI models and accessible cloud infrastructure mean attackers no longer need specialized skills or costly hardware. The increased availability of high-performance consumer GPUs has unintentionally strengthened attackers’ capabilities, especially when organizations rent out unused GPU capacity.
Today, for roughly $5 per hour, attackers can rent eight RTX 5090 GPUs capable of cracking bcrypt hashes about 65% faster than previous generations.

Even when strong hashing algorithms and elevated cost factors are used, the sheer volume of password guesses now possible far exceeds what was realistic just a few years ago. Combined with AI-generated, high-probability guesses, the time needed to break weak or moderately strong passwords has dropped significantly.
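As a rough, machine-dependent illustration of what a cost factor buys, the Python sketch below (using the third-party bcrypt package; the password and cost values are arbitrary) times a single hash at several work factors. Each step roughly doubles the per-guess cost on the defender's side, but rented GPU capacity and better-ordered guesses erode that margin quickly.

```python
import time
import bcrypt  # third-party package: pip install bcrypt

password = b"Summer2024!"

# Time one hash at several work factors; each +1 roughly doubles the cost per guess.
# These figures describe a single CPU core -- attackers offset the per-hash cost with
# rented GPUs and, more importantly, with better-ordered guesses.
for cost in (10, 12, 14):
    salt = bcrypt.gensalt(rounds=cost)
    start = time.perf_counter()
    bcrypt.hashpw(password, salt)
    elapsed = time.perf_counter() - start
    print(f"cost={cost}: {elapsed:.3f}s per hash (~{1 / elapsed:,.0f} guesses/sec on this core)")
```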

Why traditional password policies are no longer enough

Many Active Directory password rules were designed before AI-driven threats became mainstream. Common complexity requirements—uppercase letters, lowercase letters, numbers, and symbols—often result in predictable structures that AI models are well-equipped to exploit.

"Password123!" meets complexity rules but follows a pattern that generative models can instantly recognize.

Similarly, enforced 90-day password rotations have lost much of their defensive value. Users frequently make minor, predictable changes such as adjusting numbers or referencing seasons. AI systems trained on breach data can anticipate these habits and test them during credential stuffing attacks.
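A minimal sketch of how cheap those predictable rotations are to enumerate, using plain Python and no AI at all (the seed pattern and rule set here are invented for illustration): a trained model simply ranks variants like these ahead of everything else.

```python
from itertools import product

# Hypothetical: a seasonal credential leaked in an unrelated breach suggests the
# user's rotation habit, so we enumerate the "minor, predictable changes" first.
seasons = ["Spring", "Summer", "Autumn", "Winter"]
years = ["2023", "2024", "2025"]
suffixes = ["!", "!!", "1!", "@"]

candidates = [f"{season}{year}{suffix}"
              for season, year, suffix in product(seasons, years, suffixes)]

print(f"{len(candidates)} high-probability guesses, e.g.: {candidates[:5]}")
# Defenders can run the same style of list (offline, against a copy of their own
# hashes, with authorization) to find accounts that would fall to this class of guess.
```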

While basic multi-factor authentication (MFA) adds protection, it does not eliminate the risks posed by compromised passwords. If attackers bypass MFA through tactics like social engineering, session hijacking, or MFA fatigue, access to Active Directory may still be possible.

Defending Active Directory against AI-assisted attacks

Countering AI-enhanced threats requires moving beyond compliance-driven controls and focusing on how passwords fail in real-world attacks. Password length is often more effective than complexity alone.

AI models struggle more with long, random passphrases than with short, symbol-heavy strings. An 18-character passphrase built from unrelated words presents a much stronger defense than an 8-character complex password.
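The gap is easy to put numbers on. The sketch below compares theoretical search spaces under explicit assumptions: a truly random 8-character password drawn from roughly 94 printable characters versus a passphrase of words drawn uniformly from a 7,776-word Diceware-style list. Human-chosen complex passwords fall far below the 8-character ceiling shown, which is where the practical advantage of passphrases comes from.

```python
import math

# Upper bounds on the search space, assuming fully random selection.
complex_8 = 94 ** 8          # 8 characters from ~94 printable ASCII characters
passphrase_4 = 7776 ** 4     # four words from a 7,776-word list (~20+ chars with separators)
passphrase_5 = 7776 ** 5     # five words

for label, space in [("8-char complex (random)", complex_8),
                     ("4-word passphrase", passphrase_4),
                     ("5-word passphrase", passphrase_5)]:
    print(f"{label}: ~2^{math.log2(space):.0f} combinations")

# In practice the 8-character figure is wildly optimistic: people pick patterns like
# "Password123!", which generative models rank near the top of their guess lists,
# while randomly chosen words stay close to their theoretical entropy.
```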

Equally critical is visibility into whether employee passwords have already appeared in breach datasets. If a password exists in an attacker’s training data, hashing strength becomes irrelevant—the attacker simply uses the known credential.
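One widely used way to get that visibility, shown here as a generic example rather than a description of any particular product, is the k-anonymity range endpoint behind the Pwned Passwords service: only the first five characters of a SHA-1 hash ever leave the machine.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.

    Only the first five characters of the SHA-1 hash are sent to the service
    (k-anonymity); the full password and full hash never leave this machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    # Any non-zero count means the password already sits in attacker training data.
    print(breach_count("Password123!"))
```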

Specops Password Policy and Breached Password Protection help organizations defend against over 4 billion known unique compromised passwords, including those that technically meet complexity rules but have already been stolen by malware.

The solution updates daily using real-world attack intelligence, ensuring protection against newly exposed credentials. Custom dictionaries that block company-specific terminology—such as product names, internal jargon, and brand references—further reduce the effectiveness of AI-driven reconnaissance.

When combined with passphrase support and robust length requirements, these measures significantly increase resistance to AI-generated password guessing.

Before applying new controls, organizations should assess their existing exposure. Specops Password Auditor provides a free, read-only scan of Active Directory to identify weak passwords, compromised credentials, and policy gaps—without altering the environment.

This assessment helps pinpoint where AI-powered attacks are most likely to succeed.

Generative AI has fundamentally shifted the balance of effort in password attacks, giving adversaries a clear advantage.

The real question is no longer whether defenses need to be strengthened, but whether organizations will act before their credentials appear in the next breach.