Threat Actors Target Misconfigured Proxies for Paid LLM Access

 

GreyNoise, a cybersecurity firm, has discovered two campaigns targeting large language model (LLM) infrastructure in which attackers used misconfigured proxies to gain illicit access to commercial AI services. In the more recent campaign, starting in late December 2025, attackers scanned more than 73 LLM endpoints and generated more than 80,000 sessions in 11 days, using harmless-looking queries to evade detection. The activity highlights a growing threat to AI systems as attackers map vulnerable infrastructure for potential exploitation. 

The first campaign, which started in October 2025, focused on server-side request forgery (SSRF) vulnerabilities in Ollama honeypots, generating a cumulative 91,403 attack sessions. The attackers submitted malicious registry URLs through Ollama’s model-pull functionality and manipulated Twilio SMS webhooks to trigger outbound connections to their own infrastructure. A spike over Christmas produced 1,688 sessions in 48 hours from 62 IP addresses across 27 countries, many using ProjectDiscovery’s OAST tools, suggesting the involvement of grey-hat researchers rather than full-fledged malware operations.

The second campaign began on December 28 from the IP addresses 45.88.186.70 and 204.76.203.125, systematically scanning endpoints that support OpenAI and Google Gemini API formats. Targeted models included OpenAI’s GPT-4o, Anthropic’s Claude series, Meta’s Llama 3.x, Google’s Gemini, Mistral’s models, Alibaba’s Qwen, DeepSeek-R1, and xAI’s Grok. The attackers used low-noise queries such as basic greetings or factual questions like “How many states in the US?” to identify models while avoiding detection systems. 

GreyNoise links the scanning IPs to prior CVE exploits, including CVE-2025-55182, indicating professional reconnaissance rather than casual probing. While no immediate exploitation or data theft was observed, the scale signals preparation for abuse, such as free-riding on paid APIs or injecting malicious prompts. "Threat actors don't map infrastructure at this scale without plans to use that map," the report warns.

Organizations should restrict Ollama pulls to trusted registries, implement egress filtering, and block OAST domains like *.oast.live at DNS. Additional defenses include rate-limiting suspicious ASNs (e.g., AS210558, AS51396), monitoring JA4 fingerprints, and alerting on multi-endpoint probes. As AI surfaces expand, proactive securing of proxies and APIs is crucial to thwart these evolving threats.
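The multi-endpoint alerting idea above can be illustrated with a short sketch. This is not GreyNoise's tooling; the log format, endpoint paths, and threshold below are all assumptions for the example, and a production version would work over a sliding time window.

```python
from collections import defaultdict

# Illustrative sketch (assumed log format, not GreyNoise tooling): flag
# source IPs that probe many distinct LLM API endpoints, the multi-endpoint
# reconnaissance pattern the report recommends alerting on.
PROBE_THRESHOLD = 5  # distinct endpoints before an IP is flagged (tunable)

def find_probers(events, threshold=PROBE_THRESHOLD):
    """events: iterable of (source_ip, endpoint_path) pairs."""
    seen = defaultdict(set)
    for ip, endpoint in events:
        seen[ip].add(endpoint)
    return {ip for ip, endpoints in seen.items() if len(endpoints) >= threshold}

log = [
    ("45.88.186.70", "/v1/chat/completions"),
    ("45.88.186.70", "/v1/completions"),
    ("45.88.186.70", "/v1/models"),
    ("45.88.186.70", "/api/generate"),
    ("45.88.186.70", "/v1beta/models/gemini-pro:generateContent"),
    ("10.0.0.5", "/v1/chat/completions"),  # a normal client stays below threshold
]
print(find_probers(log))  # only the scanning IP is flagged
```

A legitimate client typically hits one or two endpoints repeatedly, while a scanner touches many distinct ones once, which is why counting distinct endpoints per source separates the two better than raw request volume.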

Cybercriminals Report Monetizing Stolen Data From US Medical Company


Modern healthcare operations are frequently plagued by ransomware attacks, but the recent attack on Change Healthcare marks a major turning point in scale and consequence. As the industry grows more dependent on digital platforms, it faces an increasingly hostile threat environment shaped by organized cybercrime, fragile third-party dependencies, and an ever-expanding data footprint. 

With hundreds of ransomware and broader security incidents already recorded in a matter of months, recent figures from 2025 illustrate just how serious this shift is. A breach not only disrupts clinical and administrative workflows but also puts highly sensitive patient information at risk, with cascading operational, financial, and legal consequences for organizations. 

The developments outlined here underscore a stark reality: safeguarding healthcare data no longer requires just technical safeguards; it demands a coordinated risk management strategy that anticipates breaches, limits their impact, and ensures institutional resilience should prevention fail. 

Connecticut's Community Health Center (CHC) recently disclosed a significant data breach stemming from unauthorized access to its internal systems, exemplifying the sector's ongoing vulnerability to cyber risk. 

In January 2025, the organization detected irregular network activity, prompting an urgent forensic investigation that confirmed a criminal intrusion. Further analysis found that the attacker had maintained undetected access since mid-October 2024, allowing an extended window for data exfiltration before the breach was contained and publicly disclosed later that month. 

There was no ransomware or operational disruption during the incident, but the extent of the data accessed was significant, including names, dates of birth, Social Security numbers, health insurance details, and clinical records of patients and employees.

More than one million people, including several thousand employees, were affected, according to CHC. The incident demonstrates the persistent difficulty of early threat detection and data protection across healthcare networks, and highlights the urgent need for strengthened security measures as medical records continue to attract cybercriminals. 

According to Cytek Biosciences' notification to affected individuals, the biotechnology company learned in early November 2025 that an outside party had gained access to portions of its systems, and it later determined that personal information had been obtained. 

As soon as the company became aware of the extent of the exposure, it took steps to respond, including offering eligible individuals up to two years of free identity theft protection and credit monitoring services. 

As part of efforts to mitigate potential harm from the incident, enrollment in the program remains open through the end of April 2026. Threat intelligence sources have connected the breach to Rhysida, a ransomware group that first emerged in 2023 and has since become a prolific operation within the cybercrime ecosystem.

The group operates a ransomware-as-a-service model that combines data theft with system encryption, allowing affiliates to conduct attacks using its malware and infrastructure in return for a share of the revenue. 

Rhysida has been responsible for attacks across several sectors since its inception, with healthcare among its most frequent targets. Several of the group's previous intrusions have hit hospitals and care providers, but the Cytek incident is its first confirmed attack on a healthcare manufacturer, aligning with a trend of ransomware activity extending beyond direct patient-care companies to medical suppliers and technology firms. 

Research indicates that attacks of this kind can expose millions of records, disrupt critical services, and amplify risks to patient privacy and operational continuity, highlighting the growing complexity of the threat landscape facing the U.S. healthcare system. 

The disruption to the U.S. healthcare system has prompted affected organizations and individuals to step back and examine how Change Healthcare fits into that system and why its outage was so widespread. 

With over 15 years of experience in healthcare technology and payment processing under the UnitedHealth Group umbrella, Change Healthcare has served as a vital intermediary between healthcare providers, insurers, and pharmacies, verifying eligibility, handling prior authorizations, submitting claims, and facilitating payments. 

Because the organization sits at the heart of these transactions, its operational failure extends far beyond a single institution, causing cascading delays in prescription, reimbursement, and claims processing across the country. 

A survey conducted by the American Medical Association documented widespread financial and administrative stress among physician practices, confirming the magnitude of the impact. Practices reported suspended or delayed claims payments, an inability to submit claims or receive electronic remittance advice, and widespread service interruptions. 

Several practices cited significant revenue losses, forcing some to rely on personal funds or switch to an alternative clearinghouse in order to keep operating. UnitedHealth Group has disbursed more than $2 billion in relief measures, including emergency funding and advance payments, but disruptions persist. 

Patients, too, have felt indirect effects through billing delays, unexpected charges, and notifications of potential data exposure. This has fueled public concern and renewed scrutiny of the systemic risks posed by the compromise of a central healthcare infrastructure provider. 

Taken together, these incidents carry a clear and cautionary message for healthcare stakeholders: cyber resilience must be treated as a strategic priority, not a purely technical function. 

Long-running large-scale ransomware campaigns, prolonged undetected intrusions, and failures at critical intermediaries make it evident that even a single breach can escalate into a systemic disruption affecting providers, manufacturers, and patients. 

A growing number of industry leaders and regulators are calling for improved third-party oversight, better breach detection tools, and the integration of financial, legal, and operational preparedness into cybersecurity strategies. 

As the volume and value of healthcare data continue to grow, healthcare organizations must adopt proactive, enterprise-wide approaches to risk management. Those that fail to do so may find themselves not only unable to cope with cyber incidents but also struggling to maintain trust, continuity, and care delivery in their aftermath.

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

 

A sudden pause in accessibility marks Indonesia’s move against Grok, Elon Musk’s AI chatbot, following claims of misuse involving fabricated adult imagery. After manipulated visuals surfaced, authorities acted - Reuters notes this as a world-first restriction on the tool. Growing unease about technology aiding harm now echoes across borders, with reaction spreading not through policy papers but through real-time consequences caught online.  

A growing number of reports have linked Grok to incidents where users created explicit imagery of women - sometimes involving minors - without consent. Not long after these concerns surfaced, Indonesia’s digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms. 

As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments. Her office emphasized that such acts fall under grave cyber offenses, demanding urgent regulatory attention. Temporary restrictions appeared in Indonesia after Antara News highlighted risks tied to AI-made explicit material. 

Protection of women, children, and communities drove the move, aimed at reducing mental and societal damage. Officials pointed out that fake but realistic intimate imagery counts as digital abuse, according to statements by Hafid. Such fabricated visuals, though synthetic, still trigger actual consequences for victims. The state insists artificial does not mean harmless - impact matters more than origin. Following concerns over Grok's functionality, the platform was served official notices demanding explanations of its development process and the harms observed. 

Because of potential risks, Indonesian regulators required the firm to detail concrete measures aimed at reducing abuse going forward. Whether the service remains accessible locally hinges on adoption of rigorous filtering systems, according to Hafid. Compliance with national regulations and adherence to responsible artificial intelligence practices now shape the outcome. 

Only after these steps are demonstrated will operation be permitted to continue. Last week saw Musk and xAI issue a warning: improper use of the chatbot for unlawful acts might lead to legal action. On X, he stated clearly - individuals generating illicit material through Grok assume the same liability as those posting such content outright. Still, after rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to shift slightly. 

A re-shared post from one follower implied fault rests more with people creating fakes than with the system hosting them. The debate spread beyond borders, reaching American lawmakers. Three U.S. senators reached out to both Google and Apple, pushing for the removal of the Grok and X applications from their app stores over breaches involving explicit material. Their correspondence framed the request around existing rules prohibiting sexually charged imagery produced without consent. 

What concerned them most was an automated flood of inappropriate depictions focused on women and minors - content they labeled damaging and possibly unlawful. When tied to misuse - like deepfakes made without consent - AI tools now face sharper government reactions, with Indonesia's move part of this rising trend. Though once slow to act, officials increasingly treat such technology as a risk needing strong intervention. 

A shift is visible: responses that were hesitant now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer through cases like this one. Pressure builds not just from incidents themselves, but how widely they spread before being challenged.

Fake Tax Emails Used to Target Indian Users in New Malware Campaign

 


A newly identified cyberattack campaign is actively exploiting trust in India’s tax system to infect computers with advanced malware designed for long-term surveillance and data theft. The operation relies on carefully crafted phishing emails that impersonate official tax communications and has been assessed as potentially espionage-driven, though no specific hacking group has been confirmed.

The attack begins with emails that appear to originate from the Income Tax Department of India. These messages typically warn recipients about penalties, compliance issues, or document verification, creating urgency and fear. Victims are instructed to open an attached compressed file, believing it to be an official notice.

Once opened, the attachment initiates a hidden infection process. Although the archive contains several components, only one file is visible to the user. This file is disguised as a legitimate inspection or review document. When executed, it quietly loads a concealed malicious system file that operates without the user’s awareness.
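A simple triage script illustrates how such an archive can be inspected before anything is opened. This is a minimal defensive sketch, not the campaign's actual tooling; the extension list is an assumption for the example.

```python
import zipfile

# Minimal triage sketch (assumed extension list, not the attackers' tooling):
# enumerate every archive member and flag script or executable payloads that
# hide behind a single visible "document".
SUSPICIOUS_EXTS = (".vbs", ".js", ".exe", ".dll", ".bat", ".ps1", ".scr")

def flag_archive(archive):
    """archive: a file path or file-like object for a ZIP attachment."""
    with zipfile.ZipFile(archive) as zf:
        # double extensions such as "notice.pdf.exe" still end with a
        # flagged extension, so the same check catches them
        return [name for name in zf.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTS)]
```

Listing the full member table this way also reveals components that the archive presents as hidden, since `namelist()` returns every entry regardless of how a file manager would display it.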

This hidden component performs checks to ensure it is not being examined by security analysts and then connects to an external server to download additional malicious code. The next stage exploits a Windows system mechanism to gain administrative privileges without triggering standard security prompts, allowing the attackers deeper control over the system.

To further avoid detection, the malware alters how it identifies itself within the operating system, making it appear as a normal Windows process. This camouflage helps it blend into everyday system activity.

The attackers then deploy another installer that adapts its behavior based on the victim’s security setup. If a widely used antivirus program is detected, the malware does not shut it down. Instead, it simulates user actions, such as mouse movements, to quietly instruct the antivirus to ignore specific malicious files. This allows the attack to proceed while the security software remains active, reducing suspicion.

At the core of the operation is a modified banking-focused malware strain known for targeting organizations across multiple countries. Alongside it, attackers install a legitimate enterprise management tool originally designed for system administration. In this campaign, the software is misused to remotely control infected machines, monitor user behavior, and manage stolen data centrally.

Supporting files are also deployed to strengthen control. These include automated scripts that change folder permissions, adjust user access rights, clean traces of activity, and enable detailed logging. A coordinating program manages these functions to ensure the attackers maintain persistent access.

Researchers note that the campaign combines deception, privilege escalation, stealth execution, and abuse of trusted software, reflecting a high level of technical sophistication and clear intent to maintain prolonged visibility into compromised systems.

WhatsApp-Based Astaroth Banking Trojan Targets Brazilian Users in New Malware Campaign

 

A fresh look at digital threats shows malicious software using WhatsApp to spread the Astaroth banking trojan, mainly affecting people in Brazil. Though messaging apps are common tools for connection, they now serve attackers aiming to steal financial data. This method - named Boto Cor-de-Rosa by analysts at Acronis Threat Research - stands out because it leans on social trust within widely used platforms. Instead of relying on email or fake websites, hackers piggyback on real conversations, slipping malware through shared links. 
While such tactics aren’t entirely new, their adaptation to local habits makes them harder to spot. In areas where nearly everyone uses WhatsApp daily, blending in becomes easier for cybercriminals. Researchers stress that ordinary messages can now carry hidden risks when sent from compromised accounts. Unlike older campaigns, this one avoids flashy tricks, favoring quiet infiltration over noise. As behavior shifts online, so do attack strategies - quietly, persistently adapting. 

Acronis reports that the malware harvests WhatsApp contact lists and sends harmful messages automatically, spreading fast with no need for constant operator input. Notably, while the main Astaroth component is still written in Delphi and the setup script remains in Visual Basic, analysts spotted a new worm-style propagation feature built entirely in Python. The mix of languages shows how attackers now assemble adaptable tools by blending code types for distinct jobs, supporting stealthier, more responsive attack systems. 

Astaroth - sometimes called Guildma - has operated nonstop since 2015, focusing mostly on Brazil within Latin America. Stealing login details and enabling money scams sits at the core of its activity. By 2024, several hacking collectives, such as PINEAPPLE and Water Makara, began spreading it through deceptive email messages. This newest push moves away from that method, turning instead to WhatsApp; because so many people there rely on the app daily, fake requests feel far more believable. 

Although tactics shift, the aim stays unchanged. Exploiting WhatsApp to spread banking trojans is not entirely new, but it has gained speed lately. Earlier, Trend Micro spotted the Water Saci group using comparable methods to push financial malware like Maverick and a version of Casbaneiro. Messaging apps now appear more appealing to attackers than classic email phishing. Later that year, Sophos disclosed details of an evolving attack series labeled STAC3150, closely tied to previous patterns. That operation focused heavily on individuals in Brazil using WhatsApp, distributing the Astaroth malware through deceptive channels. 

Nearly all infected machines - over 95 percent - were situated within Brazilian territory, though isolated instances appeared across the U.S. and Austria. Running uninterrupted from early autumn 2025, the method leaned on compressed archives paired with installer files, triggering script-based downloads meant to quietly embed the malicious software. What Acronis has uncovered fits well with past reports. Messages on WhatsApp now carry harmful ZIP files sent straight to users. Opening one reveals what seems like a safe document - but it is actually a Visual Basic Script. Once executed, the script pulls down further tools from remote servers. 

This step kicks off the full infection sequence. After activation, this malware splits its actions into two distinct functions. While one part spreads outward by pulling contact data from WhatsApp and distributing infected files without user input, the second runs hidden, observing online behavior - especially targeting visits to financial sites - to capture login details. 

It turns out the software logs performance constantly, feeding back live updates on how many messages succeed or fail, along with transmission speed. Attackers gain a constant stream of operational insight thanks to embedded reporting tools spotted by Acronis.

Looking Beyond the Hype Around AI-Built Browser Projects


Cursor, the company behind the AI-integrated development environment of the same name, recently gained industry attention after suggesting it had built a fully functional browser using its own artificial intelligence agents. In a series of public statements, Cursor chief executive Michael Truell claimed the browser was built with GPT-5.2 running inside the Cursor platform. 


According to Truell, the project comprises approximately three million lines of code spread across thousands of files, including a custom rendering engine written from scratch in Rust. 

He added that the system supports the main features of a browser, including HTML parsing, CSS cascading and layout, text shaping, painting, and a custom-built JavaScript virtual machine. 

Although the statements did not explicitly deny substantial human involvement in the browser's creation, they sparked a heated debate within the software development community over whether the bulk of the work can truly be attributed to autonomous AI systems, and how such claims should be interpreted amid the growing popularity of AI-based software development. 

The episode unfolds against a backdrop of intensifying optimism about generative AI, optimism that has inspired unprecedented investment in companies across a variety of industries. Despite that optimism, a more sobering reality is beginning to emerge. 

A McKinsey study indicates that although roughly 80 percent of companies report adopting advanced AI tools, a similar share has seen little to no improvement in revenue growth or profitability. 

General-purpose AI applications can improve individual productivity, but they have rarely translated incremental time savings into tangible financial results, while higher-value, domain-specific applications tend to stall in the experimental or pilot stage. Analysts increasingly describe this disconnect as the generative AI value paradox. 

The tension has only increased with the advent of so-called agentic artificial intelligence: autonomous systems capable of planning, deciding, and acting independently to achieve predefined objectives. 

Such systems promise benefits well beyond assistive tools, but they also raise the stakes for credibility and transparency. In the case of Cursor's browser project, the decision to make the code publicly available proved crucial. 

Developers who examined the repository found that, despite enthusiastic headlines, the software frequently failed to compile, rarely ran as advertised, and fell well short of the capabilities implied by the announcement. 

Close inspection and testing of the underlying code made it evident that the marketing claims did not match reality. Ironically, most developers found the accompanying technical document, which detailed the project's limitations and partial successes, more convincing than the original announcement. 

Cursor admits that over a period of about a week it deployed hundreds of GPT-5.2-style agents, which generated roughly three million lines of code and assembled what amounted on the surface to a partially functional browser prototype. 

Perplexity, an AI-driven search and analysis platform, estimates that the experiment could have consumed between 10 and 20 trillion tokens, which at prevailing prices for frontier AI models would translate into a cost of several million dollars. 
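The arithmetic behind that estimate is straightforward to sanity-check. The blended per-token price below is an assumed figure for illustration only; actual frontier-model pricing varies widely by model and by input versus output tokens.

```python
# Back-of-envelope check on the token-cost estimate. The blended price of
# $0.25 per million tokens is an assumed figure for illustration only.
PRICE_PER_MILLION_TOKENS = 0.25  # USD, assumed blended input/output rate

def run_cost(trillions_of_tokens, price=PRICE_PER_MILLION_TOKENS):
    tokens = trillions_of_tokens * 1_000_000_000_000
    return tokens / 1_000_000 * price

for t in (10, 20):
    print(f"{t}T tokens -> ${run_cost(t):,.0f}")  # $2,500,000 and $5,000,000
```

At this assumed rate, the 10-to-20-trillion-token range lands at roughly $2.5 million to $5 million, consistent with the "several million dollars" figure; a higher blended price would push the total correspondingly higher.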

Although such figures demonstrate the ambition of the effort, they also underscore current industry skepticism: scale alone does not equate to sustained value or technical maturity. Meanwhile, a number of converging forces are driving AI companies to target the web browser itself, rather than plug-ins or standalone applications.

For decades, browsers have been among the most valuable sources of behavioral data - and, by extension, of ad revenue. They capture search queries, clicks, and browsing patterns, which have paved the way for highly profitable ad-targeting systems.

Google gained its position as the world's most powerful search engine largely by following this model. The browser gives AI providers direct access to this stream of data exhaust, reducing dependency on third-party platforms and securing a privileged position in the advertising value chain. 

Analysts note that controlling the browser can also anchor a company's search product and the commercial benefits that flow from it. OpenAI's upcoming browser is reportedly intended to collect first-party information on users' web behavior, a strategy aimed at challenging Google's ad-driven ecosystem. 

Insiders cited in the report suggest the company chose to build a browser rather than an extension for Chrome or Edge because it wanted more control over data. Beyond advertising, user actions create a continuous feedback loop: each scroll, click, and query can be used to refine and personalize AI models, strengthening the product over time.

In the meantime, advertising remains one of the few scalable monetization paths for consumer-facing artificial intelligence, and both OpenAI and Perplexity appear to be positioning their browsers accordingly, as highlighted by recent hirings and the quiet development of ad-based services. 

Meanwhile, AI companies argue that browsers offer the chance to fundamentally rethink the user experience of the web. Traditional browsing, with its heavy reliance on tabs, links, and manual comparison, is increasingly viewed as inefficient and cognitively fragmented. 

AI-first browsers aim to replace navigation-heavy workflows with conversational, context-aware interactions. Perplexity's Comet browser, positioned as an “intelligent interface”, lets the user invoke the AI at any moment to research, summarize, and synthesize information in real time. 

Rather than requiring clicks through multiple pages, complex tasks are condensed into seamless interactions that maintain context at every step. OpenAI's planned browser is likely to follow a similar approach, integrating a ChatGPT-like assistant directly into the browsing environment so users can act on information without leaving the page. 

The browser is envisioned as a constant co-pilot, one that can draft messages, summarize content, or perform transactions on the user's behalf, rather than merely run searches. Some have described these shifts as a move from search to cognition. 

Companies that integrate artificial intelligence deeply into everyday browsing hope not only to improve convenience but also to keep users engaged in their ecosystems longer, strengthening brand recognition and habitual usage. A proprietary browser also enables the integration of AI services and agent-based systems that are difficult to deliver through third-party platforms. 

Control over the browser architecture lets companies embed language models, plugins, and autonomous agents at a foundational level. OpenAI's browser, for instance, is expected to integrate directly with the company's emerging agent platform, enabling software that can navigate websites, complete forms, and perform multi-step actions on its own.

Similar ambitions are evident elsewhere: 
The Browser Company's Dia places an AI assistant directly in the address bar, combining search and chat functionality with task automation while maintaining awareness of the user's context across multiple tabs. Browsers of this kind signal a broader trend toward building browsers around artificial intelligence rather than adding AI features to existing ones. 

This approach makes a company's AI services the default experience whenever users search or interact with the web, not an optional enhancement. 

Last but not least, there is competitive pressure. Google's search and browser dominance have long reinforced each other, channeling data and traffic through Chrome into the company's advertising empire and consolidating its position.

AI-first browsers pose a direct threat to this structure, aiming to divert users away from traditional search and toward AI-mediated discovery. 

Perplexity's browser is part of a broader effort to compete with Google in search, and Reuters reports that OpenAI is intensifying its rivalry with Google by moving into browsers. Controlling the browser lets AI companies intercept user intent at an earlier stage, reducing their dependence on existing platforms and insulating them from future changes to default settings and access rules. 

Smaller AI players must also contend with incumbents, as Google, Microsoft, and others rapidly integrate artificial intelligence into their own browsers.

With browsers remaining central to everyday life and work, the race to embed artificial intelligence into these interfaces is intensifying, and many observers already describe the contest as the start of a new, AI-driven era for the browser.

Taken together, the Cursor episode and the trend toward AI-first browsers offer a cautionary note for an industry racing ahead of its own trial and error. Open repositories and independent scrutiny remain the ultimate arbiters of technical reality, regardless of public claims of autonomy and scale. 

Companies are repositioning the browser as a strategic battleground, promising efficiency, personalization, and control, and developers, enterprises, and users are being urged to separate ambition from real-world implementation. 

Analysts do not expect AI-powered browsers to fail; rather, their impact will depend less on headline-grabbing demonstrations than on evidence-based reliability, transparent attribution of human versus machine work, and careful evaluation of security and economic trade-offs. In an industry known for speed and spectacle, that kind of discipline may yet be the scarcest resource of all.

India Cracks Down on Grok's AI Image Misuse

 

The Ministry of Electronics and Information Technology (MeitY) of India has found that the latest restrictions X placed on Grok's image generation tool are not adequate to prevent obscene content. The Elon Musk-owned platform restricted the controversial feature, known as Grok Imagine, to paid subscribers worldwide, a move meant to stop free users from creating abusive images. However, officials argue that allowing such image generation at all violates Indian laws on privacy and dignity, especially regarding women and children. 

Grok Imagine, available on X and as a standalone app, has produced a rising volume of pornographic and abusive images, including non-consensual sexualized depictions of real people, among them children. Its so-called Spicy Mode, which could generate images of people being undressed or placed in bikinis, sparked anger across India, the United Kingdom, Türkiye, Malaysia, Brazil, and the European Union, and drew condemnation from members of Parliament in India. 

X's partial fixes fall short 

On 2 January 2026, MeitY ordered X to remove all vulgar images generated on the platform within 72 hours and to report on the actions taken to comply. X responded that it had applied stricter image filters, but officials argue the company failed to provide adequate technical detail on how such images are prevented from being generated, and note that Grok's website still allows users to create images for free. 

X now restricts image generation and editing via @Grok replies to premium users, but loopholes persist: the Grok app and website remain open to all, and X's image edit button is accessible platform-wide. Grok stated illegal prompts face the same penalties as uploads, yet regulators demand proactive safeguards. MeitY seeks comprehensive measures to block obscene outputs entirely. 

This clash highlights rising global scrutiny on AI tools lacking robust guardrails against deepfakes and harm. India's IT Rules 2021 mandate swift content removal, with non-compliance risking liability for platforms and executives. As X refines Grok, the case underscores the need for ethical AI design amid tech's rapid evolution, balancing innovation with societal protection.

Raspberry Pi Project Turns Wi-Fi Signals Into Visual Light Displays

 



Wireless communication surrounds people at all times, even though it cannot be seen. Signals from Wi-Fi routers, Bluetooth devices, and mobile networks constantly travel through homes and cities unless blocked by heavy shielding. A France-based digital artist has developed a way to visually represent this invisible activity using light and low-cost computing hardware.

The creator, Théo Champion, who is also known online as Rootkid, designed an installation called Spectrum Slit. The project captures radio activity from commonly used wireless frequency ranges and converts that data into a visual display. The system focuses specifically on the 2.4 GHz and 5 GHz bands, which are widely used for Wi-Fi connections and short-range wireless communication.

The artwork consists of 64 vertical LED filaments arranged in a straight line. Each filament represents a specific portion of the wireless spectrum. As radio signals are detected, their strength and density determine how brightly each filament lights up. Low signal activity results in faint and scattered illumination, while higher levels of wireless usage produce intense and concentrated light patterns.

According to Champion, quiet network conditions create a subtle glow that reflects the constant but minimal background noise present in urban environments. As wireless traffic increases, the LEDs become brighter and more saturated, forming dense visual bands that indicate heavy digital activity.

A video shared on YouTube shows the construction process and the final output of the installation inside Champion’s Paris apartment. The footage demonstrates a noticeable increase in brightness during evening hours, when nearby residents return home and connect phones, laptops, and other devices to their networks.

Champion explained in an interview that his work is driven by a desire to draw attention to technologies people often ignore, despite their significant influence on daily life. By transforming technical systems into physical experiences, he aims to encourage viewers to reflect on the infrastructure shaping modern society and to appreciate the engineering behind it.

The installation required both time and financial investment. Champion built the system using a HackRF One software-defined radio connected to a Raspberry Pi. The radio device captures surrounding wireless signals, while the Raspberry Pi processes the data and controls the lighting behavior. The software was written in Python, while other components, including the metal enclosure and custom circuit boards, had to be professionally manufactured.

He estimates that development involved several weeks of experimentation, followed by a dedicated build phase. The total cost of materials and fabrication was approximately $1,000.

Champion has indicated that Spectrum Slit may be publicly exhibited in the future. He is also known for creating other technology-focused artworks, including interactive installations that explore data privacy, artificial intelligence, and digital systems. He has stated that producing additional units of Spectrum Slit could be possible if requested.

Microsoft BitLocker Encryption Raises Privacy Questions After FBI Key Disclosure Case

 


Microsoft’s BitLocker encryption, long viewed as a safeguard for Windows users’ data, is under renewed scrutiny after reports revealed the company provided law enforcement with encryption keys in a criminal investigation.

The case, detailed in a government filing [PDF], alleges that individuals in Guam illegally claimed pandemic-related unemployment benefits. According to Forbes, this marks the first publicly documented instance of Microsoft handing over BitLocker recovery keys to law enforcement.

BitLocker is a built-in Windows security feature designed to encrypt data stored on devices. It operates through two configurations: Device Encryption, which offers a simplified setup, and BitLocker Drive Encryption, a more advanced option with greater control.

In both configurations, Microsoft generally stores BitLocker recovery keys on its servers when encryption is activated using a Microsoft account. As the company explains in its documentation, "If you use a Microsoft account, the BitLocker recovery key is typically attached to it, and you can access the recovery key online."

A similar approach applies to organizational devices. Microsoft notes, "If you're using a device that's managed by your work or school, the BitLocker recovery key is typically backed up and managed by your organization's IT department."

Users are not required to rely on Microsoft for key storage. Alternatives include saving the recovery key to a USB drive, storing it as a local file, or printing it. However, many customers opt for Microsoft’s cloud-based storage because it allows easy recovery if access is lost. This convenience, though, effectively places Microsoft in control of data access and reduces the user’s exclusive ownership of encryption keys.

Apple provides a comparable encryption solution through FileVault, paired with iCloud. Apple offers two protection levels: Standard Data Protection and Advanced Data Protection for iCloud.

Under Standard Data Protection, Apple retains the encryption keys for most iCloud data, excluding certain sensitive categories such as passwords and keychain data. With Advanced Data Protection enabled, Apple holds keys only for iCloud Mail, Contacts, and Calendar. Both Apple and Microsoft comply with lawful government requests, but neither can disclose encryption keys they do not possess.

Apple explicitly addresses this in its law enforcement guidelines [PDF]: "All iCloud content data stored by Apple is additionally encrypted at the location of the server. For data Apple can decrypt, Apple retains the encryption keys in its US data centers. Apple does not receive or retain encryption keys for [a] customer's end-to-end encrypted data."

This differs from BitLocker’s default behavior, where Microsoft may retain access to a customer’s encryption keys if the user enables cloud backup during setup.

Microsoft states that it does not share its own encryption keys with governments, but it stops short of extending that guarantee to customer-managed keys. In its law enforcement guidance, the company says, "We do not provide any government with our encryption keys or the ability to break our encryption." It further adds, "In most cases, our default is for Microsoft to securely store our customers' encryption keys. Even our largest enterprise customers usually prefer we keep their keys to prevent accidental loss or theft. However, in many circumstances we also offer the option for consumers or enterprises to keep their own keys, in which case Microsoft does not maintain copies."

Microsoft’s latest Government Requests for Customer Data Report, covering July 2024 through December 2024, shows the company received 128 law enforcement requests globally, including 77 from US agencies. Only four requests during that period—three from Brazil and one from Canada—resulted in content disclosure.

After the article was published, a Microsoft spokesperson clarified, “With BitLocker, customers can choose to store their encryption keys locally, in a location inaccessible to Microsoft, or in Microsoft’s cloud. We recognize that some customers prefer Microsoft’s cloud storage so we can help recover their encryption key if needed. While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide whether to use key escrow and how to manage their keys.”

Privacy advocates argue that this design reflects Microsoft’s priorities. As Erica Portnoy, senior staff technologist at the Electronic Frontier Foundation, stated in an email to The Register, "Microsoft is making a tradeoff here between privacy and recoverability. At a guess, I'd say that's because they're more focused on the business use case, where loss of data is much worse than Microsoft or governments getting access to that data. But by making that choice, they make their product less suitable for individuals and organizations with higher privacy needs. It's a clear message to activist organizations and law firms that Microsoft is not building their products for you."

Multi-Stage Phishing Campaign Deploys Amnesia RAT and Ransomware Using Cloud Services

 

One recently uncovered cyberattack is targeting individuals across Russia through a carefully staged deception campaign. Rather than exploiting software vulnerabilities, the operation relies on manipulating user behavior, according to analysis by Cara Lin of Fortinet FortiGuard Labs. The attack delivers two major threats: ransomware that encrypts files for extortion and a remote access trojan known as Amnesia RAT. Legitimate system tools and trusted services are repurposed as weapons, allowing the intrusion to unfold quietly while bypassing traditional defenses. By abusing real cloud platforms, the attackers make detection significantly more difficult, as nothing initially appears out of place. 

The attack begins with documents designed to resemble routine workplace material. On the surface, these files appear harmless, but they conceal code that runs without drawing attention. Visual elements within the documents are deliberately used to keep victims focused, giving the malware time to execute unseen. Fortinet researchers noted that these visuals are not cosmetic but strategic, helping attackers establish deeper access before suspicion arises. 

A defining feature of the campaign is its coordinated use of multiple public cloud services. Instead of relying on a single platform, different components are distributed across GitHub and Dropbox. Scripts are hosted on GitHub, while executable payloads such as ransomware and remote access tools are stored on Dropbox. This fragmented infrastructure improves resilience, as disabling one service does not interrupt the entire attack chain and complicates takedown efforts. 

Phishing emails deliver compressed archives that contain decoy documents alongside malicious Windows shortcut files labeled in Russian. These shortcuts use double file extensions to impersonate ordinary text files. When opened, they trigger a PowerShell command that retrieves additional code from a public GitHub repository, functioning as an initial installer. The process runs silently, modifies system settings to conceal later actions, and opens a legitimate-looking document to maintain the illusion of normal activity. 
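
The double-extension trick described above can be flagged with a simple filename heuristic. The sketch below is a defensive illustration of the idea, not part of Fortinet's analysis or tooling; the extension list and function name are assumptions chosen for the example.

```python
from pathlib import PurePath

# Benign-looking document extensions commonly spoofed in front of .lnk
DECOY_EXTENSIONS = {".txt", ".doc", ".docx", ".pdf", ".xls", ".xlsx"}

def is_suspicious_shortcut(name: str) -> bool:
    """Flag names like 'report.txt.lnk' that disguise a Windows
    shortcut as an ordinary document via a double file extension."""
    suffixes = [s.lower() for s in PurePath(name).suffixes]
    return (
        len(suffixes) >= 2
        and suffixes[-1] == ".lnk"
        and suffixes[-2] in DECOY_EXTENSIONS
    )

print(is_suspicious_shortcut("смета.txt.lnk"))  # double extension -> True
print(is_suspicious_shortcut("notes.txt"))      # ordinary file -> False
```

A scan like this over extracted archive contents would catch the Russian-labeled lures described here, since Windows hides the trailing `.lnk` while the checker does not.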

After execution, the attackers receive confirmation via the Telegram Bot API. A deliberate delay follows before launching an obfuscated Visual Basic Script, which assembles later-stage payloads directly in memory. This approach minimizes forensic traces and allows attackers to update functionality without altering the broader attack flow. 

The malware then aggressively disables security protections. Microsoft Defender exclusions are configured, protection modules are shut down, and the defendnot utility is used to deceive Windows into disabling antivirus defenses entirely. Registry modifications block administrative tools, repeated prompts seek elevated privileges, and continuous surveillance is established through automated screenshots exfiltrated via Telegram. 

Once defenses are neutralized, Amnesia RAT is downloaded from Dropbox. The malware enables extensive data theft from browsers, cryptocurrency wallets, messaging apps, and system metadata, while providing full remote control of infected devices. In parallel, ransomware derived from the Hakuna Matata family encrypts files, manipulates clipboard data to redirect cryptocurrency transactions, and ultimately locks the system using WinLocker. 

Fortinet emphasized that the campaign reflects a broader shift in phishing operations, where attackers increasingly weaponize legitimate tools and psychological manipulation instead of exploiting software flaws. Microsoft advises enabling Tamper Protection and monitoring Defender changes to reduce exposure, as similar attacks are becoming more widespread across Russian organizations.

1Password Launches Pop-Up Alerts to Block Phishing Scams

 

1Password has introduced a new phishing protection feature that displays pop-up warnings when users visit suspicious websites, aiming to reduce the risk of credential theft and account compromise. This enhancement builds on the password manager’s existing safeguards and responds to growing phishing threats fueled by increasingly sophisticated attack techniques.

Traditionally, 1Password protects users by refusing to auto-fill credentials on sites whose URLs do not exactly match those stored in the user’s vault. While this helps block many phishing attempts, it still relies on users noticing that something is wrong when their password manager does not behave as expected, which is not always the case. Some users may assume the tool malfunctioned or that their vault is locked and proceed to type passwords manually, inadvertently handing them to attackers.

The new feature addresses this gap by adding a dedicated pop-up alert that appears when 1Password detects a potential phishing URL, such as a typosquatted or lookalike domain. For example, a domain with an extra character in the name may appear convincing at a glance, especially when the phishing page closely imitates the legitimate site’s design. The pop-up is designed to prompt users to slow down, double-check the URL, and reconsider entering their credentials, effectively adding a behavioral safety net on top of technical controls.
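
To make the lookalike-domain idea concrete, the sketch below flags a domain that closely resembles, but does not exactly match, a known legitimate one. This is not 1Password's actual detection logic, which is unpublished; the similarity threshold and use of `difflib.SequenceMatcher` are stand-in assumptions.

```python
from difflib import SequenceMatcher

KNOWN_DOMAINS = ["1password.com", "github.com", "paypal.com"]

def lookalike_of(domain: str, threshold: float = 0.85):
    """Return the known domain the input closely resembles,
    or None if it matches exactly or resembles none of them."""
    for known in KNOWN_DOMAINS:
        if domain == known:
            return None  # exact match: the legitimate site
        if SequenceMatcher(None, domain, known).ratio() >= threshold:
            return known  # near-match: likely typosquat
    return None

print(lookalike_of("1passwword.com"))  # extra 'w' -> flags 1password.com
print(lookalike_of("1password.com"))   # exact match -> None
```

The extra-character domain scores well above the threshold against the real one, which is exactly the case where a visual check at a glance tends to fail and a pop-up warning earns its keep.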

1Password is rolling out this capability automatically for individual and family subscribers, ensuring broad coverage for consumers without requiring configuration changes. In business environments, administrators can enable the feature for employees through Authentication Policies in the 1Password admin console, integrating it into existing access control strategies. This flexibility allows organizations to align phishing protection with their security policies and training programs.

The company underscores the importance of this enhancement with survey findings from 2,000 U.S. respondents, revealing that 61% had been successfully phished and 75% do not check URLs before clicking links. The survey also shows that one-third of employees reuse passwords on work accounts, nearly half have fallen for phishing at work, and many believe protection is solely the IT department’s responsibility. With 72% admitting to clicking suspicious links and over half choosing to delete rather than report questionable messages, 1Password’s new pop-up warnings aim to counter risky user behavior and strengthen overall phishing defenses.

Sandworm-Associated DynoWiper Malware Targets Polish Power Infrastructure


 

A cyber intrusion targeting Poland's energy infrastructure occurred in late 2025, in what security experts have described as one of the largest cyberattacks the country has faced in years. The incident underscores the growing vulnerability of critical national systems amid rising geopolitical tensions. 

ESET, a cybersecurity company, has uncovered new data indicating the operation was carried out by Sandworm, an advanced persistent threat group closely aligned with Russia that has been linked to disruptive attacks on energy and industrial networks for more than a decade. 

ESET researchers found that deeper analysis of the malware used in the incident revealed operational patterns and code similarities consistent with Sandworm's past campaigns, indicating the attack follows the group's established playbook for destructive cyber activity. 

According to the researchers, the attackers planned to deploy a malware strain named DynoWiper, designed to irreversibly destroy files and cripple affected systems, a strategy that could have caused widespread disruption across the Polish electricity sector had it succeeded. 

At the time of publication, the Russian Embassy in Washington had not responded to requests for comment. Sandworm, also tracked as UAC-0113, APT44, or Seashell Blizzard, has been active for more than a decade and is widely regarded as a state-sponsored hacking group tied to Russian military intelligence. 

Security researchers have linked the group to Unit 74455 of Russia's Main Intelligence Directorate (GRU), following repeated accusations of high-impact cyber-operations intended to disrupt and degrade critical infrastructure. 

Throughout its history, Sandworm has been credited with some of the most significant cyber incidents against energy networks, most notably the attack on Ukraine's power grid nearly a decade ago, which used data-wiping malware and left around 230,000 people without power for several hours.

That episode remains a prototypical example of the group's capabilities and intentions, and it continues to shape assessments of its role in more recent attempts to undermine energy systems beyond Ukraine's borders. 

In a recent report, ESET concluded that the operation bore the hallmarks of Sandworm, a threat actor widely linked to Russia's military and intelligence apparatus. 

Investigators identified a previously undocumented data-wiping malware, dubbed DynoWiper and tracked as Win32/KillFiles.NMO, pointing to the group. The campaign was similar in both technical and operational respects to earlier Sandworm wiper campaigns, especially those observed following Russia's invasion of Ukraine in February 2022. 

In a statement published on December 29, 2025, ESET said the malware was detected during an attempt to disrupt Poland's energy sector, but that there are no indications the attackers succeeded in causing outages or permanent damage. 

In an email sent on December 29, Polish authorities confirmed that activity was observed at two combined heat and power plants and in a system used to manage electricity generation from renewable sources such as wind and solar. 

In a public statement, Prime Minister Donald Tusk said the attacks were directed by groups “directly linked to Russian services,” citing government plans to strengthen national defenses through additional safeguards and cybersecurity legislation imposing stricter requirements on risk management, IT and operational technology security, and incident preparedness. Tusk said the legislation is expected to be implemented soon. 

The timing also drew analysts' attention, as the incident coincided with the tenth anniversary of Sandworm's 2015 attack on Ukraine's power grid. That attack, which deployed BlackEnergy and KillDisk malware, caused hours-long blackouts for thousands of people and is cited as part of a years-long pattern of disruption campaigns against critical infrastructure. 

Beyond identifying the malware involved, ESET provided only limited technical information about the intrusion. 

Researchers note that the use of a custom-built wiper aligns with a broader pattern of Russian cyber operations in which data-destroying malware serves as a strategic tool; the use of wipers in attacks linked to Moscow has increased significantly since 2022. 

AcidRain, for instance, disabled roughly 270,000 satellite modems in an effort to disrupt Ukraine's communications, and numerous campaigns against universities and critical infrastructure have been attributed to Sandworm. So has the NotPetya outbreak of 2017, a destructive worm initially aimed at Ukrainian targets that quickly spread worldwide, causing an estimated $10 billion in damage and securing its place among the highest-profile case studies in the history of cybercrime. 

It remains unclear why DynoWiper failed to trigger power outages in Poland; investigators have left open the possibility that the operation was deliberately calibrated to avoid escalation, or that strong defenses within the country's energy grid stopped it. 

In the aftermath, governments and critical-infrastructure operators across Europe have been reminded once again that energy systems remain an attractive target for state-sponsored cyber operations, even when attacks do not result in immediate disruption. 

Security analysts say the attempted deployment of DynoWiper reflects a continued reliance on destructive malware as a strategic tool, and they emphasize the importance of investing in cyber resilience, real-time monitoring, and coordinated incident response across both information technology and operational technology. 

While Polish officials are using the episode as a springboard to strengthen defenses, experts note that similar threats will not stop at borders, and geopolitical tensions show little sign of easing. 

The attack's failure may offer some reassurance for now, but it also underscores a larger reality: adversaries continue to probe energy networks for weaknesses, and preparedness and cooperation will be crucial to detecting and neutralizing malware before it can cause serious disruption.

Dark Web Voice-Phishing Kits Supercharge Social Engineering and Account Takeovers

 

Cybercriminals are finding it easier than ever to run convincing social engineering schemes and identity theft operations, driven by the availability of customized voice-phishing (vishing) kits sold on dark web forums and private messaging channels.

According to a recent Okta Threat Intelligence blog published on Thursday, these phishing kits are being marketed as a service to “a growing number” of threat actors aiming to compromise Google, Microsoft, and Okta user accounts. Beyond fake login pages, the kits also provide real-time support that helps attackers capture login credentials and multi-factor authentication (MFA) codes while victims are actively being manipulated.

“There are at least two kits that implement the novel functionality observed,” Okta Threat Intelligence Vice President Brett Winterford told The Register.

“The phishing kits have been developed to closely mimic the authentication flows of identity providers and other identity systems used by organizations,” he said. “The kits allow the attacker to monitor the phishing page as the targeted user is interacting with it and trigger different custom pages that the target sees. This creates a more compelling pretext for asking the user to share credentials and accept multi-factor authentication challenges.”

Winterford noted that this form of attack has “evolved significantly since late 2025.” Some advertisements promoting these kits even seek to hire native English-speaking callers to make the scams more believable.

“These callers pretend to be from an organization's helpdesk and approach targets using the pretext of resolving a support ticket or performing a mandatory technical update,” Winterford said.

Similar tactics were observed last year when Scattered Spider-style IT support scams enabled attackers to breach dozens of Salesforce environments, resulting in mass data theft and extortion campaigns.

The attacks typically begin with reconnaissance. Threat actors collect details such as employee names, commonly used applications, and IT support contact numbers. This information is often sourced from company websites, LinkedIn profiles, and other publicly accessible platforms. Using chatbots to automate this research further accelerates the process.

Once prepared, attackers deploy the phishing kit to generate a convincing replica of a legitimate login page. Victims are contacted via spoofed company or helpdesk phone numbers and persuaded to visit the fraudulent site under the guise of IT assistance. “The attacks vary from there, depending on the attacker's motivation and their interactions with the user,” Winterford said.

When victims submit their login credentials, the data is instantly relayed to the attacker—often through a Telegram channel—granting access to the real service. While the victim remains on the call, the attacker attempts to log in and observes which MFA methods are triggered, modifying the phishing page in real time to match the experience.

Attackers then instruct victims to approve push notifications, enter one-time passcodes, or complete other MFA challenges. Because the fake site mirrors these requests, the deception becomes harder to detect.

“If presented a push notification (type of MFA challenge), for example, an attacker can verbally tell the user to expect a push notification, and select an option from their [command-and-control] panel that directs their target's browser to a new page that displays a message implying that a push message has been sent, lending plausibility to what would ordinarily be a suspicious request for the user to accept a challenge the user didn't initiate,” the report says.

Okta also warned that these kits can defeat number-matching MFA prompts by simply instructing users which number to enter, effectively neutralizing an added layer of security.

Once MFA is bypassed, attackers gain full control of the compromised account.

This research aligns with The Register’s previous reporting on “impersonation-as-a-service,” where cybercriminals bundle social engineering tools into subscription-based offerings.

“As a bad actor you can subscribe to get tools, training, coaching, scripts, exploits, everything in a box to go out and conduct your infiltration operation that often combine[s] these social engineering attacks with targeted ransomware, almost always with a financial motive,” security firm Nametag CEO Aaron Painter said in an earlier interview.

Attackers Hijack Microsoft Email Accounts to Launch Phishing Campaign Against Energy Firms

 


Cybercriminals have compromised Microsoft email accounts belonging to organizations in the energy sector and used those trusted inboxes to distribute large volumes of phishing emails. In at least one confirmed incident, more than 600 malicious messages were sent from a single hijacked account.

Microsoft security researchers explained that the attackers did not rely on technical exploits or system vulnerabilities. Instead, they gained access by using legitimate login credentials that were likely stolen earlier through unknown means. This allowed them to sign in as real users, making the activity harder to detect.

The attack began with emails that appeared routine and business-related. These messages included Microsoft SharePoint links and subject lines suggesting formal documents, such as proposals or confidentiality agreements. To view the files, recipients were asked to authenticate their accounts.

When users clicked the SharePoint link, they were redirected to a fraudulent website designed to look legitimate. The site prompted them to enter their Microsoft login details. By doing so, victims unknowingly handed over valid usernames and passwords to the attackers.

After collecting credentials, the attackers accessed the compromised email accounts from different IP addresses. They then created inbox rules that automatically deleted incoming emails and marked messages as read. This step helped conceal the intrusion and prevented account owners from noticing unusual activity.
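Defenders can hunt for exactly this kind of tampering by auditing mailbox rules. Below is a minimal sketch, assuming rule data has already been fetched from Microsoft Graph's `GET /v1.0/users/{id}/mailFolders/inbox/messageRules` endpoint; the JSON shape mirrors Graph's `messageRule` resource, but the `flag_suspicious_rules` helper and the sample data are illustrative, not part of Microsoft's reporting.

```python
# Hedged sketch: flag inbox rules whose actions match the tradecraft
# described above (silently deleting incoming mail or marking it read).
# Input is assumed to be the "value" array of a Microsoft Graph
# messageRules response; actions.delete and actions.markAsRead are
# boolean fields on Graph's messageRuleActions resource.

def flag_suspicious_rules(rules):
    """Return display names of rules that delete or mark incoming mail as read."""
    flagged = []
    for rule in rules:
        actions = rule.get("actions", {})
        if actions.get("delete") or actions.get("markAsRead"):
            flagged.append(rule.get("displayName", "<unnamed rule>"))
    return flagged

# Hypothetical sample resembling a Graph response's "value" array.
sample = [
    {"displayName": "Move invoices", "actions": {"moveToFolder": "Invoices"}},
    {"displayName": ".", "actions": {"delete": True, "markAsRead": True}},
]
print(flag_suspicious_rules(sample))  # → ['.']
```

Attackers often give such rules minimal names (a single dot or space), so flagging on actions rather than names catches more variants.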

Using these compromised inboxes, the attackers launched a second wave of phishing emails. These messages were sent not only to external contacts but also to colleagues and internal distribution lists. Recipients were selected based on recent email conversations found in the victim’s inbox, increasing the likelihood that the messages would appear trustworthy.

In this campaign, the attackers actively monitored inbox responses. They removed automated replies such as out-of-office messages and undeliverable notices. They also read replies from recipients and responded to questions about the legitimacy of the emails. All such exchanges were later deleted to erase evidence.

Any employee within an energy organization who interacted with the malicious links was also targeted for credential theft, allowing the attackers to expand their access further.

Microsoft confirmed that the activity began in January and described it as a short-duration, multi-stage phishing operation that was quickly disrupted. The company did not disclose how many organizations were affected, identify the attackers, or confirm whether the campaign is still active.

Security experts warn that simply resetting passwords may not be enough in these attacks. Because attackers can interfere with multi-factor authentication settings, they may maintain access even after credentials are changed. For example, attackers can register their own device to receive one-time authentication codes.

Despite these risks, multi-factor authentication remains a critical defense against account compromise. Microsoft also recommends using conditional access controls that assess login attempts based on factors such as location, device health, and user role. Suspicious sign-ins can then be blocked automatically.
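A conditional access policy of the kind Microsoft recommends might look like the fragment below, written as a request body for Microsoft Graph's `/identity/conditionalAccess/policies` endpoint. The field names follow Graph's `conditionalAccessPolicy` resource, but the specific policy (blocking sign-ins from outside trusted named locations, initially in report-only mode) is an illustrative sketch, not a configuration from the incident report.

```json
{
  "displayName": "Block sign-ins from untrusted locations (sketch)",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["All"] },
    "locations": {
      "includeLocations": ["All"],
      "excludeLocations": ["AllTrusted"]
    }
  },
  "grantControls": { "operator": "OR", "builtInControls": ["block"] }
}
```

Starting in report-only mode (`enabledForReportingButNotEnforced`) lets administrators measure how many legitimate sign-ins the policy would block before enforcing it.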

Additional protection can be achieved by deploying anti-phishing solutions that scan emails and websites for malicious activity. These measures, combined with user awareness, are essential as attackers increasingly rely on stolen identities rather than software flaws.


Cisco Patches ISE XML Flaw with Public Exploit Code

 

Cisco has recently addressed a significant security vulnerability in its Identity Services Engine (ISE) and ISE Passive Identity Connector (ISE-PIC), tracked as CVE-2026-20029. This medium-severity issue, scored at 4.9 out of 10, stems from improper XML parsing in the web-based management interface. Attackers with valid admin credentials could upload malicious XML files, enabling arbitrary file reads from the underlying operating system and exposing sensitive data.

The flaw poses a substantial risk to enterprise networks, where ISE is widely deployed for centralized access control. Enterprises rely on ISE to manage which users and devices can access their infrastructure, making it a prime target for attackers seeking to steal credentials or configuration files. Although no in-the-wild exploitation has been confirmed, publicly available proof-of-concept (PoC) exploit code heightens the urgency, echoing patterns from prior ISE vulnerabilities.

Past incidents underscore ISE's appeal to threat actors. In November 2025, sophisticated attackers exploited a maximum-severity zero-day (CVSS 10/10) to deploy custom backdoor malware, bypassing authentication entirely. Similarly, June 2025 patches fixed critical flaws with public PoCs, including arbitrary code execution risks in ISE and related platforms. These events highlight persistent scrutiny on Cisco's network access tools.

Mitigation demands immediate patching, as no workarounds exist. Affected versions require specific updates: releases before 3.2 must be migrated to a fixed release; versions 3.2 and 3.3 need Patch 8; version 3.4 needs Patch 4; and 3.5 is unaffected. Administrators must verify their ISE version and apply the correct patch to prevent data leaks, especially since the admin-credential prerequisite could be met by insiders or compromised accounts.
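The version-to-patch mapping above can be expressed as a small helper. This is a hedged sketch based only on the advisory summary in this article; the `ise_patch_action` function is illustrative, and administrators should confirm against Cisco's official advisory before acting.

```python
# Hedged sketch of the ISE patch guidance above. The mapping comes from
# the advisory summary in this article; verify against Cisco's official
# advisory before relying on any output.

def ise_patch_action(version: str) -> str:
    """Map an ISE major.minor version string to the remediation described above."""
    major, minor = (int(p) for p in version.split(".")[:2])
    if (major, minor) < (3, 2):
        return "migrate to a fixed release"
    if (major, minor) in {(3, 2), (3, 3)}:
        return "apply Patch 8"
    if (major, minor) == (3, 4):
        return "apply Patch 4"
    if (major, minor) == (3, 5):
        return "not affected"
    return "check the Cisco advisory"

print(ise_patch_action("3.3"))  # → apply Patch 8
```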

Organizations should prioritize auditing ISE deployments amid rising enterprise-targeted attacks. Regular vulnerability scans, credential hygiene, and monitoring for anomalous XML uploads are essential defenses. As PoC code circulates, patching remains the sole bulwark, reinforcing the need for swift action in securing network identities.

Online Misinformation and AI-Driven Fake Content Raise Concerns for Election Integrity

 

With elections drawing near, unease is spreading about how digital falsehoods might influence voter behavior. False narratives on social platforms may skew perception, according to officials and scholars alike. As artificial intelligence advances, deceptive content grows more convincing, slipping past scrutiny. Trust in core societal structures risks erosion under such pressure. Warnings come not just from academics but also from community leaders watching real-time shifts in public sentiment.  

Fake messages have recently circulated online, pretending to be from the City of York Council. Though they looked real, officials later stated these ads were entirely false. One showed a request for people willing to host asylum seekers; another asked volunteers to take down St George flags. A third offered work fixing road damage across neighborhoods. What made them convincing was their design - complete with official logos, formatting, and contact information typical of genuine notices. 

Without close inspection, someone scrolling quickly might believe them. Despite their authentic appearance, none of the programs mentioned were active or approved by local government. The resemblance to actual council material caused confusion until authorities stepped in to clarify. Blurred logos stood out immediately when BBC Verify examined the pictures. Wrong fonts appeared alongside misspelled words, often pointing toward artificial creation. 

Details such as fingers appeared twisted or incomplete - a frequent flaw in AI-generated images. One poster included an email address tied to a real council employee, though that person had no knowledge of the material. Websites referenced in some flyers simply did not exist. Even so, plenty of people passed the content along without questioning its truth. A single fabricated post spread through networks totaling over 500,000 followers. The false material held strong appeal despite clear warning signs.

What spreads fast online isn’t always true - Clare Douglas, head of City of York Council, pointed out how today’s tech amplifies old problems in new ways. False stories once moved slowly; now they race across devices at a pace that overwhelms fact-checking efforts. Trust fades when people see conflicting claims everywhere, especially around health or voting matters. Institutions lose ground not because facts disappear, but because attention scatters too widely. When doubt sticks longer than corrections, participation dips quietly over time.  

Ahead of public meetings, tensions surfaced in various regions. Misinformation targeting asylum seekers and councils emerged online in Barnsley, according to its council leader, Sir Steve Houghton. False stories spread further through influencers who keep sharing them - profit often outweighs correction. Although government outlets issued clarifications, distorted messages continue to flood digital spaces. Their sheer volume, combined with how long they linger, erodes trust between groups and raises everyday security risks. Not everyone checks facts, notes Ilya Yablokov of the University of Sheffield's Disinformation Research Cluster, and AI now makes fabricating believable content nearly effortless.

With just a small setup, someone can flood online spaces fast. What helps spread falsehoods is how busy people are - they skip checking details before passing things along. Instead, gut feelings or existing opinions shape what gets shared. Fabricated stories spreading locally might cost almost nothing to create, yet their impact on democracy can be deep. 

When misleading accounts reach more voters, specialists emphasize skills like questioning sources, checking facts, or understanding media messages - these help preserve confidence in public processes while supporting thoughtful engagement during voting events.