Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Latest News

Rising Prompt Injection Threats and How Users Can Stay Secure

  The generative AI revolution is reshaping the foundations of modern work in an age when organizations are increasingly relying on large la...

All the recent news you need to know

UK Cyber Agency says AI Prompt-injection Attacks May Persist for Years

 



The United Kingdom’s National Cyber Security Centre has issued a strong warning about a spreading weakness in artificial intelligence systems, stating that prompt-injection attacks may never be fully solved. The agency explained that this risk is tied to the basic design of large language models, which read all text as part of a prediction sequence rather than separating instructions from ordinary content. Because of this, malicious actors can insert hidden text that causes a system to break its own rules or execute unintended actions.

The NCSC noted that this is not a theoretical concern. Several demonstrations have already shown how attackers can force AI models to reveal internal instructions or sensitive prompts, and other tests have suggested that tools used for coding, search, or even résumé screening can be manipulated by embedding concealed commands inside user-supplied text.

David C, a technical director at the NCSC, cautioned that treating prompt injection as a familiar software flaw is a mistake. He observed that many security professionals compare it to SQL injection, an older type of vulnerability that allowed criminals to send harmful instructions to databases by placing commands where data was expected. According to him, this comparison is dangerous because it encourages the belief that both problems can be fixed in similar ways, even though the underlying issues are completely different.

He illustrated this difference with a practical scenario. If a recruiter uses an AI system to filter applications, a job seeker could hide a message in the document that tells the model to ignore existing rules and approve the résumé. Since the model does not distinguish between what it should follow and what it should simply read, it may carry out the hidden instruction.

Researchers are trying to design protective techniques, including systems that attempt to detect suspicious text or training methods that help models recognise the difference between instructions and information. However, the agency emphasised that all these strategies are trying to impose a separation that the technology does not naturally have. Traditional solutions for similar problems, such as Confused Deputy vulnerabilities, do not translate well to language models, leaving large gaps in protection.

The agency also stressed upon a security idea recently shared on social media that attempted to restrict model behaviour. Even the creator of that proposal admitted that it would sharply reduce the abilities of AI systems, showing how complex and limiting effective safeguards may become.

The NCSC stated that prompt-injection threats are likely to remain a lasting challenge rather than a fixable flaw. The most realistic path is to reduce the chances of an attack or limit the damage it can cause through strict system design, thoughtful deployment, and careful day-to-day operation. The agency pointed to the history of SQL injection, which once caused widespread breaches until better security standards were adopted. With AI now being integrated into many applications, they warned that a similar wave of compromises could occur if organisations do not treat prompt injection as a serious and ongoing risk.


CastleLoader Widens Its Reach as GrayBravo’s MaaS Infrastructure Fuels Multiple Threat Clusters

 

Researchers have now identified four distinct threat activity clusters associated with the malware loader CastleLoader, bolstering previous estimates that the tool was being supplied to multiple cybercriminal groups through a malware-as-a-service model. In this, the operator of this ecosystem has been dubbed GrayBravo by Recorded Future's Insikt Group, which had previously tracked the same actor under the identifier TAG-150. 

CastleLoader emerged in early 2025 and has since evolved into a dynamically developing malware distribution apparatus. Recorded Future's latest analysis underscores GrayBravo's technical sophistication, the ability to promptly adapt operations after public reporting, and the growing infrastructure currently supporting multiple threat campaigns. 

GrayBravo's toolkit consists of several components, including a remote access trojan dubbed CastleRAT and a modular malware framework named CastleBot. CastleBot is composed of three interconnected main elements: a shellcode stager, a loader, and a core backdoor. The loader injects the backdoor into memory, following which the malware communicates with command-and-control servers to receive instructions. These further enable downloading and executing a variety of payloads in the form of DLL, EXE, and PE files. CastleLoader has been used to distribute various well-known malware families, including RedLine Stealer, StealC, DeerStealer, NetSupport RAT, SectopRAT, MonsterV2, WARMCOOKIE, and other loaders, such as Hijack Loader, which demonstrates how well the CastleBot and CastleLoader combo serves as a widely useful tool.  

Recorded Future's new discoveries uncover four separate operational clusters, each using CastleLoader for its purposes. One cluster, attributed to TAG-160, has been operational since March 2025, targeting the logistics industry by leveraging phishing lures and ClickFix for CastleLoader delivery. Another one, referred to as TAG-161, started its operations in June 2025 and has used Booking.com-themed ClickFix campaigns for spreading CastleLoader and Matanbuchus 3.0. One more cluster has utilized infrastructure that spoofs Booking.com, complementing the spoofing with ClickFix and leveraging Steam Community pages as dead-drop resolvers to distribute CastleRAT via CastleLoader. A fourth cluster, which has been active since April 2025, leverages malvertising and fake update notices posing as Zabbix and RVTools for delivering CastleLoader together with NetSupport RAT. 

The actor's infrastructure spans from victim-facing command-and-control servers attributed to CastleLoader, CastleRAT, SectopRAT, and WARMCOOKIE to several other VPS servers, presumably held as spares. Of special interest are the TAG-160 operations, which feature the use of hijacked or fake accounts on freight-matching platforms, including DAT Freight & Analytics and Loadlink Technologies, to create rather plausible phishing messages. The customised lures suggest that the operators have extensive domain knowledge of logistics processes and related communication practices in the industry. 

Recorded Future concluded that the continued expansion in the use of CastleLoader by independent threat groups testifies to how rapidly such advanced and adaptive tools can diffuse in the cybercrime ecosystem once they get credit. Supporting this trend, the recent case documented by the researchers at Blackpoint involved a Python-based dropper chain in which the attackers used ClickFix to download an archive, stage files in the AppData directory, and execute a Python stager that rebuilt and launched a CastleLoader payload. Continued evolution of these delivery methods shows that the malware-as-a-service model behind CastleLoader is really enabling broader and more sophisticated operations through multiple threat actors.

OpenAI Vendor Breach Exposes API User Data

 

OpenAI revealed a security incident in late- November 2025 that allowed hackers to access data about users via its third-party analytics provider, Mixpanel. The breach, which took place on November 9, 2025, exposed a small amount of personally identifiable information for some OpenAI API users, although OpenAI stressed that its own systems had not been the target of the attack.

Breach details 

The breach occurred completely within Mixpanel’s own infrastructure, when an attacker was able to gain access and exfiltrate a dataset containing customer data. Mixpanel became aware of the compromise on 9 November 2025, and following an investigation, shared the breached dataset with OpenAI on 25 November, allowing the technology firm to understand the extent of potential exposure. 

The breach specifically affected users who accessed OpenAI's API via platform.openai.com, rather than regular ChatGPT users. The compromised data included several categories of user information collected through Mixpanel's analytics platform. Names provided to accounts on platform.openai.com were exposed, along with email addresses linked to API accounts. 

Additionally, coarse approximate location data determined by IP addresses, operating system and browser types, referring websites, and organization and user IDs saved in API accounts were part of the breach. However, OpenAI confirmed that more sensitive information remained secure, including chat content, API requests, API usage data, passwords, credentials, API keys, payment details, and government IDs. 

Following the incident, OpenAI took immediate action by removing Mixpanel from its services while conducting its investigation. The company notified affected users on November 26, 2025, right before Thanksgiving, providing details about the breach and emphasizing that it was not a compromise of OpenAI's own systems. OpenAI has suspended its integration with Mixpanel pending a thorough investigation of the incident.

Recommended measures 

OpenAI also encouraged the affected users to stay on guard for potential second wave attacks using the stolen information. Users need to be especially vigilant for phishing and social engineer attacks that could be facilitated by the leaked information, such as names, e-mail addresses and company information. A class action has also been brought against OpenAI and Mixpanel, claiming the companies did nothing to stop the breach of data that revealed personally identifiable information for thousands of users.

Researchers Find Massive Increase in Hypervisor Ransomware Incidents


Rise in hypervisor ransomware incidents 

Cybersecurity experts from Huntress have noticed a sharp rise in ransomware incidents on hypervisors and have asked users to be safe and have proper back-up. 

The Huntress case data has disclosed a surprising increase in hypervisor ransomware. It was involved in malicious encryption and rose from a mere three percent in the first half to a staggering 25 percent in 2025. 

Akira gang responsible 

Experts think that the Akira ransomware gang is the primary threat actor behind this, other players are also going after hypervisors to escape endpoint and network security controls. According to Huntress threat hunters, players are going after hypervisors as they are not secure and hacking them can allow hackers to trigger virtual machines and manage networks.

Why hypervisors?

“This shift underscores a growing and uncomfortable trend: Attackers are targeting the infrastructure that controls all hosts, and with access to the hypervisor, adversaries dramatically amplify the impact of their intrusion," experts said. The attack tactic follows classic playbook. Researchers have "seen it with attacks on VPN appliances: Threat actors realize that the host operating system is often proprietary or restricted, meaning defenders cannot install critical security controls like EDR [Endpoint Detection and Response]. This creates a significant blind spot.”

Other instances 

The experts have also found various cases where ransomware actors install ransomware payloads directly via hypervisors, escaping endpoint security. In a few cases, threat actors used built-in-tools like OpenSSL to run encryption of the virtual machine volume without having to upload custom ransomware binaries.

Attack tactic 

Huntress researchers have also found attackers disrupting a network to steal login credentials and then attack hypervisors.

“We’ve seen misuse of Hyper-V management utilities to modify VM settings and undermine security features,” they add. “This includes disabling endpoint defenses, tampering with virtual switches, and preparing VMs for ransomware deployment at scale," they said.

Mitigation strategies 

Due to the high level of attacks on hypervisors, experts have suggested admins to revisit infosec basics such as multi-factor authentication and password patch updates. Admins should also adopt hypervisor-specific safety measures like only allow-listed binaries can run on a host.

For decades, the Infosec community has known hypervisors to be an easy target. In a worst-case scenario of a successful VM evasion where an attack on a guest virtual machine allows hijacking of the host and its hypervisor, things can go further south. If this were to happen, the impact could be massive as the entire hyperscale clouds depend on hypervisors to isolate tenants' virtual systems.

Europol’s OTF GRIMM Arrests Nearly 200 in Crackdown on “Violence-as-a-Service” Crime Networks

 

Nearly 200 people — including several minors linked to murder attempts — have been taken into custody over the past six months under Europol’s Operational Taskforce (OTF) GRIMM. The initiative focuses on dismantling what authorities describe as “violence-as-a-service” networks, where criminal groups lure young people online to execute contract killings and other violent attacks.

According to Europol, "These individuals are groomed or coerced into committing a range of violent crimes, from acts of intimidation and torture to murder," the agency said on Monday.

Launched in April, OTF GRIMM brings together specialists from Belgium, Denmark, Finland, France, Germany, Iceland, the Netherlands, Norway, Spain, Sweden, the UK, and Europol, alongside several online platforms.

In its first half-year, the taskforce reported arresting 63 suspects accused of planning or committing violent offenses, 40 individuals believed to be “enablers” of violence-for-hire operations, 84 recruiters, and six alleged “instigators.” Five of these instigators have been identified by investigators as “high-value targets.” Among those apprehended were three individuals in Sweden and Germany suspected of fatally shooting three victims on March 28 in Oosterhout, the Netherlands.

Authorities also detained two more suspects, aged 26 and 27, in the Netherlands in October for allegedly attempting a murder in Tamm, Germany, on May 12.

On July 1, Spanish police arrested six people — one of them a minor — who were allegedly plotting a murder. Firearms and ammunition were recovered, and investigators believe the operation prevented a “potential tragedy.”

In Denmark, seven individuals aged between 14 and 26 were either arrested or voluntarily surrendered in June. They are accused of using encrypted messaging platforms to recruit teenagers for contract killings.

These cases arise amid what cybersecurity experts describe as a significant rise in Europe-based cybercrime operations that spill into real-world violence. One of the most notable examples occurred in January, when Ledger co-founder David Balland and his wife, Amandine, were kidnapped in Vierzon, France. During the ordeal, their captors severed Balland’s finger while demanding ransom from another Ledger co-founder; the details of the ransom request have not been publicly disclosed.

Many suspects involved in violence-for-hire schemes have been linked to The Com — an informal group of English-speaking hackers, SIM swappers, and extortionists operating across several overlapping criminal networks. The organization’s influence has expanded internationally, prompting the FBI to issue a recent warning.

According to the bureau, a faction known as In Real Life (IRL) Com poses an increasing danger to young people in the U.S. The FBI’s alert highlighted IRL Com groups offering swatting services — incidents in which criminals file fake reports of shootings or bomb threats to provoke armed police responses at victims’ homes.

Crimes Extorting Ransoms by Manipulating Online Photos

 


It is estimated that there are more than 1,000 sophisticated virtual kidnapping scams being perpetrated right now, prompting fresh warnings from the FBI, as criminals are increasingly using facial recognition software to create photos, videos, and sound files designed to fool victims into believing that their loved ones are in immediate danger. 

As a result of increasing difficulty in distinguishing authentic content from digital manipulation, fraudsters are now blending stolen images with hyper-realistic artificial intelligence tools to fabricate convincing evidence of abductions, exploiting the growing difficulty of distinguishing authentic content from digital manipulation in the current era.

It is quite common for victims to be notified via text message that a family member had been kidnapped and that escalating threats demand that an immediate ransom be paid. 

A scammer often delivers what appears to be genuine images of the supposed victim when the victim requests proof, often sent through disappearing messages so that the fake identity cannot be inspected. This evolving approach, according to the FBI, represents a troubling escalation of extortion campaigns, one that takes advantage of panic as well as the blurred line between real and fake identity as it relates to digital identities. 

The FBI has released a public service announcement stating that criminals are increasingly manipulating photos from social media to manufacture convincing "proof-of-life" materials for use in virtual kidnapping schemes based on photos taken from social media and other open sources. As a rule, offenders contact victims by text, claim to have abducted their loved ones, and request an immediate payment while simultaneously using threats of violence as a way to heighten fear. 

It has been reported that scammers often alter photos or generate videos using Artificial Intelligence that appear authentic at first glance, but when compared to verified images of the supposed victim, inconsistencies are revealed—such as missing tattoos, incorrect scars, or distorted facial or body proportions—and thus make the images appear authentic. 

Often, counterfeit materials are sent out through disappearing message features so that careful analysis is limited. As part of the PSA, malicious actors often exploit emotionally charged situations, such as public searches for missing persons, by posing as credible witnesses or supplying fabricated information. Several tips from the FBI have been offered by the FBI to help individuals reduce vulnerability in the event of a cyber incident. 

The FBI advises people to be cautious when posting personal images online, avoid giving sensitive information to strangers, and develop a private verification method - like a family code word - for communication during times of crisis. When faced with ransom demands, the agency advises anyone targeted to do so to remain calm, take a photo or a message of the purported victim, and attempt to contact the purported victim directly before responding to the demand. 

As a result of recent incidents shared by investigators and cybersecurity analysts, it has become increasingly apparent just how convincing it is for criminals to exploit both human emotions and new technological advances to create schemes that blur the line between reality and fiction. 

A Florida woman was defrauded of $15,000 after receiving a phone call from scammers in which the voice of her daughter was cloned by artificial intelligence and asked for help. There was a separate case where parents almost became victims of the same scheme, when they were approached by criminals who impersonated their son and claimed that he was involved in a car accident and needed immediate assistance in order to recover from that situation. 

However, the similarities and differences between these situations reflect a wider pattern: fraud operations are becoming increasingly sophisticated, impersonating the sounds, appearances, and behaviors of loved ones with alarming accuracy, causing families to make hasty decisions under the pressure of fear and confusion, which pushes the victim into making hasty decisions. Experts have stressed that vigilance must go beyond just basic precautions as these tactics evolve. 

There is a recommendation to limit the amount of personal information you share on social media, especially travel plans, identifying information or real-time location updates, and to review your privacy settings to restrict access to trusted contacts. 

In addition, families should be encouraged to establish a private verification word or phrase that will help them verify their identity when in an emergency, and to try to reach out to the alleged victim through a separate device before taking any action at all. There are many ways in which people can minimize our exposure to cybercriminals, including maintaining strong, unique passwords, using reputable password managers, and securing all our devices with reliable security software. 

The authorities emphasize that it is imperative that peopl resist the urgency created by these scams; slowing down, verifying claims, documenting communications and involving law enforcement are crucial steps in preventing financial and emotional harm caused by these scams. 

According to the investigators, even though public awareness of digital threats is on the rise, meaningful security depends on converting that awareness into deliberate, consistent precautions. Despite the fact that it has yet to be widely spread, the investigation notes that the scheme has been around for several years and early reports surfacing in outlets such as The Guardian much before the latest warnings were issued.

Despite the rapid advancement of generative AI tools, experts say that what has changed is that these tactics have become much easier to implement and more convincing, prompting the FBI to re-issue a new alert. As the FBI points out, the fabricated images and videos used in these schemes are rarely flawless, and when one carefully examines them, one can often find evidence that they are manipulated, such as missing tattoos, altered scars, and subtle distortions in the proportions of the body.

A scammer who is aware of these vulnerabilities will often send the material using timed or disappearing message features, so that a victim cannot carefully examine the content before it disappears, making it very difficult for him or her to avoid being duped. 

In this PSA, it is stressed that it is crucial to maintain good digital hygiene to prevent such scams from occurring: limiting personal imagery shared online, being cautious when giving out personal information while traveling, and establishing a private family code word for verifying the identity of a loved one in an emergency. Before considering any financial response, the FBI advises potential targets to take a moment to attempt to speak directly to the supposedly endangered family member. 

In an era when these threats are being constantly tracked by law enforcement and cybersecurity experts, they are cautioning that the responsibility for prevention has increasingly fallen on the public and their proactive habits. 

By strengthening digital literacy—such as learning how to recognize subtle signs of synthetic media, identifying messages that are intended to provoke fear, and maintaining regular communication routines within the family people can provide powerful layers of protection against cybercrime. Moreover, online experts recommend that people diversify their online presence by not using the same profile photograph on every platform they use and by reviewing their social media archives for any old posts that may inadvertently expose personal patterns or personal relationships.

There are many ways in which communities can contribute to cybersafety, including sharing verified information, reporting suspicious events quickly, and encouraging open discussion about online safety among children, parents, and elderly relatives who are often targeted as a result of their trust in technology or lack of familiarity with it. 

Despite the troubling news of the FBI's warning regarding digital extortion, it also suggests that a clear path to reducing the impact and reach of these emotionally exploitative schemes can be found if people remain vigilant, behave thoughtfully online, and keep ourselves aware of our surroundings.

Featured