
Remote Exploitation Risk Emerges From Ollama Out-of-Bounds Read Flaw


 

The growing use of locally deployed large language model infrastructure has renewed scrutiny of the security posture of self-hosted AI platforms, after researchers disclosed a critical vulnerability in Ollama that could allow remote attackers to read sensitive process memory without authorization.

Tracked as CVE-2026-7482 and assigned a CVSS severity score of 9.1, the flaw is an out-of-bounds read that can expose large portions of memory belonging to running Ollama processes, including user prompts, system instructions, configuration data, and environment variables. Because Ollama is widely used as a local inference platform for open-source large language models such as Llama and Mistral, the disclosure has raised significant concern in the AI and cybersecurity communities.

Ollama lets organizations and developers run AI workloads directly on their own infrastructure rather than through external cloud providers. With roughly 170,000 stars on GitHub, over 100 million Docker Hub downloads, and nearly 300,000 internet-accessible servers, its footprint highlights the security risks that accompany rapidly adopted AI ecosystems and the sensitive operational data they process.

Cyera, which discovered the vulnerability and dubbed it Bleeding Llama, traced it to insecure handling of GGUF model files: the Ollama server implicitly trusts tensor dimension values embedded in uploaded models without performing adequate boundary validation. By crafting GGUF files with manipulated dimensions, an attacker can steer memory access operations during model processing, forcing the application to read data outside its intended buffers and to embed fragments of sensitive runtime information in the model artifacts it generates.

The underlying problem lies in the GPT-Generated Unified Format (GGUF), which is widely used to package and distribute large language models for efficient local execution. Like PyTorch's .pt and .pth files, safetensors, and ONNX models, GGUF lets developers store and run open-source models directly on local machines without external resources.

The vulnerability arises from the way Ollama processes these files during model creation, specifically through its use of Go's unsafe package in a function named WriteTo(). Because the implementation relies on low-level memory operations that bypass the language's standard safety protections, maliciously supplied tensor metadata can trigger out-of-bounds reads of the heap.
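The class of bug described here is easiest to see in miniature. The sketch below is a hypothetical, deliberately simplified parser, not Ollama's actual code: it reads a declared element count from a file header and shows the boundary validation that, per the report, the vulnerable code path skipped. (Python itself cannot read out of bounds; in a memory-unsafe implementation the missing check is what lets the read stray into adjacent process memory.)

```python
import struct

def read_tensor(blob: bytes) -> bytes:
    """Parse a toy record: an 8-byte little-endian byte count, then payload.

    A naive parser trusts `count` as written in the file; with unsafe
    pointer arithmetic, an oversized count turns into a read past the
    buffer and into whatever the process holds nearby.
    """
    if len(blob) < 8:
        raise ValueError("truncated header")
    (count,) = struct.unpack_from("<Q", blob, 0)
    payload = blob[8:]
    # The boundary check: the declared size must fit inside the
    # bytes actually supplied, never be taken on faith.
    if count > len(payload):
        raise ValueError(
            f"declared {count} bytes but only {len(payload)} present"
        )
    return payload[:count]
```

A well-formed record parses normally, while a file declaring a huge tensor is rejected instead of driving an out-of-bounds access.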

In a typical attack scenario, an adversary crafts a GGUF file with intentionally oversized tensor shape values and submits it to an exposed Ollama instance via the /api/create endpoint. The manipulated dimensions force the application to access memory outside its allocated boundaries during parsing and model generation, unintentionally disclosing sensitive information held in the Ollama process space.

According to the researchers, the exposed memory may contain environment variables, authentication tokens, API credentials, system prompts, and portions of concurrent user interactions processed by the same instance. Unlike conventional exploitation techniques, CVE-2026-7482 acts as a silent disclosure mechanism: data leaks without crashes, visible failures, or immediate forensic indicators.

For internet-accessible deployments, the attack chain is relatively straightforward, which significantly lowers the bar for remote exploitation. An attacker uploads a malicious GGUF model through the unauthenticated /api/create endpoint; the manipulated tensor dimensions then coerce Ollama into harvesting unintended memory regions during parsing and artifact generation.

The resulting artifact, now containing sensitive process data, can be exported through the unauthenticated /api/push endpoint, enabling covert exfiltration of the stolen information. Security researchers note that many Ollama instances remain directly exposed to the internet without adequate access restrictions, so the vulnerability poses a particularly serious risk to enterprises and developers who assume that self-hosted deployments provide stronger data isolation.
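Given how many instances answer unauthenticated requests, administrators can quickly audit whether an Ollama endpoint they own is reachable. The sketch below is illustrative rather than an official tool; it assumes Ollama's default port of 11434 and its /api/version endpoint, and should only be pointed at hosts you are authorized to test.

```python
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default API port

def parse_version(body: bytes):
    """Extract the version field from an /api/version JSON response, if any."""
    try:
        return json.loads(body).get("version")
    except (ValueError, AttributeError):
        return None

def check_exposure(host: str, timeout: float = 3.0):
    """Return the Ollama version string if `host` answers unauthenticated
    requests on the default port, else None. For auditing your own hosts."""
    url = f"http://{host}:{OLLAMA_PORT}/api/version"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return parse_version(resp.read())
    except OSError:
        return None
```

A non-None result means the API is reachable without authentication, which for an internet-facing address is exactly the exposure the researchers warn about.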

Analysts warn that the "Bleeding Llama" vulnerability significantly raises the risk profile of self-hosted artificial intelligence infrastructure, since unauthenticated attackers gain direct access to the active memory space of the Ollama process without prior access or user involvement.

Combined with the platform's widespread enterprise and developer adoption, the simplicity of exploitation turns the issue from a single software defect into a large-scale exposure concern for organizations whose sensitive workloads depend on locally deployed language models. Unlike conventional vulnerabilities that cause service disruption, memory disclosure flaws of this kind can silently compromise valuable operational and proprietary data for extended periods.

The research indicates that attackers could extract confidential model weights, enabling intellectual property theft or reconstruction of internally customized AI systems, and could also harvest sensitive prompts, business data, and user inputs processed by active models.

Exposed memory may also reveal infrastructure details, authentication tokens, API credentials, and runtime configuration information that could facilitate further network compromise. Beyond the immediate technical risks, such incidents are likely to harm organizations that are integrating AI systems into critical operations, especially where privacy and local data control are central to the deployment.

Security teams across the industry have been actively tracking the issue, even though the initial absence of an official CVE identifier complicated the disclosure process. Defenders recommend that organizations prioritize rapid mitigation: upgrade to patched Ollama releases as soon as they are available, limit public network exposure, implement strict firewall and access control policies, and run the service under least privilege to limit what a compromise can reach.

Security professionals further recommend continuous monitoring for network anomalies, regular infrastructure audits for misconfigurations, and, in highly sensitive environments, deployment within isolated or segmented networks to reduce the attack surface of internet-accessible artificial intelligence systems.

Compounding the "Bleeding Llama" disclosure, Striga researchers have identified two separate vulnerabilities that can be chained into persistent code execution in the Windows implementation of Ollama. The Windows desktop client is launched automatically at login via the Windows Startup folder and listens locally on 127.0.0.1:11434.

The client periodically checks the /api/update endpoint for updates and executes any pending installers the next time the application starts. The chain combines a missing signature verification flaw, CVE-2026-42288, with a path traversal vulnerability, CVE-2026-42249, each assigned a CVSS score of 7.7.

According to the researchers, installer signatures are not validated before execution, and staging paths are constructed directly from HTTP response headers without proper sanitization, allowing malicious files to be written to attacker-controlled locations. In scenarios where an adversary can manipulate update responses, for example by redirecting the OLLAMA_UPDATE_URL configuration to a server they control while automatic updates remain enabled by default, the flaws allow arbitrary executables to be silently deployed and executed at system login.

The signature verification issue alone allows temporary code execution from the staging directory; combined with the path traversal weakness, attackers can achieve persistence by writing payloads outside the expected update path, where subsequent legitimate updates will not overwrite them.
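The header-derived staging path is a classic traversal pattern. As a generic illustration, not the Ollama updater's actual code, any filename taken from an untrusted HTTP response should be reduced to a single path component and confined to the staging directory before anything is written:

```python
import os

def safe_staging_path(staging_dir: str, header_name: str) -> str:
    """Resolve an attacker-influenced filename inside staging_dir,
    rejecting values that would escape it (e.g. '..\\..\\Startup\\evil.exe')."""
    # Keep only the final path component; discard any directory parts,
    # normalizing Windows-style separators first.
    name = os.path.basename(header_name.replace("\\", "/"))
    if not name or name in (".", ".."):
        raise ValueError("empty or invalid filename")
    candidate = os.path.realpath(os.path.join(staging_dir, name))
    root = os.path.realpath(staging_dir)
    # Defense in depth: the resolved path must stay under the staging root.
    if os.path.commonpath([candidate, root]) != root:
        raise ValueError("path escapes staging directory")
    return candidate
```

With a check like this, a header value such as `..\..\Startup\evil.exe` collapses to `evil.exe` inside the staging directory instead of landing in an autostart location.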

Ollama for Windows versions 0.12.10 through 0.17.5 are affected. Until patches are available, users are advised to disable automatic updates and remove Ollama shortcuts from the Windows Startup directory.

The Ollama vulnerabilities point to a broader security challenge emerging across the rapidly evolving artificial intelligence ecosystem, where convenience-driven deployment models are colliding with enterprise-grade security expectations.

As organizations adopt self-hosted large language model infrastructure to retain greater control over sensitive data and inference workloads, researchers warn that insufficient hardening, exposed interfaces, and insecure update mechanisms can turn locally deployed AI environments into high-value attack targets.

Memory disclosure flaws, unauthenticated attack paths, and weaknesses in update workflows make AI infrastructure increasingly attractive to malicious actors, both opportunistic and sophisticated, seeking proprietary models, credentials, and operational intelligence.

Several security experts maintain that artificial intelligence platforms can no longer be treated as experimental development tools outside traditional security governance; they must be brought into the same rigorous vulnerability management, network segmentation, monitoring, and software lifecycle practices applied to critical enterprise systems.

Purple Team Myth Exposed: Why It's Just Red vs Blue in 2026

 

Many organizations tout their "purple teams" as the pinnacle of cybersecurity collaboration, blending offensive red team tactics with defensive blue team strategies. However, a critical issue persists: these teams often remain siloed, functioning more like red and blue in disguise rather than a true integrated purple force. This misnomer stems from superficial exercises where attackers simulate breaches while defenders watch passively, failing to foster real-time learning or adaptive defenses. 

The problem intensifies in 2026's threat landscape, where exploit windows have shrunk dramatically to just 10 hours on average, demanding rapid response capabilities. Traditional purple teaming, limited to periodic workshops, cannot keep pace with agile adversaries exploiting zero-days and supply chain vulnerabilities. Without genuine fusion, red teams uncover flaws that blue teams log but rarely operationalize, leading to repeated failures during live incidents. This disconnect leaves enterprises exposed, as detections remain unrefined and defenses static. 

At its core, authentic purple teaming requires shared goals, continuous feedback loops, and joint ownership of outcomes, not just shared meeting rooms. Many setups falter here, with red teams prioritizing stealthy attacks over teachable moments and blue teams focusing on alerts without contextual adversary emulation. The result is a performative exercise that boosts resumes but not resilience, ignoring metrics like mean-time-to-respond or coverage of MITRE ATT&CK frameworks. 

To evolve, organizations must shift to autonomous, continuous purple teaming powered by AI agents that simulate attacks, investigate alerts, and map to real-world tactics. This approach validates detections in real-time, bridges the red-blue gap, and scales beyond human bandwidth. Forward-thinking teams are adopting adversarial exposure validation, ensuring defenses evolve proactively rather than reactively. Ultimately, ditching the purple label for hollow collaborations unlocks true synergy, fortifying organizations against 2026's relentless threats. By measuring success through integrated KPIs and embracing automation, security programs can transform from fragmented efforts into unified powerhouses.

Apricorn Launches 32TB Encrypted Drive to Strengthen Offline Data Security Against Cyber Threats

 

Security feels stronger when data is scrambled, yet that strength vanishes if login steps or secret codes fall into the wrong hands. Instead of relying on system files tucked inside computers - where sneaky programs like spyware or digital snoopers lurk - real protection means keeping those pieces far away from risk. Enter a fresh take from Apricorn: their updated Aegis Padlock DT FIPS line now includes a 32TB model built to lock out the host machine completely. 

This shift sidesteps common traps by handling safeguards directly on the drive itself. Authentication happens right on the device, using keys embedded into the drive's own interface. Rather than typing codes through the host machine, individuals enter their access number straight into the unit. Because of this setup, login details do not pass through the computer’s software layer, lowering risks tied to infected endpoints. 

According to Apricorn, cryptographic operations are handled entirely within the hardware by its custom AegisWare firmware, keeping private information separate from vulnerable environments. Isolated encrypted storage remains key to strong cyber defenses, says Apricorn's Kurt Markley, and the device fits into wider efforts to secure data in offline, disconnected environments.

Access control moves directly onto the hardware instead of relying on the host system, a design that avoids the weaknesses threats often exploit in software-driven methods. Every file saved to the Aegis Padlock DT FIPS is encrypted on the fly, and both data and access codes stay locked down at rest through strong encoding. Firmware tampering is blocked as well: Apricorn locked the firmware so unauthorized updates can't sneak in.

That wall keeps out threats like BadUSB, which twists ordinary USB gear into tools for system breaches. Priced close to $2,000, the 32TB model enters alongside lower-capacity encrypted drives. With built-in 256-bit AES XTS encryption, it operates directly through hardware protection. Verified under FIPS 140-2 Level 2 by NIST, its design meets strict governmental requirements. Compatibility spans across Windows, Linux, macOS, Android, and ChromeOS - no extra software needed. Despite higher cost, access remains smooth on multiple platforms out of the box. 

Despite limitations in certain setups, the device works reliably where standard encryption methods fail - think medical scanners, factory machines, isolated storage units, or built-in controllers. Transfer rates reach 5 gigabits per second thanks to a USB 3.2 Gen 1 connection. Inside, vital parts are shielded by a dense epoxy layer, resisting drops, impacts, and deliberate interference. Built tough, it handles rough conditions without compromising security. 

Even with strong built-in protections, the device cannot block all digital threats. Though separating encryption and login checks from the host machine lowers infection chances, firms have to protect where the drive is kept. Should someone get hold of the unit physically, how it's managed day-to-day matters as much as its coded defenses. Firms relying on this tool must enforce clear rules for where it's stored, who can reach it, and which verified machines link to it. 

Security hardware gains traction amid rising digital risks, driven by frequent attacks on weak software defenses and leaked login data. A surge in complex breaches pushes companies to adopt built-in protection methods instead of relying solely on traditional programs. This move reflects deeper changes across sectors aiming to reduce exposure through physical safeguards. Growing reliance on embedded tools marks a departure from older models dependent on patch-prone applications.

North Korean Hackers Hack US Crypto Executives in Just Five Minutes

 



Cybersecurity experts at Arctic Wolf have disclosed details of an advanced campaign targeting North American Web3 and cryptocurrency organizations. The campaign was launched by BlueNoroff, a financially motivated state-sponsored group associated with the infamous North Korean Lazarus Group, with the aim of establishing persistent access on victims' devices.

The gang achieves this by tricking victims into deploying malware on their systems, and its tactics are quite advanced.

The discovery 


Arctic Wolf found an active malicious intrusion in which the threat actor used spear-phishing to send an altered Calendly calendar invite with a typo-squatted Zoom link while posing as a respectable person in the Fintech legal industry. When the victim clicked the link, they were shown a phony Zoom meeting interface that simultaneously launched a ClickFix-style clipboard injection attack and secretly exfiltrated their live camera feed to use as a lure in subsequent attacks. 
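The typosquatted Zoom link is the pivotal lure here. One rough way to flag look-alike domains, shown as a toy sketch (the allowlist and threshold are invented for illustration; real defenses use curated feeds and richer features), is to compare a candidate against known-good domains with a similarity ratio:

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of legitimate meeting/scheduling domains.
KNOWN_GOOD = {"zoom.us", "calendly.com"}

def looks_typosquatted(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not exactly match,
    a known-good domain (e.g. 'zoorn.us' imitating 'zoom.us')."""
    domain = domain.lower().strip(".")
    if domain in KNOWN_GOOD:
        return False  # exact match is legitimate
    return any(
        SequenceMatcher(None, domain, good).ratio() >= threshold
        for good in KNOWN_GOOD
    )
```

A near-miss such as `calendly.co` scores as suspicious, while an unrelated domain stays below the threshold; the exact cutoff is a tuning decision.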

After that, information was stolen from the victim's device and browsers via a multi-stage credential extraction pipeline that concentrated on cryptocurrency wallet extensions.

Now enters ClickFix

While launching the attack campaign, the hackers use real, high-profile people from the Web3 world, create fake headshots (that look real) via ChatGPT, and generate animated videos via Adobe Premiere Pro. 

After this, the hackers would make a fake Zoom video call website similar to the actual Zoom call page, and would show the video to make it all look real.  

Attack tactic 


After this, the BlueNoroff gang would invite the actual victim via Calendly six months in advance, a convincing touch, since prominent people keep busy schedules.

Once the victim opens the Zoom link, they see the familiar sight of a video call webpage, with the person on the other side moving and behaving like they are real (remember, these are fake, semi-animated videos). After eight seconds on the call, however, a notification appears saying their "SDK is deprecated" and showing an "Update Now" option.

“The technical execution chain in this campaign is both efficient and operationally disciplined. From initial URL click to full system compromise, including C2 establishment, Telegram session theft, browser credential harvesting, and persistence, the attacker completed in under five minutes,” Arctic Wolf said.

U.S. Marines Reportedly Targeted by Iranian-Linked Hackers in New Data Exposure Incident

 



Iran-linked hacking group Handala has allegedly leaked personal information belonging to thousands of U.S. Marines deployed across the Persian Gulf region, shortly after American military personnel in the Middle East began receiving threatening messages from the group.

According to posts published on Handala’s website, the hackers claim to have released the names and phone numbers of 2,379 U.S. Marines as proof of what they described as their “intelligence superiority.” The group further claimed that the exposed information represents only a small sample from a much larger collection of data allegedly tied to American military personnel stationed in the region.

Handala asserted that it possesses additional details related to military members and their families, including home addresses, movement patterns, military base affiliations, commuting routines, shopping behavior, and other personal activities. These claims have not been independently verified by U.S. authorities.

The alleged leak surfaced days after several U.S. service members reportedly received threatening WhatsApp messages warning that they were under surveillance. The messages referenced Iranian drone and missile systems and attempted to intimidate military personnel by claiming their identities and movements were being tracked. Similar threatening communications believed to be linked to Handala were also reportedly sent to civilians in Israel earlier this week, suggesting a broader psychological and cyber influence campaign connected to escalating tensions in the Middle East.

Since the regional conflict involving Iran, Israel, and the United States intensified earlier this year, Handala has repeatedly claimed responsibility for several high-profile cyber incidents. Last month, the group allegedly leaked hundreds of emails said to have originated from the personal Gmail account of Kash Patel. The hackers have also been linked to a cyberattack targeting medical technology company Stryker, an operation that reportedly resulted in data being erased from tens of thousands of employee devices globally.

However, questions remain regarding the authenticity and quality of the newly leaked Marine data. An analysis of the published sample reportedly identified multiple inconsistencies, including incomplete phone numbers and entries that appeared to contain military contract identifiers rather than personal names. Several listed numbers reportedly connected only to automated voicemail systems.

In a limited number of cases, voicemail names reportedly matched information included in the leak. One individual contacted by reporters allegedly confirmed their identity before ending the call, while others declined to comment or redirected inquiries to military public affairs officials.

U.S. Central Command referred media questions regarding the incident to the Naval Criminal Investigative Service, which had not publicly commented on the matter at the time of reporting.

The incident comes amid growing concerns over cyber-enabled psychological operations targeting military personnel and their families. Earlier this month, Navy Secretary John Phelan urged sailors to strengthen the security of their mobile devices and social media accounts amid concerns over phishing attacks and malicious online activity. In an internal warning, he noted that threat actors may attempt to manipulate military personnel into opening harmful files or clicking malicious links designed to compromise personal accounts and devices.

Handala publicly portrays itself as a pro-Palestinian hacktivist organization. However, multiple cybersecurity firms and recent assessments from the U.S. Department of Justice have alleged that the group operates as a front tied to Iran’s Ministry of Intelligence and Security (MOIS).

Cybersecurity experts note that modern cyber campaigns increasingly combine data leaks, online intimidation, and misinformation tactics to create psychological pressure rather than relying solely on technical disruption. Analysts also caution that hacker groups sometimes exaggerate the scale or sensitivity of stolen data to amplify fear and media attention.

Although U.S. authorities have previously seized domains associated with Handala, the group continues to remain active by turning to new websites and communication platforms, including Telegram, allowing it to sustain its cyber and propaganda operations online.

Investigation Uncovers Thousands of Accounts Tied to Digital Arrest Fraud Networks

 

Indian authorities have launched a major enforcement response to escalating cyber-enabled extortion and impersonation fraud. In January 2026, the government informed the Supreme Court that over 9,400 WhatsApp accounts linked to so-called "digital arrest" scams had been banned following a focused 12-week operation.

The coordinated crackdown on organized fraud networks, carried out in partnership with government agencies, reflects growing concern about groups exploiting communication platforms to impersonate law enforcement and regulatory authorities in financially motivated cybercrime campaigns.

WhatsApp's countermeasure strategy combines behavioural detection technologies with intelligence-driven monitoring. To identify and disrupt evolving fraud infrastructure, the platform has deployed logo-matching capabilities, account name logging, large language model-based scam pattern analysis, and a repeat offender database.
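The pattern-analysis layer can be pictured with a deliberately simple rule-based sketch. The phrases and scoring below are invented for illustration; production systems learn such patterns at scale and combine them with behavioural signals rather than hard-coding a list.

```python
import re

# Hypothetical phrases drawn from typical "digital arrest" scripts.
SCAM_PATTERNS = [
    r"digital arrest",
    r"warrant (has been |is )?issued",
    r"(cbi|police|customs) (officer|official)",
    r"pay.*(fine|clearance|settlement).*avoid",
]

def scam_score(message: str) -> int:
    """Count how many scam indicators a message matches."""
    text = message.lower()
    return sum(bool(re.search(p, text)) for p in SCAM_PATTERNS)

def flag_for_review(message: str, threshold: int = 2) -> bool:
    """Flag a message when it matches several indicators at once."""
    return scam_score(message) >= threshold
```

A message combining an impersonated officer, a fabricated warrant, and a payment demand trips multiple indicators, while ordinary conversation matches none.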

Attorney General Venkataramani explained the government's position before the apex court, stating that the enforcement measures and account suspensions were documented in the detailed status report that the Indian Cybercrime Coordination Centre (I4C), under the Ministry of Home Affairs, submitted on February 9. The submission was made in compliance with Supreme Court directives aimed at curbing the rapid increase of digital arrest fraud in the country.

Chief Justice Surya Kant's bench is monitoring the case, which was previously brought up suo motu by another bench, which had taken notice of escalating online financial crimes involving impersonation-based extortion schemes and fraudulent virtual detentions. 

As part of a wider institutional response, the court directed key regulatory and infrastructure agencies, including the Reserve Bank of India and the Department of Telecommunications, to develop a unified operational framework for victim compensation and cyber fraud response, signaling an emerging policy push toward inter-agency regulation of digital risk and mitigation of financial fraud. The case reportedly relates to a coordinated fraud operation in which perpetrators impersonate law enforcement officials to convince victims they are under active investigation.

The accused allegedly used digital communication platforms to instill fear, urgency, and intimidation in potential victims. The Central Bureau of Investigation has arrested a former bank official along with two suspected associates allegedly involved in running the scam infrastructure. These "digital arrest" schemes typically involve prolonged voice or video interactions that isolate targets from external verification channels.

This keeps fraudsters psychologically in control while they coerce victims into transferring funds under the guise of legal clearances, compliance verifications, or settlements. Given the involvement of a banking insider, investigators have intensified their examination of potential misuse of financial systems, probing whether privileged access to transaction mechanisms or sensitive financial data enabled illegal funds to be transferred and withdrawn rapidly.

Forensic analysis of communication logs, transactional paths, and digital evidence is being conducted as part of the ongoing investigation to map the criminal ecosystem supporting the operation as well as identify additional facilitators, beneficiaries, and individuals affected by it. According to law enforcement agencies, digital arrest frauds are on the rise across the nation, incorporating social engineering, identity appropriation, and coordinated cyber-enabled deception techniques to exploit victims.

Authorities have also reiterated that legitimate government agencies never demand payments to halt criminal or legal proceedings. Enforcement efforts intensified once investigative inputs were shared by the Indian Cyber Crime Coordination Centre, the Ministry of Electronics and Information Technology, and the Department of Telecommunications, leading to a broader intelligence-driven disruption campaign targeting the organised digital fraud ecosystem.

According to WhatsApp, government-reported accounts are not handled as isolated abuse incidents, but rather are analyzed as behavioural indicators to identify interconnected criminal infrastructures and their associated threat networks.

Nearly 3,800 accounts were originally flagged by the government, but the company's internal detection system greatly expanded the scope of the investigation, leading to the removal of thousands of additional accounts associated with suspected scam activities. 

In parallel with this preventive strategy, the platform has implemented several product-level safeguards intended to intercept fraud attempts at the early contact stage. These include alerts for suspicious first-time interactions, visibility indicators that show account age for unknown contacts, suppression of profile photographs in high-risk conversations, and expanded caller identification features.

The company expressed confidence that these interventions could help reduce the number of digital arrest frauds. However, it acknowledged that many operations are supported by cross-border criminal infrastructure, unauthorised payment channels, and external communication networks outside of its direct control, and stressed that multijurisdictional law enforcement actions would be required to prevent long-term disruptions. 

Aside from its submission to the Supreme Court, the Center also proposed the establishment of an extensive multi-agency enforcement framework designed to strengthen telecom verification systems, financial fraud response protocols, and cybercrime prevention systems nationally. Following consultation with regulatory and enforcement stakeholders, the report urged the court to direct telecommunications, electronics, and information technology authorities, as well as the Reserve Bank of India to establish standardized and time-bound safeguards against digital arrest scams. 

An important element of the proposal is the rapid implementation of Telecommunications (User Identification) Rules along with a Biometric Identity Verification System in order to establish nationwide traceability and visibility into SIM issuance processes. 

Under a circular dated August 31, 2023, the Department of Telecommunications has instructed telecom service providers to enforce stricter compliance measures, and Point of Sale vendors that activate SIM cards are required to meet enhanced verification and accountability requirements.

The report further recommends that suspicious SIM cards associated with cybercrime investigations be blocked immediately, and that subscriber activation records and point of sale data be shared with investigative agencies in real time to improve the effectiveness of emergency response operations.

During the course of monitoring the rapid expansion of digital arrest scams across India, the Supreme Court requested coordinated national action and periodic status updates from the enforcement and regulatory bodies responsible for the mitigation of cybercrime in India.

The coordinated crackdown represents one of India's most significant institutional responses to digital arrest fraud, reflecting the growing convergence of cybercrime enforcement, telecommunication regulation, financial oversight, and platform-level security interventions.

As investigative agencies continue to trace the broader criminal networks and regulators implement stricter identity verification and fraud prevention guidelines, authorities believe sustained inter-agency coordination is crucial to disrupting organized scam ecosystems across digital communication networks and financial infrastructures.

Moreover, these developments suggest that India's cybercrime response strategy has evolved: technology platforms, telecom operators, banks, and law enforcement agencies are now collaborating to counter increasingly sophisticated forms of cyber-enabled financial fraud.
