
Hackers Use Look-Alike Domain Trick to Imitate Microsoft and Capture User Credentials

 




A new phishing operation is misleading users through an extremely subtle visual technique that alters the appearance of Microsoft’s domain name. Attackers have registered the look-alike address “rnicrosoft(.)com,” which replaces the single letter m with the characters r and n positioned closely together. The small difference is enough to trick many people into believing they are interacting with the legitimate site.

This method is a form of typosquatting where criminals depend on how modern screens display text. Email clients and browsers often place r and n so closely that the pair resembles an m, leading the human eye to automatically correct the mistake. The result is a domain that appears trustworthy at first glance although it has no association with the actual company.

Experts note that phishing messages built around this tactic often copy Microsoft’s familiar presentation style. Everything from symbols to formatting is imitated to encourage users to act without closely checking the URL. The campaign takes advantage of predictable reading patterns where the brain prioritizes recognition over detail, particularly when the user is scanning quickly.

The deception becomes stronger on mobile screens. Limited display space can hide the entire web address and the address bar may shorten or disguise the domain. Criminals use this opportunity to push malicious links, deliver invoices that look genuine, or impersonate internal departments such as HR teams. Once a victim believes the message is legitimate, they are more likely to follow the link or download a harmful attachment.

The “rn” substitution is only one example of a broader pattern. Typosquatting groups also replace the letter o with the number zero, add hyphens to create official-sounding variations, or register sites with different top level domains that resemble the original brand. All of these are intended to mislead users into entering passwords or sending sensitive information.

Security specialists advise users to verify every unexpected message before interacting with it. Expanding the full sender address exposes inconsistencies that the display name may hide. Checking links by hovering over them, or using long-press previews on mobile devices, can reveal whether the destination is legitimate. Reviewing email headers, especially the Reply-To field, can also uncover signs that responses are being redirected to an external mailbox controlled by attackers.
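As a rough illustration of that last check, the short Python sketch below uses only the standard library's email module to flag a message whose Reply-To domain differs from its visible From domain; the addresses are invented for the example.

```python
# Illustrative only: the addresses below are made up for this example.
from email import message_from_string
from email.utils import parseaddr

RAW_EMAIL = """\
From: "Microsoft Support" <support@rnicrosoft.com>
Reply-To: collector@attacker-mailbox.example
Subject: Password reset required

Please reset your password using the link below.
"""

def domain_of(header_value: str) -> str:
    # parseaddr splits 'Display Name <user@host>' into (name, address)
    _, address = parseaddr(header_value)
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

msg = message_from_string(RAW_EMAIL)
from_domain = domain_of(msg.get("From", ""))
reply_domain = domain_of(msg.get("Reply-To", ""))

if reply_domain and reply_domain != from_domain:
    print(f"Warning: replies are redirected to {reply_domain}, not {from_domain}")
```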

When an email claims that a password reset or account change is required, the safest approach is to ignore the provided link. Instead, users should manually open a new browser tab and visit the official website. Organisations are encouraged to conduct repeated security awareness exercises so employees do not react instinctively to familiar-looking alerts.


Below are common variations used in these attacks (a short detection sketch follows the list):

Letter Pairing: r and n are combined to imitate m as seen in rnicrosoft(.)com.

Number Replacement: the letter o is switched with the number zero in addresses like micros0ft(.)com.

Added Hyphens: attackers introduce hyphens to create domains that appear official, such as microsoft-support(.)com.

Domain Substitution: similar names are created by altering only the top level domain, for example microsoft(.)co.
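To make the pattern concrete, here is a minimal, illustrative Python sketch that normalises a few of these confusable sequences before comparing a suspect domain against the real brand. The mapping table is a small assumption of this example, not a production-grade confusables list.

```python
# Illustrative sketch: collapse common confusable sequences, then compare.
CONFUSABLES = [
    ("rn", "m"),  # letter pairing: rn mimics m
    ("0", "o"),   # number replacement
    ("1", "l"),
    ("vv", "w"),
]

def skeleton(domain: str) -> str:
    label = domain.lower().split(".")[0]  # compare the name, not the TLD
    label = label.replace("-", "")        # strip official-sounding hyphens
    for fake, real in CONFUSABLES:
        label = label.replace(fake, real)
    return label

BRAND = "microsoft.com"
SUSPECTS = ["rnicrosoft.com", "micros0ft.com",
            "microsoft-support.com", "microsoft.co"]

for suspect in SUSPECTS:
    if suspect != BRAND and skeleton(BRAND) in skeleton(suspect):
        print(f"{suspect!r} resembles {BRAND!r} after normalisation")
```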


This phishing strategy succeeds because it relies on human perception rather than technical flaws. Recognising these small changes and adopting consistent verification habits remain the most effective protections against such attacks.



North Korean APT Collaboration Signals Escalating Cyber Espionage and Financial Cybercrime

 

Security analysts have identified a new escalation in cyber operations linked to North Korea, as two of the country’s most well-known threat actors—Kimsuky and Lazarus—have begun coordinating attacks with unprecedented precision. A recent report from Trend Micro reveals that the collaboration merges Kimsuky’s extensive espionage methods with Lazarus’s advanced financial intrusion capabilities, creating a two-part operation designed to steal intelligence, exploit vulnerabilities, and extract funds at scale. 

Rather than operating independently, the two groups are now functioning as a complementary system. Kimsuky reportedly initiates most campaigns by collecting intelligence and identifying high-value victims through sophisticated phishing schemes. One notable 2024 campaign involved fraudulent invitations to a fake “Blockchain Security Symposium.” Attached to the email was a malicious Hangul Word Processor document embedded with FPSpy malware, which stealthily installed a keylogger called KLogEXE. This allowed operators to record keystrokes, steal credentials, and map internal systems for later exploitation. 

Once reconnaissance was complete, data collected by Kimsuky was funneled to Lazarus, which then executed the second phase of attacks. Investigators found Lazarus leveraged an unpatched Windows zero-day vulnerability, identified as CVE-2024-38193, to obtain full system privileges. The group distributed infected Node.js repositories posing as legitimate open-source tools to compromise server environments. With this access, the InvisibleFerret backdoor was deployed to extract cryptocurrency wallet contents and transactional logs. Advanced evasion tooling, including the FudModule rootkit, helped the malware avoid detection by enterprise security tools. Researchers estimate that within a 48-hour window, more than $30 million in digital assets were quietly stolen.

Further digital forensic evidence reveals that both groups operated using shared command-and-control servers and identical infrastructure patterns previously observed in earlier North Korean cyberattacks, including the 2014 breach of a South Korean nuclear operator. This shared ecosystem suggests a formalized, state-aligned operational structure rather than ad-hoc collaboration.  

Threat activity has also expanded beyond finance and government entities. In early 2025, European energy providers received a series of targeted phishing attempts aimed at collecting operational power grid intelligence, signaling a concerning pivot toward critical infrastructure sectors. Experts believe this shift aligns with broader strategic motivations: bypassing sanctions, funding state programs, and positioning the regime to disrupt sensitive systems if geopolitical tensions escalate. 

Cybersecurity specialists advise organizations to strengthen resilience through aggressive patch management, multi-layered email security, secure cryptocurrency storage practices, and active monitoring for indicators of compromise such as unexpected execution of winlogon.exe or unauthorized access to blockchain-related directories. 
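For the winlogon.exe indicator specifically, a defender could start with something as simple as the following sketch, which assumes the third-party psutil package and flags any process with that name running outside the expected System32 path.

```python
# Hedged sketch (assumes the third-party psutil package): flag winlogon.exe
# running from anywhere other than the expected System32 location.
import psutil

EXPECTED_PATH = r"c:\windows\system32\winlogon.exe"

for proc in psutil.process_iter(["name", "exe"]):
    if (proc.info["name"] or "").lower() == "winlogon.exe":
        path = (proc.info["exe"] or "").lower()
        if path != EXPECTED_PATH:
            print(f"Suspicious: PID {proc.pid} runs winlogon.exe from {path!r}")
```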

Researchers warn that the coordinated activity between Lazarus and Kimsuky marks a new phase in North Korea’s cyber posture—one blending intelligence gathering with highly organized financial theft, creating a sustained and evolving global threat.

X’s New Location Feature Exposes Foreign Manipulation of US Political Accounts

 

X's new location feature has revealed that many high-engagement US political accounts, particularly pro-Trump ones, are actually operated from countries outside the United States such as Russia, Iran, and Kenya. 

This includes accounts that strongly claim to represent American interests but are based abroad, misleading followers and potentially influencing US political discourse. Similarly, some anti-Trump accounts that appeared to be run by Americans were also found to be foreign-operated. For example, a prominent anti-Trump account with 52,000 followers was based in Kenya and was deleted after exposure. 

The feature exposed widespread misinformation and deception: these accounts garner millions of interactions, often earning financial compensation through X's revenue-sharing scheme, and allow both individuals and possibly state-backed groups to exploit the platform for monetary or political gain.

Foreign influence and misinformation

The new location disclosure highlighted significant foreign manipulation of political conversations on X, which raises concerns about authenticity and trust in online discourse. Accounts that present themselves as authentic American voices may actually be linked to troll farms or nation-state actors aiming to amplify divisive narratives or to profit financially. 

This phenomenon is exacerbated by X’s pay-for-play blue tick verification system, which some experts, including Alexios Mantzarlis from Cornell Tech, criticize as a revenue scheme rather than a meaningful validation effort. Mantzarlis emphasizes that financial incentives often motivate such deceptive activities, with operators stoking America's cultural conflicts on social media.

Additional geographic findings

Beyond US politics, BBC Verify found accounts supporting Scottish independence that appear to be based in Iran despite having smaller followings. This pattern aligns with previous coordinated networks flagged for deceptive political influence. Such accounts often use AI-generated profile images and post highly similar content, generating substantial views while hiding their actual geographic origins.

While the location feature is claimed to be about 99% accurate, limitations such as VPNs, proxies, and other location-masking methods can introduce inaccuracies. The tool's launch also sparked controversy, as some users claim their locations are displayed incorrectly, eroding user trust. Experts caution that despite the added transparency, it is a developing tool, and bad actors will likely find ways to circumvent these measures.

Platform responses and transparency efforts

X’s community notes feature, allowing users to add context to viral posts, is viewed as a step toward enhanced transparency, though deception remains widespread. The platform indicates ongoing efforts to introduce more ways to authenticate content and maintain integrity in the "global town square" of social media.

However, researchers emphasize the need for continuous scrutiny given the high stakes of political misinformation and manipulation. The new feature exposes deep challenges in ensuring authenticity and trust in political discourse on X, uncovering foreign manipulation that spans the political spectrum and revealing the complexities of combating misinformation amid financial and geopolitical motives.

AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

Google’s High-Stakes AI Strategy: Chips, Investment, and Concerns of a Tech Bubble

 

At Google’s headquarters, engineers work on Google’s Tensor Processing Unit, or TPU—custom silicon built specifically for AI workloads. The device appears ordinary, but its role is anything but. Google expects these chips to eventually power nearly every AI action across its platforms, making them integral to the company’s long-term technological dominance. 

Pichai has repeatedly described AI as the most transformative technology ever developed, more consequential than the internet, smartphones, or cloud computing. However, the excitement is accompanied by growing caution from economists and financial regulators. Institutions such as the Bank of England have signaled concern that the rapid rise in AI-related company valuations could lead to an abrupt correction. Even prominent industry leaders, including OpenAI CEO Sam Altman, have acknowledged that portions of the AI sector may already display speculative behavior. 

Despite those warnings, Google continues expanding its AI investment at record speed. The company now spends over $90 billion annually on AI infrastructure, tripling its investment from only a few years earlier. The strategy aligns with a larger trend: a small group of technology companies—including Microsoft, Meta, Nvidia, Apple, and Tesla—now represents roughly one-third of the total value of the U.S. S&P 500 market index. Analysts note that such concentration of financial power exceeds levels seen during the dot-com era. 

Within the secured TPU lab, the environment is loud, dominated by cooling units required to manage the extreme heat generated when chips process AI models. The TPU differs from traditional CPUs and GPUs because it is built specifically for machine learning applications, giving Google tighter efficiency and speed advantages while reducing reliance on external chip suppliers. The competition for advanced chips has intensified to the point where Silicon Valley executives openly negotiate and lobby for supply. 

Outside Google, several AI companies have seen share value fluctuations, with investors expressing caution about long-term financial sustainability. However, product development continues rapidly. Google’s recently launched Gemini 3.0 model positions the company to directly challenge OpenAI’s widely adopted ChatGPT.  

Beyond financial pressures, the AI sector must also confront resource challenges. Analysts estimate that global data centers could consume energy on the scale of an industrialized nation by 2030. Still, companies pursue ever-larger AI systems, motivated by the possibility of reaching artificial general intelligence—a milestone where machines match or exceed human reasoning ability. 

Whether the current acceleration becomes a long-term technological revolution or a temporary bubble remains unresolved. But the race to lead AI is already reshaping global markets, investment patterns, and the future of computing.

Massive Leak Exposes 1.3 Billion Passwords and 2 Billion Emails — Check If Your Credentials Are at Risk

 

If you haven’t recently checked whether your login details are floating around online, now is the time. A staggering 1.3 billion unique passwords and 2 billion unique email addresses have surfaced publicly — and not due to a fresh corporate breach.

Instead, this massive cache was uncovered after threat-intelligence firm Synthient combed through both the open web and the dark web for leaked credentials. You may recognize the company, as they previously discovered 183 million compromised email accounts.

Much of this enormous collection is made up of credential-stuffing lists, which bundle together login details stolen from various older breaches. Cybercriminals typically buy and trade these lists to attempt unauthorized logins across multiple platforms.

This time, Synthient pulled together all 2 billion emails and 1.3 billion passwords, and with help from Troy Hunt and Have I Been Pwned (HIBP), the entire dataset can now be searched so users can determine if their personal information is exposed.

The compilation was created by Synthient founder Benjamin Brundage, who spent months gathering leaked credentials from countless sources across hacker forums and malware dumps. The dataset includes both older breach data and newly stolen information harvested through info-stealing malware, which quietly extracts passwords from infected devices.

According to Troy Hunt, Brundage provided the raw data while Hunt independently verified its authenticity.

To test its validity, Hunt used one of his old email addresses — one he already knew had appeared in past credential lists. As expected, that address and several associated passwords were included in the dataset.

After that, Hunt contacted a group of HIBP subscribers for verification. By choosing some users whose data had never appeared in a breach and others with previously exposed data, he confirmed that the new dataset wasn’t just recycled information — fresh, previously unseen credentials were indeed present.

HIBP has since integrated the exposed passwords into its Pwned Passwords service. Importantly, this database never links email addresses to passwords, maintaining privacy while still allowing users to check if their passwords are compromised.

To see if any of your current passwords have been leaked, visit the Pwned Passwords page and enter them. Your full password is never sent to the server: it is hashed locally in your browser, and only a short prefix of that hash is transmitted under a k-anonymity scheme, so the service can answer without ever learning the password itself.
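Under the hood this relies on HIBP's documented k-anonymity range API, which the following minimal Python sketch exercises directly. It assumes the third-party requests package; note that only the five-character hash prefix ever leaves your machine.

```python
# Minimal sketch of the documented Pwned Passwords k-anonymity API.
import hashlib
import requests

def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    # Each response line looks like "SUFFIX:COUNT"; matching happens locally.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))  # prints a large, non-zero breach count
```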

If any password you use appears in the results, change it immediately. A password manager can generate strong replacements, or you can use the free password generators offered by tools like Bitwarden, LastPass, and Proton Pass.

The single most important cybersecurity rule remains the same: never reuse passwords. When criminals obtain one set of login credentials, they try them across other platforms — an attack method known as credential stuffing. Because so many people still repeat passwords, these attacks remain highly successful.

Make sure every account you own uses a strong, complex, and unique password. Password managers and built-in password generators are the easiest way to handle this.

Even the best password may not protect you if it’s stolen through a breach or malware. That’s why Two-Factor Authentication (2FA) is crucial. With a second verification step — such as an authenticator app or security key — criminals won’t be able to access your account even if they know the password.

You should also safeguard your devices against malware using reputable antivirus tools on Windows, Mac, and Android. Info-stealing malware, often spread through phishing attacks, remains one of the most common ways passwords are siphoned directly from user devices.

If you’re interested in going beyond passwords altogether, consider switching to passkeys. These use cryptographic key pairs rather than passwords, making them unguessable, non-reusable, and resistant to phishing attempts.
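The core idea is easy to demonstrate. The hedged sketch below, using the third-party cryptography package, shows the challenge-response pattern passkeys are built on; real passkeys follow the WebAuthn protocol, which adds origin binding and attestation on top of this.

```python
# Conceptual sketch of the key-pair idea behind passkeys.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment: the private key never leaves the user's device.
device_key = Ed25519PrivateKey.generate()
server_public_key = device_key.public_key()  # all the website stores

# Login: the server issues a fresh random challenge; the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# Verification raises InvalidSignature if the response was forged.
server_public_key.verify(signature, challenge)
print("Challenge verified; no reusable secret was ever transmitted.")
```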

Think of your password as the lock on your home’s front door: the stronger it is, the harder it is for intruders to break in. But even with strong habits, your information can still be exposed through breaches outside your control — one reason many experts, including Hunt, see passkeys as the future.

While it’s easy to panic after reading about massive leaks like this, staying consistent with good digital hygiene and regularly checking your exposure will keep you one step ahead of cybercriminals.

Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness

 



Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.

Cloudflare Outage Traced to Internal File Error After Initial Fears of Massive DDoS Attack

Cloudflare experienced a major disruption yesterday that knocked numerous websites and online services offline. At first, the company suspected it was under a massive “hyper-scale” DDoS attack.

“I worry this is the big botnet flexing,” Cloudflare co-founder and CEO Matthew Prince wrote in an internal chat, referring to concerns that the Aisuru botnet might be responsible. However, the team later confirmed that the issue originated from within Cloudflare’s own infrastructure: a critical configuration file unexpectedly grew in size and spread across the network.

This oversized file caused failures in software responsible for reading the data used by Cloudflare’s bot management system, which relies on machine learning to detect harmful traffic. As a result, Cloudflare’s core CDN, security tools, and other services were impacted.

“After we initially wrongly suspected the symptoms we were seeing were caused by a hyper-scale DDoS attack, we correctly identified the core issue and were able to stop the propagation of the larger-than-expected feature file and replace it with an earlier version of the file,” Prince explained in a post-mortem.

According to Prince, the issue began when changes to database permissions caused the system to generate duplicate entries inside a “feature file” used by the company’s bot detection model. The file then doubled in size and automatically replicated across Cloudflare’s global network.

Machines that route traffic through Cloudflare read this file to keep the bot management system updated. But the software had a strict size limit for this configuration file, and the bloated version exceeded that threshold, causing widespread failures. Once the old version was restored, traffic began returning to normal — though it took another 2.5 hours to stabilize the network after the sudden surge in requests.

Prince apologized for the downtime, noting the heavy dependence many online platforms have on Cloudflare. “On behalf of the entire team at Cloudflare, I would like to apologize for the pain we caused the Internet today,” he wrote, adding that outages are especially serious due to “Cloudflare’s importance in the Internet ecosystem.”

Cloudflare’s bot management system assigns bot scores using machine learning, helping customers filter legitimate traffic from malicious requests. The configuration file powering this system is updated every five minutes to adapt quickly to changing bot behaviors.

The faulty file was generated by a query on a ClickHouse database cluster. After new permissions were added, the query began returning additional metadata—duplicating columns and producing more rows than expected. Because the system caps features at 200, the oversized file triggered a panic state once deployed across Cloudflare’s servers.
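The failure mode is easy to reproduce in miniature. The following Python sketch is purely illustrative (it is not Cloudflare's code) and shows how a loader with a hard feature cap falls over when a duplicated file crosses the limit instead of falling back to the last good version.

```python
# Purely illustrative, not Cloudflare's code: a loader with a hard cap
# that fails outright instead of reverting to the last known-good file.
FEATURE_LIMIT = 200

def load_features(lines: list[str]) -> list[str]:
    if len(lines) > FEATURE_LIMIT:
        raise RuntimeError(
            f"{len(lines)} features exceeds the cap of {FEATURE_LIMIT}")
    return lines

good_file = [f"feature_{i}" for i in range(150)]
bad_file = good_file * 2              # duplicated rows double the file

load_features(good_file)              # fine: 150 <= 200
load_features(bad_file)               # raises: 300 > 200, mirroring the outage
```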

The result was a dramatic surge in 5xx server errors. The pattern appeared irregular at first because only some database nodes were generating the bad file. Every five minutes, the system could push either a correct or incorrect version depending on which node handled the query, creating cyclical failures that initially resembled a distributed attack.

Eventually, all ClickHouse nodes began producing the faulty file consistently. Cloudflare resolved the issue by stopping the distribution of the corrupted file, manually injecting a stable version, and restarting its core proxy services. The network returned to normal later that day.

Prince called this Cloudflare’s most significant outage since 2019. To prevent similar incidents, the company plans to strengthen safeguards around internal configuration files, introduce more global kill switches, prevent system overloads caused by error logs, and review failure points across core components.

While Prince emphasized that no system can be guaranteed immune to outages, he noted that past failures have led Cloudflare to build more resilient systems each time.

Genesis Mission Launches as US Builds Closed-Loop AI System Linking National Laboratories

 

The United States has announced a major federal scientific initiative known as the Genesis Mission, framed by the administration as a transformational leap forward in how national research will be conducted. Revealed on November 24, 2025, the mission is described by the White House as the most ambitious federal science effort since the Manhattan Project. The accompanying executive order tasks the Department of Energy with creating an interconnected “closed-loop AI experimentation platform” that will join the nation’s supercomputers, 17 national laboratories, and decades of research datasets into one integrated system. 

Federal statements position the initiative as a way to speed scientific breakthroughs in areas such as quantum engineering, fusion, advanced semiconductors, biotechnology, and critical materials. DOE has called the system “the most complex scientific instrument ever built,” describing it as a mechanism designed to double research productivity by linking experiment automation, data processing, and AI models into a single continuous pipeline. The executive order requires DOE to progress rapidly, outlining milestones across the next nine months that include cataloging datasets, mapping computing capacity, and demonstrating early functionality for at least one scientific challenge. 

The Genesis Mission will not operate solely as a federal project. DOE’s launch materials confirm that the platform is being developed alongside a broad coalition of private, academic, nonprofit, cloud, and industrial partners. The roster includes major technology companies such as Microsoft, Google, OpenAI for Government, NVIDIA, AWS, Anthropic, Dell Technologies, IBM, and HPE, alongside aerospace companies, semiconductor firms, and energy providers. Their involvement signals that Genesis is designed not only to modernize public research, but also to serve as part of a broader industrial and national capability. 

However, key details remain unclear. The administration has not provided a cost estimate, funding breakdown, or explanation of how platform access will be structured. Major news organizations have already noted that the order contains no explicit budget allocation, meaning future appropriations or resource repurposing will determine implementation. This absence has sparked debate across the AI research community, particularly among smaller labs and industry observers who worry that the platform could indirectly benefit large frontier-model developers facing high computational costs. 

The order also lays the groundwork for standardized intellectual-property agreements, data governance rules, commercialization pathways, and security requirements—signaling a tightly controlled environment rather than an open-access scientific commons. Certain community reactions highlight how the initiative could reshape debates around open-source AI, public research access, and the balance of federal and private influence in high-performance computing. While its long-term shape is not yet clear, the Genesis Mission marks a pivotal shift in how the United States intends to organize, govern, and accelerate scientific advancement using artificial intelligence and national infrastructure.

UK’s Proposed Ban on Ransomware Payments Sparks Debate as Attacks Surge in 2025

 

Ransomware incidents continue to escalate, reigniting discussions around whether organizations should ever pay attackers. Cybercriminals are increasingly leveraging ransomware to extort significant sums from companies desperate to protect their internal and customer data.

Recent research revealed a 126% jump in ransomware activity in the first quarter of 2025, compared to the previous quarter — a spike that has prompted urgent attention.

In reaction to this rise, the UK government has proposed banning ransomware payments, a move intended to curb organizations from transferring large sums to cybercriminals in hopes of restoring their data or avoiding public scrutiny. Under the current proposal, the ban would initially apply to public sector bodies and Critical National Infrastructure (CNI) organizations, though there is growing interest in extending the policy across all UK businesses.

If this wider ban takes effect, organizations will need to adapt to a reality where paying attackers is no longer an option. Instead, they will have to prioritize robust resilience measures, thorough incident response planning, and faster recovery capabilities.

This raises a central debate: Are ransomware payment bans the right solution? And if implemented, how can organizations protect themselves without relying on a financial “escape route”?

Many organizations have long viewed ransom payments as a convenient way to restore operations — a perceived “get out of jail free” shortcut that avoids lengthy reporting, disclosure, or regulatory scrutiny.

But the reality is stark: when dealing with criminals, there are no guarantees. Paying a ransom reinforces an already thriving network of cybercriminal operations.

In spite of this, organizations continue to pay. Recent studies indicate that 41% of organizations in 2025 admitted to paying ransom demands, although only 67% of those who paid actually regained full access to their data. These figures highlight the willingness of companies to divert large budgets to ransom fees — investments that could otherwise strengthen cyber defenses and prevent attacks altogether.

There are strong arguments on both sides of the UK proposal. A payment ban removes the burden of negotiating with threat actors who have no obligation to keep their word. It also eliminates the possibility of paying for data that attackers may never return after receiving the funds.

Another issue is the ongoing stigma around publicly acknowledging a ransomware attack. To protect their reputation, many organizations choose to quietly meet attackers’ demands — enabling criminals to operate undetected and without law enforcement involvement.

A ban would change this dynamic entirely. Without the option to pay, organizations would be forced to report incidents, helping authorities investigate and track cybercriminal activity more effectively.

The broader hope behind the proposal is that, without profit incentives, ransomware attacks will eventually fade out. While optimistic, the UK government views this approach as one of the few viable long-term strategies to reduce ransomware incidents.

However, the near-term outlook is more complex. Attacks are unlikely to stop immediately, and eliminating the option to pay could leave organizations without a practical mechanism for retrieving highly sensitive data — including customer information — in the aftermath of an attack.

If ransomware payments become illegal, organizations must proactively invest in stronger cyber resilience. Small and medium businesses, which often lack internal cybersecurity expertise, can significantly benefit from partnering with a Managed Service Provider (MSP). MSPs manage IT systems and cybersecurity operations, allowing business leaders to focus on growth and innovation. Research shows that over 80% of SMEs now rely on MSPs for cybersecurity support.

Regular security awareness training is also essential. Educating employees on identifying phishing attempts and suspicious activity helps reduce human errors that often lead to ransomware breaches.

Furthermore, a tested and well-structured incident response plan is critical. Many organizations overlook this step, but it plays a major role in containing damage during an attack.

With the UK edging closer to implementing a nationwide ransomware payment ban, organizations cannot afford to wait. Strengthening cyber resilience is the most effective path forward. This includes deploying advanced security tools, working with MSPs, and building a thorough — and regularly tested — incident response strategy.

Businesses that act early will be far better equipped to withstand attacks in a world where paying ransom is no longer an option.

Akira Ramps up Ransomware Activity With New Variant And More Aggressive Intrusion Methods

 


Akira, one of the most active ransomware operations this year, has expanded its capabilities and increased the scale of its attacks, according to new threat intelligence shared by global security agencies. The group’s operators have upgraded their ransomware toolkit, continued to target a broad range of sectors, and sharply increased the financial impact of their attacks.

Data collected from public extortion portals shows that by the end of September 2025 the group had claimed roughly 244.17 million dollars in ransom proceeds. Analysts note that this figure represents a steep rise compared to estimates released in early 2024. Current tracking data places Akira second in overall activity among hundreds of monitored ransomware groups, with more than 620 victim organisations listed this year.

The growing number of incidents has prompted an updated joint advisory from international cyber authorities. The latest report outlines newly observed techniques, warns of the group’s expanded targeting, and urges all organisations to review their defensive posture.

Researchers confirm that Akira has introduced a new ransomware strain, commonly referenced as Akira v2. This version is designed to encrypt files at higher speeds and make data recovery significantly harder. Systems affected by the new variant often show one of several extensions, which include akira, powerranges, akiranew, and aki. Victims typically find ransom instructions stored as text files in both the main system directory and user folders.
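For quick triage, defenders could sweep file shares for those extensions and note files, as in this hedged Python sketch; the share path and the exact note filename are assumptions for illustration.

```python
# Hedged triage sketch: sweep a share for the extensions attributed to
# Akira above. The share path and note filename are assumptions.
from pathlib import Path

SUSPECT_EXTENSIONS = {".akira", ".powerranges", ".akiranew", ".aki"}
NOTE_NAME = "akira_readme.txt"  # hypothetical ransom-note filename

def scan(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix.lower() in SUSPECT_EXTENSIONS:
            print(f"Possible encrypted file: {path}")
        elif path.name.lower() == NOTE_NAME:
            print(f"Possible ransom note: {path}")

scan("/srv/shares")  # hypothetical share root
```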

Investigations show that Akira actors gain entry through several familiar but effective routes. These include exploiting security gaps in edge devices and backup servers, taking advantage of authentication-bypass and scripting flaws, and using buffer overflow vulnerabilities to run malicious code. Stolen or brute-forced credentials remain a common factor, especially when multi-factor authentication is disabled.

Once inside a network, the attackers quickly establish long-term access. They generate new domain accounts, including administrative profiles, and have repeatedly created an account named itadm during intrusions. The group also uses legitimate system tools to explore networks and identify sensitive assets. This includes commands used for domain discovery and open-source frameworks designed for remote execution. In many cases, the attackers uninstall endpoint detection products, change firewall rules, and disable antivirus tools to remain unnoticed.

The group has also expanded its focus to virtual and cloud-based environments. Security teams recently observed the encryption of virtual machine disk files on Nutanix AHV, in addition to previous activity on VMware ESXi and Hyper-V platforms. In one incident, operators temporarily powered down a domain controller to copy protected virtual disk files and load them onto a new virtual machine, allowing them to access privileged credentials.

Command and control activity is often routed through encrypted tunnels, and recent intrusions show the use of tunnelling services to mask traffic. Authorities warn that data theft can occur within hours of initial access.

Security agencies stress that the most effective defence remains prompt patching of known exploited vulnerabilities, enforcing multi-factor authentication on all remote services, monitoring for unusual account creation, and ensuring that backup systems are fully secured and tested.



Germany’s Cyber Skills Shortage Leaves Companies Exposed to Record Cyberattacks

 

Germany faces a critical shortage of cybersecurity specialists amid a surge in cyberattacks that caused record damages of €202.4 billion in 2024, according to a study by Strategy&, a unit of PwC. The study found that nine out of 10 organizations surveyed reported a shortage of cybersecurity experts, a sharp increase from two-thirds in 2023. 

Key institutions such as German air traffic control, the Federal Statistical Office, and the Society for Eastern European Studies were targeted by foreign cyberattacks, highlighting the nation’s digital vulnerability. Russia and China were specifically identified as significant cyber threats.

The overall damage to German organizations from cyber-related incidents in 2024 reached €267 billion, with cyberattacks themselves accounting for about €179 billion. Other forms of damage included theft of data, IT equipment, and various acts of espionage and sabotage. Despite the growing threat, the recruitment landscape for cybersecurity roles is bleak.

Only half of the public sector's job ads for cybersecurity specialists attracted more than 10 applicants, and a decline in applications has been noted. Over two-thirds of organizations reported that applicants either partially met or failed to meet the qualifications, with notable gaps in knowledge about cybersecurity standards and data protection.

The most acute shortage exists in critical roles such as risk management, where 57% of respondents identified major gaps in positions responsible for recognizing and responding to cyber threats. Financial constraints pose another barrier to hiring, especially in the public sector, where 78% cited budget issues as a reason for not filling positions, compared to 48% in the private sector. 

Low pay contributes significantly to high staff turnover. Many experts in urgent demand in the public sector are moving to tech companies offering better salaries, exacerbating the problem. The study also revealed that only about 20% of organizations have strategically employed AI to alleviate staff shortages. Experts recommend using bonuses, allowances, outsourcing, and automation to retain talent and improve efficiency. 

Without these interventions, the study warns that bottlenecks in security-critical roles will persist, potentially crippling the ability of institutions to operate and jeopardizing Germany’s overall digital resilience. Strengthening cyber expertise through targeted incentives and international recruitment is urgently needed to counter these growing challenges. This situation poses a serious risk to the country's cybersecurity defenses and operational readiness.

Cybercriminals Speed Up Tactics as AI-Driven Attacks, Ransomware Alliances, and Rapid Exploitation Reshape Threat Landscape

 

Cybercriminals are rapidly advancing their attack methods, strengthening partnerships, and harnessing artificial intelligence to gain an edge over defenders, according to new threat intelligence. Rapid7’s latest quarterly findings paint a picture of a threat environment that is evolving at high speed, with attackers leaning on fileless ransomware, instant exploitation of vulnerabilities, and AI-enabled phishing operations.

While newly exploited vulnerabilities fell by 21% compared to the previous quarter, threat actors are increasingly turning to long-standing unpatched flaws—some over a decade old. These outdated weaknesses remain potent entry points, reflected in widespread attacks targeting Microsoft SharePoint and Cisco ASA/FTD devices via recently revealed critical bugs.

The report also notes a shrinking window between public disclosure of vulnerabilities and active exploitation, leaving organisations with less time to respond.

"The moment a vulnerability is disclosed, it becomes a bullet in the attacker's arsenal," said Christiaan Beek, Senior Director of Threat Intelligence and Analytics, Rapid7.
"Attackers are no longer waiting. Instead, they're weaponising vulnerabilities in real time and turning every disclosure into an opportunity for exploitation. Organisations must now assume that exploitation begins the moment a vulnerability is made public and act accordingly," said Beek.

The number of active ransomware groups surged from 65 to 88 this quarter. Rapid7’s analysis shows increasing consolidation among these syndicates, with groups pooling infrastructure, blending tactics, and even coordinating public messaging to increase their reach. Prominent operators such as Qilin, SafePay, and WorldLeaks adopted fileless techniques, launched extensive data-leak operations, and introduced affiliate services such as ransom negotiation assistance. Sectors including business services, healthcare, and manufacturing were among the most frequently targeted.

"Ransomware has evolved significantly beyond its early days to become a calculated strategy that destabilises industries," said Raj Samani, Chief Scientist, Rapid7.
"In addition, the groups themselves are operating like shadow corporations. They merge infrastructure, tactics, and PR strategies to project dominance and erode trust faster than ever," said Samani.

Generative AI continues to lower the barrier for cybercriminals, enabling them to automate and scale phishing and malware development. The report points to malware families such as LAMEHUG, which now have advanced adaptive features, allowing them to issue new commands on the fly and evade standard detection tools.

AI is making it easier for inexperienced attackers to craft realistic, large-volume phishing campaigns, creating new obstacles for security teams already struggling to keep pace with modern threats.

State-linked actors from Russia, China, and Iran are also evolving, shifting from straightforward espionage to intricate hybrid operations that blend intelligence collection with disruptive actions. Many of these campaigns focus on infiltrating supply chains and compromising identity systems, employing stealthy tactics to maintain long-term access and avoid detection.

Overall, Rapid7’s quarterly analysis emphasises the urgent need for organisations to modernise their security strategies to counter the speed, coordination, and technological sophistication of today’s attackers.

Apple’s Digital ID Tool Sparks Privacy Debate Despite Promised Security

 

Apple’s newly introduced Digital ID feature has quickly ignited a divide among users and cybersecurity professionals, with reactions ranging from excitement to deep skepticism. Announced earlier this week, the feature gives U.S. iPhone owners a way to present their passport directly from Apple Wallet at Transportation Security Administration checkpoints across more than 250 airports nationwide. Designed to replace the need for physical identity documents at select travel touchpoints, the rollout marks a major step in Apple’s broader effort to make digital credentials mainstream. But the move has sparked conversations about how willing society should be to entrust critical identity information to smartphones. 

On one side are supporters who welcome the convenience of leaving physical IDs at home, believing Apple’s security infrastructure offers a safer and more streamlined travel experience. On the other side are privacy advocates who fear that such technology could pave the way for increased surveillance and data misuse, especially if government agencies gain new avenues to track citizens. These concerns mirror wider debates already unfolding in regions like the United Kingdom and the European Union, where national and bloc-wide digital identity programs have faced opposition from civil liberties organizations. 

Apple states that its Digital ID system relies on advanced encryption and on-device storage to protect sensitive information from unauthorized access. Unlike cloud-based sharing models, Apple notes that passport data will remain confined to the user’s iPhone, and only the minimal information necessary for verification will be transmitted during identification checks. Authentication through Face ID or Touch ID is required to access the ID, aiming to ensure that no one else can view or alter the data. Apple has emphasized that it does not gain access to passport details and claims its design prioritizes privacy at every stage. 

Despite these assurances, cybersecurity experts and digital rights advocates are unconvinced. Jason Bassler, co-founder of The Free Thought Project, argued publicly that increasing reliance on smartphone-based identity tools could normalize a culture of compromised privacy dressed up as convenience. He warned that once the public becomes comfortable with digital credentials, resistance to broader forms of monitoring may fade. Other specialists, such as Swiss security researcher Jean-Paul Donner, note that iPhone security is not impenetrable, and both hackers and law enforcement have previously circumvented device protections. 

Major organizations like the ACLU, EFF, and CDT have also called for strict safeguards, insisting that identity systems must be designed to prevent authorities from tracking when or where identification is used. They argue that without explicit structural barriers to surveillance, the technology could be exploited in ways that undermine civil liberties. 

Whether Apple can fully guarantee the safety and independence of digital identity data remains an open question. As adoption expands and security is tested in practice, the debate over convenience versus privacy is unlikely to go away anytime soon. TechRadar is continuing to consult industry experts and will provide updates as more insights emerge.

Users Will Soon Text From External Apps Directly Inside WhatsApp

 


WhatsApp is taking a significant step towards greater digital openness across Europe by enabling seamless communication that extends beyond the borders of its own platform. 

Under the interoperability requirements of the EU’s Digital Markets Act, the company is preparing to add third-party chat support to its service within the European Union. The new feature, which is strictly opt-in, will allow WhatsApp users to communicate with people on other messaging services that choose to integrate with the WhatsApp framework. 

An initial rollout, planned in Europe for both Android and iOS devices, will cover the basics like text, photos, videos, voice notes, and files, while a later phase will include a broader range of capabilities, including cross-platform group chats. 

The new system is optional and can be controlled in the application’s settings. WhatsApp says the feature has been built to maintain its end-to-end encryption standards within its existing security protocols, so that users’ privacy is not compromised as a result of the expanded connectivity. 

A few users in the European Union have reported a new "third-party chats" section in their WhatsApp account settings, an indication of the platform’s expanding cross-platform ambitions. While the feature is still under development and has not yet been formally introduced, it offers a glimpse of how WhatsApp intends to streamline communication across services. 

Users will also have the option to exchange messages, photos, videos, voice notes, and documents with these external apps, and to keep such conversations either in their primary inbox or in a clearly identified separate section.

Some WhatsApp functions, including status posts, disappearing messages, and stickers, remain unsupported for the time being. There are also limitations to be aware of: individuals previously blocked on WhatsApp may still be able to initiate contact through another platform. 

When users receive incoming message requests from third-party platforms, they can respond immediately or review the messages at their convenience. WhatsApp’s testing phase offers a detailed preview of how the cross-platform experience will function once it is released to a broader audience. 

In parts of the European Union, WhatsApp is trialling the new "third-party chats" setting, which allows users to exchange text messages, images, videos, voice notes, and documents with compatible external services. During the beta period, BirdyChat appears to be the only connected app, but broader interoperability is expected as more platforms adopt the required technical framework.

Users can decide whether to store these conversations in their primary inbox or in separate folders, based on individual preference. Platform-specific tools such as status updates, disappearing messages, and stickers will not carry over to external exchanges, since they are only available on WhatsApp. The feature is entirely optional, so those satisfied with WhatsApp’s existing environment can leave it disabled. The company has also noted during testing that users blocked on WhatsApp may still be able to reach the people who blocked them via a third-party application. 

Although WhatsApp’s own communication channels remain encrypted end-to-end, the level of protection for messages exchanged with other platforms depends on the encryption policies adopted by those services. The company maintains that it cannot read the content of third-party chats, even when they are accessed through WhatsApp’s interface. 

After months of controlled testing, the cross-platform initiative is now moving into a broader rollout phase. In a recent announcement, the company said that WhatsApp users in the European region will shortly be able to communicate directly with people using BirdyChat and Haiket through the newly introduced third-party chat feature. 

Meta describes this advance as a key milestone in meeting the EU’s interoperability requirements under the Digital Markets Act. The new feature will enable European users to send messages, images, voice notes, videos, and files to contacts on external platforms, and as soon as partner services complete their own technical preparations, users will also be able to exchange group messages. 

A notification will appear in the Settings tab to guide users through the opt-in process as Meta rolls the feature out gradually over the coming weeks. Currently, the feature is only available on Android and iOS, leaving desktop, web, and tablet versions of the app unaffected. 

As Meta points out, these partnerships were developed over several years of work between European messaging providers and the European Commission to establish an interoperability framework that is both DMA-compliant and privacy-protective. All third-party interactions must follow encryption protocols consistent with WhatsApp’s own end-to-end protections.

Furthermore, the interface has been designed to make it easy for users to distinguish native chats from external ones. Meta previewed the system in late 2024, including a dedicated folder for third-party messages and an alert when a new external messaging service becomes available. Under the Digital Markets Act, WhatsApp is only required to support basic messaging functionality.

However, WhatsApp is going further for users who enable the function: the initial rollout will include advanced interaction features such as message reactions, threaded replies, typing indicators, and read receipts, ensuring a smoother and more familiar communication experience across services.

The company has also developed a long-term roadmap, with cross-platform group chats planned for 2025 and voice and video calling expected by 2027, once technical integrations have matured.

WhatsApp emphasizes that the wider availability of these features depends on how soon other messaging apps embrace the necessary interoperability standards, but the ultimate goal, the company says, is an intuitive, secure platform that lets users communicate seamlessly across services.

As WhatsApp moves steadily towards a more integrated messaging ecosystem, the long-term impact of this feature will likely extend beyond the convenience it provides. By opening its doors to external platforms, WhatsApp is positioning itself at the center of a unified digital communication landscape, one in which users no longer have to juggle a variety of applications to remain in touch.

The shift gives consumers greater flexibility, wider reach, and fewer barriers between services, while for developers it creates a new competitive environment based on interoperability rather than isolation. If the transition is executed well, it could redefine how millions of people around the world manage their daily communications.

Samsung Zero-Day Exploit “Landfall” Targeted Galaxy Devices Before April Patch

 

A recently disclosed zero-day vulnerability affecting several of Samsung’s flagship smartphones has raised renewed concerns around mobile device security. Researchers from Palo Alto Networks’ Unit 42 revealed that attackers had been exploiting a flaw in Samsung’s image processing library, tracked as CVE-2025-21042, for months before a security fix was released. The vulnerability, which the researchers named “Landfall,” allowed threat actors to compromise devices using weaponized image files without requiring any interaction from the victim. 

The flaw impacted premium Samsung models across the Galaxy S22, S23, and S24 generations as well as the Galaxy Z Fold 4 and Galaxy Z Flip 4. Unit 42 found that attackers could embed malicious data into DNG image files, disguising them with .jpeg extensions to appear legitimate and avoid suspicion. These files could be delivered through everyday communication channels such as WhatsApp, where users are accustomed to receiving shared photos. Because the exploit required no clicks and relied solely on the image being processed, even careful users were at risk. 
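
Because DNG is a TIFF-based container, the extension trick leaves a detectable mismatch between a file's name and its header. The sketch below is a minimal illustration rather than a real detection tool: it flags files carrying a .jpeg or .jpg extension whose first bytes are TIFF/DNG magic values instead of the JPEG start-of-image marker.

```python
import sys

# DNG files are TIFF containers; genuine JPEGs start with the JPEG
# Start-of-Image marker. A mismatch suggests a disguised file.
TIFF_MAGICS = (b"II*\x00", b"MM\x00*")  # little- and big-endian TIFF/DNG
JPEG_MAGIC = b"\xff\xd8\xff"

def disguised_dng(path: str) -> bool:
    """Flag files named .jpg/.jpeg whose header is actually TIFF/DNG."""
    if not path.lower().endswith((".jpg", ".jpeg")):
        return False
    with open(path, "rb") as f:
        header = f.read(4)
    return header.startswith(TIFF_MAGICS)

if __name__ == "__main__":
    for p in sys.argv[1:]:
        if disguised_dng(p):
            print(f"WARNING: {p} has a JPEG extension but a TIFF/DNG header")
```

A header check like this only catches the disguise itself; actually spotting a weaponized payload inside a DNG requires much deeper parsing of the image structure.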

Once installed, spyware leveraging Landfall could access sensitive data stored on the device, including photos, contacts, and location information. It was also capable of recording audio and collecting call logs, giving attackers broad surveillance capabilities. The targeting appeared focused primarily on users in the Middle East, with infections detected in countries such as Iraq, Iran, Turkey, and Morocco. Samsung was first alerted to the exploit in September 2024 and issued a patch in April 2025, closing the zero-day vulnerability across affected devices.

The seriousness of the flaw prompted the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to place CVE-2025-21042 in its Known Exploited Vulnerabilities catalog, a list reserved for security issues actively abused in attacks. Federal agencies have been instructed to ensure that any vulnerable Samsung devices under their management are updated no later than December 1st, reflecting the urgency of mitigation efforts.  

For consumers, the incident underscores the importance of maintaining strong cybersecurity habits on mobile devices. Regularly updating the operating system is one of the most effective defenses against emerging exploits, as patches often include protections for newly discovered vulnerabilities. Users are also encouraged to be cautious regarding unsolicited content, including media files sent from unknown contacts, and to avoid clicking links or downloading attachments they cannot verify. 
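
One concrete way to check whether a device already carries the fix is to read its security patch level. The sketch below assumes the adb tool is installed and USB debugging is enabled on the phone; it queries the standard ro.build.version.security_patch property and compares it against the April 2025 release, on the assumption that Samsung's fix shipped with that month's maintenance update.

```python
import subprocess

def security_patch_level() -> str:
    """Read the Android security patch level via adb (e.g. '2025-04-01')."""
    out = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.security_patch"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    level = security_patch_level()
    print(f"Security patch level: {level}")
    # ISO-formatted dates compare correctly as strings.
    if level >= "2025-04-01":
        print("Device includes the April 2025 patch level.")
    else:
        print("Device may predate the fix; update immediately.")
```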

Security experts additionally recommend using reputable mobile security tools alongside Google Play Protect to strengthen device defenses. Many modern Android antivirus apps offer supplementary safeguards such as phishing alerts, VPN access, and warnings about malicious websites. 

Zero-day attacks remain an unavoidable challenge in the smartphone landscape, as cybercriminals continually look for undiscovered flaws to exploit. But with proactive device updates and careful online behavior, users can significantly reduce their exposure to threats like Landfall and help ensure their personal data remains secure.

Quantum Error Correction Moves From Theory to Practical Breakthroughs

Quantum computing’s biggest roadblock has always been fragility: qubits lose information at the slightest disturbance, and protecting them requires linking many unstable physical qubits into a single logical qubit that can detect and repair errors. That redundancy works in principle, but the repeated checks and recovery cycles have historically imposed such heavy overhead that error correction remained mainly academic. Over the last year, however, a string of complementary advances suggests quantum error correction is transitioning from theory into engineering practice. 
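
To make the redundancy idea concrete, the sketch below simulates the classical analogue of the simplest scheme, a three-bit repetition code with majority-vote decoding. Real quantum codes such as the surface code correct errors via syndrome measurements without reading the data directly, so this is an illustration of the principle only.

```python
import random

def encode(bit: int) -> list[int]:
    """Encode one logical bit into three physical bits."""
    return [bit] * 3

def apply_noise(bits: list[int], p: float) -> list[int]:
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote: any single-bit error is corrected."""
    return int(sum(bits) >= 2)

def logical_error_rate(p: float, trials: int = 100_000) -> float:
    return sum(
        decode(apply_noise(encode(0), p)) != 0 for _ in range(trials)
    ) / trials

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        # A logical failure needs at least two flips, so the rate ~ 3p^2.
        print(f"physical p={p:.2f}  logical p={logical_error_rate(p):.4f}")
```

For small p the logical error rate, 3p² − 2p³, falls below the physical rate p, and that crossover is exactly the kind of advantage hardware teams are now demonstrating with real logical qubits.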

Algorithmic improvements are cutting correction overheads by treating errors as correlated events rather than isolated failures. Techniques that combine transversal operations with smarter decoders reduce the number of measurement-and-repair rounds needed, shortening runtimes dramatically for certain hardware families. Platforms built from neutral atoms benefit especially from these methods because their qubits can be rearranged and operated on in parallel, enabling fewer, faster correction cycles without sacrificing accuracy.

On the hardware side, researchers have started to demonstrate logical qubits that outperform the raw physical qubits that compose them. Showing a logical qubit with lower effective error rates on real devices is a milestone: it proves that fault tolerance can deliver practical gains, not just theoretical resilience. Teams have even executed scaled-down versions of canonical quantum algorithms on error-protected hardware, moving the community from “can this work?” to “how do we make it useful?” 

Software and tooling are maturing to support these hardware and algorithmic wins. Open-source toolkits now let engineers simulate error-correction strategies before hardware commits, while real-time decoders and orchestration layers bridge quantum operations with the classical compute that must act on error signals. Training materials and developer platforms are emerging to close the skills gap, helping teams build, test, and operate QEC stacks more rapidly. 

That progress does not negate the engineering challenges ahead. Error correction still multiplies resource needs and demands significant classical processing for decoding in real time. Different qubit technologies present distinct wiring, control, and scaling trade-offs, and growing system size will expose new bottlenecks. Experts caution that advances are steady rather than explosive: integrating algorithms, hardware, and orchestration remains the hard part. 

Still, the arc is unmistakable. Faster algorithms, demonstrable logical qubits, and a growing ecosystem of software and training make quantum error correction an engineering discipline now, not a distant dream. The field has shifted from proving concepts to building repeatable systems, and while fault-tolerant, cryptographically relevant quantum machines are not yet here, the path toward reliable quantum computation is clearer than it has ever been.

ClickFix: The Silent Cyber Threat Tricking Families Worldwide

 

ClickFix has emerged as one of the most pervasive and dangerous cybersecurity threats in 2025, yet remains largely unknown to the average user and even many IT professionals. This social engineering technique manipulates users into executing malicious scripts—often just a single line of code—by tricking them with fake error messages, CAPTCHA prompts, or fraudulent browser update alerts.

The attack exploits the natural human desire to fix technical problems, bypassing most endpoint protections and affecting Windows, macOS, and Linux systems. ClickFix campaigns typically begin when a victim encounters a legitimate-looking message urging them to run a script or command, often on compromised or spoofed websites.

Once executed, the script connects the victim’s device to a server controlled by attackers, allowing stealthy installation of malware such as credential stealers (e.g., Lumma Stealer, SnakeStealer), remote access trojans (RATs), ransomware, cryptominers, and even nation-state-aligned malware. The technique is highly effective because it leverages “living off the land” binaries, which are legitimate system tools, making detection difficult for security software.
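
Defenders often counter this with simple pattern matching on the commands users are coaxed into pasting. The sketch below is illustrative only: the patterns are a small, assumed sample of common ClickFix one-liner traits, and production detection relies on EDR telemetry rather than regexes.

```python
import re

# Illustrative patterns seen in ClickFix-style lures: LOLBins combined
# with encoded or remotely fetched payloads. Deliberately non-exhaustive.
SUSPICIOUS_PATTERNS = [
    r"powershell\s+.*-enc",       # encoded PowerShell payload
    r"mshta\s+https?://",         # mshta fetching a remote HTA
    r"curl\s+.*\|\s*(sh|bash)",   # pipe-to-shell install
    r"certutil\s+.*-urlcache",    # certutil abused as a downloader
]

def looks_like_clickfix(command: str) -> bool:
    """Heuristically flag a pasted command as a possible ClickFix lure."""
    return any(re.search(p, command, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

if __name__ == "__main__":
    sample = "powershell -NoProfile -enc aQBlAHgA..."
    print(looks_like_clickfix(sample))  # True
```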

ClickFix attacks have surged by over 500% in 2025, accounting for nearly 8% of all blocked attacks and ranking as the second most common attack vector after traditional phishing. Threat actors are now selling ClickFix builders to automate the creation of weaponized landing pages, further accelerating the spread of these attacks. Victims are often ordinary users, including families, who may lack the technical knowledge to distinguish legitimate error messages from malicious ones.

The real-world impact of ClickFix is extensive: it enables attackers to steal sensitive information, hijack browser sessions, install malicious extensions, and even execute ransomware attacks. Cybersecurity firms and agencies are urging users to exercise caution with prompts to run scripts and to verify the authenticity of error messages before taking any action. Proactive human risk management and user education are essential to mitigate the threat posed by ClickFix and similar social engineering tactics.

New runC Vulnerabilities Expose Docker and Kubernetes Environments to Potential Host Breakouts

 

Three newly uncovered vulnerabilities in the runC container runtime have raised significant concerns for organizations relying on Docker, Kubernetes, and other container-based systems. The flaws, identified as CVE-2025-31133, CVE-2025-52565, and CVE-2025-52881, were disclosed by SUSE engineer and Open Container Initiative board member Aleksa Sarai. Because runC serves as the core OCI reference implementation responsible for creating container processes, configuring namespaces, managing mounts, and orchestrating cgroups, weaknesses at this level have broad consequences for modern cloud and DevOps infrastructure. 

The issues stem from the way runC handles several low-level operations, which attackers could manipulate to escape the container boundary and obtain root-level write access on the underlying host system. All three vulnerabilities allow adversaries to redirect or tamper with mount operations or trigger writes to sensitive files, ultimately undoing the isolation that containers are designed to enforce. CVE-2025-31133 involves a flaw where runC attempts to “mask” system files by bind-mounting /dev/null. If an attacker replaces /dev/null with a symlink during initialization, runC can end up mounting an attacker-chosen location read-write inside the container, enabling potential writes to the /proc filesystem and allowing escape. 

CVE-2025-52565 presents a related problem involving races and symlink redirection. The bind mount intended for /dev/console can be manipulated so that runC unknowingly mounts an unintended target before full protections are in place. This again opens a window for writes to critical procfs entries, providing an attacker with a pathway out of the container. The third flaw, CVE-2025-52881, highlights how runC may be tricked into performing writes to /proc that get redirected to files controlled by the attacker. This behavior could bypass certain Linux Security Module relabel protections and turn routine runC operations into dangerous arbitrary writes, including to sensitive files such as /proc/sysrq-trigger. 
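
The first two flaws hinge on device nodes such as /dev/null and /dev/console being swapped for symlinks during container setup. As a rough illustration of that precondition, not a substitute for patching, the sketch below checks that these paths are genuine character devices rather than links.

```python
import os
import stat

def is_real_char_device(path: str) -> bool:
    """True if path is a genuine character device, not a symlink."""
    if os.path.islink(path):
        return False
    try:
        mode = os.stat(path, follow_symlinks=False).st_mode
    except FileNotFoundError:
        return False
    return stat.S_ISCHR(mode)

if __name__ == "__main__":
    for dev in ("/dev/null", "/dev/console"):
        state = "ok" if is_real_char_device(dev) else "SUSPICIOUS"
        print(f"{dev}: {state}")
```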

Two of the vulnerabilities—CVE-2025-31133 and CVE-2025-52881—affect all versions of runC, while CVE-2025-52565 impacts versions from 1.0.0-rc3 onward. Patches have been issued in runC versions 1.2.8, 1.3.3, 1.4.0-rc.3, and later. Security researchers at Sysdig noted that exploiting these flaws requires attackers to start containers with custom mount configurations, a condition that could be met via malicious Dockerfiles or harmful pre-built images. So far, there is no evidence of active exploitation, but the potential severity has prompted urgent guidance. Detection efforts should focus on monitoring suspicious symlink activity, according to Sysdig’s advisory. 
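
Since the practical fix is upgrading, a quick version check helps triage a fleet. The sketch below parses `runc --version` output and compares it against the patched releases named above; the mapping table covers only the stable 1.2 and 1.3 series, so treat it as a starting point rather than a complete audit.

```python
import re
import subprocess

# Minimum patched release per stable runC series, per the advisory.
PATCHED = {(1, 2): (1, 2, 8), (1, 3): (1, 3, 3)}

def installed_runc_version() -> tuple:
    """Parse output such as 'runc version 1.2.5' from `runc --version`."""
    out = subprocess.run(["runc", "--version"],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"runc version (\d+)\.(\d+)\.(\d+)", out)
    if not m:
        raise RuntimeError("could not parse runc version")
    return tuple(int(x) for x in m.groups())

if __name__ == "__main__":
    ver = installed_runc_version()
    fixed = PATCHED.get(ver[:2])
    if fixed is None:
        print(f"runc {ver}: series not covered here; check the advisory "
              "(e.g. 1.4.0-rc.3 or later for the 1.4 line)")
    elif ver >= fixed:
        print(f"runc {ver}: patched")
    else:
        print(f"runc {ver}: vulnerable, upgrade to {fixed} or later")
```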

The runC team has also emphasized enabling user namespaces for all containers while avoiding mappings that equate the host’s root user with the container’s root. Doing so limits the scope of accessible files because user namespace restrictions prevent host-level file access. Security teams are further encouraged to adopt rootless containers where possible to minimize the blast radius of any successful attack. Even though traditional container isolation provides significant security benefits, these findings underscore the importance of layered defenses and continuous monitoring in containerized environments, especially as threat actors increasingly look for weaknesses at the infrastructure level.
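
One way to verify that the recommended user-namespace hardening is actually in effect is to inspect the UID mapping from inside a container. The sketch below reads /proc/self/uid_map, where a mapping line beginning "0 0" means container root is host root; this is a diagnostic illustration only, and enabling remapping itself is done through the runtime's configuration (for example, Docker's userns-remap setting).

```python
def root_maps_to_host_root(uid_map_path: str = "/proc/self/uid_map") -> bool:
    """Inside a container, a '0 0 ...' line means container root == host root."""
    with open(uid_map_path) as f:
        for line in f:
            inside, outside, _count = line.split()
            if inside == "0" and outside == "0":
                return True
    return False

if __name__ == "__main__":
    if root_maps_to_host_root():
        print("Container root maps to host root: user namespaces not in effect.")
    else:
        print("User namespace remapping appears active.")
```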