
Quantum Error Correction Moves From Theory to Practical Breakthroughs

Quantum computing’s biggest roadblock has always been fragility: qubits lose information at the slightest disturbance, and protecting them requires linking many unstable physical qubits into a single logical qubit that can detect and repair errors. That redundancy works in principle, but the repeated checks and recovery cycles have historically imposed such heavy overhead that error correction remained mainly academic. Over the last year, however, a string of complementary advances suggests quantum error correction is transitioning from theory into engineering practice. 

Algorithmic improvements are cutting correction overheads by treating errors as correlated events rather than isolated failures. Techniques that combine transversal operations with smarter decoders reduce the number of measurement-and-repair rounds needed, shortening runtimes dramatically for certain hardware families. Platforms built from neutral atoms benefit especially from these methods because their qubits can be rearranged and operated on in parallel, enabling fewer, faster correction cycles without sacrificing accuracy.

On the hardware side, researchers have started to demonstrate logical qubits that outperform the raw physical qubits that compose them. Showing a logical qubit with lower effective error rates on real devices is a milestone: it proves that fault tolerance can deliver practical gains, not just theoretical resilience. Teams have even executed scaled-down versions of canonical quantum algorithms on error-protected hardware, moving the community from “can this work?” to “how do we make it useful?” 
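
To make the detect-and-repair idea concrete, the sketch below simulates the simplest possible code, a three-bit repetition code protecting against bit flips, in plain Python. It is a classical toy analogue, not any team's experiment or a specific toolkit's API, but it shows the core promise of error correction: with redundancy and a majority-vote decoder, the logical error rate falls below the physical error rate whenever individual flips are reasonably rare.

```python
import random

def trial(p: float, n: int = 3) -> tuple[bool, bool]:
    """One round: flip each of n redundant bits with probability p,
    then decode by majority vote. Returns (physical_error, logical_error)."""
    bits = [0] * n                       # encode logical 0 as n copies
    flipped = [random.random() < p for _ in bits]
    noisy = [b ^ int(f) for b, f in zip(bits, flipped)]
    physical_error = flipped[0]          # error rate of a single unprotected bit
    logical_error = sum(noisy) > n // 2  # majority vote fails only if most bits flip
    return physical_error, logical_error

def estimate(p: float, shots: int = 100_000) -> tuple[float, float]:
    phys = log = 0
    for _ in range(shots):
        pe, le = trial(p)
        phys += pe
        log += le
    return phys / shots, log / shots

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        phys, log = estimate(p)
        print(f"p={p:.2f}  physical≈{phys:.4f}  logical≈{log:.4f}")
```

Real quantum codes such as the surface code replace the majority vote with stabilizer measurements and a decoder, but the scaling intuition is the same: below a threshold error rate, adding redundancy makes the logical qubit more reliable than its parts.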

Software and tooling are maturing to support these hardware and algorithmic wins. Open-source toolkits now let engineers simulate error-correction strategies before committing designs to hardware, while real-time decoders and orchestration layers bridge quantum operations with the classical compute that must act on error signals. Training materials and developer platforms are emerging to close the skills gap, helping teams build, test, and operate QEC stacks more rapidly.

That progress does not negate the engineering challenges ahead. Error correction still multiplies resource needs and demands significant classical processing for decoding in real time. Different qubit technologies present distinct wiring, control, and scaling trade-offs, and growing system size will expose new bottlenecks. Experts caution that advances are steady rather than explosive: integrating algorithms, hardware, and orchestration remains the hard part. 

Still, the arc is unmistakable. Faster algorithms, demonstrable logical qubits, and a growing ecosystem of software and training make quantum error correction an engineering discipline now, not a distant dream. The field has shifted from proving concepts to building repeatable systems, and while fault-tolerant, cryptographically relevant quantum machines are not yet here, the path toward reliable quantum computation is clearer than it has ever been.

ClickFix: The Silent Cyber Threat Tricking Families Worldwide

 

ClickFix has emerged as one of the most pervasive and dangerous cybersecurity threats in 2025, yet remains largely unknown to the average user and even many IT professionals. This social engineering technique manipulates users into executing malicious scripts—often just a single line of code—by tricking them with fake error messages, CAPTCHA prompts, or fraudulent browser update alerts.

The attack exploits the natural human desire to fix technical problems, bypassing most endpoint protections and affecting Windows, macOS, and Linux systems. ClickFix campaigns typically begin when a victim encounters a legitimate-looking message urging them to run a script or command, often on a compromised or spoofed website.

Once executed, the script connects the victim’s device to a server controlled by attackers, allowing stealthy installation of malware such as credential stealers (e.g., Lumma Stealer, SnakeStealer), remote access trojans (RATs), ransomware, cryptominers, and even nation-state-aligned malware. The technique is highly effective because it leverages “living off the land” binaries, which are legitimate system tools, making detection difficult for security software.

ClickFix attacks have surged by over 500% in 2025, accounting for nearly 8% of all blocked attacks and ranking as the second most common attack vector after traditional phishing. Threat actors are now selling ClickFix builders to automate the creation of weaponized landing pages, further accelerating the spread of these attacks. Victims are often ordinary users, including families, who may lack the technical knowledge to distinguish legitimate error messages from malicious ones.

The real-world impact of ClickFix is extensive: it enables attackers to steal sensitive information, hijack browser sessions, install malicious extensions, and even execute ransomware attacks. Cybersecurity firms and agencies are urging users to exercise caution with prompts to run scripts and to verify the authenticity of error messages before taking any action. Proactive human risk management and user education are essential to mitigate the threat posed by ClickFix and similar social engineering tactics.
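
Because the lure always ends with the user pasting a command, even simple string heuristics can flag many ClickFix lures before they run. The Python sketch below checks a pasted command against a few illustrative patterns (encoded PowerShell, pipe-to-shell downloads, common "living off the land" downloaders); the pattern list is an example of the approach, not a complete or vendor-backed detection rule.

```python
import re

# Illustrative patterns only: common "paste and run" abuse primitives.
SUSPICIOUS_PATTERNS = [
    r"powershell[^\n]*-enc",          # encoded PowerShell payloads
    r"mshta\s+https?://",             # HTML application launched from a URL
    r"curl[^\n]*\|\s*(sh|bash)",      # pipe-to-shell downloads
    r"iwr\s+|invoke-webrequest",      # PowerShell web downloads
    r"certutil[^\n]*-urlcache",       # certutil abused as a downloader
]

def looks_like_clickfix(command: str) -> list[str]:
    """Return the patterns that match a pasted command (case-insensitive)."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, command, re.IGNORECASE)]

if __name__ == "__main__":
    pasted = "powershell -w hidden -enc SQBFAFgA..."   # example lure payload (truncated)
    hits = looks_like_clickfix(pasted)
    if hits:
        print("Warning: this command matches known paste-and-run abuse patterns:", hits)
```

A check like this could sit in a help-desk tool or awareness exercise; in production, endpoint telemetry and command-line logging remain the more reliable controls.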

New runC Vulnerabilities Expose Docker and Kubernetes Environments to Potential Host Breakouts

 

Three newly uncovered vulnerabilities in the runC container runtime have raised significant concerns for organizations relying on Docker, Kubernetes, and other container-based systems. The flaws, identified as CVE-2025-31133, CVE-2025-52565, and CVE-2025-52881, were disclosed by SUSE engineer and Open Container Initiative board member Aleksa Sarai. Because runC serves as the core OCI reference implementation responsible for creating container processes, configuring namespaces, managing mounts, and orchestrating cgroups, weaknesses at this level have broad consequences for modern cloud and DevOps infrastructure. 

The issues stem from the way runC handles several low-level operations, which attackers could manipulate to escape the container boundary and obtain root-level write access on the underlying host system. All three vulnerabilities allow adversaries to redirect or tamper with mount operations or trigger writes to sensitive files, ultimately undoing the isolation that containers are designed to enforce. CVE-2025-31133 involves a flaw where runC attempts to “mask” system files by bind-mounting /dev/null. If an attacker replaces /dev/null with a symlink during initialization, runC can end up mounting an attacker-chosen location read-write inside the container, enabling potential writes to the /proc filesystem and allowing escape. 

CVE-2025-52565 presents a related problem involving races and symlink redirection. The bind mount intended for /dev/console can be manipulated so that runC unknowingly mounts an unintended target before full protections are in place. This again opens a window for writes to critical procfs entries, providing an attacker with a pathway out of the container. The third flaw, CVE-2025-52881, highlights how runC may be tricked into performing writes to /proc that get redirected to files controlled by the attacker. This behavior could bypass certain Linux Security Module relabel protections and turn routine runC operations into dangerous arbitrary writes, including to sensitive files such as /proc/sysrq-trigger. 

Two of the vulnerabilities—CVE-2025-31133 and CVE-2025-52881—affect all versions of runC, while CVE-2025-52565 impacts versions from 1.0.0-rc3 onward. Patches have been issued in runC versions 1.2.8, 1.3.3, 1.4.0-rc.3, and later. Security researchers at Sysdig noted that exploiting these flaws requires attackers to start containers with custom mount configurations, a condition that could be met via malicious Dockerfiles or harmful pre-built images. So far, there is no evidence of active exploitation, but the potential severity has prompted urgent guidance. Detection efforts should focus on monitoring suspicious symlink activity, according to Sysdig’s advisory. 
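
As a first triage step, administrators can compare the installed runC version against the patched releases listed above. The sketch below is a minimal Python check, assuming `runc --version` prints a line of the form `runc version X.Y.Z`; it does not parse release-candidate suffixes (such as 1.4.0-rc.3), so treat it as a starting point rather than an authoritative audit.

```python
import re
import subprocess

# Patched minimums per release series, as listed in the advisory.
# Note: the 1.4 series fix landed in 1.4.0-rc.3; rc suffixes are not parsed here.
PATCHED = {(1, 2): (1, 2, 8), (1, 3): (1, 3, 3), (1, 4): (1, 4, 0)}

def installed_runc_version() -> tuple[int, int, int]:
    """Parse `runc --version`, which prints a line like 'runc version 1.2.7'."""
    out = subprocess.run(["runc", "--version"], capture_output=True, text=True).stdout
    m = re.search(r"runc version (\d+)\.(\d+)\.(\d+)", out)
    if not m:
        raise RuntimeError(f"could not parse runc version from: {out!r}")
    return tuple(int(x) for x in m.groups())

def is_patched(version: tuple[int, int, int]) -> bool:
    series = version[:2]
    minimum = PATCHED.get(series)
    if minimum is None:
        # Series newer than 1.4 are assumed to postdate the fixes; older ones are not patched.
        return series > (1, 4)
    return version >= minimum

if __name__ == "__main__":
    v = installed_runc_version()
    print(f"runc {'.'.join(map(str, v))} -> {'patched' if is_patched(v) else 'UPDATE REQUIRED'}")
```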

The runC team has also emphasized enabling user namespaces for all containers while avoiding mappings that equate the host’s root user with the container’s root. Doing so limits the scope of accessible files because user namespace restrictions prevent host-level file access. Security teams are further encouraged to adopt rootless containers where possible to minimize the blast radius of any successful attack. Even though traditional container isolation provides significant security benefits, these findings underscore the importance of layered defenses and continuous monitoring in containerized environments, especially as threat actors increasingly look for weaknesses at the infrastructure level.

U.S. Agencies Consider Restrictions on TP-Link Routers Over Security Risks

 



A coordinated review by several federal agencies in the United States has intensified scrutiny of TP-Link home routers, with officials considering whether the devices should continue to be available in the country. Recent reporting indicates that more than six departments and agencies have supported a proposal recommending restrictions because the routers may expose American data to security risks.

Public attention on the matter began in December 2024, when major U.S. outlets revealed that the Departments of Commerce, Defense and Justice had opened parallel investigations into TP-Link. The inquiries focused on whether the company’s corporate structure and overseas connections could create opportunities for foreign government influence. After those initial disclosures, little additional information surfaced until the Washington Post reported that the proposal had cleared interagency review.

Officials involved believe the potential risk comes from how TP-Link products collect and manage sensitive information, combined with the company’s operational ties to China. TP-Link strongly disputes the allegation that it is subject to any foreign authority and says its U.S. entity functions independently. The company maintains that it designs and manufactures its devices without any outside control.

TP-Link was founded in Shenzhen in 1996 and reorganized in 2024 into two entities: TP-Link Technologies and TP-Link Systems. The U.S. arm, TP-Link Systems, operates from Irvine, California, with roughly 500 domestic employees and thousands more across its global workforce. Lawmakers previously expressed concern that companies with overseas operations may be required to comply with foreign legal demands. They also cited past incidents in which compromised routers, including those from TP-Link, were used by threat actors during cyber operations targeting the United States.

The company has grown rapidly in the U.S. router market since 2019. Some reports place its share at a majority of consumer sales, although TP-Link disputes those figures and points to independent data that estimates a smaller share. One industry platform found that about 12 percent of active U.S. home routers are TP-Link devices. Previous reporting also noted that more than 300 internet providers distribute TP-Link equipment to customers.

In a separate line of inquiry, the Department of Justice is examining whether TP-Link set prices at levels intended to undercut competitors. The company denies this and says its pricing remains sustainable and profitable.

Cybersecurity researchers have found security flaws in routers from many manufacturers, not only TP-Link. Independent analysts identified firmware implants linked to state-sponsored groups, as well as widespread botnet activity involving small office and home routers. A Microsoft study reported that some TP-Link devices became part of password spray attacks when users did not change default administrator credentials. Experts emphasize that router vulnerabilities are widespread across the industry and not limited to one brand.

Consumers who use TP-Link routers can reduce risk by updating administrator passwords, applying firmware updates, enabling modern encryption such as WPA3, turning on built-in firewalls, and considering reputable VPN services. Devices that no longer receive updates should be replaced.

The Department of Commerce has not issued a final ruling. Reports suggest that ongoing U.S. diplomatic discussions with China could influence the timeline. TP-Link has said it is willing to improve transparency, strengthen cybersecurity practices and relocate certain functions if required. 

Why Oslo’s Bus Security Tests Highlight the Hidden Risks of Connected Vehicles

 

Modern transportation looks very different from what it used to be, and the question of who controls a vehicle on the road no longer has a simple answer. Decades ago, the person behind the wheel was unquestionably the one in charge. But as cars, buses, and trucks increasingly rely on constant connectivity, automated functions, and remote software management, the definition of a “driver” has become more complicated. With vehicles now vulnerable to remote interference, the risks tied to this connectivity are prompting transportation agencies to take a closer look at what’s happening under the hood. 

This concern is central to a recent initiative by Ruter, the public transport agency responsible for Oslo and the surrounding Akershus region. Ruter conducted a detailed assessment of two electric bus models—one from Dutch manufacturer VDL and another from Chinese automaker Yutong—to evaluate the cybersecurity implications of integrating modern, connected vehicles into public transit networks. The goal was straightforward but crucial: determine whether any external entity could access bus controls or manipulate onboard camera systems. 

The VDL buses showed no major concerns because they lacked the capability for remote software updates, effectively limiting the pathways through which an attacker could interfere. The Yutong buses, however, presented a more complex picture. While one identified vulnerability tied to third-party software has since been fixed, Ruter’s investigation revealed a more troubling possibility: the buses could potentially be halted or disabled by the manufacturer through remote commands. Ruter is now implementing measures to slow or filter incoming signals so that operators can differentiate between legitimate updates and suspicious activity, reducing the chance of an unnoticed hijack attempt.

Ruter’s interest in cybersecurity aligns with broader global concerns. The Associated Press noted that similar tests are being carried out by various organizations because the threat landscape continues to expand. High-profile demonstrations over the past decade have shown that connected vehicles are susceptible to remote interference. One of the most well-known examples was when WIRED journalist Andy Greenberg rode in a Jeep that hackers remotely manipulated, controlling everything from the brakes to the steering. More recent research, including reports from LiveScience, highlights attacks that can trick vehicles’ perception systems into detecting phantom obstacles. 

Remote software updates play an important role in keeping vehicles functional and reducing the need for physical recalls, but they also create new avenues for misuse. As vehicles become more digital than mechanical, transit agencies and governments must treat cybersecurity as a critical aspect of transportation safety. Oslo’s findings reinforce the reality that modern mobility is no longer just about engines and wheels—it’s about defending the invisible networks that keep those vehicles running.

USB Drives Are Handy, But Never For Your Only Backup

 

USB drives offer a convenient way to store important files thanks to their ease of use and affordability, but users must weigh significant considerations around both data preservation and security. While widely used for backup, USB drives should not be the sole means of safeguarding crucial files, as risks such as device failure, malware infection, and physical theft can compromise data integrity.

Data preservation challenges

USB drive longevity depends heavily on build quality, frequency of use, and storage conditions. Cheap flash drives carry a higher failure risk compared to rugged, high-grade SSDs, though even premium devices can malfunction unexpectedly. Relying on a single drive is risky; redundancy is the key to effective file preservation.

Users are encouraged to maintain multiple backups, ideally spanning different storage approaches—such as using several USB drives, local RAID setups, and cloud storage—for vital files. Each backup method has its trade-offs: local storage like RAID arrays provides resilience against hardware failure, while cloud storage via services such as Google Drive or Dropbox enables convenient access but introduces exposure to hacking or unauthorized access due to online vulnerabilities.

Malware and physical risks

All USB drives are susceptible to malware, especially when connected to compromised computers. Such infections can propagate, and in some cases, lead to ransomware attacks where files are held hostage. Additionally, used or secondhand USB drives pose heightened malware risks and should typically be avoided. Physical security is another concern; although USB drives are inaccessible remotely when unplugged, they are unprotected if stolen unless properly encrypted.

Encryption significantly improves USB drive security. Tools like BitLocker (Windows) and Disk Utility (macOS) enable password protection, making it more difficult for thieves or unauthorized users to access files even if they obtain the physical device. Secure physical storage—such as safes or safety deposit boxes—further limits theft risk.
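
For files that must travel on a drive that cannot be fully encrypted, file-level encryption is a complementary option. The sketch below is a minimal Python example using the third-party `cryptography` package (an assumption of this example, not a tool named above); it derives a key from a passphrase with PBKDF2 and stores the salt alongside the ciphertext so the file can later be decrypted with the same passphrase. The filenames are placeholders.

```python
import base64
import getpass
import hashlib
import os
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
    """Derive a 32-byte key with PBKDF2 and encode it the way Fernet expects."""
    raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)

def encrypt_file(src: str, dst: str, passphrase: str) -> None:
    salt = os.urandom(16)
    with open(src, "rb") as f:
        token = Fernet(key_from_passphrase(passphrase, salt)).encrypt(f.read())
    with open(dst, "wb") as f:
        f.write(salt + token)            # store the salt alongside the ciphertext

def decrypt_file(src: str, dst: str, passphrase: str) -> None:
    with open(src, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    with open(dst, "wb") as f:
        f.write(Fernet(key_from_passphrase(passphrase, salt)).decrypt(token))

if __name__ == "__main__":
    pw = getpass.getpass("Passphrase: ")
    encrypt_file("tax_return.pdf", "tax_return.pdf.enc", pw)                 # example names
    decrypt_file("tax_return.pdf.enc", "tax_return_restored.pdf", pw)
```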

Recommended backup strategy

Most users should keep at least two backups: one local (such as a USB drive) and one cloud-based. This dual approach ensures data recovery if either the cloud service is compromised or the physical drive is lost or damaged. For extremely sensitive data, robust local systems with advanced encryption are preferable. Regularly simulating data loss scenarios and confirming your ability to restore lost files provides confidence and peace of mind in your backup strategy.
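
Testing restores is easier when verification is automated. The sketch below, in plain Python with only the standard library, compares SHA-256 checksums between a source folder and its copy on a backup drive and reports files that are missing or have diverged; the paths are placeholders to point at your own documents folder and mounted USB drive.

```python
import hashlib
from pathlib import Path

def file_hashes(root: Path) -> dict[str, str]:
    """Map each file's path (relative to root) to its SHA-256 digest.
    Reads whole files into memory, which is fine for typical document sets."""
    hashes = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            hashes[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return hashes

def compare_backup(source: Path, backup: Path) -> None:
    src, dst = file_hashes(source), file_hashes(backup)
    missing = sorted(set(src) - set(dst))
    changed = sorted(f for f in set(src) & set(dst) if src[f] != dst[f])
    print(f"{len(src)} source files, {len(missing)} missing from backup, {len(changed)} differ")
    for name in missing + changed:
        print("  check:", name)

if __name__ == "__main__":
    # Example paths only; adjust to your documents folder and backup drive mount point.
    compare_backup(Path.home() / "Documents", Path("/Volumes/USB_BACKUP/Documents"))
```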

Continuous Incident Response Is Redefining Cybersecurity Strategy

 


With digital exposure now relentless and ubiquitous, continuous security monitoring has become an operational necessity rather than simply a best practice. In 2024, cyber-attacks rose by nearly 30%, with the average enterprise dealing with over 1,600 attempted intrusions a week and the financial impact of a data breach regularly running into six figures.

Even so, the real crisis extends well beyond the rising volume of threats. Cybersecurity strategies once relied on a familiar formula: detect quickly, respond promptly, recover quickly. That cadence no longer suffices against adversaries who automate reconnaissance, exploit cloud misconfigurations within minutes, and weaponize legitimate tools to move laterally far faster than human analysts can react.

Successive waves of innovation, from EDR to XDR, have widened visibility across sprawling digital estates, but they have also widened the gap between what organizations can see and what they can act on. The security operations center now faces unprecedented complexity: teams juggle dozens of tools and wade through floods of alerts that require manual validation, leaving organizations unable to act as quickly as they should.

This widening disconnect is transforming how security leaders understand risk and forcing them to face a difficult truth: visibility without speed is no longer an effective defense. The threat patterns that defined 2024 show why the shift is necessary. According to security firms, attackers are increasingly using stealthy, fileless techniques, with nearly four out of five detections now categorized as malware-free.

Ransomware activity has continued to climb steeply, rising by more than 80% year over year and striking small and midsized businesses disproportionately hard; they account for approximately 70% of all recorded incidents. Phishing campaigns have also grown more aggressive, with some vectors spiking to unprecedented levels, in a few cases exceeding 1,200%, as adversaries use artificial intelligence to bypass human judgment.

Despite these pressures, many SMBs remain structurally unprepared: the majority acknowledge that they have become preferred targets, yet three out of four continue to rely on informal or internally managed security measures. These risks are compounded by human error, which is responsible for an estimated 88% of reported cyber incidents.

The financial consequences have been staggering as well; in the past five years alone, the UK has suffered losses of more than £44 billion, spanning both immediate disruption and long-term revenue damage. As a result, the industry’s definition of continuous cybersecurity now reaches well beyond periodic audits.

It now encompasses continuous threat monitoring, proactive vulnerability and exposure management, disciplined identity governance, sustained employee awareness programs, regularly tested incident response playbooks, and ongoing compliance monitoring: a posture that emphasizes continuous evaluation rather than reactive control. Increasingly complex digital estates are creating unpredictable cyber risks, making continuous monitoring an essential part of modern defense strategies.

Continuous monitoring scans systems, networks, and cloud environments in real time to detect early signs of misconfiguration, compromise, or operational drift. Unlike periodic checks, which run on a fixed schedule and leave long windows of exposure, it never stops watching.

This approach aligns closely with NIST guidance, which urges organizations to establish an adaptive monitoring strategy capable of ingesting a variety of data streams, analyzing emerging vulnerabilities, and generating timely alerts that security teams can act on. Continuous monitoring also helps organizations uncover latent weaknesses that undermine their overall cyber posture.

Continuous monitoring reduces the frequency and severity of incidents, eases the burden on security personnel, and helps organizations meet growing regulatory demands. Even so, maintaining that level of vigilance remains a challenge, especially for small businesses that lack the resources, expertise, and tooling to operate around the clock.

Many organizations therefore turn to external service providers to make continuous monitoring scalable and economically viable. Effective programs typically include four key components: a monitoring engine, analytics that identify anomalies and trends at scale, a dashboard that surfaces key risk indicators in real time, and an alerting system that routes emerging issues to the right staff quickly.
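
The sketch below shows, in plain Python, how those four pieces might fit together at the smallest possible scale: a collector standing in for the monitoring engine, a rolling-baseline anomaly check for the analytics, a printed summary standing in for the dashboard, and an alert hook. It is a minimal illustration of the pattern, not any vendor's product, and the simulated metric and thresholds are assumptions for the example.

```python
import random
import statistics
import time
from collections import deque

window = deque(maxlen=60)        # rolling baseline of a single metric
SIGMA_THRESHOLD = 3.0            # how far outside the baseline counts as anomalous

def collect_metric() -> float:
    """Monitoring engine stand-in: replace with real telemetry (logs, agents, cloud APIs)."""
    return random.gauss(20, 4)   # simulated failed-login count per interval

def is_anomalous(value: float) -> bool:
    """Analytics: flag values far outside the recent baseline."""
    if len(window) < 10:
        return False
    mean, stdev = statistics.fmean(window), statistics.pstdev(window) or 1.0
    return abs(value - mean) > SIGMA_THRESHOLD * stdev

def alert(value: float) -> None:
    """Alerting: in practice this would page on-call staff or open a ticket."""
    print(f"ALERT: value {value:.1f} deviates sharply from baseline")

if __name__ == "__main__":
    for _ in range(100):                  # continuous loop, shortened for the example
        value = collect_metric()
        if is_anomalous(value):
            alert(value)
        window.append(value)
        print(f"dashboard: latest={value:.1f}, baseline size={len(window)}")  # dashboard stand-in
        time.sleep(0.05)                  # poll interval would be minutes in a real deployment
```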

Automation lets security teams process large volumes of telemetry quickly and accurately, replacing outdated or incomplete snapshots with live visibility into organizational risk and enabling them to respond effectively in a highly dynamic threat environment.

Continuous monitoring takes many forms depending on the asset in focus, including endpoint monitoring, network traffic analysis, application performance tracking, and cloud and container observability, each providing an important layer of protection as attacks spread across every part of the digital infrastructure.

The dissolution of traditional network perimeters has also been a key driver of the push toward continuous response. In a world of cloud-based workloads, SaaS ecosystems, and remote endpoints, security architectures must work as flexible, modular systems capable of correlating telemetry across email, DNS, identity, network, and endpoint layers without creating new silos.

Organizations moving in this direction usually emphasize three operational priorities: deep integration to maintain unified visibility, automation to handle routine containment at machine speed, and validation practices, such as breach simulations and posture tests, to confirm that defenses behave as expected. Managed security services increasingly embody these principles, which is a large part of why more organizations are turning to them.

909Protect, for instance, pairs automated detection with continuous human oversight to provide rapid, coordinated containment across hybrid environments. Platforms of this kind correlate signals from multiple security vectors and layer behavioral analysis, posture assessment, and identity safeguards on top of existing tools, ensuring that no critical alert goes unnoticed while preserving established investments.

This shift reflects a broader industry realignment toward systems built for continuous operation rather than episodic intervention. Cybersecurity has cycled through countless “next generation” labels, but veteran analysts note that only the approaches that fundamentally change how operations behave tend to endure. Continuous incident response fits that trajectory because it addresses the underlying failure point.

Organizations are rarely breached because they lack data; they are breached because they do not act on it quickly or cohesively enough. As analysts argue, the path forward lies in combining automation, analytics, and human expertise into a single adaptive workflow that spans the entire organization.

The organizations most likely to withstand emerging threats in the foreseeable future will be those that treat security as a living, constantly evolving system, judged not only by what it can see but by how quickly it can detect, contain, and recover as threats arise.

In the end, the shift toward continuous incident response signals that cybersecurity resilience is no longer just about speed; it is also about endurance. Investing in unified visibility, disciplined automation, and persistent validation not only shortens the path from detection to containment but also keeps operations stable over the longer term.

The advantage will go to those who treat security as an evolving ecosystem: one that is continually refined, coordinated across teams, and committed to responding with the same persistence that adversaries bring to their attacks.

Knownsec Data Leak Exposes Deep Cyber Links and Global Targeting Operations

 

A recent leak involving Chinese cybersecurity company Knownsec has uncovered more than 12,000 internal documents, offering an unusually detailed picture of how deeply a private firm can be intertwined with state-linked cyber activities. The incident has raised widespread concern among researchers, as the exposed files reportedly include information on internal artificial intelligence tools, sophisticated cyber capabilities, and extensive international targeting efforts. Although the materials were quickly removed after surfacing briefly on GitHub, they have already circulated across the global security community, enabling analysts to examine the scale and structure of the operations. 

The leaked data appears to illustrate connections between Knownsec and several government-aligned entities, giving researchers insight into China’s broader cyber ecosystem. According to those reviewing the documents, the files map out international targets across more than twenty countries and regions, including India, Japan, Vietnam, Indonesia, Nigeria, and the United Kingdom. Of particular concern are spreadsheets that allegedly outline attacks on around 80 foreign organizations, including critical infrastructure providers and major telecommunications companies. These insights suggest activity far more coordinated than previously understood, highlighting the growing sophistication of state-associated cyber programs. 

Among the most significant revelations is the volume of foreign data reportedly linked to prior breaches. Files attributed to the leaks include approximately 95GB of immigration information from India, 3TB of call logs taken from South Korea’s LG U Plus, and nearly 459GB of transportation records from Taiwan. Researchers also identified multiple Remote Access Trojans capable of infiltrating Windows, Linux, macOS, iOS, and Android systems. Android-based malware found in the leaked content reportedly has functionality allowing data extraction from widely used Chinese messaging applications and Telegram, further emphasizing the operational depth of the tools. 

The documents also reference hardware-based hacking devices, including a malicious power bank engineered to clandestinely upload data into a victim’s system once connected. Such devices demonstrate that offensive cyber operations may extend beyond software to include physical infiltration tools designed for discreet, targeted attacks. Security analysts reviewing the information suggest that these capabilities indicate a more expansive and organized program than earlier assessments had captured. 

Beijing has denied awareness of any breach involving Knownsec. A Foreign Ministry spokesperson reiterated that China opposes malicious cyber activities and enforces relevant laws, though the official statement did not directly address the alleged connections between the state and companies involved in intelligence-oriented work. While the government’s response distances itself from the incident, analysts note that the leaked documents will likely renew debates about the role of private firms in national cyber strategies. 

Experts warn that traditional cybersecurity measures—including antivirus software and firewall defenses—are insufficient against the type of advanced tools referenced in the leak. Instead, organizations are encouraged to adopt more comprehensive protection strategies, such as real-time monitoring systems, strict network segmentation, and the responsible integration of AI-driven threat detection. 

The Knownsec incident underscores that as adversaries continue to refine their methods, defensive systems must evolve accordingly to prevent large-scale breaches and safeguard sensitive data.