

North Korea-Linked Hackers Target Crypto Platforms, $500M Stolen

 



Cybersecurity researchers are raising alarms over a developing pattern of cryptocurrency thefts linked to North Korean actors, with recent incidents suggesting a move from isolated breaches to a sustained and structured campaign. In a span of just over two weeks, attacks targeting the Drift trading platform and the Kelp protocol resulted in losses exceeding $500 million, pointing to a level of coordination that goes beyond opportunistic hacking.

What initially appeared to be separate security failures is now being viewed as part of a broader operational strategy, likely driven by the financial pressures faced by a heavily sanctioned state. Shortly after attackers used social engineering techniques to compromise Drift, another incident emerged involving Kelp, a restaking protocol integrated with cross-chain infrastructure.

The Kelp breach marks a noticeable turn in attacker behavior. Rather than exploiting traditional software bugs or stealing credentials, the attackers targeted fundamental design assumptions within decentralized systems. When examined together, both incidents indicate a deliberate escalation in efforts to extract value from the crypto ecosystem.

Alexander Urbelis of ENS Labs described the pattern as systematic rather than incidental, noting that the frequency and timing of these events resemble an operational cycle. He warned that reactive fixes alone are insufficient against threats that follow a structured tempo.


Breakdown of the Kelp exploit

Unlike many traditional cyberattacks, the Kelp incident did not involve bypassing encryption or stealing private keys. Instead, the system behaved as designed, but was fed manipulated data. Attackers altered the inputs that the protocol relied on, causing it to validate transactions that never actually occurred.

Urbelis explained that while cryptographic signatures can verify the origin of a message, they do not ensure the truthfulness of the information being transmitted. In simple terms, the system confirmed who sent the data, but failed to verify whether the data itself was accurate.
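To make the distinction concrete, here is a minimal Python sketch using the widely deployed cryptography library: the signature check passes, yet nothing in it attests to the truth of the signed report. The oracle, message format, and amounts are illustrative assumptions, not Kelp's actual message schema.

    # Sketch: a valid signature proves who sent a message, not that its
    # contents are true. The report and amounts are invented for illustration.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    oracle_key = Ed25519PrivateKey.generate()
    public_key = oracle_key.public_key()

    # The "oracle" signs a report claiming a deposit happened on another chain.
    report = b'{"event": "deposit", "asset": "rsETH", "amount": "1000"}'
    signature = oracle_key.sign(report)

    # Verification succeeds: the message really came from the key holder...
    public_key.verify(signature, report)  # raises InvalidSignature if forged

    # ...but nothing here checks that the deposit actually occurred on-chain.
    # If the signer is compromised or fed bad data, lies verify just as well.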

David Schwed of SVRN reinforced this view, stating that the exploit was not based on breaking cryptography, but on taking advantage of how the system had been configured.

A central weakness was Kelp’s dependence on a single verifier to validate cross-chain messages. While this approach improves efficiency and simplifies deployment, it removes an essential layer of security redundancy. In response, LayerZero has advised projects to adopt multiple independent verifiers, similar to requiring multiple approvals in traditional financial systems.
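The idea behind that advice is easy to sketch. The toy Python model below shows M-of-N verification, where a forged message that convinces one verifier still fails the quorum. It is not LayerZero's actual DVN implementation, and every name in it is invented for illustration.

    # Toy model: accept a cross-chain message only when a quorum of
    # independent verifiers attests to the same message hash.

    class Verifier:
        def __init__(self, observed_hash: str):
            # What this node independently observed on the source chain.
            self.observed_hash = observed_hash

        def attest(self, message_hash: str) -> bool:
            return message_hash == self.observed_hash

    def accept_message(message_hash: str, verifiers: list, quorum: int) -> bool:
        confirmations = sum(1 for v in verifiers if v.attest(message_hash))
        return confirmations >= quorum

    # Two honest verifiers saw "0xabc"; one poisoned verifier claims "0xdef".
    verifiers = [Verifier("0xabc"), Verifier("0xabc"), Verifier("0xdef")]

    print(accept_message("0xdef", verifiers, quorum=2))  # False: forgery rejected
    print(accept_message("0xabc", verifiers, quorum=2))  # True: honest majority

A single-verifier configuration is the degenerate case of a quorum of one over one node, where compromising that node is enough to pass forged messages.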

However, this recommendation has sparked criticism. Some experts argue that if a configuration is known to be unsafe, it should not be offered as a default option. Relying on users to manually implement secure settings, especially in complex environments, increases the likelihood of misconfiguration.


Contagion across interconnected systems

The impact of the Kelp exploit did not remain confined to a single platform. Decentralized finance systems are deeply interconnected, with assets frequently reused across multiple protocols. This creates a chain of dependencies, where a failure in one component can propagate across others.

Schwed described these assets as interconnected obligations, emphasizing that the strength of the system depends on each individual link. In this case, lending platforms such as Aave, which accepted the affected assets as collateral, experienced financial strain. This transformed an isolated breach into a broader ecosystem-level disruption.


Reassessing decentralization claims

The incident also exposes a disconnect between how decentralization is promoted and how systems actually function. A structure that relies on a single point of verification cannot be considered fully decentralized, despite being marketed as such.

Urbelis expanded on this by noting that decentralization is not an inherent feature, but the result of specific design decisions. Weaknesses often emerge in less visible layers, such as data validation or infrastructure components, which are increasingly becoming primary targets for attackers.

The activity aligns with a broader shift in strategy by groups such as the Lazarus Group. Instead of focusing only on exchanges or obvious coding flaws, attackers are now targeting foundational infrastructure, including cross-chain bridges and restaking mechanisms.

These components play a critical role in enabling asset movement and reuse across blockchain networks. Their complexity, combined with the large volumes of value they handle, makes them particularly attractive targets.

Earlier waves of crypto-related attacks often focused on centralized platforms or easily identifiable vulnerabilities. In contrast, current operations are increasingly directed at the underlying systems that connect the ecosystem, which are harder to monitor and more prone to configuration errors.

Importantly, the Kelp exploit did not introduce a new category of vulnerability. Instead, it demonstrated how existing weaknesses remain exploitable when not properly addressed. The incident underscores a recurring issue in the industry: security measures are often treated as optional guidelines rather than mandatory requirements.

As attackers continue to enhance their methods and increase the pace of operations, this gap becomes easier to exploit and more costly for organizations. The growing sophistication of these campaigns suggests that the primary risk may not lie in unknown flaws, but in the failure to consistently address well-understood security challenges.

Terms and Conditions Grow Harder to Read as Platforms Limit Users' Legal Rights, Study Finds

 

Most people click "agree" without reading - yet those agreements keep getting harder to understand. Researchers note that complexity is rising just as user protections shrink. A recent study from Cambridge points to expanded corporate access to personal information and tougher legal barriers that make it more difficult to take firms to court, with lengthy clauses quietly reshaping power in favor of businesses over individuals. The findings come from a project called the Transparency Hub, which systematically tracks legal texts across more than 300 online platforms.

The archive stores twenty thousand versions - past and present - of service conditions and privacy notices from apps such as TikTok. Tracked over months, changes in wording reveal shifts in how companies approach personal information: what users agree to today may differ subtly from last year's version, and as updates accumulate, patterns emerge that were once hidden beneath routine acceptance clicks. The trends are surprisingly clear, showing a steady drop in how easily people can read service contracts.

Applying the Flesch-Kincaid readability method to texts from 2016 to 2025, the study found that nearly 86 percent of the agreements demand reading skills typical of university graduates. As a result, grasping the full meaning behind digital consent has grown harder for most individuals: signing up seems routine, but genuine understanding often lags behind.
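For readers curious how such scores are produced, the sketch below implements the standard Flesch-Kincaid grade-level formula in Python. The formula itself is standard; the syllable counter is a rough heuristic, and the sample clause is invented for illustration.

    # Flesch-Kincaid grade level: 0.39*(words/sentences)
    # + 11.8*(syllables/words) - 15.59. Higher = harder to read.
    import re

    def count_syllables(word: str) -> int:
        # Crude heuristic: count groups of consecutive vowels.
        groups = re.findall(r"[aeiouy]+", word.lower())
        return max(1, len(groups))

    def fk_grade(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    clause = ("Any dispute arising out of or relating to these Terms shall be "
              "resolved exclusively by binding individual arbitration.")
    print(round(fk_grade(clause), 1))  # roughly grade 17: university-level reading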

Beyond sheer complexity, attention is also turning to how companies handle disagreements. Conflicts once settled in open courtrooms now lean on closed-door arbitration imposed by platform rules. In arbitration, a third-party referee reaches final judgments, yet transparency tends to fade behind closed processes, and users find their options shrinking when collective lawsuits are blocked. Even the choice of mediator sometimes rests with the businesses involved, quietly shaping outcomes. Newer artificial intelligence platforms such as Anthropic and Perplexity AI follow this pattern too, embedding clauses that block participation in group litigation. Anyone feeling wronged must therefore file an individual claim - often costlier and weaker than joining others in court. A few companies offer a narrow window to opt out of the clause, but acting fast after registration is usually required.

The study arrives as officials across Europe weigh tighter rules for online services, with particular focus on effects tied to youth engagement. France, followed by Spain, Portugal, and Denmark, is testing new measures aimed at tackling unease around digital privacy and web-based risks. One thing stands out: the legal terms governing online services are drifting further from what everyday users can grasp.

As written rules grow longer and tighter, people must sort through fine print that defines their digital freedoms, frequently unaware of what they are agreeing to. While clarity lags behind complexity, personal responsibility quietly expands.

Lazarus Hackers Steal $290M from KelpDAO in Cross-Chain Exploit

 

KelpDAO has become the latest DeFi project to face a major security crisis after a $290 million heist that investigators say is likely tied to North Korea’s Lazarus Group. The attack targeted rsETH, a restaked ether asset used across several protocols, and drained about 116,500 tokens in a matter of hours. What makes the incident alarming is that the exploit did not appear to rely on a typical smart-contract flaw. Instead, it seems to have abused the project’s cross-chain verification setup, showing how a vulnerability in infrastructure can be just as damaging as a bug in code. 

According to the project’s public statement, KelpDAO detected suspicious cross-chain activity involving rsETH on April 18, 2026, and quickly paused rsETH contracts across Ethereum mainnet and Layer 2 networks. The team said it was working with LayerZero, Unichain, and other partners to investigate the breach and contain the damage. On-chain activity later showed that the stolen funds were moved through Tornado Cash, a common laundering route used to hide crypto theft. 

LayerZero’s early findings suggest the attack was highly coordinated. Researchers believe the hackers compromised RPC nodes and then used a DDoS campaign to force the system into failing over to poisoned infrastructure, where fraudulent cross-chain messages could be accepted as legitimate. In other words, the attackers appear to have tricked the bridge layer into believing a transfer had been properly authorized. That design weakness, rather than the asset itself, seems to have opened the door to the theft. 

The impact propagated quickly beyond KelpDAO. Because rsETH is accepted as collateral in lending markets, the exploit created risk for other DeFi platforms, including Compound, Euler, and Aave. Aave responded by freezing the asset and blocking new deposits and borrowing against rsETH collateral. The wider market reaction highlights how one compromised bridge can ripple across multiple protocols, creating uncertainty far beyond the original target.

The KelpDAO incident is another reminder that DeFi security depends not only on smart-contract audits, but also on the trust assumptions behind cross-chain systems. As protocols grow more interconnected, attackers need only find one weak link to trigger losses on a massive scale. For users and developers alike, the lesson is clear: layered security, diversified verification, and conservative bridge design are no longer optional.

PyTorch Lightning and Intercom Client Users Exposed to Credential Stealing Campaign


 

Python's software supply chain has been hit by a sophisticated compromise that targeted the popular PyPI package Lightning and exposed downstream machine learning environments to covert credential theft.

Researchers at Aikido Security, OX Security, Socket, and StepSecurity found that versions 2.6.2 and 2.6.3, both published on April 30, 2026, had been maliciously modified as part of a broader intrusion tied to the "Mini Shai-Hulud" campaign.

A day earlier, the attack emerged through compromised SAP-related npm packages, underlining an ongoing trend of coordinated cross-ecosystem supply chain threats targeting high-value development environments. The compromise puts organizations that use PyTorch Lightning, an open-source abstraction layer over PyTorch with more than 31,000 stars on GitHub, at significant risk.

Lightning is frequently embedded in dependency trees supporting image classification, large language model fine-tuning, diffusion workloads, and forecasting, and that ubiquity widened the scope of the attack.

A standard pip install lightning command was sufficient to activate the malicious chain; exploitation did not require a sophisticated trigger. Installing the compromised package created a hidden _runtime directory containing obfuscated JavaScript, which executed automatically on module import. Because this behavior was embedded within the package's initialization logic, no additional user interaction was required to run the script.

On import, a Python script (start.py) downloaded the Bun JavaScript runtime from external sources, followed by an 11 MB obfuscated file (router_runtime.js) that carried out the attack sequence in stages. Executing JavaScript from inside a Python package marks a significant evolution in attacker tradecraft and complicates detection mechanisms that focus on single-language threats.
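Defenders can triage for these specific indicators with a few lines of Python. The sketch below scans installed packages for a hidden _runtime directory and unusually large bundled JavaScript, both drawn from the published reports; the size threshold is an assumption, and a hit is a lead for investigation, not proof of compromise.

    # Heuristic triage: look for the indicators reported in the Lightning
    # compromise (a hidden _runtime directory, large bundled .js payloads).
    import site
    from pathlib import Path

    SUSPICIOUS_DIR = "_runtime"        # directory name from the public reports
    LARGE_JS_BYTES = 5 * 1024 * 1024   # JS this big inside a Python package
                                       # deserves a closer look (assumed cutoff)

    for packages_dir in site.getsitepackages():
        root = Path(packages_dir)
        if not root.exists():
            continue
        for hit in root.rglob(SUSPICIOUS_DIR):
            if hit.is_dir():
                print(f"[!] hidden runtime directory: {hit}")
        for js in root.rglob("*.js"):
            if js.stat().st_size > LARGE_JS_BYTES:
                print(f"[!] unusually large JavaScript payload: {js}")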

The malware's primary objective was credential harvesting. Analysis indicates that it systematically targeted GitHub tokens, cloud service credentials spanning Amazon Web Services (AWS), Google Cloud Platform (GCP), and Azure, SSH keys, NPM tokens, Kubernetes configurations, Docker credentials, and environment variables. It was also capable of accessing cryptocurrency wallets and developer secrets stored within local and continuous integration/continuous delivery environments.

Stolen data was exfiltrated using the compromised credentials, often via automated commits to attacker-controlled GitHub repositories, which concealed the malicious activity within legitimate developer workflows. Distinctive markers linked the campaign to the "Shai-Hulud" identity.

Infected environments were observed creating public repositories with unusual naming conventions, including EveryBoiWeBuildIsaWormBoi, and descriptions such as "A Mini Shai-Hulud has appeared." Attackers appear to use these artifacts both as infection indicators and as signalling mechanisms to track compromised systems.

Efforts have been made to link the activity to a financially motivated threat group referred to as TeamPCP, which has consistently focused on credential-rich development environments. According to OX Security, approximately 8.3 million downloads are likely to have been exposed as a result of the incident.

As part of the same campaign, the intercom-client package was compromised on the same day, further demonstrating its coordinated nature. These incidents cap a series of supply chain breaches affecting npm, PyPI, and Docker Hub between April 21 and 23, suggesting a deliberate and sustained effort to infiltrate widely trusted software distribution channels.

Further examination of the router_runtime.js payload uncovered extensive obfuscation and a clear focus on credential access and repository manipulation. Researchers counted approximately 700 references to process and environment variables, more than 460 references to authentication tokens, and roughly 330 references to code repositories.

These patterns closely match earlier Shai-Hulud operations, emphasizing code reuse and iterative refinement of attack techniques. The payload was also capable of poisoning GitHub repositories and propagating through npm packages, raising concerns about secondary infection vectors beyond data exfiltration.

The Lightning-AI maintainers were alerted to the compromise when a user reported suspicious behavior on GitHub under issue #21689, titled "Possible supply chain attack on version 2.6.3." The report described a hidden execution chain that downloaded the Bun runtime and executed a large obfuscated payload during module import. However, the issue was later closed without clarification, creating uncertainty about the project's initial response.

An even more unusual outcome followed Socket's disclosure in the Lightning-AI/pytorch-lightning repository. Within seconds, an account identified as pl-ghost closed the issue warning about the compromised versions and posted a meme captioned "SILENCE DEVELOPER." The anomalous behavior immediately raised concerns about potential account compromise.

Additional suspicious activity was tied to the same account, including six rapid branch creations and deletions across multiple repositories within approximately 70 minutes. Several of these branches followed random 10-character lowercase naming conventions, consistent with the behavior of the Shai-Hulud worm as it probes for write access.

One branch impersonated Dependabot, while another contained inconsistencies such as a misspelled identifier and an incorrect naming structure; all branches were deleted within seconds of creation, and none triggered workflows, pointing to automated probing rather than legitimate development activity. Taken together, this evidence strongly suggests the maintainer account was compromised, possibly with the same stolen credentials that enabled the malicious package publication on PyPI.

Upon learning of the incident, Python Package Index administrators quarantined the Lightning versions that may have been affected. According to the maintainers, an investigation is underway to determine the cause, as the compromised releases introduced functionality consistent with credential harvesting.

In the meantime, developers are strongly advised to remove versions 2.6.2 and 2.6.3 from their environments, downgrade to version 2.6.1, and rotate any potentially exposed credentials across cloud and development platforms, including API keys, tokens, and access credentials.
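A quick check along those lines, using only the standard library, might look like the sketch below; the version numbers come from the advisory above, and the printed guidance mirrors the researchers' recommendations.

    # Flag the compromised Lightning releases in the current environment
    # and remind the operator of the recommended remediation steps.
    from importlib.metadata import version, PackageNotFoundError

    COMPROMISED = {"2.6.2", "2.6.3"}
    SAFE_PIN = "lightning==2.6.1"  # version recommended for downgrade

    try:
        installed = version("lightning")
    except PackageNotFoundError:
        installed = None

    if installed in COMPROMISED:
        print(f"[!] lightning {installed} is a compromised release.")
        print(f"    Remove it, reinstall with '{SAFE_PIN}', and rotate all")
        print("    credentials this environment could reach (cloud keys, tokens).")
    elif installed:
        print(f"lightning {installed} is not one of the known-bad versions.")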

The campaign is also evolving beyond Python. Researchers have confirmed that version 7.0.4 of the intercom-client package in the Node ecosystem has been compromised, using a preinstall hook to execute credential-stealing malware. Packagist has been affected as well: the intercom/intercom-php package (version 5.0.2) was altered to include a Composer plugin that downloads the Bun runtime via a shell script (setup-intercom.sh) and executes the same obfuscated payload during installation and updates.
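Since the Node-side compromise hinged on an install-time lifecycle hook, one practical audit is to enumerate such hooks across a project's dependencies. The Python sketch below does this for a local node_modules tree; lifecycle scripts are legitimate in many packages, so matches are starting points for review rather than verdicts.

    # List install-time lifecycle hooks declared by installed npm packages,
    # the mechanism the compromised intercom-client release abused.
    import json
    from pathlib import Path

    HOOKS = ("preinstall", "install", "postinstall")

    for manifest in Path("node_modules").rglob("package.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue
        scripts = data.get("scripts") or {}
        for hook in HOOKS:
            if hook in scripts:
                print(f"{manifest.parent.name}: {hook} -> {scripts[hook]}")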

The stolen data was encrypted and exfiltrated to a remote server endpoint, further demonstrating the campaign's adaptability across ecosystems. The GitHub account "nhur" has likely been compromised, and the malicious intercom-client package appears to have been published through an automated continuous integration workflow triggered by a now-deleted GitHub branch.

Technical overlap is apparent across the npm, PyPI, and PHP ecosystems, with similarities in GitHub-based exfiltration techniques, credential targeting patterns, and payload structures. Researchers have also found parallels with previous attacks affecting organizations such as Checkmarx, Bitwarden, Telnyx, LiteLLM, and Aqua Security's Trivy, supporting the hypothesis that a single threat actor is responsible.

After being suspended from mainstream platforms, TeamPCP reportedly launched an onion-based platform on the dark web to expand its presence. The actors have also publicly referenced ties with other cybercriminal groups, including LAPSUS$, while marketing their own tooling infrastructure.

The developments suggest a threat landscape that is becoming more organized and persistent, with supply chain attacks serving not as isolated incidents but as a broader strategy for infiltrating and monetizing developer ecosystems. As investigations continue, the Lightning and Intercom compromises remain a stark reminder of the fragility of modern software supply chains.

With attackers increasingly able to pivot across ecosystems and exploit trusted distribution channels, organizations operating in cloud-native and AI-driven environments have become increasingly reliant on robust dependency auditing, real-time monitoring, and rapid incident response.

The incident highlights a critical juncture in software supply chain security, in which trusted ecosystems are increasingly weaponised through stealthy, cross-language attack chains. The coordinated compromises of PyPI, npm, and Packagist packages, together with evidence of maintainer account abuse and automated propagation techniques, demonstrate a level of operational maturity that challenges traditional methods of detection and response.

Proactive measures are now necessary to guard against threats such as TeamPCP, which has demonstrated the capability to infiltrate developer workflows at scale. These include rigorous dependency auditing, tighter access controls, and continuous monitoring of build environments.

Safeguarding the integrity of open-source components is imperative to maintaining confidence in modern software development in the present threat landscape.

Are You Letting AI Do Too Much of Your Thinking?

 




As artificial intelligence tools take on a growing share of everyday thinking tasks, researchers are raising concerns that this shift may be quietly affecting how people process information, remember ideas, and engage with their own work.

When Nataliya Kosmyna reviewed applications for internships, she noticed a pattern that stood out. Many cover letters were structured in nearly identical ways, written in polished language, and included vague or forced connections to her research. The consistency suggested that applicants were relying on large language models, the technology behind tools such as ChatGPT, Google Gemini, and Claude.

At the same time, while teaching at the Massachusetts Institute of Technology, Kosmyna began noticing that students were finding it harder to retain what they had learned. Compared to previous years, more students struggled to recall material, which led her to question whether growing dependence on AI tools could be influencing cognitive abilities.

Researchers studying human-computer interaction are increasingly concerned that relying too heavily on AI may alter not just how people write but how they think. This phenomenon, often described as “cognitive offloading,” refers to shifting mental effort onto external tools. While this has existed for years with calculators and search engines, experts warn that AI systems may deepen the effect because they generate complete responses rather than simply helping users find information.

Earlier research on internet usage identified what is known as the “Google effect,” where people became less likely to remember facts because they could easily look them up. Some researchers argued that this allowed the brain to focus on more complex tasks. However, AI tools now go a step further by producing answers, arguments, and even creative content, reducing the need for active thinking.

To better understand the impact, Kosmyna and her team conducted an experiment involving 54 students. Participants were divided into three groups. One group used AI tools to write essays, another relied on search engines without AI-generated summaries, and a third completed the task without any digital assistance. Their brain activity was monitored while they worked on open-ended topics such as happiness, loyalty, and everyday decisions.

The differences were clear. Students who worked without any tools showed strong and widespread brain activity across multiple regions. Those using search engines still demonstrated notable engagement, particularly in areas related to visual processing. In contrast, the group using AI tools showed comparatively lower brain activity, with levels dropping by as much as 55%. Activity in areas linked to creativity and deeper thinking was especially reduced.

The impact extended beyond brain activity. Students who used AI struggled to recall what they had written shortly after completing their essays. Several participants also reported feeling disconnected from their work, as if they had not fully contributed to it. Similar findings from other studies suggest that frequent use of AI tools can weaken memory retention and recall.

Research from the University of Pennsylvania introduces another concern described as “cognitive surrender,” where users accept AI-generated responses without questioning them. In such cases, individuals may rely on the system’s output even when it conflicts with their own understanding.

The effects are not limited to academic settings. A multinational study found that medical professionals who relied on AI tools for detecting colon cancer became less accurate when asked to identify cases without assistance after several months of use. This suggests that repeated dependence on AI may reduce independent decision-making skills, even in critical fields.

Kosmyna also observed that essays written with AI tended to be highly similar, lacking variation in style and depth. Teachers reviewing the work described it as uniform and lacking originality. In some cases, the responses were so alike that it appeared as though students had collaborated, even when they had not.

Follow-up observations months later revealed further differences. Students who had previously relied on AI showed weaker neural connectivity when asked to complete tasks without it, compared to those who had worked independently earlier. This may indicate that they had engaged less deeply with the material from the start.

Vivienne Ming, author of Robot Proof, has raised similar concerns. In her research, students asked to make real-world predictions often defaulted to copying answers from AI systems instead of forming their own conclusions. Brain measurements showed low levels of gamma wave activity, which is associated with active thinking. Reduced gamma activity has been linked in other studies to cognitive decline over time.

However, not all users showed the same pattern. A small group, fewer than 10%, used AI differently by treating it as a source of information rather than a final answer. These individuals analysed the output themselves, showed stronger brain engagement, and produced more accurate results.

The concerns echo earlier findings related to navigation technology. Increased reliance on GPS has been associated with reduced spatial memory in some studies. Weak spatial navigation skills have also been explored as a possible early indicator of conditions such as Alzheimer's disease. These parallels suggest that reduced mental effort over time may have broader cognitive consequences.

Researchers emphasize that AI itself is not the problem but how it is used. Ming advocates for a more deliberate approach, where individuals think through problems first and then use AI to test or refine their ideas. She suggests methods such as asking AI to challenge one’s reasoning or limiting it to providing context instead of direct answers, encouraging deeper engagement.

Kosmyna similarly recommends building a strong understanding of subjects without AI assistance before integrating such tools into the learning process.

The alarming takeaway from the current research is clear. While AI offers efficiency and convenience, it may also encourage mental shortcuts. Human cognition depends on regular effort and engagement, and reducing that effort could carry long-term consequences. As these tools become more integrated into daily life, the challenge will be to use them in ways that support thinking rather than replace it.



eth.limo DNS Hijack Thwarted By DNSSEC After Social Engineering Attack On EasyDNS

 

The ENS gateway eth.limo has disclosed a DNS hijack stemming from a social engineering scheme aimed at EasyDNS, its domain provider. Though DNS settings were temporarily altered under unauthorized access, layered safeguards held firm throughout the episode and user activity was untouched. The compromise occurred at the registrar level, yet defenses prevented escalation beyond domain redirection. Early in the incident, a person pretending to be part of the eth.limo team tricked EasyDNS support into initiating an account reset.

With that misplaced trust, the intruder gained access and changed where the domain pointed, shifting it first through Cloudflare servers and then toward Namecheap systems. Automatic alerts fired as soon as those changes happened, giving the real eth.limo team time to react quickly and reverse the breach soon afterward. eth.limo is a single point of failure by design: it acts as a bridge, routing requests from regular browsers to data hosted on networks such as IPFS, Arweave, and Swarm. Because its DNS setup uses wildcards, countless .eth addresses rely on the same infrastructure, leaving them all vulnerable when one part fails.

Traffic meant for legitimate decentralized sites could instead have flowed toward harmful servers under attacker control, and notable resources, including those tied to figures like Vitalik Buterin, faced potential exposure had the deception taken hold. Stopping the damage came down to DNS Security Extensions (DNSSEC), which verifies DNS replies with digital signatures. Without access to the correct private keys, the hijacker's fake entries could not pass these checks. Because validation failed, resolvers refused the corrupted data, returning errors rather than loading harmful pages.
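For the curious, this check is easy to reproduce. The minimal sketch below uses the third-party dnspython package to query a validating resolver and inspect the AD (Authenticated Data) flag, which is set only when DNSSEC validation succeeded upstream; the choice of resolver is an assumption.

    # Query a DNSSEC-validating resolver and check whether the answer
    # for eth.limo was cryptographically validated (AD flag set).
    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["1.1.1.1"]        # assumed validating resolver
    resolver.use_edns(0, dns.flags.DO, 1232)  # request DNSSEC records

    try:
        answer = resolver.resolve("eth.limo", "A")
        if answer.response.flags & dns.flags.AD:
            print("response passed DNSSEC validation")
        else:
            print("response was NOT validated; treat with suspicion")
    except dns.resolver.NoNameservers:
        # A validating resolver returns SERVFAIL for forged, unsignable data,
        # which is the failure users saw instead of a hijacked page.
        print("validation failed upstream (SERVFAIL)")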

Though eth.limo and EasyDNS both saw interference, they reported minimal impact thanks to this layer, and to date no users are known to have been harmed by the attempt. EasyDNS spoke out after the event, calling it the first successful customer-targeted social engineering attack in its almost thirty years of operation. Improvements to internal procedures are underway, and eth.limo plans to move to a tighter registrar setup, one without account recovery pathways, to block repeat incidents.

Recent cases show similar patterns across decentralized services. Though blockchains themselves stay distributed and protected, the websites people actually visit run on standard domain infrastructure, and these entry points are the doors hackers now use most frequently. Instead of breaking encryption, attackers shift traffic by manipulating DNS records, sending users elsewhere without their noticing, sometimes with assets lost in minutes. As the eth.limo episode shows, layered security matters more than ever.

Even when social engineering succeeds, safeguards such as DNSSEC often stop further damage. Because digital threats keep changing shape, companies, especially in cryptocurrency, are paying closer attention to protecting not just blockchain networks but also the traditional services people rely on to reach them.
