
Malware Hidden in Blockchain Networks Is Quietly Targeting Developers Worldwide



A new investigation has uncovered a cyberattack method that uses blockchain networks to quietly distribute malware, raising concerns among security researchers about how difficult it may be to stop once it spreads further.

The threat first surfaced when a senior engineering executive at Crystal Intelligence received a freelance opportunity through LinkedIn. The message appeared routine, asking him to review and run code hosted on GitHub. However, the request resembled a known tactic used by a North Korean-linked group often referred to as Contagious Interview, which relies on fake job offers to target developers.

Instead of proceeding, the executive examined the code and found something unusual. Hidden within it was the beginning of a multi-step attack designed to look harmless. A developer following normal instructions would likely execute it without noticing anything suspicious.

Once activated, the code connects to blockchain networks such as TRON and Aptos, which are commonly used because of their low transaction costs. These networks do not contain the malware itself but instead store information that directs the program to another blockchain, Binance Smart Chain. From there, the final malicious payload is retrieved and executed.
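The pointer-chain technique described above can be illustrated with a small, self-contained sketch. Everything below is hypothetical: the addresses, the field layout, and the URL are invented, and the chain state is hard-coded as a dictionary, whereas a real loader would fetch this data from live TRON/Aptos and Binance Smart Chain RPC endpoints. The point is only to show how one chain can store nothing but a pointer to another, which in turn stores the payload location.

```python
import binascii

# Hypothetical on-chain "data" fields, keyed by (chain, address).
# A real campaign reads these via chain RPC APIs; hard-coded here
# so the sketch is self-contained and runnable.
FAKE_CHAIN_STATE = {
    ("tron", "TXhypotheticalStagerAddr"):
        binascii.hexlify(b"bsc:0xHYPOTHETICAL_CONTRACT").decode(),
    ("bsc", "0xHYPOTHETICAL_CONTRACT"):
        binascii.hexlify(b"https://example.invalid/payload.bin").decode(),
}

def read_onchain_pointer(chain: str, address: str) -> str:
    """Decode the hex data blob stored at (chain, address)."""
    return binascii.unhexlify(FAKE_CHAIN_STATE[(chain, address)]).decode()

# Stage 1: the low-fee chain stores only a pointer to the next chain.
pointer = read_onchain_pointer("tron", "TXhypotheticalStagerAddr")
next_chain, next_addr = pointer.split(":", 1)

# Stage 2: the second chain stores the final payload location.
payload_url = read_onchain_pointer(next_chain, next_addr)
print(payload_url)  # the download location a loader would then fetch
```

Because the pointer lives in immutable chain data rather than on a server, taking down any single host does not break the chain of indirection, which is exactly why researchers describe this infrastructure as hard to dismantle.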

Researchers say this last stage installs a powerful data-stealing tool known as “Omnistealer.” According to analysts working with Ransom-ISAC, the malware is designed to extract a wide range of sensitive data. It can access more than 60 cryptocurrency wallet extensions, including MetaMask and Coinbase Wallet, as well as over 10 password managers such as LastPass. It also targets major browsers like Chrome and Firefox and can pull data from cloud storage services like Google Drive. This means attackers are not just stealing cryptocurrency, but also login credentials and internal access to company systems.

What initially looked like a simple phishing attempt turned out to be far more layered. By placing parts of the attack inside blockchain transactions, the attackers have created a system that is extremely difficult to dismantle. Data stored on blockchains cannot easily be removed, which means parts of this malware infrastructure could remain accessible for years.

Researchers believe the scale of this operation could grow rapidly. Some have compared its potential reach to the WannaCry ransomware attack, which disrupted hundreds of thousands of systems worldwide. In this case, however, the method is quieter and more flexible, which may allow it to spread further before being detected. At the same time, investigators are still unsure what the attackers ultimately intend to do with the access they gain.

Further analysis has revealed possible links to North Korean cyber actors. Investigators traced parts of the activity to an IP address in Vladivostok, a location that has previously appeared in investigations involving North Korean operations. Research cited by NATO has noted that North Korea expanded its internet routing through Russia several years ago. Additional findings from Trend Micro connect similar infrastructure to earlier campaigns involving fake recruiters.

The number of affected victims is already significant. Researchers estimate that around 300,000 credentials have been exposed so far, although they believe the real figure could be much higher. Impacted organizations include cybersecurity firms, defense contractors, financial companies, and government entities in countries such as the United States and Bangladesh.

The attackers rely heavily on deception to gain access. In some cases, they pose as recruiters and convince developers to run infected code as part of a hiring process. In others, they present themselves as freelance developers and introduce malicious code directly into company systems through platforms like GitHub.

Developers in rapidly growing tech ecosystems appear to be a key focus. India, for example, has seen a surge in new contributors on GitHub and ranks among the top countries for cryptocurrency adoption. Researchers suggest that a combination of high developer activity and economic incentives may make such regions more vulnerable to these tactics.

Initial contact is typically made through platforms such as LinkedIn, Upwork, Telegram, and Discord. Representatives from these platforms have advised users to be cautious, particularly when asked to download files or execute unfamiliar code outside controlled environments.

Not all targeted organizations appear strategically important, which suggests the attackers may be casting a wide net. However, the presence of defense and security-related entities among the victims raises more serious concerns about potential intelligence-gathering objectives.

Security experts say this campaign reflects a broader shift in how attacks are being designed. Instead of relying on a single point of failure, attackers are combining social engineering, publicly accessible code platforms, and decentralized infrastructure. The use of blockchain in particular adds a layer of persistence that traditional security tools are not designed to handle.

As investigations continue, researchers warn that this may only be an early stage of a much larger problem. The combination of hidden delivery methods, long-term persistence, and unclear intent makes this campaign especially difficult to predict and contain.

Cyber Attacks Threatening Global Digital Landscape, Affecting Human Lives


Cyberattack campaigns have increased against critical infrastructure like power grids, healthcare, and energy. 

Cyber warfare and global threat

The global threat landscape has shifted from data theft to threats against human lives. The convergence of Operational Technology (OT) and Information Technology (IT) has expanded the attack surface, exposing sectors like public utilities, aviation, and transport to external risks. 

According to Gaurav Shukla, cybersecurity expert at Deloitte South Asia, “For the past two years, we observed that cyber threats were not limited only to the IT systems. They were pervading beyond IT systems, and the perpetrators were targeting more of the critical infrastructure.” 

Change in digital landscape

Digital transformation in recent years has increased the attack surface, providing more opportunities for threat actors to compromise critical infrastructure.

“If you are driving a connected car on a highway at 120 km/h and suddenly find the steering is no longer in your control, you are not going to be worried about how much money is in your bank account. You are worried about the danger to your life,” Shukla added. 

How dangerous can it be?

For instance, an attack on a medical device compromising patient information can be dangerous, whereas a cyber attack on power grids and the transmission sector can result in countrywide blackouts.

Rise in connected devices

The world population of eight billion is currently surrounded by more than 30 billion IoT sensors. This means that, on average, a person is surrounded by nearly four sensors. 

India’s Digital Public Infrastructure

India’s Digital Public Infrastructure, aka India Stack, has become a global benchmark; according to experts, Deloitte has advised some 24 countries on adopting their own frameworks modeled on the India Stack. Shukla has warned that as DPI reaches beyond identity and payments into education and healthcare, the convergence points create new threats. DPI accounted for around 80% of India’s digital transactions as of January 2026.

Attackers' use of artificial intelligence (AI) increases the speed and scope of their operations, so ongoing testing against supply chain weaknesses and AI-related risks will be extremely important, he continued.

Whereas kinetic wars are time-bound, cyberwarfare is continuous, demanding ongoing cooperation between businesses, academia, and government. “Much like you need a language to build a foundation, awareness of cybersecurity and privacy is going to be just as important,” Shukla added. 

Claude Mythos 5: Trillion-Parameter AI Powerhouse Unveiled

 

Anthropic has launched Claude Mythos 5, a groundbreaking AI model boasting 10 trillion parameters, positioning it as a leader in advanced artificial intelligence capabilities. This massive scale enables superior performance in demanding fields like cybersecurity, coding, and academic reasoning, surpassing many competitors in handling complex, high-stakes tasks. 

Alongside it, the mid-tier Capabara model offers efficient versatility, bridging the gap between flagship power and practical deployment, with Anthropic emphasizing a phased rollout for safety. Claude Mythos 5 excels in precision and adaptability, making it ideal for cybersecurity threat detection and intricate software development where accuracy is paramount. In academic reasoning, it tackles multifaceted problems that require deep logical inference, outpacing previous models in benchmark tests. 

Anthropic's commitment to responsible AI ensures these tools minimize risks like misuse, aligning innovation with accountability in real-world applications. Complementing Anthropic's releases, GLM 5.1 emerges as a key open-source milestone, excelling in instruction-following and multi-step workflows for automation tasks. Though not the fastest, its reliability fosters community-driven innovation, providing accessible alternatives to proprietary systems for developers worldwide. This model democratizes AI progress, enabling collaborative advancements without the barriers of closed ecosystems. 

Google DeepMind's Gemini 3.1 advances real-time multimodal processing for voice and vision, enhancing latency and quality in sectors like healthcare and autonomous systems. OpenAI's revamped Codeex platform introduces plug-in ecosystems with pre-built workflows, streamlining coding and boosting developer productivity. Meanwhile, the ARC AGI 3 Benchmark sets a rigorous standard for agentic reasoning, combating overfitting and driving genuine AI intelligence gains. 

These developments, including Mistral AI’s expressive text-to-speech and Anthropic’s biology-focused Operon, signal AI's transformative potential across industries. From ethical trillion-parameter giants to open benchmarks, they promise efficiency in research, automation, and creative workflows. As AI evolves rapidly, balancing power with safety will shape a future of innovative problem-solving.

Generative AI Expanding Capabilities of Fraud and Social Engineering Attacks


 

In the past, the quiet integration of generative artificial intelligence into financial systems was framed as a story of optimization and scale. In the digital banking industry, however, that story is now being rewritten in far more urgent terms. 

It is influencing not only the dynamics of fraud but also the way institutions operate, forcing them to rethink how they protect themselves. Technologies that once promised frictionless customer experiences and operational precision are now being repurposed by malicious actors with unsettling efficiency, enabling deception of a realism and speed that traditional safeguards are unprepared to handle.

Due to this, fraud is no longer merely an external threat to be dealt with; it is an adaptive, intelligence-driven force embedded within the digital ecosystem, one that requires banks to continuously reevaluate their security posture while maintaining the fragile trust that underpins modern financial transactions. This shift has been accelerated by the rapid maturation of generative AI capabilities, a pace even the most experienced security practitioners initially underestimated.

In the early stages of widespread adoption, tools such as large language models could generate passable but largely generic phishing content; they lacked the contextual precision and psychological nuance required for high-impact attacks. Social engineering had long been regarded as a domain of human intuition, reconnaissance, and carefully constructed deception, and full automation remained out of reach. In recent years, however, the technology has advanced sharply.

Modern models have evolved beyond static datasets and now include real-time retrieval of information, while AI agents are becoming increasingly sophisticated and capable of orchestrating a wide variety of workflows, from data aggregation to targeted messages. In light of these developments, the threat landscape has materially changed. 

A highly personalised attack narrative that previously required deliberate human effort to construct can now be built rapidly and scalably from publicly available digital footprints and behavioural cues. Fully automated, precision-driven social engineering is no longer theoretical.

It is an emerging operational reality, one that requires threat actors only to initiate the process, leaving adaptive AI systems to refine and execute campaigns with a level of consistency and reach that significantly increases the frequency and effectiveness of fraud attempts. 

Modern artificial intelligence systems have advanced the analytical and generative capabilities behind social engineering, a tactic now involved in a significant proportion of successful intrusions. By systematically harvesting and correlating publicly accessible data from corporate websites, social media platforms, and professional networks, these models can build highly contextualised engagement vectors that reflect an organization's authentic communication patterns. 

Consequently, phishing and business email compromise attempts are now more sophisticated than before, replicating internal correspondence, vendor interactions, and executive directives with a degree of authenticity that challenges conventional scrutiny, both linguistically and situationally. 

Multilingual generation further extends the reach of such campaigns, allowing adversaries to operate seamlessly across geographically dispersed organizations. Moreover, synthetic media techniques, including voice cloning and AI-generated audio, are increasingly deployed in real-time impersonation attacks, especially in high-trust contexts such as financial authorizations and executive communications. 

Enterprises operating in distributed, digitally dependent environments need a new approach to governance frameworks, with greater emphasis on verification protocols, communication authentication, and continuous monitoring. In parallel, the barrier to entry for malware development is falling. 

While sophisticated threat actors continue to engineer advanced malware by traditional means, generative AI gives far less experienced adversaries a foothold in the threat landscape. AI-assisted tooling identifies exploitable weaknesses in open-source codebases, generates functional scripts tailored to those vulnerabilities, and iteratively modifies existing payloads to evade signature-based detection. 

While such outputs may not always match the complexity of state-sponsored tooling, their scalability and speed make them effective: attackers can rapidly test multiple variants against defensive systems and refine their approach without extensive technical knowledge. 

The faster iteration cycle produces a greater diversity of attack techniques that adapt quickly to defensive countermeasures, contributing to a more volatile threat environment. This shift exposes the limitations of traditional security architectures that rely primarily on perimeter-based controls and static prevention systems. 

While firewalls, antivirus solutions, and access controls remain fundamental, they are no longer sufficient against automated adversaries that adapt in real time. Not only can AI-driven attacks bypass rule-based systems; the sheer volume and speed of attempts statistically increase the probability of compromise. 

Organizations are therefore being forced to make detection and response capabilities a core component of their security posture. This includes continuous monitoring of endpoints and networks, behavioural analytics to identify deviations from established patterns, and workflows for rapid incident investigation and response. These measures matter not only for early threat identification but also for limiting the operational and financial impact of breaches. The development also carries significant economic consequences. 

Artificial intelligence has become a force multiplier in scam-related losses, accelerating both the scale and success rate of fraud; global scam losses are estimated to exceed hundreds of billions annually. AI-enabled scams increasingly move from initial contact to completion within hours, shrinking the window for detection and intervention. 

Looking forward, the implications go well beyond incremental risk. Incorporating artificial intelligence into cybercriminal operations represents a substantial change in how fraud is conceived, executed, and scaled. As attack methodologies advance rapidly, grow more cost-efficient, and gain autonomy, defensive strategies struggle to keep pace.

In an environment where tactics evolve in real time, organizations must not only identify isolated threats but continually adapt to stay protected. Increasingly, financial institutions are repositioning generative AI as a foundational layer in modern fraud-detection architectures, a defensive response to the rapidly changing threat landscape. 

The most significant application of this technology lies in real-time behavioural intelligence, where models continuously analyze signals such as typing cadence, navigation patterns, device characteristics, and transactional timing to establish dynamic baselines of legitimate user activity. Departures from these behavioural signatures can be identified instantly, allowing institutions to act during critical moments such as digital onboarding or high-risk transactions. 
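A dynamic behavioural baseline of this kind can be approximated with a simple rolling statistic. The sketch below is a deliberately simplified stand-in for production models, not a real product: it treats average inter-keystroke gap as the only signal, keeps a rolling window of per-session averages, and flags a session whose z-score against that window exceeds a threshold. The window size, threshold, and timings are all assumptions chosen for illustration.

```python
from collections import deque
from statistics import mean, stdev

class BehaviouralBaseline:
    """Toy rolling baseline: flag a session whose average
    inter-keystroke gap deviates sharply from the user's history."""

    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)   # recent per-session averages
        self.z_threshold = z_threshold

    def observe(self, keystroke_gaps_ms: list) -> bool:
        """Return True if this session looks anomalous."""
        session_avg = mean(keystroke_gaps_ms)
        anomalous = False
        if len(self.history) >= 5:            # need some history first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(session_avg - mu) / sigma > self.z_threshold:
                anomalous = True
        if not anomalous:                     # only learn from normal sessions
            self.history.append(session_avg)
        return anomalous

baseline = BehaviouralBaseline()
normal_sessions = [
    [118.0, 122.0], [119.0, 121.0], [117.0, 123.0],
    [120.0, 120.0], [118.0, 121.0], [122.0, 119.0],
]
for gaps in normal_sessions:
    baseline.observe(gaps)                    # genuine user, ~120 ms cadence
print(baseline.observe([35.0, 33.0, 36.0]))   # scripted, bot-like cadence
```

Real systems combine many such signals with learned models rather than a single z-score, but the core idea is the same: the baseline is per-user and updated continuously, so deviation is measured against that user's own history rather than a global rule.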

In practice, such systems have improved fraud operations by reducing false positives and sharpening detection precision, addressing a long-standing inefficiency. This capability is particularly relevant to synthetic identity fraud, which has emerged as a persistent and financially material risk across digital channels. 

Synthetic fraud differs from traditional identity theft in that it blends fabricated and legitimate data to create identities that evade conventional verification methods. By modeling the lifecycle and behavioural consistency of authentic identities over time, generative AI introduces a more nuanced way of identifying anomalies that are statistically subtle yet operationally meaningful as they occur. 

This near-authentic detection threshold represents a significant departure from rule-based systems, which can only flag fraud that matches predefined patterns. Transaction monitoring, traditionally burdened by excessive alert volumes and limited contextual clarity, is consequently undergoing a structural transformation: cognitive systems can now correlate disparate signals into coherent analytical narratives, grouping isolated alerts into fraud scenarios and prioritizing them by inferred impact and risk. 
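The alert-grouping step can be illustrated minimally: instead of surfacing each alert on its own, collect alerts by the entity involved and rank the resulting scenarios by combined risk. The alert types, entities, and scores below are invented for illustration and stand in for whatever a real monitoring pipeline emits.

```python
from collections import defaultdict

# Hypothetical isolated alerts, as a triage queue might see them.
ALERTS = [
    {"entity": "user-17", "type": "new_device",     "risk": 2},
    {"entity": "user-42", "type": "odd_login_hour", "risk": 1},
    {"entity": "user-17", "type": "payee_added",    "risk": 3},
    {"entity": "user-17", "type": "large_transfer", "risk": 5},
]

def build_scenarios(alerts):
    """Group isolated alerts into per-entity scenarios, highest risk first."""
    groups = defaultdict(list)
    for a in alerts:
        groups[a["entity"]].append(a)
    scenarios = [
        {"entity": e, "steps": [a["type"] for a in g],
         "risk": sum(a["risk"] for a in g)}
        for e, g in groups.items()
    ]
    return sorted(scenarios, key=lambda s: s["risk"], reverse=True)

top = build_scenarios(ALERTS)[0]
print(top["entity"], top["steps"], top["risk"])
```

Three low-severity alerts on one account outrank a single odd login elsewhere once they are read as one narrative, which is the point of scenario-based prioritization.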

Shifting from static thresholding to context-aware analysis improves detection rates while significantly reducing the manual workload on investigation teams. The ability to interpret and explain risk in a structured way has proven critical in environments where speed and accuracy matter equally.

Beyond detection, generative AI is also being used to build proactive resilience through large-scale fraud simulation. By generating synthetic datasets and modelling complex attack scenarios, such as deepfake-enabled payment fraud and coordinated mule-account networks, organizations can stress-test their defences under conditions that closely approximate real-world threats. 

These simulation environments let security teams identify and fix systemic weaknesses before adversaries exploit them in production, shifting defence from a reactive to an anticipatory posture. Despite this accelerated adoption, the overall fraud landscape continues to deteriorate, underscoring the magnitude of the problem. 
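Synthetic-data stress testing of this kind can be sketched in a few lines. The generator below is a deliberately simplified illustration, not a production simulator: it seeds a stream of ordinary transfers, injects a hypothetical mule-network pattern (many small payments fanning into one account), and then runs a naive fan-in rule against the result to check that the rule would catch it. Account names, amounts, and thresholds are all assumptions.

```python
import random
from collections import Counter

def synth_transactions(n_normal: int = 200, mule: str = "ACCT-MULE", seed: int = 7):
    """Generate synthetic transfers with an injected fan-in mule pattern."""
    rng = random.Random(seed)                  # fixed seed: reproducible runs
    txns = [{"dst": f"ACCT-{rng.randrange(50)}", "amount": rng.uniform(10, 500)}
            for _ in range(n_normal)]
    # Injected pattern: 25 small payments converging on one mule account.
    txns += [{"dst": mule, "amount": rng.uniform(20, 90)} for _ in range(25)]
    rng.shuffle(txns)
    return txns

def fan_in_alerts(txns, min_inbound: int = 15):
    """Flag accounts receiving unusually many inbound payments."""
    inbound = Counter(t["dst"] for t in txns)
    return [acct for acct, n in inbound.items() if n >= min_inbound]

alerts = fan_in_alerts(synth_transactions())
print(alerts)  # the injected mule account should appear among the alerts
```

A real exercise would generate far richer scenarios (timing, amounts structured below reporting thresholds, layered hops), but the workflow is the same: synthesize the attack, then verify the detection logic fires before an adversary tries it in production.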

A significant majority of financial institutions have begun utilizing AI-driven tools actively, with adoption rates rapidly increasing in recent years. Nevertheless, fraud losses, particularly those caused by identity abuse, instant payments, and account takeovers, continue to rise, emphasizing the limitations of legacy controls when faced with adaptive adversaries enabled by artificial intelligence. 

As AI enhances defensive capabilities, it simultaneously increases the sophistication and accessibility of attack methodologies, marking a critical inflection point. Generative AI is not positioned here as a standalone solution but as a vital component of a future security strategy. Its value lies in enabling systems to learn continuously, detect anomalies with greater contextual awareness, and respond at machine speed when necessary. 

As financial ecosystems grow more interconnected and transaction volumes rise, real-time prediction and neutralization of emerging fraud patterns is becoming ever more important. To preserve operational integrity and customer trust, organizations need to make generative AI a core component of fraud defence. 

An increasingly intelligent threat environment makes this a strategic necessity. Managing it requires shifting attention from incremental enhancements to deliberate, architecture-level transformation: institutions are expected to embed adaptive intelligence throughout the fraud lifecycle, pairing advanced analytics with strong governance frameworks, cross-channel visibility, and rapid decision-making processes. 

Human expertise must be paired with machine-driven insights to ensure that automation augments rather than replaces strategic oversight. In order to sustain resilience to increasingly autonomous threats, continuous model validation, adversarial testing, and workforce upskilling will be necessary. Agile, accountable, and real-time responsive organizations will ultimately be in a better position to contain emerging risks in an increasingly AI-mediated financial ecosystem.

Cybersecurity Risks Rise as Modern Vehicles Become Complex Digital Ecosystems

 

Today’s vehicles have evolved into highly interconnected cyber-physical systems, combining mobile apps, backend infrastructure, over-the-air (OTA) update mechanisms, and AI-powered decision-making. This growing integration has significantly expanded the potential attack surface, introducing security risks that traditional IT frameworks were not designed to address. As a result, vulnerabilities are increasingly surfacing across the entire automotive ecosystem.

"Unlike a traditional IT system, like a mail server or your home network, the worst case scenario involves things like safety implications or real-world operational disruptions like closing down a road or being able to cause damage to the environment," said Kamel Ghali, vice president at Car Hacking Village.

With the shift toward software-defined vehicles and reliance on OTA updates, cars are beginning to inherit many of the same security weaknesses seen in conventional IT systems. At the same time, the integration of artificial intelligence introduces new concerns, as these models—now responsible for safety-critical decisions—must be safeguarded against manipulation or external interference, Ghali noted.

During a video interview with Information Security Media Group at the RSAC Conference 2026, Ghali further highlighted several key developments. He explained that the automotive supply chain is increasingly investing in cryptographically secure processors to gain a competitive edge. 

He also pointed out that threat modeling in the automotive sector is expanding beyond traditional IT considerations to address safety, operational continuity, and environmental impact. Additionally, he emphasized that maintaining supply chain integrity will likely emerge as the most significant long-term cybersecurity challenge for the automotive industry.

Ghali brings over seven years of expertise in automotive cybersecurity, specializing in ethical hacking, penetration testing, training, and product security. He is an active contributor to the global cybersecurity community, leads outreach initiatives for the DEF CON Car Hacking Village, and plays a key role in raising awareness about vehicle security risks.

Threat Actors Exploit GitHub as C2 in Multi-Stage Attacks Targeting Organizations in South Korea


GitHub abused by state-sponsored hackers 

Cyber criminals possibly linked to the Democratic People's Republic of Korea (DPRK) have been found using GitHub as command-and-control (C2) infrastructure in multi-stage campaigns targeting organizations in South Korea. 

The operation chain begins with hidden Windows shortcut (LNK) files, which serve as the entry point to deploy a decoy PDF document and a PowerShell script that triggers the next stage. Experts believe these LNK files are circulated through phishing emails.

Payload execution 

Once the payloads are downloaded, the victim is shown the decoy PDF document while the harmful PowerShell script runs covertly in the background. 

The PowerShell script performs checks to avoid analysis, looking for running processes associated with virtual machines, forensic tools, and debuggers. 

Successful exploit scenario 

If successful, it retrieves a Visual Basic Script (VBScript) and establishes persistence through a scheduled task that launches the PowerShell payload every 30 minutes in a hidden window to evade detection. 

This also allows the PowerShell script to run automatically after every system reboot. “Unlike previous attack chains that progressed from LNK-dropped BAT scripts to shellcode, this case confirms the use of newly developed dropper and downloader malware to deliver shellcode and the ROKRAT payload,” S2W reported. 
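Persistence through hidden, frequently repeating scheduled tasks leaves artifacts a defender can triage. The sketch below is a simplified, hypothetical illustration of that triage: it scans rows shaped like the CSV that `schtasks /query /fo csv /v` exports (hard-coded here as sample data, with columns abbreviated) and flags tasks that launch PowerShell with a hidden window on a short repeat interval. The sample task name, path, and field subset are assumptions for the sketch, not indicators from this campaign.

```python
import csv
import io

# Hypothetical, abbreviated sample of `schtasks /query /fo csv /v` output.
SAMPLE = """TaskName,Task To Run,Repeat: Every
\\Microsoft\\Windows\\TimeSync,%windir%\\system32\\sc.exe start w32time,Disabled
\\Updater\\CacheTask,powershell.exe -WindowStyle Hidden -File C:\\Users\\Public\\u.ps1,0 Hour(s) 30 Minute(s)
"""

def suspicious_tasks(csv_text: str) -> list:
    """Flag tasks that run PowerShell hidden on a repeating schedule."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        action = row["Task To Run"].lower()
        repeats = "minute" in row["Repeat: Every"].lower()
        if "powershell" in action and "hidden" in action and repeats:
            flagged.append(row["TaskName"])
    return flagged

print(suspicious_tasks(SAMPLE))
```

Heuristics like this produce false positives on legitimate admin scripts, so in practice they feed a review queue rather than an automatic block; the point is that living-off-the-land persistence still has to register a task, and that registration is visible.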

The PowerShell script then profiles the compromised host, saves the result to a log file, and exfiltrates it to a GitHub repository created under the account “motoralis” using a hard-coded access token. Other GitHub accounts created as part of the campaign include “Pigresy80,” “pandora0009,” “brandonleeodd93-blip,” and “God0808RAMA.”

After this, the script parses a particular file in the same GitHub repository to fetch further instructions or modules, letting the threat actor exploit the trust placed in a platform such as GitHub to maintain persistence over the compromised host. 

Campaign history 

According to Fortinet, LNK files were used in previous iterations of the campaign to propagate malware families such as Xeno RAT. Notably, last year ENKI and Trellix documented the use of GitHub C2 to distribute Xeno RAT and its variant MoonPeak. 

Kimsuky, a North Korean state-sponsored group, was blamed for these attacks. “Instead of depending on complex custom malware, the threat actor uses native Windows tools for deployment, evasion, and persistence. By minimizing the use of dropped PE files and leveraging LolBins, the attacker can target a broad audience with a low detection rate,” said researcher Cara Lin. 

