
Panama and Vietnam Governments Suffer Cyber Attacks, Data Leaked


Hackers stole government data from organizations in Panama and Vietnam in multiple cyber attacks that surfaced recently.

About the incident

According to Vietnam’s state news outlet, the Cyber Emergency Response Team (VNCERT) confirmed reports of a breach targeting the National Credit Information Center (CIC), an organization run by the State Bank of Vietnam that manages credit information for businesses and individuals.

Personal data leaked

Earlier reports suggested that personal information was exposed due to the attack. VNCERT is now investigating and working with various agencies and Viettel, a state-owned telecom. It said, “Initial verification results show signs of cybercrime attacks and intrusions to steal personal data. The amount of illegally acquired data is still being counted and clarified.”

VNCERT has asked citizens not to download or share the stolen data and has threatened legal action against those who do.

Who was behind the attack?

The statement came after threat actors linked to the Shiny Hunters and Scattered Spider cybercriminal groups claimed responsibility for hacking the CIC and stealing around 160 million records.

The threat actors put the stolen data up for sale on cybercriminal platforms, posting a sample that included personal information. DataBreaches.net interviewed the hackers, who said they abused a bug in end-of-life software and had not issued a ransom demand for the stolen information.

CIC told banks that the Shiny Hunters gang was behind the incident, Bloomberg News reported.

The attackers have drawn the attention of law enforcement agencies globally for various high-profile attacks in 2025, including campaigns against large enterprises in the insurance, retail, and airline sectors. 

Panama’s Finance Ministry also hit

The Ministry of Economy and Finance in Panama was also hit by a cyber attack, government officials confirmed. “The Ministry of Economy and Finance (MEF) informs the public that today it detected an incident involving malicious software at one of the offices of the Ministry,” they said in a statement. 

The INC ransomware group claimed responsibility for the incident, saying it stole 1.5 terabytes of data from the ministry, including emails and budget documents.

AdaptixC2 Raises Security Alarms Amid Active Use in Cyber Incidents

 


At a time when digital resilience has become as important as digital innovation, the gap between strengthened defences and the relentless adaptability of cybercriminals is becoming increasingly evident. According to a recent study by Veeam, seven out of ten organisations suffered cyberattacks in the past year despite spending more on security and recovery capabilities. 

The challenge is no longer simply preventing intrusions but ensuring rapid recovery of mission-critical data once an attack has succeeded, a far more complex problem. Against this backdrop, the emergence of AdaptixC2, an open-source post-exploitation and adversary emulation framework, is raising fresh concern. 

With its modular design, support for multiple beacon formats, and advanced tunnelling features, AdaptixC2 is one of the most versatile platforms available for executing commands, transferring files, and exfiltrating data from compromised systems. Analysts have observed its use in attacks ranging from social engineering campaigns conducted over Microsoft Teams to operations involving scripts that appear to have been generated automatically, in some cases in combination with ransomware. 

In light of the ever-evolving threat landscape, the growing prevalence of such customizable frameworks has heightened the pressure on CISOs and IT leaders to build not only stronger defences but also the ability to recover data and keep the business running while under attack. 

In May 2025, researchers from Unit 42 found evidence that AdaptixC2 was being used in active campaigns to infect multiple systems, underscoring its growing relevance as a cyber threat. Originally built as a post-exploitation and adversary emulation framework for penetration testers, it has quietly evolved into a weaponised tool favoured by threat actors for its stealth and adaptability. 

It is noteworthy that, unlike other widely recognised command-and-control frameworks, AdaptixC2 has gone virtually unnoticed, with limited reporting documenting its use in real-world incidents. The framework offers a wide array of capabilities, allowing malicious actors to execute commands, transfer files, and exfiltrate sensitive data at alarming speed. 

Because it is open source, the framework is easy to customise, which makes it both highly versatile and attractive to adversaries. Several recent investigations have documented social engineering campaigns in which Microsoft Teams was used to deliver malicious payloads, and AI-generated scripts are suspected to have been used in some operations. 

The development of such tools reflects a trend of attackers increasingly adopting modular, customizable frameworks to bypass traditional defences. At the same time, AI-powered threats are adding new layers of complexity to the threat landscape: deepfake-based phishing scams, adaptive bot operations that mimic human behaviour, and more. 

Several recent incidents, such as the Hong Kong case, in which scammers used fake video impersonations to swindle US$25 million from their victims, demonstrate how devastating these tactics can be. 

With AI enabling adversaries to imitate voices, behaviours, and even writing styles with uncanny accuracy, security teams face an escalating challenge: keeping up with adversaries who evolve faster, deceive more convincingly, and evade detection more readily. Against this backdrop, AdaptixC2 has matured over the past few years into a formidable open-source command-and-control framework. 

Thanks to its flexible architecture, modular design, and support for multiple beacon agent formats, the framework has become an integral part of the threat actor arsenal for persistence and stealth, despite having been built for penetration testing and adversarial simulation. 

The framework’s extensible design lets operators customise modules, integrate AI-generated scripts, and deploy sophisticated tunnelling mechanisms across a wide range of communication channels, including HTTP, DNS, and custom protocols. 

By virtue of this adaptability, AdaptixC2 is a versatile post-exploitation toolkit, allowing operators to execute commands, transfer files, and exfiltrate encrypted data with minimal detection. In their investigations, researchers have identified its deployment methods: social engineering campaigns that used Microsoft Teams as a delivery vector, and payload droppers likely crafted with AI-assisted scripting. 

Those attackers established resilient tunnels, maintained long-term persistence, and carefully orchestrated the exfiltration of sensitive data. AdaptixC2 has also been combined with ransomware campaigns, enabling adversaries to harvest credentials, map networks, and exfiltrate critical data before unleashing disruptive encryption payloads for financial gain. 

In addition, open-source C2 frameworks are becoming increasingly integrated into multi-phase attacks that blur the line between reconnaissance, lateral movement, and destructive activity, highlighting a broader shift in the threat landscape. This growing threat makes it clear that defenders need layered detection strategies to monitor anomalous beacons, suspicious web traffic, and unauthorised script execution, along with raising user awareness of social engineering within collaboration platforms. 

The more closely AdaptixC2 is analysed, the more evident it becomes how comprehensive and dangerous its capabilities are when deployed in real-world environments. Although it was initially designed as a red-teaming tool, the framework provides comprehensive control over compromised machines and is increasingly exploited by malicious actors. 

Operators have a range of capabilities at their disposal, including manipulating the file system, creating or deleting files, enumerating processes, terminating applications, and launching new programs, all of which extend their reach. These actions are supported by advanced tunnelling features, such as SOCKS4/5 proxying and port forwarding, which let attackers maintain covert communication channels even within highly secured networks. 

Its modular architecture, built upon "extenders" that function as plugins, allows adversaries to craft custom payloads and evasion techniques. Beacon Object Files (BOFs) further enhance an agent's stealth by executing small C programs directly within the agent's process. Within this framework, beacon agents can be generated in multiple formats, including executables, DLLs, service binaries, or raw shellcode, on both x86 and x64 architectures.

These agents can perform discreet data exfiltration through specialised commands, even splitting file transfers into small chunks to avoid triggering network-based detection tools. AdaptixC2 also has operational security features built in, enabling attackers to blend into normal traffic flows without being noticed. 
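That chunked-transfer behaviour suggests one defensive heuristic: look for hosts sending many small, similarly sized uploads to a single destination in a short window. The sketch below is illustrative only; the flow record format, field names, and thresholds are assumptions, not published AdaptixC2 indicators.

```python
from collections import defaultdict

def flag_chunked_exfil(flows, min_chunks=20, max_bytes=64_000, size_jitter=0.15):
    """Heuristic: many small, similarly sized outbound transfers to one destination.

    `flows` is assumed to be an iterable of dicts such as
    {"src": "10.0.0.5", "dst": "203.0.113.7", "bytes_out": 48_211}
    taken from NetFlow/Zeek-style telemetry; adapt field names as needed.
    """
    by_pair = defaultdict(list)
    for f in flows:
        if 0 < f["bytes_out"] <= max_bytes:
            by_pair[(f["src"], f["dst"])].append(f["bytes_out"])

    suspects = []
    for (src, dst), sizes in by_pair.items():
        if len(sizes) < min_chunks:
            continue
        mean = sum(sizes) / len(sizes)
        # Uniform chunk sizes are the tell-tale; allow some jitter.
        if all(abs(s - mean) <= size_jitter * mean for s in sizes):
            suspects.append({"src": src, "dst": dst, "chunks": len(sizes)})
    return suspects
```

Thresholds need tuning against a baseline of normal traffic, since legitimate sync and backup clients can produce similar patterns.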

Parameters such as "KillDate" and "WorkingTime" can be configured to keep beacons dormant outside defined windows, helping them evade off-hours monitoring. Beacons come in three primary variants, HTTP, SMB, and TCP, each tailored to a different communication path and protocol. 

The HTTP variant hides traffic behind familiar web parameters such as headers, URIs, and user-agent strings; the SMB variant leverages Windows named pipes; and the TCP variant disguises connections with lightweight obfuscation. 

A study published in the Journal of Computer Security has highlighted that, despite the RC4 encryption of the beacon configuration, its predictable structure enables defenders to build tools that automatically triage malicious samples, retrieve server details, and display communication profiles. 
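As a rough illustration of that defender workflow, the sketch below decrypts an RC4-protected configuration blob and pulls out a few fields. RC4 itself is standard, but the blob layout assumed here (a 16-byte key followed by length-prefixed UTF-8 strings) is purely hypothetical; a real extractor would have to follow the actual sample format.

```python
import struct

def rc4(key: bytes, data: bytes) -> bytes:
    """Standard RC4: key scheduling (KSA) followed by keystream generation (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def extract_config(blob: bytes) -> list[str]:
    """Hypothetical layout: 16-byte RC4 key || ciphertext of length-prefixed
    strings (4-byte little-endian length, then UTF-8 payload)."""
    key, ciphertext = blob[:16], blob[16:]
    plain = rc4(key, ciphertext)
    fields, offset = [], 0
    while offset + 4 <= len(plain):
        (length,) = struct.unpack_from("<I", plain, offset)
        offset += 4
        fields.append(plain[offset:offset + length].decode("utf-8", "replace"))
        offset += length
    return fields  # e.g. C2 host, URI, user-agent, working hours
```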

With its modularity, covert tunnelling, and built-in operational security, AdaptixC2 marks a significant step in the evolution of open-source C2 frameworks and poses a persistent detection and response challenge for defenders. As it becomes more popular, both its adaptability and the risks it poses to enterprises grow. 

A modular design, combined with the increasing use of artificial intelligence-assisted code generation, makes it possible for adversaries to improve their techniques at a rapid rate, making detection and containment more challenging for defenders. 

Researchers warn that the framework’s flexibility has made it a preferred choice for sophisticated campaigns, where rapid customisation can turn even routine intrusions into long-term, persistent threats. Security providers are responding by investing in advanced detection and prevention mechanisms. 

Palo Alto Networks, for instance, has upgraded its security portfolio in order to effectively address AdaptixC2-related threats by utilising multiple layers of defences. A new version of Advanced URL Filtering and Advanced DNS Security has been added, which finds and blocks domains and URLs linked to malicious activity. Advanced Threat Prevention has also been updated to include machine learning models that detect exploits in real time. 

As part of the company’s WildFire analysis platform, new artificial intelligence-driven models have been developed to identify emerging indicators better, and its Cortex XDR and XSIAM solutions offer a multilayered malware prevention system that prevents both known and previously unknown threats across all endpoints. 

A proactive defence strategy such as this highlights the importance not only of tracking AdaptixC2’s development but also of continuously updating mitigation strategies to stay ahead of adversaries who increasingly rely on customised frameworks to outpace traditional security controls in an ever-changing threat landscape. 

It is, in my opinion, clear that the emergence of AdaptixC2 underscores the fact that cyber defence is no longer solely about building barriers, but rather about fostering resilience in the face of adversaries who are growing more sophisticated, quicker, and more resourceful each day. Increasingly, organisations need to integrate adaptability into every layer of their security posture rather than relying on static strategies. 

The key to achieving this is not simply deploying advanced technology; it involves cultivating a culture of vigilance, where employees recognise emerging social engineering tactics and IT teams proactively seek out potential threats before they escalate. The balance can be shifted in defenders’ favour by investing in zero-trust frameworks, enhanced threat intelligence, and automated response mechanisms. 

The importance of industry-wide collaboration cannot be overstated: information sharing and coordinated efforts make it much harder for tools like AdaptixC2 to remain hidden from view. As threat actors increasingly leverage artificial intelligence and customizable frameworks to refine their attacks, defenders must likewise use AI-based analytics and automation to detect anomalies and respond to them swiftly. 

With the stakes this high, those who treat adaptability as a continuous discipline rather than a one-off exercise will be best prepared to safeguard their mission-critical assets and ensure operational continuity despite the relentless cyber threats they face.

AI Image Attacks: How Hidden Commands Threaten Chatbots and Data Security

 



As artificial intelligence becomes part of daily workflows, attackers are exploring new ways to exploit its weaknesses. Recent research has revealed a method where seemingly harmless images uploaded to AI systems can conceal hidden instructions, tricking chatbots into performing actions without the user’s awareness.


How hidden commands emerge

The risk lies in how AI platforms process images. To reduce computing costs, most systems shrink images before analysis, a step known as downscaling. During this resizing, certain pixel patterns, deliberately placed by an attacker, can form shapes or text that the model interprets as user input. While the original image looks ordinary to the human eye, the downscaled version quietly delivers instructions to the system.

This technique is not entirely new. Academic studies as early as 2020 suggested that scaling algorithms such as bicubic or bilinear resampling could be manipulated to reveal invisible content. What is new is the demonstration of this tactic against modern AI interfaces, proving that such attacks are practical rather than theoretical.
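A simple way to reason about this is to look at what the model actually receives after resampling. The sketch below, using Pillow, recreates a plausible downscaled view of an uploaded image so a reviewer can compare it with the original; the 256x256 target size and bicubic filter are assumptions, since real platforms use their own preprocessing parameters.

```python
from PIL import Image

def preview_model_view(path: str, target=(256, 256)) -> Image.Image:
    """Recreate an approximate low-resolution view an AI pipeline might analyse.

    The target size and resampling filter are guesses; match them to the
    platform under test if its preprocessing is documented.
    """
    original = Image.open(path).convert("RGB")
    downscaled = original.resize(target, Image.Resampling.BICUBIC)
    downscaled.save("downscaled_preview.png")  # inspect this for hidden text
    return downscaled
```

If text or shapes appear in the preview that are invisible in the full-size image, the file deserves closer scrutiny.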


Why this matters

Multimodal systems, which handle both text and images, are increasingly connected to calendars, messaging apps, and workplace tools. A hidden prompt inside an uploaded image could, in theory, request access to private information or trigger actions without explicit permission. One test case showed that calendar data could be forwarded externally, illustrating the potential for identity theft or information leaks.

The real concern is scale. As organizations integrate AI assistants into daily operations, even one overlooked vulnerability could compromise sensitive communications or financial data. Because the manipulation happens inside the preprocessing stage, traditional defenses such as firewalls or antivirus tools are unlikely to detect it.


Building safer AI systems

Defending against this form of “prompt injection” requires layered strategies. For users, simple precautions include checking how an image looks after resizing and confirming any unusual system requests. For developers, stronger measures are necessary: restricting image dimensions, sanitizing inputs before models interpret them, requiring explicit confirmation for sensitive actions, and testing models against adversarial image samples.
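For the developer-side measures described above, a minimal input-sanitization step might look like the sketch below: cap accepted dimensions, re-encode the image through one known pipeline so pixel patterns tuned to a specific downscaler are disturbed, and gate sensitive actions behind explicit confirmation. The size limit, filter choice, and confirmation hook are illustrative assumptions, not a complete defence.

```python
from PIL import Image

MAX_SIDE = 2048  # assumed upper bound on accepted image dimensions

def sanitize_upload(in_path: str, out_path: str) -> str:
    """Normalise an uploaded image before it reaches the model."""
    img = Image.open(in_path).convert("RGB")
    if max(img.size) > MAX_SIDE:
        # Resize with a single, known filter rather than whatever the
        # attacker may have optimised against.
        img.thumbnail((MAX_SIDE, MAX_SIDE), Image.Resampling.LANCZOS)
    img.save(out_path, format="PNG")  # re-encoding also drops metadata
    return out_path

def confirm_sensitive_action(action: str) -> bool:
    """Placeholder confirmation gate for actions triggered by image input."""
    answer = input(f"The assistant wants to: {action!r}. Allow? [y/N] ")
    return answer.strip().lower() == "y"
```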

Researchers stress that piecemeal fixes will not be enough. Only systematic design changes such as enforcing secure defaults and monitoring for hidden instructions can meaningfully reduce the risks.

Images are no longer guaranteed to be safe when processed by AI systems. As attackers learn to hide commands where only machines can read them, users and developers alike must treat every upload with caution. By prioritizing proactive defenses, the industry can limit these threats before they escalate into real-world breaches.



Google DeepMind’s Jeff Dean Says AI Models Already Outperform Humans in Most Tasks

 

With artificial intelligence evolving rapidly, the biggest debate in the AI community is whether advanced models will soon outperform humans in most tasks—or even reach Artificial General Intelligence (AGI). 

Google DeepMind’s Chief Scientist Jeff Dean, while avoiding the term AGI, shared that today’s AI systems may already be surpassing humans in many everyday activities, though with some limitations.

Speaking on the Moonshot Podcast, Dean remarked that current models are "better than the average person at most tasks" that don’t involve physical actions.

"Most people are not that good at a random task if you ask them to do that they've never done before, and you know some of the models we have today are actually pretty reasonable at most things," he explained.

However, Dean also cautioned that these systems are far from flawless. "You know, they will fail at a lot of things; they're not human expert level in some things, so that's a very different definition and being better than the world expert at every single task," he said.

When asked about AI’s ability to make breakthroughs faster than humans, Dean responded: "We're actually probably already you know close to that in some domains, and I think we're going to broaden out that set of domains." He emphasized that automation will play a crucial role in accelerating "scientific progress, engineering progress," and advancing human capabilities over the next "five, 10, 15, 20 years."

From Vulnerability Management to Preemptive Exposure Management

 

The traditional model of vulnerability management—“scan, wait, patch”—was built for an earlier era, but today’s attackers operate at machine speed, exploiting weaknesses within hours of disclosure through automation and AI-driven reconnaissance. The challenge is no longer about identifying vulnerabilities but fixing them quickly enough to stay ahead. While organizations discover thousands of exposures every month, only a fraction are remediated before adversaries take advantage.

Roi Cohen, co-founder and CEO of Vicarius, describes the answer as “preemptive exposure management,” a strategy that anticipates and neutralizes threats before they can be weaponized. “Preemptive exposure management shifts the model entirely,” he explains. “It means anticipating and neutralizing threats before they’re weaponized, not waiting for a CVE to be exploited before taking action.” This proactive model requires continuous visibility of assets, contextual scoring to highlight the most critical risks, and automation that compresses remediation timelines from weeks to minutes.

Michelle Abraham, research director for security and trust at IDC, notes the urgency of this shift. “Proactive security seems to have taken a back seat to reactive security at many organizations. IDC research highlights that few organizations track all their IT assets which is the critical first step towards visibility of the full digital estate. Once assets and exposures are identified, security teams are often overwhelmed by the volume of findings, underscoring the need for risk-based prioritization,” she says. Traditional severity scores such as CVSS do not account for real-world exploitability or the value of affected systems, which means organizations often miss what matters most. Cohen stresses that blending exploit intelligence, asset criticality, and business impact is essential to distinguish noise from genuine risk.
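A contextual score of the kind Cohen and Abraham describe can be as simple as weighting a base severity by exploit intelligence, exposure, and asset value. The sketch below is a toy illustration with made-up weights, not any vendor's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float          # 0.0 - 10.0 severity from the scanner
    exploited_in_wild: bool   # e.g. present in CISA's KEV catalogue
    internet_facing: bool
    asset_criticality: int    # 1 (lab box) .. 5 (revenue-critical system)

def contextual_score(f: Finding) -> float:
    """Blend severity, exploit intelligence, exposure, and asset value."""
    score = f.cvss_base
    if f.exploited_in_wild:
        score *= 1.5          # active exploitation outweighs raw severity
    if f.internet_facing:
        score *= 1.2
    score *= f.asset_criticality / 3.0
    return round(score, 2)

findings = [
    Finding("CVE-2025-0001", 9.8, False, False, 1),
    Finding("CVE-2025-0002", 7.5, True, True, 5),
]
for f in sorted(findings, key=contextual_score, reverse=True):
    print(f.cve_id, contextual_score(f))
```

In this toy ranking, the actively exploited, internet-facing flaw on a critical asset outranks the nominally "critical" CVSS score on a lab machine, which is the point of context-driven prioritization.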

Abraham further points out that less than half of organizations currently use exposure prioritization algorithms, and siloed operations between security and IT create costly delays. “By integrating visibility, prioritization and remediation, organizations can streamline processes, reduce patching delays and fortify their defenses against evolving threats,” she explains.

Artificial intelligence adds another layer of complexity. Attackers are already using AI to scale phishing campaigns, evolve malware, and rapidly identify weaknesses, but defenders can also leverage AI to automate detection, intelligently prioritize threats, and generate remediation playbooks in real time. Cohen highlights its importance: “In a threat landscape that moves faster than any analyst can, remediation has to be autonomous, contextual and immediate and that’s what preemptive strategy delivers.”

Not everyone, however, is convinced. Richard Stiennon, chief research analyst at IT-Harvest, takes a more skeptical stance: “Most organizations have mature vulnerability management programs that have identified problems in critical systems that are years old. There is always some reason not to patch or otherwise fix a vulnerability. Sprinkling AI pixie dust on the problem will not make it go away. Even the best AI vulnerability discovery and remediation solution cannot overcome corporate lethargy.” His concerns highlight that culture and organizational behavior remain as critical as the technology itself.

Even with automation, trust issues persist. A single poorly executed patch can disrupt mission-critical operations, leading experts to recommend gradual adoption. Much like onboarding a new team member, automation should begin with low-risk actions, operate with guardrails, and build confidence over time as results prove consistent and reliable. Lawrence Pingree of Dispersive emphasizes prevention: “We have to be more preemptive in all activities, this even means the way that vendors build their backend signatures and systems to deliver prevention. Detection and response is failing us and we're being shot behind the line.”

Regulatory expectations are also evolving. Frameworks such as NIST CSF 2.0 and ISO 27001 increasingly measure how quickly vulnerabilities are remediated, not just whether they are logged. Compliance is becoming less about checklists and more about demonstrating speed and effectiveness with evidence to support it.
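Demonstrating remediation speed, as frameworks like NIST CSF 2.0 increasingly expect, mostly comes down to tracking time-to-remediate per finding. A minimal sketch, assuming each finding record carries detection and closure timestamps:

```python
from datetime import datetime
from statistics import mean

findings = [
    {"id": "F-101", "detected": datetime(2025, 6, 1), "closed": datetime(2025, 6, 4)},
    {"id": "F-102", "detected": datetime(2025, 6, 2), "closed": datetime(2025, 6, 30)},
    {"id": "F-103", "detected": datetime(2025, 6, 5), "closed": None},  # still open
]

def mttr_days(records):
    """Mean time to remediate, in days, over closed findings."""
    durations = [
        (r["closed"] - r["detected"]).days for r in records if r["closed"] is not None
    ]
    return mean(durations) if durations else None

def sla_compliance(records, sla_days=14):
    """Share of closed findings remediated within the SLA window."""
    closed = [r for r in records if r["closed"] is not None]
    if not closed:
        return 0.0
    on_time = sum((r["closed"] - r["detected"]).days <= sla_days for r in closed)
    return on_time / len(closed)

print(mttr_days(findings), sla_compliance(findings))
```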

Experts broadly agree on what needs to change: unify detection, prioritization, and remediation workflows; automate obvious fixes while maintaining safeguards; prioritize vulnerabilities based on exploitability, asset value, and business impact; and apply runtime protections to reduce exposure during patching delays. Cohen sums it up directly: security teams don’t need to find more vulnerabilities—they need to shorten the gap between detection and mitigation. With attackers accelerating at machine speed, the only sustainable path forward is a preemptive strategy that blends automation, context, and human judgment.

Misuse of AI Agents Sparks Alarm Over Vibe Hacking


 

Once considered a means of safeguarding digital battlefields, artificial intelligence has become a double-edged sword, a tool that arms not only defenders but also the adversaries it was supposed to deter. Anthropic's latest Threat Intelligence Report, covering August 2025, paints this evolving reality in a starkly harsh light. 

It illustrates how cybercriminals are adopting AI not merely as a support tool for their attacks but as a central instrument of attack orchestration. According to the report, malicious actors now use advanced artificial intelligence to automate phishing campaigns at scale, circumvent traditional security measures, and obtain sensitive information efficiently, with very little human oversight needed. AI's precision and scalability are escalating the threat landscape in troubling ways. 

By leveraging AI's accuracy and scalability, modern cyberattacks are accelerating in speed, reach, and sophistication. Anthropic documents a disturbing evolution of cybercrime: artificial intelligence is no longer used only for small tasks such as composing phishing emails or generating malicious code fragments, but serves as a force multiplier for lone actors, giving them the capacity to carry out operations at a scale and precision once reserved for organised criminal syndicates. 

In one instance, investigators traced a sweeping extortion campaign back to a single perpetrator who used Claude Code's execution environment to automate key stages of intrusion, including reconnaissance, credential theft, and network penetration. The individual compromised at least 17 organisations, ranging from government agencies to hospitals and financial institutions, with ransom demands that in some cases exceeded half a million dollars. 

Researchers have described this technique as "vibe hacking," in which coding agents serve not just as tools but as active participants in attacks, marking a profound shift in the speed and reach of cybercriminal activity. Many researchers see vibe hacking as a major evolution in cyberattacks: instead of exploiting conventional network vulnerabilities, it targets the logic and decision-making processes of artificial intelligence systems themselves. 

The term traces back to "vibe coding," an approach to AI-generated problem-solving popularised by Andrej Karpathy in 2025. Since then, the concept has been co-opted by cybercriminals to manipulate advanced language models and chatbots for unauthorised access, disruption of operations, or the generation of malicious outputs. 

Unlike traditional hacking, which breaches technical defences, this method exploits the trust and reasoning capabilities of machine learning systems themselves, making detection especially challenging. The tactic is also reshaping social engineering: using large language models that simulate human conversation with uncanny realism, attackers can craft convincing phishing emails, mimic human speech, build fraudulent websites, clone voices, and automate entire scam campaigns at an unprecedented scale. 

With tools such as AI-driven vulnerability scanners and deepfake platforms, the threat is amplified even further, creating a new frontier of automated deception, according to experts. In one notable variant known as "vibe scamming," adversaries launch large-scale fraud operations in which they generate fake portals, manage stolen credentials, and coordinate follow-up communications, all from a single dashboard. 

Vibe hacking is one of the most challenging cybersecurity problems defenders face right now because it combines automation, realism, and speed. Attackers are no longer relying on conventional ransomware tactics alone; they are using artificial intelligence systems like Claude to carry out all aspects of an intrusion, from reconnaissance and credential harvesting to network penetration and data extraction.

A significant difference from earlier AI-assisted attacks was that Claude demonstrated hands-on-keyboard capability as well, performing tasks such as scanning VPN endpoints, generating custom malware, and analysing stolen datasets to prioritise the victims with the highest payout potential. Once embedded, it created tailored HTML ransom notes that referenced each organisation's finances, workforce statistics, and regulatory exposure, all based on the data that had been collected. 

The payments demanded ranged from $75,000 to $500,000 in Bitcoin, illustrating that, with the assistance of artificial intelligence, a single individual can run what once required an entire cybercrime network. The report also emphasises how intertwined artificial intelligence and cryptocurrency have become: ransom notes include wallet addresses, and dark web forums sell AI-generated malware kits exclusively for cryptocurrency. 

An FBI investigation has revealed that North Korea is increasingly using artificial intelligence (AI) to evade sanctions: state-backed IT operatives use it to fabricate résumés, pass interviews, debug software, and manage day-to-day tasks in order to secure fraudulent positions at Western tech companies. 

According to U.S. officials, these operations channel hundreds of millions of dollars every year into Pyongyang's technical weapons program, replacing years of training with on-demand artificial intelligence assistance. These revelations point to a troubling shift: artificial intelligence is not only enabling cybercrime but also amplifying its speed, scale, and global reach. Anthropic's report documents how Claude Code has been used not just for breaching systems but for monetising stolen information at scale. 

The tool was used to sift through thousands of records containing sensitive identifiers, financial information, and even medical data, and then to generate customised ransom notes and multilayered extortion strategies based on each victim's profile. As the company pointed out, so-called "agentic AI" tools now provide attackers with both technical expertise and hands-on operational support, effectively eliminating the need to coordinate teams of human operators. 

Researchers warn that these systems can adapt dynamically to defensive countermeasures, such as malware detection, in real time, making traditional enforcement efforts increasingly difficult. Anthropic has developed a classifier to identify this behaviour, and a series of case studies illustrates the breadth of the abuse. 

In the North Korean case, Claude was used to fabricate résumés and support fraudulent IT worker schemes. In the U.K., a criminal known as GTG-5004 was selling AI-based ransomware variants on darknet forums; Chinese actors used artificial intelligence to compromise Vietnamese critical infrastructure; and Russian- and Spanish-speaking groups used the software to create malware and steal credit card information. 

Even low-skilled actors have begun integrating AI into Telegram bots marketed for romance scams and false identity services, making sophisticated fraud campaigns far more accessible. Anthropic researchers Alex Moix, Ken Lebedev, and Jacob Klein argue that artificial intelligence is steadily lowering the barriers to entry for cybercriminals, enabling fraudsters to build victim profiles, automate identity theft, and orchestrate operations at a speed and scale unimaginable with traditional methods. 

Anthropic’s report highlights a disturbing truth: although artificial intelligence was once hailed as a shield for defenders, it is now increasingly being used as a weapon, putting digital security at risk. The answer is not to retreat from AI adoption but to develop defensive strategies that keep pace with it. Proactive guardrails must be put in place to prevent misuse, including stricter oversight and transparency from developers, as well as continuous monitoring and real-time detection systems that recognise abnormal AI behaviour before it escalates into a serious problem. 

A company's resilience should go beyond technical defences, which means investing in employee training, incident response readiness, and partnerships that enable data sharing across sectors. Governments, too, are under mounting pressure to update regulatory frameworks so that policy keeps pace with evolving threat actors.

Harnessed responsibly, artificial intelligence can still be a powerful ally, automating defensive operations, detecting anomalies, and even predicting threats before they become visible. The goal must be to ensure it develops in a way that favours protection over exploitation, safeguarding not just individual enterprises but the broader trust people place in the digital world's future. 


New Gmail Phishing Attack Exploits Login Flow to Steal Credentials

 


In today's technologically advanced society, where convenience and connectivity are the norm, cyber threats continue to evolve at an alarming rate. Recent reports indicate that phishing attacks and online scams targeting U.S. consumers are on the rise, with malicious actors increasingly going after login credentials to steal personal and financial information. Those concerns are echoed by the Federal Bureau of Investigation (FBI), which reported that online scams accounted for a staggering $16.6 billion in losses last year, a jump of 33 per cent compared with the year prior.

Surveys underscore the scale of the problem: more than 60 per cent of Americans feel scam attempts are increasing, and nearly one in three report having experienced a data breach. Taken together, these figures make clear how important it is to fortify digital defences against an ever-expanding threat landscape. 

Phishing itself is not new, but its evolution over the past few decades has been dramatic. Such scams were once easy to spot thanks to clumsy emails full of spelling errors and awkward greetings like "Dear User." Today's attacks are far more sophisticated: the latest Gmail phishing campaign mimics Google's legitimate login process with alarming accuracy, deceiving even tech-savvy users. 

Security researchers have documented thousands of compromised Gmail accounts, with stolen credentials opening the door to a broad range of other services, including banking, retail, and social networking sites. A breach like this has been compared to an intruder entering one's digital home with the rightful owner's key. 

A breach of this kind extends well beyond inconvenience and can cause long-lasting financial and personal damage. Investigations show that the campaign relies on deception and abuse of trusted infrastructure. Fraudulent "New Voice Notification" emails, carrying spoofed sender information, lure victims into clicking to listen to a supposed voicemail. The attack is launched through a legitimate Microsoft Dynamics marketing platform, which lends it instant credibility and allows it to bypass many standard security controls. 

Victims are then taken to a CAPTCHA page on horkyrown[.]com, a domain traced to Pakistan, which adds a veneer of legitimacy before redirecting them to a counterfeit login page that looks exactly like Gmail's. Credentials are exfiltrated in real time, allowing the account to be taken over almost immediately. Adding further complexity is the advent of artificial intelligence in phishing operations. 

Using advanced language models, cybercriminals now craft flawless emails, mimic writing styles, and even place convincing voice calls impersonating trusted figures. According to security companies, AI-driven phishing attempts are just as effective as human-crafted ones, if not more so, with success rates rising 55 per cent between 2023 and 2025. 

Techniques such as metadata spoofing and "Open Graph spoofing" let attackers further disguise malicious links, making them almost indistinguishable from safe ones. This new wave of phishing, increasingly personalised, multimodal, and distributed at unprecedented scale, is becoming ever harder to detect. 

The FBI and the Cybersecurity and Infrastructure Security Agency (CISA) have already issued warnings about AI-enhanced phishing campaigns targeting Gmail accounts. In one case, Ethereum developer Nick Johnson described receiving a fraudulent "subpoena" email that passed Gmail's authentication checks and looked just like a legitimate security alert. In similar attacks, phone calls and email have been used to harvest recovery codes, enabling full account takeover. 

Analysts also found that attackers stole session cookies, enabling them to skip the login process entirely. Although Google's filters now block nearly 10 million malicious emails per minute, experts warn that attackers are adapting faster, making stronger authentication measures and user vigilance essential. 

Technical analysis of the attack found that Russian servers (purpxqha[.]ru) were used to redirect traffic and perform cross-site requests, while the primary domain infrastructure was registered in Karachi, Pakistan. 

The malicious system bypasses multiple layers of Gmail security, allowing hackers to collect not only email address and password combinations but also two-factor authentication codes, Google Authenticator tokens, backup recovery keys, and even answers to security questions, giving attackers full control of victims' accounts before the victims realise they have been compromised. Security experts recommend that organisations block the identified domains, strengthen monitoring, and educate users about these evolving attack vectors. The Gmail phishing wave reflects a broader reality: cybersecurity is no longer a passive discipline but a continuous one that must adapt at the speed of innovation. 
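As a starting point for that domain-blocking recommendation, a basic control is to screen message text against the domains identified in this campaign. The sketch below uses a small blocklist; the defanged domains from the article are re-fanged here for matching, and the extraction regex is deliberately simple.

```python
import re

# Domains named in this report (defanged in the article as horkyrown[.]com
# and purpxqha[.]ru); extend the set from your own threat intelligence feeds.
BLOCKED_DOMAINS = {"horkyrown.com", "purpxqha.ru"}

URL_RE = re.compile(r"https?://([^/\s\"'<>]+)", re.IGNORECASE)

def extract_domains(text: str) -> set:
    """Pull host names out of any URLs found in a message body."""
    return {m.group(1).split(":")[0].lower() for m in URL_RE.finditer(text)}

def flag_message(body: str) -> set:
    """Return the blocked domains referenced by the message, if any."""
    return extract_domains(body) & BLOCKED_DOMAINS

sample = "New voice notification: listen at https://horkyrown.com/captcha?id=42"
print(flag_message(sample))  # {'horkyrown.com'}
```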

Cultivating digital scepticism is a priority for individuals: they should question every unexpected email, voicemail, or login request, and reinforce their accounts with two-factor authentication or hardware security keys. Companies' responsibilities extend further, investing in employee awareness training, conducting mock phishing exercises, and deploying adaptive tools capable of detecting subtle changes in behaviour. 

Collaboration between industry leaders, governments, and security researchers will be crucial to dismantling the criminal infrastructure that exploits global trust. In an environment where deception grows more sophisticated by the day, vigilance has become more than a precaution; it is a form of empowerment that allows individuals and businesses alike to protect their digital identities from increasingly sophisticated threats.

AI and Quantum Computing: The Next Cybersecurity Frontier Demands Urgent Workforce Upskilling

 

Artificial intelligence (AI) has firmly taken center stage in today’s enterprise landscape. From the rapid integration of AI into company products, the rising demand for AI skills in job postings, and the increasing presence of AI in industry conferences, it’s clear that businesses are paying attention.

However, awareness alone isn’t enough. For AI to be implemented responsibly and securely, organizations must invest in robust training and skill development. This is becoming even more urgent with another technological disruptor on the horizon—quantum computing. Quantum advancements are expected to supercharge AI capabilities, but they will also amplify security risks.

As AI evolves, so do cyber threats. Deepfake scams and AI-powered phishing attacks are becoming more sophisticated. According to ISACA’s 2025 AI Pulse Poll, “two in three respondents expect deepfake cyberthreats to become more prevalent and sophisticated within the next year,” while 59% believe AI phishing is harder to detect. Generative AI adds another layer of complexity—McKinsey reports that only “27% of respondents whose organizations use gen AI say that employees review all content created by gen AI before it is used,” highlighting critical gaps in oversight.

Quantum computing raises its own red flags. ISACA’s Quantum Pulse Poll shows 56% of professionals are concerned about “harvest now, decrypt later” attacks. Meanwhile, 73% of U.S. respondents in a KPMG survey believe it’s “a matter of time” before cybercriminals exploit quantum computing to break modern encryption.

Despite these looming challenges, prioritization is alarmingly low. In ISACA’s AI Pulse Poll, just 42% of respondents said AI risks were a business priority, and in the quantum space, only 5% saw it becoming a top priority soon. This lack of urgency is risky, especially since no one knows exactly when “Q Day”—the moment quantum computers can break current encryption—will arrive.

Addressing AI and quantum risks begins with building a highly skilled workforce. Without the right expertise, AI deployments risk being ineffective or eroding trust through security and privacy failures. In the quantum domain, the stakes are even higher—quantum machines could render today’s public key cryptography obsolete, requiring a rapid transition to post-quantum cryptographic (PQC) standards.

While the shift sounds simple, the reality is complex. Digital infrastructures deeply depend on current encryption, meaning organizations must start identifying dependencies, coordinating with vendors, and planning migrations now. The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has already released PQC standards, and cybersecurity leaders need to ensure teams are trained to adopt them.
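Identifying dependencies is the concrete first step of that migration. A rough way to start is simply to inventory where classical public-key primitives appear in code and configuration, as sketched below; the file extensions and patterns are illustrative, and a real crypto inventory would also cover certificates, TLS settings, and vendor products.

```python
import pathlib
import re

# Naive signatures of quantum-vulnerable primitives to flag for review.
CLASSICAL_PATTERNS = {
    "RSA": re.compile(r"\bRSA\b"),
    "ECDSA/ECDH": re.compile(r"\bEC(?:DSA|DH)\b"),
    "DSA": re.compile(r"\bDSA\b"),
    "Diffie-Hellman": re.compile(r"Diffie[- ]?Hellman", re.IGNORECASE),
}
SOURCE_SUFFIXES = {".py", ".java", ".go", ".c", ".cpp", ".ts", ".yaml", ".yml", ".conf"}

def crypto_inventory(root: str):
    """Yield (file, primitive) pairs wherever a classical primitive is mentioned."""
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix not in SOURCE_SUFFIXES or not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in CLASSICAL_PATTERNS.items():
            if pattern.search(text):
                yield str(path), name

for hit in crypto_inventory("."):
    print(hit)
```

The output is only a list of leads to investigate, but it gives security teams a starting map of where PQC replacements will eventually be needed.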

Fortunately, the resources to address these challenges are growing. AI-specific training programs, certifications, and skill pathways are available for individuals and teams, with specialized credentials for integrating AI into cybersecurity, privacy, and IT auditing. Similarly, quantum security education is becoming more accessible, enabling teams to prepare for emerging threats.

Building training programs that explore how AI and quantum intersect—and how to manage their combined risks—will be crucial. These capabilities could allow organizations to not only defend against evolving threats but also harness AI and quantum computing for advanced attack detection, real-time vulnerability assessments, and innovative solutions.

The cyber threat landscape is not static—it’s accelerating. As AI and quantum computing redefine both opportunities and risks, organizations must treat workforce upskilling as a strategic investment. Those that act now will be best positioned to innovate securely, protect stakeholder trust, and stay ahead in a rapidly evolving digital era.

DevilsTongue Spyware Attacking Windows System, Linked to Saudi Arabia, Hungary


Cybersecurity experts have discovered a new infrastructure suspected to be used by spyware company Candiru to target computers via Windows malware.

DevilsTongue spyware targets Windows systems

The research by Recorded Future’s Insikt Group disclosed eight different operational clusters associated with the spyware, which is known as DevilsTongue. Five are highly active, including clusters linked to Hungary and Saudi Arabia. 

About Candiru’s spyware

According to the report, the “infrastructure includes both victim-facing components likely used in the deployment and [command and control] of Candiru’s DevilsTongue spyware, and higher-tier infrastructure used by the spyware operators.” While a few clusters manage their victim-facing infrastructure directly, others route it through intermediary infrastructure layers or the Tor network, giving the threat actors access to the dark web.

Additionally, experts discovered another cluster linked to Indonesia that seemed to be active until November 2024. Experts couldn’t assess whether the two extra clusters linked with Azerbaijan are still active.

Mode of operation

Mercenary spyware such as DevilsTongue is infamous worldwide, known for use in serious crime and counterterrorism operations. However, according to Recorded Future, it also poses various legal, privacy, and safety risks to targets, their organizations, and even those who report on it.

Microsoft tracks the spyware as DevilsTongue. There is little public reporting on its deployment techniques, but leaked materials suggest it can be delivered via malicious links, man-in-the-middle attacks, physical access to a Windows device, and weaponized files. DevilsTongue has been installed both via threat actor-controlled URLs sent in spearphishing emails and via strategic website compromises known as ‘watering hole’ attacks, which exploit bugs in the web browser.

Insikt Group has also found a new agent inside Candiru’s network that is suspected to have been released during the time when Candiru’s assets were acquired by Integrity Partners, a US-based investment fund. Experts believe that a different company might have been involved in the acquisition.

How to stay safe?

In the short term, experts from Recorded Future advise defenders to “implement security best practices, including regular software updates, hunting for known indicators, pre-travel security briefings, and strict separation of personal and corporate devices.” In the long term, organizations are advised to invest in robust risk assessments to create effective policies.

Ransomware Defence Begins with Fundamentals Not AI

 


The era of rapid technological advancement has made it clear that artificial intelligence isn't only influencing cybersecurity; it is fundamentally redefining its boundaries and capabilities. The transformation was evident at the RSA Conference in San Francisco in 2025, where more than 40,000 cybersecurity professionals gathered to discuss the path forward for the industry.

The rapid integration of agentic AI into cyber operations was one of the most significant topics discussed, highlighting both its disruptive potential and the strategic complexities it introduces. As AI technologies continue to empower defenders and adversaries alike, organisations are taking a measured approach, recognising the immense potential of AI-driven solutions while remaining vigilant against increasingly sophisticated attacks. 

Although the rise of artificial intelligence (AI) and its use in criminal activity often dominates headlines, the narrative is far from one-sided; it also reflects a broader industry shift toward balancing innovation with resilience in the face of rapidly shifting threats. 
Cybercriminals are indeed using AI and large language models (LLMs) to make ransomware campaigns more sophisticated: crafting more convincing phishing emails, bypassing traditional security measures, and selecting victims with greater precision. These tools increase attackers' stealth and efficiency, raising the stakes for organisational cybersecurity. 

Although AI serves as a weapon for adversaries, it is also proving to be an essential ally in the defence against ransomware. Integrated into security systems, it enables organisations to detect and respond to ransomware attacks more quickly and accurately. 

AI also strengthens incident containment and recovery, reducing potential damage and helping organisations mitigate incidents more effectively. Coupled with real-time threat intelligence, it gives security teams the agility to adapt to evolving attack techniques and close the gap between offence and defence in an increasingly automated cyber environment.

In the wake of a series of high-profile ransomware attacks, most notably those targeting prominent brands like M&S, concerns have been raised that artificial intelligence may be driving an unprecedented spike in cybercrime. While AI is undeniably changing the threat landscape by streamlining phishing campaigns and automating attack workflows, its impact on ransomware operations has often been exaggerated. 

In practice, AI is less a revolutionary force than an accelerant for tactics cybercriminals have relied on for years. Most ransomware groups continue to use proven, straightforward methods that offer speed, scalability, and consistent financial returns. Phishing emails, credential theft, and insider exploitation remain the cornerstones of successful ransomware campaigns, delivering reliable results without requiring advanced artificial intelligence.

As security leaders look for effective ways to address these threats, they are focusing on a realistic view of how AI is actually used within ransomware ecosystems. Breach and attack simulation tools have become critical assets, enabling organisations to identify vulnerabilities and close security gaps before attackers exploit them.

This balanced approach emphasises bolstering foundational security controls while keeping pace with the incremental evolution of adversarial capabilities. Nevertheless, generative AI continues to mature in profound and often paradoxical ways. On one hand, it empowers defenders by automating routine security operations, surfacing hidden patterns in complex data sets, and uncovering vulnerabilities that might otherwise go unnoticed.

On the other hand, it gives cybercriminals the means to craft more sophisticated, targeted, and scalable attacks, blurring the line between innovation and exploitation. With recent studies attributing over 80% of cyber incidents to human error, organisations have strong reason to harness AI to strengthen their security posture.

For cybersecurity leaders, AI streamlines threat detection, reduces the burden of manual oversight, and enables real-time response. The danger, however, is that the same technologies can be adapted by adversaries to enhance phishing tactics, automate malware deployment, and orchestrate advanced intrusion strategies. This dual-use nature of AI has raised widespread concern among executives.

According to a recent survey, 84% of CEOs are concerned that generative AI could be the source of widespread or catastrophic cyberattacks. Consequently, organisations are beginning to invest significantly in AI-based cybersecurity, with projections showing a 43% increase in AI security budgets by 2025.

This surge in spending reflects a growing recognition that, even though generative AI introduces new vulnerabilities, it also holds the key to strengthening cyber resilience. And as AI increases the speed and sophistication of cyberattacks, adhering to foundational cybersecurity practices has never been more important.

While AI has unquestionably enhanced the tactics available to cybercriminals, allowing more targeted phishing attempts, faster exploitation of vulnerabilities, and more evasive malware, the core techniques have not changed. The difference lies in how attacks are executed rather than in what they do.

As such, rigorously and consistently applied traditional cybersecurity strategies remain critical bulwarks, even against AI-enhanced threats. Chief among these foundational defences, widely deployed multi-factor authentication (MFA) provides a vital safeguard against credential theft, particularly as AI-generated phishing emails mimic legitimate communication with astonishing accuracy.
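To illustrate one reason MFA blunts credential phishing, here is a minimal sketch of a standard time-based one-time password (TOTP, RFC 6238) check, built only on the Python standard library; the shared secret shown is a placeholder, not taken from any product discussed above.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) for the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                   # current 30-second time step
    msg = struct.pack(">Q", counter)                         # counter as 8-byte big-endian value
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical shared secret enrolled on the user's authenticator app.
SECRET = "JBSWY3DPEHPK3PXP"
print("Current OTP:", totp(SECRET))
```

Because each code expires after roughly 30 seconds and never travels with the password, a phished static password alone is not enough to complete a login.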

Regular, securely stored data backups provide an effective fallback against ransomware, which is now capable of dynamically altering payloads to avoid detection. Equally important is keeping all systems and software up to date, which prevents AI-enabled tools from exploiting known vulnerabilities.
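One small piece of a secure backup practice is verifying that stored copies have not been silently altered. The sketch below, with hypothetical paths, hashes each file in a backup set and compares it against a manifest recorded at backup time.

```python
import hashlib, json, pathlib

BACKUP_DIR = pathlib.Path("/mnt/backups/latest")        # hypothetical backup location
MANIFEST = pathlib.Path("/mnt/backups/manifest.json")   # {relative_path: sha256} recorded at backup time

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify_backup() -> bool:
    expected = json.loads(MANIFEST.read_text())
    intact = True
    for rel_path, digest in expected.items():
        if sha256_of(BACKUP_DIR / rel_path) != digest:
            print(f"MISMATCH: {rel_path}")
            intact = False
    return intact

if __name__ == "__main__":
    print("Backup intact" if verify_backup() else "Backup integrity check failed")
```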

A Zero Trust architecture is also becoming increasingly relevant as AI-assisted attackers move faster and more stealthily than ever before. By assuming no implicit trust within the network and restricting lateral movement, this model greatly reduces the blast radius of any breach and lowers the likelihood of an attack succeeding.
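At its core, that model reduces to a deny-by-default access decision evaluated for every request. The following toy policy check, with invented roles, resources, and rules, sketches the idea; it is not any vendor's Zero Trust implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str          # e.g. "finance-analyst"
    device_compliant: bool  # patched, encrypted, managed device
    mfa_verified: bool      # session passed multi-factor authentication
    resource: str           # e.g. "payroll-db"
    network_zone: str       # e.g. "corp-lan", "vpn", "internet"

# Explicit allow rules; anything not matched is denied (deny by default).
ALLOW_RULES = [
    {"user_role": "finance-analyst", "resource": "payroll-db", "zones": {"corp-lan", "vpn"}},
    {"user_role": "hr-admin", "resource": "hr-portal", "zones": {"corp-lan"}},
]

def is_allowed(req: AccessRequest) -> bool:
    """Every request must present a healthy device and MFA, then match an explicit rule."""
    if not (req.device_compliant and req.mfa_verified):
        return False
    return any(
        rule["user_role"] == req.user_role
        and rule["resource"] == req.resource
        and req.network_zone in rule["zones"]
        for rule in ALLOW_RULES
    )

print(is_allowed(AccessRequest("finance-analyst", True, True, "payroll-db", "vpn")))   # True
print(is_allowed(AccessRequest("finance-analyst", True, True, "hr-portal", "vpn")))    # False: no matching rule
print(is_allowed(AccessRequest("finance-analyst", False, True, "payroll-db", "vpn")))  # False: non-compliant device
```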

Email filtering systems also need a major upgrade, with AI-based tools better equipped to detect the subtle nuances of phishing campaigns that evade legacy solutions. Security awareness training is equally important, as human error remains one of the leading causes of breaches; employees trained to spot AI-crafted deception are among the strongest lines of defence a company has.
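To make the filtering idea concrete, the sketch below scores a message against a few classic phishing indicators such as urgent wording, a mismatched reply-to domain, and suspicious links; the indicators, weights, and sample message are purely illustrative.

```python
import re

SUSPICIOUS_PHRASES = ["verify your account", "urgent action required", "password expires", "click below immediately"]

def phishing_score(subject: str, body: str, from_addr: str, reply_to: str, links: list[str]) -> int:
    """Return a rough risk score; higher means more phishing indicators are present."""
    score = 0
    text = f"{subject} {body}".lower()
    score += sum(2 for phrase in SUSPICIOUS_PHRASES if phrase in text)           # urgency / lure wording
    if reply_to and reply_to.split("@")[-1] != from_addr.split("@")[-1]:
        score += 3                                                               # reply-to domain mismatch
    for url in links:
        if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", url):                   # raw IP address link
            score += 3
        if re.search(r"(login|verify|secure).*\.(top|zip|xyz)\b", url):          # lure keyword on a cheap TLD
            score += 2
    return score

example = phishing_score(
    subject="Urgent action required: password expires",
    body="Click below immediately to verify your account.",
    from_addr="it-support@example.com",
    reply_to="helpdesk@examp1e-security.top",
    links=["http://verify-examp1e.top/login"],
)
print("risk score:", example)   # anything above ~5 might be quarantined in this toy model
```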

AI-based anomaly detection is also becoming increasingly important for flagging the unusual behaviours that indicate a breach, while segmentation, strict access control policies, and real-time monitoring remain complementary tools for limiting exposure and containing threats. Even as AI has added new complexities to the threat landscape, it has not rendered traditional defences obsolete.
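As a minimal illustration of the anomaly-detection idea, the sketch below flags a daily failed-login count that deviates sharply from an account's historical baseline using a simple z-score; the data and threshold are invented, and production systems use far richer behavioural features.

```python
import statistics

def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it lies more than `threshold` standard deviations from the baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0   # avoid division by zero on a flat baseline
    z = (today - mean) / stdev
    return abs(z) > threshold

# Hypothetical daily failed-login counts for one account over two weeks.
baseline = [2, 1, 3, 2, 0, 1, 2, 3, 1, 2, 2, 1, 0, 2]
print(is_anomalous(baseline, today=3))    # False: within the normal range
print(is_anomalous(baseline, today=48))   # True: possible credential-stuffing attempt
```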

Rather, these tried-and-true measures, augmented by intelligent automation and threat intelligence, remain the cornerstones of resilient cybersecurity. Defending against AI-powered adversaries requires not just speed but strategic foresight and the disciplined execution of proven strategies.

As AI-powered cyberattacks dominate the conversation, organisations also face an often overlooked risk from the unchecked, ungoverned use of AI tools inside their own walls. While much attention has focused on how threat actors weaponise AI, the internal vulnerabilities that arise from unsanctioned adoption of generative AI present a significant and present threat.

In what is referred to as "shadow AI," employees use tools like ChatGPT without formal authorisation or oversight, circumventing established security protocols and potentially exposing sensitive corporate data. According to a recent study, nearly 40% of IT professionals admit to having used generative AI tools without proper authorisation.

Besides undermining governance efforts, such practices obscure visibility into how data is processed and handled, complicate incident response, and increase the organisation's exposure to attack. Unregulated AI use, coupled with inadequate data governance and poorly configured AI services, gives rise to a range of operational and security issues.

Organisations should mitigate the risks of internal AI tools by treating them like any other enterprise technology: establishing robust governance frameworks, ensuring transparency of data flows, conducting regular audits, and providing cybersecurity training that addresses the dangers of shadow AI, while leaders stay mindful of the current threats facing their organisations.

Although artificial intelligence generates headlines, the most successful attacks still rely on proven techniques: phishing, credential theft, and ransomware. Overemphasising hypothetical AI-driven threats can distract attention from critical, foundational defences. In this context, complacency and misplaced priorities, not AI itself, are the greatest risks.

Disciplined cyber hygiene, attack simulation, and strong security fundamentals remain the most effective ways to combat ransomware in the long run. Artificial intelligence is neither a single threat nor a single solution; it is a powerful force capable of both strengthening and destabilising digital defences in a rapidly evolving environment.

As organisations navigate this shifting landscape, clarity, discipline, and strategic depth are imperative. AI may dominate headlines and influence funding decisions, but it does not diminish the importance of basic cybersecurity practices.

What is needed going forward is a recalibration of priorities. Security leaders must build resilience rather than chase the allure of emerging technologies alone, adopting a realistic, layered approach to security that embraces AI as a tool while never losing sight of what consistently works.

Achieving this means integrating advanced automation and analytics with tried-and-true defences, enforcing governance around AI usage, and keeping data flows and user behaviour under tight oversight. Organisations must also recognise that technological tools are only as powerful as the frameworks and people that support them.

As threats become increasingly automated, human oversight matters even more. Training, informed leadership, and a culture of accountability are not optional; they are imperative. For AI to be effective, it must be part of a larger, comprehensive security strategy grounded in visibility, transparency, and proactive risk management.

In the ongoing battle against ransomware and AI-enhanced cyber threats, success will depend less on who has the most sophisticated tools than on who applies them with consistency, purpose, and foresight. AI is less a threat in itself than an opportunity, provided organisations master it, govern it internally, and never let innovation overshadow the fundamentals that keep security sustainable. For today's defenders, the winning formula remains strong fundamentals, smart integration, and unwavering vigilance.

Delta Air Lines is Using AI to Set Ticket Prices

 

With major ramifications for passengers, airlines are increasingly using artificial intelligence to determine ticket prices. Simple actions like allowing browser cookies, accepting website agreements, or enrolling in loyalty programs can now influence a flight's price. The move to AI-driven pricing raises significant concerns about equity, privacy, and the possibility of higher travel costs.

Recently, Delta Air Lines revealed that AI technology from the Israeli startup Fetcherr is used to set about 3% of its domestic ticket fares. To generate customised offers, the system analyses a number of variables, such as past purchasing patterns, customer lifetime value, and the current context of each booking query. The airline plans to expand AI-based pricing to 20% of tickets by the end of 2025, according to Delta President Glen Hauenstein, who also emphasised the favourable revenue impact.
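To show, purely hypothetically, how such variables could feed a dynamic fare, the toy function below adjusts a base fare for demand, booking lead time, and loyalty tier; the weights, features, and function name are invented for illustration and do not describe Fetcherr's or Delta's system.

```python
def dynamic_fare(base_fare: float, demand_ratio: float, days_to_departure: int, loyalty_tier: int) -> float:
    """Toy dynamic-pricing function: base fare adjusted by demand, urgency, and customer profile."""
    demand_adj = 1.0 + 0.5 * max(0.0, demand_ratio - 0.7)    # premium once the flight is more than 70% booked
    urgency_adj = 1.25 if days_to_departure <= 7 else 1.0     # last-minute bookings cost more
    loyalty_adj = 1.0 - 0.02 * min(loyalty_tier, 3)           # small discount for frequent flyers
    return round(base_fare * demand_adj * urgency_adj * loyalty_adj, 2)

print(dynamic_fare(base_fare=220.0, demand_ratio=0.85, days_to_departure=5, loyalty_tier=2))   # higher fare
print(dynamic_fare(base_fare=220.0, demand_ratio=0.40, days_to_departure=45, loyalty_tier=0))  # near the base fare
```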

Regulatory issues

US lawmakers have questioned the use of AI pricing models, fearing that they may result in higher fares and unfair disadvantages for some customers. The public's response has been mixed; some passengers are concerned that customised pricing schemes could make air travel less transparent and affordable.

Other airlines are following suit, investing in AI expertise and building machine learning solutions to adopt dynamic, data-driven pricing strategies. Although this trend invites increased regulatory scrutiny, it also signals a larger transition within the industry. In an effort to balance innovation and fairness, authorities are looking more closely at how AI technologies affect consumer rights and market competition.

In Canada, airlines such as Porter acknowledge using dynamic pricing and integrating AI in some operational areas, but they do not yet use AI for personalised ticket pricing. Canadian consumers benefit from enhanced privacy safeguards under the Personal Information Protection and Electronic Documents Act (PIPEDA), which requires firms to obtain "meaningful consent" before collecting, processing, or sharing personal data.

Nevertheless, experts caution that PIPEDA is out of date and does not completely handle the complications posed by AI-driven pricing. Terry Cutler, a Canadian information security consultant, notes that, while certain safeguards exist, significant ambiguities persist, particularly when data is used in unexpected ways, such as changing prices based on browsing histories or device types. 

Implications for passengers 

As airlines accelerate the introduction of AI-powered pricing, passengers should be cautious about how their personal information is used. With regulatory frameworks trying to keep up with rapid technology innovations, customers must navigate an ever-evolving sector that frequently lacks transparency. Understanding these dynamics is critical for maintaining privacy and making informed judgements in the age of AI-powered air travel pricing.

AI-Powered Malware ‘LameHug’ Attacks Windows PCs via ZIP Files

 

Cybersecurity researchers have discovered a new and alarming trend in the world of online threats: "LameHug". This malicious program stands out because it uses artificial intelligence, notably large language models (LLMs) built by companies such as Alibaba.

Unlike classic viruses, LameHug can generate its own instructions and commands, making it a more adaptive and harder-to-detect adversary. Its primary goal is to infiltrate Windows PCs and surreptitiously steal valuable data.

The malware typically arrives camouflaged as an ordinary-looking ZIP file, frequently delivered via fraudulent emails that appear to come from legitimate government sources. When a user opens the seemingly innocent archive, the hidden executable and Python files inside begin to run, and the malware collects information about the affected Windows PC.
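As a defensive illustration (not a description of CERT-UA's tooling), the sketch below inspects a quarantined ZIP attachment for executable or script payloads hiding behind document-like names; the extension list and file path are assumptions.

```python
import zipfile

RISKY_EXTENSIONS = {".exe", ".pif", ".scr", ".bat", ".cmd", ".js", ".vbs", ".py", ".lnk"}

def suspicious_members(zip_path: str) -> list[str]:
    """List archive members whose real extension is executable or script-like,
    including 'double extension' names such as report.pdf.exe."""
    flagged = []
    with zipfile.ZipFile(zip_path) as archive:
        for name in archive.namelist():
            lowered = name.lower()
            if any(lowered.endswith(ext) for ext in RISKY_EXTENSIONS):
                flagged.append(name)
    return flagged

# Hypothetical usage on a quarantined attachment.
if __name__ == "__main__":
    hits = suspicious_members("quarantine/attachment.zip")
    if hits:
        print("Potentially dangerous members:", hits)
```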

Following this initial reconnaissance, LameHug searches common directories for text documents and PDF files before discreetly transferring the collected data to a remote web server. Its ability to use AI to write its own commands makes it exceptionally evasive.

LameHug was discovered by Ukraine's national cyber incident response team (CERT-UA), whose investigation points to the Russian cyber group APT28 as the most likely source of this advanced threat. The malware is written in Python and uses Hugging Face's programming interfaces, which in turn access Alibaba Cloud's Qwen-2.5-Coder-32B-Instruct LLM, demonstrating the complex technological foundation of this new digital weapon.

LameHug marks the first observed instance of malicious software using artificial intelligence to produce its own executable commands. This capability poses significant challenges for existing security software, which is often built to identify known attack patterns. Together with reports of other emerging malware, such as "Skynet," that may attempt to evade AI-based detection, this development highlights the intensifying arms race in the digital sphere.

AI-Driven Phishing Threats Loom After Massive Data Breach at Major Betting Platforms

 

A significant data breach impacting as many as 800,000 users from two leading online betting platforms has heightened fears over sophisticated phishing risks and the growing role of artificial intelligence in exploiting compromised personal data.

The breach, confirmed by Flutter Entertainment, the parent company behind Paddy Power and Betfair, exposed users’ IP addresses, email addresses, and activity linked to their gambling profiles.

While no payment or password information was leaked, cybersecurity experts warn that the stolen details could still enable highly targeted attacks. Flutter, which also owns brands like Sky Bet and Tombola, referred to the event as a “data incident” that has been contained. The company informed affected customers that there is “nothing you need to do in response to this incident,” but still advised them to stay alert.

With an average of 4.2 million monthly users across the UK and Ireland, even partial exposure poses a serious risk.

Harley Morlet, chief marketing officer at Storm Guidance, emphasized: “With the advent of AI, I think it would actually be very easy to build out a large-scale automated attack. Basically, focusing on crafting messages that look appealing to those gamblers.”

Similarly, Tim Rawlins, director and senior adviser at the NCC Group, urged users to remain cautious: “You might re-enter your credit card number, you might re-enter your bank account details, those are the sort of things people need to be on the lookout for and be conscious of that sort of threat. If it's too good to be true, it probably is a fraudster who's coming after your money.”

Rawlins also noted that AI technology is making phishing emails increasingly convincing, particularly in spear-phishing campaigns where stolen data is leveraged to mimic genuine communications.

Experts caution that relying solely on free antivirus tools or standard Android antivirus apps offers limited protection. While these can block known malware, they are less effective against deceptive emails that trick users into voluntarily revealing sensitive information.

A stronger defense involves practicing layered security: maintaining skepticism, exercising caution, and following strict cyber hygiene habits to minimize exposure.