Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.” The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

Hackers Exploit ConnectWise ScreenConnect Installers to Deploy Signed Remote Access Malware

 

Threat actors are leveraging the ConnectWise ScreenConnect installer to craft signed remote access malware by manipulating hidden settings embedded within the software’s Authenticode signature.

ConnectWise ScreenConnect, widely used by IT administrators and managed service providers (MSPs) for remote monitoring and device management, enables extensive customization during installer creation. These configurations—such as specifying the remote server connection details, modifying dialog text, and applying custom logos—are embedded in the Authenticode signature of the executable.

This tactic, referred to as authenticode stuffing, lets attackers inject configuration data into the certificate table without invalidating the digital signature, making malicious files appear legitimate.

ScreenConnect Exploited for Phishing Campaigns

Cybersecurity researchers at G DATA discovered tampered ConnectWise binaries whose hashes matched genuine versions in every file section except the certificate table. “The only difference was a modified certificate table containing new malicious configuration information while still allowing the file to remain signed,” G DATA explained.

Initial evidence of these attacks surfaced on the BleepingComputer forums, where victims shared reports of infections following phishing lures. Similar incidents were also discussed on Reddit. The phishing campaigns often used deceptive PDFs or intermediary Canva pages that linked to malicious executables hosted on Cloudflare’s R2 servers.

One such file, titled “Request for Proposal.exe,” was identified by BleepingComputer as a trojanized ScreenConnect client configured to connect to attacker-controlled infrastructure at 86.38.225[.]6:8041 (relay.rachael-and-aidan.co[.]uk).

G DATA developed a tool to extract and inspect these malicious configurations. Investigators found that the threat actors rebranded the installer with titles like “Windows Update” and swapped the background image with a counterfeit Windows Update graphic, effectively transforming legitimate remote support software into stealthy malware.

After being contacted by G DATA, ConnectWise revoked the certificate associated with the compromised installers. G DATA now classifies these threats as Win32.Backdoor.EvilConwi.* and Win32.Riskware.SilentConwi.*. “G DATA says they never received a reply from ConnectWise about this campaign and their report.”

In a parallel campaign, attackers have also distributed altered SonicWall NetExtender VPN clients designed to steal login credentials and domain information. According to SonicWall’s advisory, the malicious variants transmit captured data to attacker-controlled servers. The company strongly urges users to download software exclusively from official sources to avoid compromise.

Polymorphic Security Approaches for the Next Generation of Cyber Threats


 

Considering the rapid evolution of cybersecurity today, organisations and security professionals must continue to contend with increasingly sophisticated adversaries in an ever-increasing contest. There is one class of malware known as polymorphic malware, which is capable of continuously changing the code of a piece of software to evade traditional detection methods and remain undetectable. It is among the most formidable threats to emerge. 

Although conventional malware is often recognisable by consistent patterns or signatures, polymorphic variants are dynamic in nature and dynamically change their appearance whenever they are infected or spread across networks. Due to their adaptive nature, cybercriminals are able to get around a number of established security controls and prolong the life of their attacks for many years to come. 

In an age when artificial intelligence and machine learning are becoming increasingly powerful tools for defending as well as for criminals, detecting and neutralising these shape-shifting threats has become more difficult than ever. It has never been clearer that the pressing need to develop agile, intelligent, and resilient defence strategies has increased in recent years, highlighting that innovation and vigilance are crucial to protecting digital assets. 

In today's world, enterprises are facing a wide range of cyber threats, including ransomware attacks that are highly disruptive, deceptive phishing campaigns that are highly sophisticated, covert insider breaches, and sophisticated advanced persistent threats. Due to the profound transformation of the digital battlefield, traditional defence measures have become inadequate to combat the speed and complexity of modern cyber threats in the 21st century. 

To address this escalating threat, forward-looking companies are increasingly incorporating artificial intelligence into the fabric of their cybersecurity strategies, as a result. When businesses integrate artificial intelligence-powered capabilities into their security architecture, they are able to monitor massive amounts of data in real time, identify anomalies with remarkable accuracy, and evaluate vulnerabilities at a level of precision that cannot be matched by manual processes alone, due to the ability to embed AI-powered capabilities. 

As a result of the technological advancements in cybersecurity, security teams are now able to shift from reactive incident management to proactive and predictive defence postures that can counteract threats before they develop into large-scale breaches. Furthermore, this paradigm shift involves more than simply improving existing tools; it involves a fundamental reimagining of cybersecurity operations as a whole. 

Several layers of defence are being redefined by artificial intelligence, including automated threat detection, streamlining response workflows, as well as enabling smart analytics to inform strategic decisions. The result of this is that organisations have a better chance of remaining resilient in an environment where cyber adversaries are leveraging advanced tactics to exploit even the tiniest vulnerabilities to gain a competitive edge. 

Amidst the relentless digital disruption that people are experiencing today, adopting artificial intelligence-driven cybersecurity has become an essential imperative to safeguard sensitive assets and ensure operational continuity. As a result of its remarkable ability to constantly modify its own code while maintaining its malicious intent, polymorphic malware has emerged as one of the most formidable challenges to modern cybersecurity. 

As opposed to conventional threats that can be detected by their static signatures and predictable behaviours, polymorphic malware is deliberately designed in order to conceal itself by generating a multitude of unique iterations of itself in order to conceal its presence. As a result of its inherent adaptability, it is easily able to evade traditional security tools that are based on static detection techniques. 

Mutation engines are a key tool for enabling polymorphism, as they are able to alter the code of a malware program every time it is replicated or executed. This results in each instance appearing to be distinct to signature-based antivirus software, which effectively neutralises the value of predefined detection rules for those instances. Furthermore, polymorphic threats are often disguised through encryption techniques as a means of concealing their code and payloads, in addition to mutation capabilities.

It is common for malware to apply a different cryptographic key when it spreads, so that it is difficult for security scanners to recognise the components. Further complicating analysis is the use of packing and obfuscation methods, which are typically applied. Obfuscating a code structure makes it difficult for analysts to understand it, while packing is the process of compressing or encrypting an executable to prevent static inspection without revealing the hidden contents. 

As a result of these techniques, even mature security environments are frequently overwhelmed by a constantly shifting threat landscape that can be challenging. There are profound implications associated with polymorphic malware because it consistently evades detection. This makes the chances of a successful compromise even greater, thus giving attackers a longer window of opportunity to exploit systems, steal sensitive information, or disrupt operations. 

In order to defend against such threats, it is essential to employ more than conventional security measures. A layering of defence strategy should be adopted by organisations that combines behavioural analytics, machine learning, and real-time monitoring in order to identify subtle indicators of compromise that static approaches are likely to miss. 

In such a situation, organisations need to continuously adjust their security posture in order to maintain a resilient security posture. With polymorphic techniques becoming increasingly sophisticated, organisations must constantly innovate their defences, invest in intelligent detection solutions, and cultivate the expertise required to recognise and combat these evolving threats to meet the demands of these rapidly changing threats.

In an era when threats no longer stay static, the need for proactive, adaptive security has become critical to ensuring the protection of critical infrastructure and maintaining business continuity. The modern concept of cybersecurity is inspired by a centuries-old Russian military doctrine known as Maskirovka. This doctrine emphasises the strategic use of deception, concealment, and deliberate misinformation to confound adversaries. This philosophy has been adopted in the digital realm as well. 

Maskirovka created illusions on the battlefield in order to make it incomprehensible for the adversary to take action, just like polymorphic defence utilises the same philosophy that Maskirovka used to create a constantly changing digital environment to confuse and outmanoeuvre attackers. Cyber-polymorphism is a paradigm emerging that will enable future defence systems to create an almost limitless variety of dynamic decoys and false artefacts. 

As a result, adversaries will be diverted to elaborate traps, and they will be required to devote substantial amounts of their time and energy to chasing the illusions. By creating sophisticated mirages that ensure that a clear or consistent target remains hidden from an attacker, these sophisticated mirages aim to undermine the attacker's resolve and diminish the attacker's operational effectiveness. 

It is important, however, for organisations to understand that, as the stakes grow higher, the contest will be more determined by the extent to which they invest, how capable the computers are, and how sophisticated the algorithms are. The success of critical assets is not just determined by technological innovation but also by the capability to deploy substantial resources to sustain adaptive defences in scenarios where critical assets are at risk. 

Obtaining this level of agility and resilience requires the implementation of autonomous, orchestrated artificial intelligence systems able to make decisions and execute countermeasures in real time as a result of real-time data. It will become untenable if humans are reliant on manual intervention or human oversight during critical moments during an attack, as modern threats are fast and complex, leaving no room for error. 

It can be argued in this vision of cybersecurity's future that putting a human decision-maker amid defensive responses effectively concedes to the attacker's advantage. A hybrid cyber defence is an advancement of a concept that is referred to as moving target defence by the U.S. Department of Defence. 

It advances the concept a great deal further, however. This approach is much more advanced than mere rotation of system configurations to shrink the attack surface, since it systematically transforms every layer of an organisation’s digital ecosystem through intelligent, continuous transformation. By doing so, we are not just reducing predictability, but actively disrupting the ability of the attacker to map, exploit, and persist within the network environment by actively disrupting it. 

By doing so, it signals a significant move away from static, reactive security strategies to proactive, AI-driven strategies that can anticipate and counter even the most sophisticated threats as they happen. In a world where digital transformation has continued to accelerate across all sectors, integrating artificial intelligence into cybersecurity frameworks has evolved from merely an enhancement to a necessity that cannot be ignored anymore. 

The utilisation of intelligent, AI-driven security capabilities is demonstrated to be a better way for organisations to manage risks, safeguard data integrity, and maintain operational continuity as adversaries become increasingly sophisticated. The core advantage of artificial intelligence lies in its ability to provide actionable intelligence and strategic foresight, regardless of whether it is integrated into an organisation's internal infrastructure or delivered as part of managed security services. 

Cyber threats in today's hyperconnected world are not just possible, but practically guaranteed, so relying on reactive measures is no longer a feasible approach. Today, it is imperative to be aware of potential compromises before they escalate into significant disruptions, so that they can be predicted, detected, and contained in advance.

It is no secret that artificial intelligence has revolutionised the parameters of cybersecurity. It has enabled organisations to gain real-time visibility into their threat environment, prioritise risks based on data-driven insights and deploy automated responses in a matter of hours. Rather than being just another incremental improvement, there is a shift in the conceptualisation and operationalisation of security that constitutes more than an incremental improvement. 

There has been a dramatic increase in cyber attacks in recent years, with severe financial and reputational damage being the consequence of a successful attack. The adoption of proactive, adaptive defences is no longer just a competitive advantage; it has become a key component of business resilience. As businesses integrate AI-enabled security solutions, they are able to stay ahead of evolving threats while keeping stakeholder confidence and trust intact. 

A vital requirement for long-term success for modern enterprises concerned about their ability to cope with digital threats and thrive in the digital age is to develop an intelligent, anticipatory cyber ddefence A growing number of cyber threats and threats are becoming more volatile and complex than ever before, so it has become increasingly important for leaders to adopt a mindset that emphasises relentless adaptation and innovation, rather than simply acquiring advanced technologies. 

They should also establish clear strategies for integrating intelligent automation into their security ecosystems and aligning these capabilities with broader business objectives to gain a competitive advantage. Having said that, it will be imperative to rethink governance to enable faster, decentralised response, develop specialised talent pipelines for emerging technologies and implement continuous validation to ensure that defences remain effective against evolving threat patterns. 

In the age of automating operations and implementing increasingly sophisticated tactics, the true differentiator will be the ability for organisations to evolve at a similar rate and precision as their adversaries. An organisation that is looking ahead will prioritise a comprehensive risk model, invest in resilient architectures that can self-heal when attacked, and leverage AI in order to build dynamic defences that can be used to counter threats before they impact critical operations. 

In a climate like this, protecting digital assets is not just a one-time project. It is a recurring strategic imperative that requires constant vigilance, discipline, and the ability to act decisively when necessary. As a result, organisations that will succeed in the future will be those that embrace cybersecurity as a constant journey-one that combines foresight, adaptability, and an unwavering commitment to remain one step ahead of adversaries who are only going to keep improving.

International Criminal Court Hit by Advanced Cyber Attack, No Major Damage

International Criminal Court Hit by Advanced Cyber Attack, No Major Damage

Swift discovery helped the ICC

Last week, the International Criminal Court (ICC) announced that it had discovered a new advanced and targeted cybersecurity incident. Its response mechanism and prompt discovery helped to contain the attack. 

The ICC did not provide details about the attackers’ intentions, any data leaks, or other compromises. According to the statement, the ICC, which is headquartered in The Hague, the Netherlands, is conducting a threat evaluation after the attack and taking measures to address any injuries. Details about the impact were not provided. 

Collective effort against threat actors

The constant support of nations that have ratified the Rome Statute helps the ICC in ensuring its capacity to enforce its mandate and commitment, a responsibility shared by all States Parties. “The Court considers it essential to inform the public and its States Parties about such incidents as well as efforts to address them, and calls for continued support in the face of such challenges,” ICC said. 

The ICC was founded in 2002 through the Rome Statute, an international treaty, by a coalition of sovereign states, aimed to create an international court that would prosecute individuals for international crimes– war crimes, genocide, terrorism, and crimes against humanity. The ICC works as a separate body from the U.N. International Court of Justice, the latter brings cases against countries but not individuals.

Similar attack in 2023

In 2023, the ICC reported another cybersecurity incident. The attack was said to be an act of espionage and aimed at undermining the Court’s mandate. The incident had caused it to disconnect its system from the internet. 

In the past, the ICC has said that it had experienced increased security concerns as threats against its various elected officials rose. “The evidence available thus far indicates a targeted and sophisticated attack with the objective of espionage. The attack can therefore be interpreted as a serious attempt to undermine the Court's mandate," ICC said. 

The recent notable arrests issued by the ICC include Russian President Vladimir Putin and Israeli Prime Minister Benjamin Netanyahu.

Thousands of WordPress Sites at Risk as Motors Theme Flaw Enables Admin Account Takeovers

 

A critical security flaw tracked as CVE-2025-4322 has left a widely used premium WordPress theme exposed to attackers.

Cybercriminals have been exploiting this vulnerability in the Motors theme to seize administrator accounts, allowing them to fully compromise websites—modifying information, inserting fake content, and distributing malicious payloads.

Developed by StylemixThemes, Motors has become especially popular with automotive websites, recording nearly 22,500 purchases on EnvatoMarket. Security researchers first identified the flaw on May 2, 2025, and a fix was issued with version 5.6.68 on May 14. Users who have updated to this version are protected, while those still running versions up to 5.6.67 remain vulnerable.

“This is due to the theme not properly validating a user’s identity prior to updating their password,” Wordfence explained.

“This makes it possible for unauthenticated attackers to change arbitrary user passwords, including those of administrators, and leverage that to gain access to their account.”

Despite the release of the patch, attacks began surfacing as early as May 20. By June 7, researchers observed widespread exploitation, with Wordfence reporting it had already blocked over 23,000 attack attempts. The firm also shared lists of IP addresses involved in the attacks, many launching thousands of intrusion efforts.

“One obvious sign of infection is if a site’s administrator is unable to log in with the correct password as it may have been changed as a result of this vulnerability,” the researchers explained.

To secure their sites, users of the Motors theme are strongly advised to upgrade to version 5.6.68 immediately, which addresses the flaw and prevents further account takeovers.

Doctors Warned Over Use of Unapproved AI Tools to Record Patient Conversations

 


Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.

This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.

Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.

The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs— a phenomenon known as “hallucination”— which can lead to serious errors in medical records or decision-making.

The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.

Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.

Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.

Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.

On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.

As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use client data to train their chatbots. For this, they may use private or public data. Some services take a more flexible and non-intrusive approach to gathering customer data. Not so much for others. A recent analysis from data removal firm Incogni weighs the benefits and drawbacks of AI in terms of protecting your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy policies using 11 distinct factors. The following queries were addressed by the criteria: 

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information are gathered by the AI apps? 

The research involved Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeekSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI performed well on certain questions but not so well on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

However, Le Chat emerged as the best privacy-friendly AI service overall. It did well in the transparency category, despite losing a few points. Additionally, it only collects a small amount of data and achieves excellent scores for additional privacy concerns unique to AI. 

Second place went to ChatGPT. Researchers at Incogni were a little worried about how user data interacts with the service and how OpenAI trains its models. However, ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives you explicit instructions on how to restrict how your data is used. Claude and PI came in third and fourth, respectively, after Grok. Each performed reasonably well in terms of protecting user privacy overall, while there were some issues in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

You can prevent the models from being trained using your prompts with some providers. This is true for Grok, Mistral AI, Copilot, and ChatGPT. However, based on their privacy rules and other resources, it appears that other services do not allow this kind of data collecting to be stopped. Gemini, DeepSeek, Pi AI, and Meta AI are a few of these. In response to this concern, Anthropic stated that it never gathers user input for model training. 

Ultimately, a clear and understandable privacy policy significantly helps in assisting you in determining what information is being gathered and how to opt out.

How AI Impacts KYC and Financial Security

How AI Impacts KYC and Financial Security

Finance has become a top target for deepfake-enabled fraud in the KYC process, undermining the integrity of identity-verification frameworks that help counter-terrorism financing (CTF) and anti-money laundering (AML) systems.

Experts have found a rise in suspicious activity using AI-generated media, highlighting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”

Wall Street’s FINRA has warned that deepfake audio and video scams can cause losses of $40 billion by 2027 in the finance sector.

Biometric safety measures do not work anymore. A 2024 Regula research revealed that 49% businesses throughout industries such as fintech and banking have faced fraud attacks using deepfakes, with average losses of $450,000 per incident. 

As these numbers rise, it becomes important to understand how deepfake invasion can be prevented to protect customers and the financial industry globally. 

More than 1,100 deepfake attacks in Indonesia

Last year, an Indonesian bank reported over 1,100 attempts to escape its digital KYC loan-application process within 3 months, cybersecurity firm Group-IB reports.

Threat actors teamed AI-powered face-swapping with virtual-camera tools to imitate the bank’s  liveness-detection controls, despite the bank’s “robust, multi-layered security measures." According to Forbes, the estimated losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”

The AI-driven face-swapping tools allowed actors to replace the target’s facial features with those of another person, allowing them to exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.

How does the deepfake KYC fraud work

Scammers gather personal data via malware, the dark web, social networking sites, or phishing scams. The date is used to mimic identities. 

After data acquisition, scammers use deepfake technology to change identity documents, swapping photos, modifying details, and re-creating entire ID to escape KYC checks.

Threat actors then use virtual cameras and prerecorded deepfake videos, helping them avoid security checks by simulating real-time interactions. 

This highlights that traditional mechanisms are proving to be inadequate against advanced AI scams. A study revealed that every 5 minutes, one deepfake attempt was made. Only 0.1 of people could spot deepfakes. 

Iranian Hackers Threaten More Trump Email Leaks Amid Rising U.S. Cyber Tensions

 

Iran-linked hackers have renewed threats against the U.S., claiming they plan to release more emails allegedly stolen from former President Donald Trump’s associates. The announcement follows earlier leaks during the 2024 presidential race, when a batch of messages was distributed to the media. 

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) responded by calling the incident “digital propaganda,” warning it was a calculated attempt to discredit public officials and mislead the public. CISA added that those responsible would be held accountable, describing the operation as part of a broader campaign by hostile foreign actors to sow division. 

Speaking virtually with Reuters, a hacker using the alias “Robert” claimed the group accessed roughly 100 GB of emails from individuals including Trump adviser Roger Stone, legal counsel Lindsey Halligan, White House chief of staff Susie Wiles, and Trump critic Stormy Daniels. Though the hackers hinted at selling the material, they provided no specifics or content. 

The initial leaks reportedly involved internal discussions, legal matters, and possible financial dealings involving RFK Jr.’s legal team. Some information was verified, but had little influence on the election, which Trump ultimately won. U.S. authorities later linked the operation to Iran’s Revolutionary Guard, though the hackers declined to confirm this. 

Soon after Trump ordered airstrikes on Iranian nuclear sites, Iranian-aligned hackers began launching cyberattacks. Truth Social, Trump’s platform, was briefly knocked offline by a distributed denial-of-service (DDoS) attack claimed by a group known as “313 Team.” Security experts confirmed the group’s ties to Iranian and pro-Palestinian cyber networks. 

The outage occurred shortly after Trump posted about the strikes. Users encountered error messages, and monitoring organizations warned that “313 Team” operates within a wider ecosystem of groups supporting anti-U.S. cyber activity. 

The Department of Homeland Security (DHS) issued a national alert on June 22, citing rising cyber threats linked to Iran-Israel tensions. The bulletin highlighted increased risks to U.S. infrastructure, especially from loosely affiliated hacktivists and state-backed cyber actors. DHS also warned that extremist rhetoric could trigger lone-wolf attacks inspired by Iran’s ideology. 

Federal agencies remain on high alert, with targeted sectors including defense, finance, and energy. Though large-scale service disruptions have not yet occurred, cybersecurity teams have documented attempted breaches. Two groups backing the Palestinian cause claimed responsibility for further attacks across more than a dozen U.S. sectors. 

At the same time, the U.S. faces internal challenges in cyber preparedness. The recent dismissal of Gen. Timothy Haugh, who led both the NSA and Cyber Command, has created leadership uncertainty. Budget cuts to election security programs have added to concerns. 

While a military ceasefire between Iran and Israel may be holding, experts warn the cyber conflict is far from over. Independent threat actors and ideological sympathizers could continue launching attacks. Analysts stress the need for sustained investment in cybersecurity infrastructure—both public and private—as digital warfare becomes a long-term concern.

Russian APT28 Targets Ukraine Using Signal to Deliver New Malware Families

 

The Russian state-sponsored threat group APT28, also known as UAC-0001, has been linked to a fresh wave of cyberattacks against Ukrainian government targets, using Signal messenger chats to distribute two previously undocumented malware strains—BeardShell and SlimAgent. 

While the Signal platform itself remains uncompromised, its rising adoption among government personnel has made it a popular delivery vector for phishing attacks. Ukraine’s Computer Emergency Response Team (CERT-UA) initially discovered these attacks in March 2024, though critical infection vector details only surfaced after ESET notified the agency in May 2025 of unauthorised access to a “gov.ua” email account. 

Investigations revealed that APT28 used Signal to send a macro-laced Microsoft Word document titled "Акт.doc." Once opened, it initiates a macro that drops two payloads—a malicious DLL file (“ctec.dll”) and a disguised PNG file (“windows.png”)—while modifying the Windows Registry to enable persistence via COM-hijacking. 

These payloads execute a memory-resident malware framework named Covenant, which subsequently deploys BeardShell. BeardShell, written in C++, is capable of downloading and executing encrypted PowerShell scripts, with execution results exfiltrated via the Icedrive API. The malware maintains stealth by encrypting communications using the ChaCha20-Poly1305 algorithm. 

Alongside BeardShell, CERT-UA identified another tool dubbed SlimAgent. This lightweight screenshot grabber captures images using multiple Windows API calls, then encrypts them with a combination of AES and RSA before local storage. These are presumed to be extracted later by an auxiliary tool. 

APT28’s involvement was further corroborated through their exploitation of vulnerabilities in Roundcube and other webmail software, using phishing emails mimicking Ukrainian news publications to exploit flaws like CVE-2020-35730, CVE-2021-44026, and CVE-2020-12641. These emails injected malicious JavaScript files—q.js, e.js, and c.js—to hijack inboxes, redirect emails, and extract credentials from over 40 Ukrainian entities. CERT-UA recommends organisations monitor traffic linked to suspicious domains such as “app.koofr.net” and “api.icedrive.net” to detect any signs of compromise.

Navigating AI Security Risks in Professional Settings


 

There is no doubt that generative artificial intelligence is one of the most revolutionary branches of artificial intelligence, capable of producing entirely new content across many different types of media, including text, image, audio, music, and even video. As opposed to conventional machine learning models, which are based on executing specific tasks, generative AI systems learn patterns and structures from large datasets and are able to produce outputs that aren't just original, but are sometimes extremely realistic as well. 

It is because of this ability to simulate human-like creativity that generative AI has become an industry leader in technological innovation. Its applications go well beyond simple automation, touching almost every sector of the modern economy. As generative AI tools reshape content creation workflows, they produce compelling graphics and copy at scale in a way that transforms the way content is created. 

The models are also helpful in software development when it comes to generating code snippets, streamlining testing, and accelerating prototyping. AI also has the potential to support scientific research by allowing the simulation of data, modelling complex scenarios, and supporting discoveries in a wide array of areas, such as biology and material science.

Generative AI, on the other hand, is unpredictable and adaptive, which means that organisations are able to explore new ideas and achieve efficiencies that traditional systems are unable to offer. There is an increasing need for enterprises to understand the capabilities and the risks of this powerful technology as adoption accelerates. 

Understanding these capabilities has become an essential part of staying competitive in a digital world that is rapidly changing. In addition to reproducing human voices and creating harmful software, generative artificial intelligence is rapidly lowering the barriers for launching highly sophisticated cyberattacks that can target humans. There is a significant threat from the proliferation of deepfakes, which are realistic synthetic media that can be used to impersonate individuals in real time in convincing ways. 

In a recent incident in Italy, cybercriminals manipulated and deceived the Defence Minister Guido Crosetto by leveraging advanced audio deepfake technology. These tools demonstrate the alarming ability of such tools for manipulating and deceiving the public. Also, a finance professional recently transferred $25 million after being duped into transferring it by fraudsters using a deepfake simulation of the company's chief financial officer, which was sent to him via email. 

Additionally, the increase in phishing and social engineering campaigns is concerning. As a result of the development of generative AI, adversaries have been able to craft highly personalised and context-aware messages that have significantly enhanced the quality and scale of these attacks. It has now become possible for hackers to create phishing emails that are practically indistinguishable from legitimate correspondence through the analysis of publicly available data and the replication of authentic communication styles. 

Cybercriminals are further able to weaponise these messages through automation, as this enables them to create and distribute a huge volume of tailored lures that are tailored to match the profile and behaviour of each target dynamically. Using the power of AI to generate large language models (LLMs), attackers have also revolutionised malicious code development. 

A large language model can provide attackers with the power to design ransomware, improve exploit techniques, and circumvent conventional security measures. Therefore, organisations across multiple industries have reported an increase in AI-assisted ransomware incidents, with over 58% of them stating that the increase has been significant.

It is because of this trend that security strategies must be adapted to address threats that are evolving at machine speed, making it crucial for organisations to strengthen their so-called “human firewalls”. While it has been demonstrated that employee awareness remains an essential defence, studies have indicated that only 24% of organisations have implemented continuous cyber awareness programs, which is a significant amount. 

As companies become more sophisticated in their security efforts, they should update training initiatives to include practical advice on detecting hyper-personalised phishing attempts, detecting subtle signs of deepfake audio and identifying abnormal system behaviours that can bypass automated scanners in order to protect themselves from these types of attacks. Providing a complement to human vigilance, specialised counter-AI solutions are emerging to mitigate these risks. 

In order to protect against AI-driven phishing campaigns, DuckDuckGoose Suite, for example, uses behavioural analytics and threat intelligence to prevent AI-based phishing campaigns from being initiated. Tessian, on the other hand, employs behavioural analytics and threat intelligence to detect synthetic media. As well as disrupting malicious activity in real time, these technologies also provide adaptive coaching to assist employees in developing stronger, instinctive security habits in the workplace. 
Organisations that combine informed human oversight with intelligent defensive tools will have the capacity to build resilience against the expanding arsenal of AI-enabled cyber threats. Recent legal actions have underscored the complexity of balancing AI use with privacy requirements. It was raised by OpenAI that when a judge ordered ChatGPT to keep all user interactions, including deleted chats, they might inadvertently violate their privacy commitments if they were forced to keep data that should have been wiped out.

AI companies face many challenges when delivering enterprise services, and this dilemma highlights the challenges that these companies face. OpenAI and Anthropic are platforms offering APIs and enterprise products that often include privacy safeguards; however, individuals using their personal accounts are exposed to significant risks when handling sensitive information that is about them or their business. 

AI accounts should be managed by the company, users should understand the specific privacy policies of these tools, and they should not upload proprietary or confidential materials unless specifically authorised by the company. Another critical concern is the phenomenon of AI hallucinations that have occurred in recent years. This is because large language models are constructed to predict language patterns rather than verify facts, which can result in persuasively presented, but entirely fictitious content.

As a result of this, there have been several high-profile incidents that have resulted, including fabricated legal citations in court filings, as well as invented bibliographies. It is therefore imperative that human review remains part of professional workflows when incorporating AI-generated outputs. Bias is another persistent vulnerability.

Due to the fact that artificial intelligence models are trained on extensive and imperfect datasets, these models can serve to mirror and even amplify the prejudices that exist within society as a whole. As a result of the system prompts that are used to prevent offensive outputs, there is an increased risk of introducing new biases, and system prompt adjustments have resulted in unpredictable and problematic responses, complicating efforts to maintain a neutral environment. 

Several cybersecurity threats, including prompt injection and data poisoning, are also on the rise. A malicious actor may use hidden commands or false data to manipulate model behaviour, thus causing outputs that are inaccurate, offensive, or harmful. Additionally, user error remains an important factor as well. Instances such as unintentionally sharing private AI chats or recording confidential conversations illustrate just how easy it is to breach confidentiality, even with simple mistakes.

It has also been widely reported that intellectual property concerns complicate the landscape. Many of the generative tools have been trained on copyrighted material, which has raised legal questions regarding how to use such outputs. Before deploying AI-generated content commercially, companies should seek legal advice. 

As AI systems develop, even their creators are not always able to predict the behaviour of these systems, leaving organisations with a challenging landscape where threats continue to emerge in unexpected ways. However, the most challenging risk is the unknown. The government is facing increasing pressure to establish clear rules and safeguards as artificial intelligence moves from the laboratory to virtually every corner of the economy at a rapid pace. 

Before the 2025 change in administration, there was a growing momentum behind early regulatory efforts in the United States. For instance, Executive Order 14110 outlined the appointment of chief AI officers by federal agencies and the development of uniform guidelines for assessing and managing AI risks. As a result of this initiative, a baseline of accountability for AI usage in the public sector was established. 

A change in strategy has taken place in the administration's approach to artificial intelligence since they rescinded the order. This signalled a departure from proactive federal oversight. The future outlook for artificial intelligence regulation in the United States is highly uncertain, however. The Trump-backed One Big Beautiful Bill proposes sweeping restrictions that would prevent state governments from enacting artificial intelligence regulations for at least the next decade. 

As a result of this measure becoming law, it could effectively halt local and regional governance at a time when AI is gaining a greater influence across practically all industries. Meanwhile, the European Union currently seems to be pursuing a more consistent approach to AI. 

As of March 2024, a comprehensive framework titled the Artificial Intelligence Act was established. This framework categorises artificial intelligence applications according to the level of risk they pose and imposes strict requirements for applications that pose a significant risk, such as those in the healthcare field, education, and law enforcement. 

Also included in the legislation are certain practices, such as the use of facial recognition systems in public places, that are outright banned, reflecting a commitment to protecting the individual's rights. In terms of how AI oversight is defined and enforced, there is a widening gap between regions as a result of these different regulatory strategies. 

Technology will continue to evolve, and to ensure compliance and manage emerging risks effectively, organisations will have to remain vigilant and adapt to the changing legal landscape as a result of this.