
Google Links CANFAIL Malware Attacks to Suspected Russia-Aligned Group

A newly identified cyber espionage group has been linked to a wave of digital attacks against Ukrainian institutions, according to findings released by the Google Threat Intelligence Group. Investigators say the activity involves a malware strain tracked as CANFAIL and assess that the operator is likely connected to Russian state intelligence interests.

The campaign has primarily focused on Ukrainian government structures at both regional and national levels. Entities tied to defense, the armed forces, and the energy sector have been repeatedly targeted. Analysts state that the selection of victims reflects strategic priorities consistent with wartime intelligence gathering.

Beyond these sectors, researchers observed that the actor’s attention has widened. Aerospace companies, manufacturers producing military equipment and drone technologies, nuclear and chemical research institutions, and international organizations engaged in conflict monitoring or humanitarian assistance in Ukraine have also been included in targeting efforts. This broader focus indicates an attempt to collect information across supply chains and support networks linked to the war.

While the group does not appear to possess the same operational depth as some established Russian hacking units, Google’s analysts note a recent shift in capability. The actor has reportedly begun using large language models to assist in reconnaissance, draft persuasive phishing content, and resolve technical challenges encountered after gaining initial access. These tools have also been used to help configure command-and-control infrastructure, allowing the attackers to manage compromised systems more effectively.

Email-based deception remains central to the intrusion strategy. In several recent operations, the attackers posed as legitimate Ukrainian energy providers in order to obtain unauthorized access to both organizational and personal email accounts. In separate incidents, they impersonated a Romanian energy supplier that serves Ukrainian clients. Investigators also documented targeting of a Romanian company and reconnaissance activity involving organizations in Moldova, suggesting regional expansion of the campaign.

To improve the precision of their phishing efforts, the attackers compile tailored email distribution lists based on geographic region and industry sector. The malicious messages frequently contain links hosted on Google Drive. These links direct recipients to download compressed RAR archives that contain the CANFAIL payload.

CANFAIL itself is a heavily obfuscated JavaScript program. It is commonly disguised with a double file extension, such as “.pdf.js,” to make it appear as a harmless document. When executed, the script launches a PowerShell command that retrieves an additional PowerShell-based dropper. This secondary component runs directly in system memory, a technique designed to reduce forensic traces on disk and evade conventional security tools. At the same time, the malware displays a fabricated error notification to mislead the victim into believing the file failed to open.
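The double-extension trick described above is easy to illustrate from the defender's side. The sketch below is not from the article; it is a minimal, hypothetical heuristic that flags filenames hiding an executable script type behind a document-like decoy extension such as ".pdf.js". Real security tooling combines many more signals, and the extension lists here are illustrative assumptions.

```python
from pathlib import Path

# Illustrative lists (assumptions, not exhaustive): decoy "document" extensions
# and script extensions that Windows would actually execute.
DOC_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".txt"}
SCRIPT_EXTS = {".js", ".vbs", ".ps1", ".hta", ".wsf"}

def has_deceptive_double_extension(name: str) -> bool:
    """Flag names like 'report.pdf.js' where the final (executed) extension
    is a script type and the preceding one imitates a harmless document."""
    suffixes = [s.lower() for s in Path(name).suffixes]
    return (
        len(suffixes) >= 2
        and suffixes[-1] in SCRIPT_EXTS   # the extension the OS acts on
        and suffixes[-2] in DOC_EXTS      # the decoy shown to the victim
    )

print(has_deceptive_double_extension("report.pdf.js"))  # True
print(has_deceptive_double_extension("report.pdf"))     # False
```

A filter like this would catch the ".pdf.js" lure attributed to CANFAIL, though attackers can evade simple name checks, which is why the in-memory execution stage matters to investigators.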

Google’s researchers further link this threat activity to a campaign known as PhantomCaptcha. That operation was previously documented in October 2025 by researchers at SentinelOne through its SentinelLABS division. PhantomCaptcha targeted organizations involved in Ukraine-related relief initiatives by sending phishing emails that redirected recipients to fraudulent websites. Those sites presented deceptive instructions intended to trigger the infection process, ultimately delivering a trojan that communicates over WebSocket channels.

The investigation illustrates how state-aligned actors continue to adapt their methods, combining traditional phishing tactics with newer technologies to sustain intelligence collection efforts tied to the conflict in Ukraine.

Google Issues New Security Alert: Six Emerging Scams Targeting Gmail, Google Messages & Play Users

Google continues to be a major magnet for cybercriminal activity. Recent incidents—ranging from increased attacks on Google Calendar users to a Chrome browser–freezing exploit and new password-stealing tools aimed at Android—highlight how frequently attackers target the tech giant’s platforms. In response, Google has released an updated advisory warning users of Gmail, Google Messages, and Google Play about six fast-growing scams, along with the protective measures already built into its ecosystem.

According to Laurie Richardson, Google’s vice president of trust and safety, the rise in scams is both widespread and alarming: “57% of adults experienced a scam in the past year, with 23% reporting money stolen.” She further confirmed that scammers are increasingly leveraging AI tools to “efficiently scale and enhance their schemes.” To counter this trend, Google’s safety teams have issued a comprehensive warning outlining the latest scam patterns and reinforcing how its products help defend against them.

Before diving into the specific scam types, Google recommends trying its security awareness game, inspired by inoculation theory, which helps users strengthen their ability to spot fraudulent behavior.

One of the most notable threats involves the misuse of AI services. Richardson explained that “Cybercriminals are exploiting the widespread enthusiasm for AI tools by using it as a powerful social engineering lure,” setting up “sophisticated scams impersonating popular AI services, promising free or exclusive access to ensnare victims.” These traps often appear as fake apps, malicious websites, or harmful browser extensions promoted through deceptive ads—including cloaked malvertising that hides malicious intent from scanners while presenting dangerous content to real users.

Richardson emphasized Google’s strict rules: “Google prohibits ads that distribute malicious software and enforces strict rules on Play and Chrome for apps and extensions,” noting that Play Store policies allow proactive removal of apps imitating legitimate AI tools. Meanwhile, Chrome’s AI-powered Enhanced Safe Browsing mode adds real-time alerts for risky activity.

Google’s Threat Intelligence Group (GTIG) has also issued its own findings in the new GTIG AI Threat Tracker report. GTIG researchers have seen a steady rise in attackers using AI-powered malware over the past year and have identified new strategies in how they try to bypass safeguards. The group observed threat actors “adopting social engineering-like pretexts in their prompts to bypass AI safety guardrails.”

One striking example involved a fabricated “capture-the-flag” security event designed to manipulate Gemini into revealing restricted information useful for developing exploits or attack tools. In one case, a China-linked threat actor used this CTF method to support “phishing, exploitation, and web shell development.”

Google reiterated its commitment to enforcing its AI policies, stating: “Our policy guidelines and prohibited use policies prioritize safety and responsible use of Google's generative AI tools,” and added that “we continuously enhance safeguards in our products to offer scaled protections to users across the globe.”

Beyond AI-related threats, Google highlighted that online job scams continue to surge. Richardson noted that “These campaigns involve impersonating well-known companies through detailed imitations of official career pages, fake recruiter profiles, and fraudulent government recruitment postings distributed via phishing emails and deceptive advertisements across a range of platforms.”

To help protect users, Google relies on features such as scam detection in Google Messages, Gmail’s automatic filtering for phishing and fraud, and two-factor authentication, which adds an additional security layer for user accounts.