
Botnet Moves to Blockchain, Evades Traditional Takedowns

A newly identified botnet loader is challenging long-standing methods used to dismantle cybercrime infrastructure. Security researchers have uncovered a tool known as Aeternum C2 that stores its command instructions on the Polygon blockchain rather than on traditional servers or domains.

For years, investigators have disrupted major botnets by seizing command-and-control servers or suspending malicious domains. Operations targeting networks such as Emotet, TrickBot, and QakBot relied heavily on this approach.

Aeternum C2 appears designed to bypass that model entirely by embedding instructions inside smart contracts on Polygon, a public blockchain replicated across thousands of nodes worldwide. 

According to researchers at Qrator Labs, the loader is written in native C++ and distributed in both 32-bit and 64-bit builds. Instead of connecting to a centralized server, infected systems retrieve commands by reading transactions recorded on the blockchain through public remote procedure call (RPC) endpoints.
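As an illustration of this retrieval model (not Aeternum's actual code), the sketch below decodes a command embedded in transaction calldata. In a real deployment the bot would fetch the transaction from a public RPC endpoint, for example via the standard `eth_getTransactionByHash` JSON-RPC method; here the calldata is simulated offline so the decoding step can be shown on its own.

```python
import binascii

# Hypothetical illustration: a bot reading its next command from the
# calldata ("input" field) of a blockchain transaction. On Polygon this
# would be fetched over JSON-RPC; here it is simulated as a hex string.

def decode_command(tx_input: str) -> str:
    """Decode a hex-encoded calldata payload into a plaintext command."""
    payload = tx_input[2:] if tx_input.startswith("0x") else tx_input
    return binascii.unhexlify(payload).decode("utf-8")

# Simulated calldata, as an operator might publish it on-chain.
simulated_input = "0x" + "update:https://example.invalid/payload".encode().hex()
print(decode_command(simulated_input))  # update:https://example.invalid/payload
```

Because every node replicates this data, there is no single server to seize: any copy of the chain can serve the command.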

The seller claims that bots receive updates within two to three minutes of publication, offering relatively fast synchronization without peer-to-peer infrastructure. The malware is marketed on underground forums either as a lifetime licensed build or as full source code with ongoing updates. Operating costs are minimal.

Researchers observed that a small amount of MATIC, the Polygon network token, is sufficient to process a significant number of command transactions. With no need to rent servers or register domains, operators face fewer operational hurdles. 

Investigators also found that Aeternum includes anti-virtual-machine checks intended to avoid execution in sandboxed analysis environments. A bundled scanning feature reportedly measures detection rates across multiple antivirus engines, helping operators test payloads before deployment.

Because commands are stored on-chain, they cannot be altered or removed without access to the controlling wallet. Even if infected devices are cleaned, the underlying smart contracts remain active, allowing operators to resume activity without rebuilding infrastructure.

Researchers warn that this model could complicate takedown efforts and enable persistent campaigns involving distributed denial-of-service attacks, credential theft, and other abuse.

As infrastructure seizures become less effective, defenders may need to focus more heavily on endpoint monitoring, behavioral detection, and careful oversight of outbound connections to blockchain related services.
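The outbound-connection oversight described above can be sketched as a simple triage check: flag hosts that contact known public blockchain RPC endpoints. The endpoint list below is a small illustrative example, not an authoritative feed, and matches should be treated as leads for investigation rather than proof of infection, since some software talks to these services legitimately.

```python
# Illustrative triage sketch: flag outbound destinations that match
# known public blockchain RPC endpoints. Example entries only.
SUSPICIOUS_RPC_SUFFIXES = (
    "polygon-rpc.com",   # public Polygon RPC (example entry)
    "rpc.ankr.com",      # multi-chain public RPC (example entry)
)

def is_blockchain_rpc(hostname: str) -> bool:
    host = hostname.lower().rstrip(".")
    return any(host == s or host.endswith("." + s)
               for s in SUSPICIOUS_RPC_SUFFIXES)

observed = ["api.polygon-rpc.com", "updates.example.com"]
print([h for h in observed if is_blockchain_rpc(h)])  # ['api.polygon-rpc.com']
```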

Trezor and Ledger Impersonated in Physical QR Code Phishing Scam Targeting Crypto Wallet Users

Criminals are now pushing fake crypto security warnings through paper mail, copying the branding and packaging of hardware wallet makers Trezor and Ledger. Because a printed letter arrives with no digital trail and mimics official correspondence, recipients may find it more trustworthy than an email scam. The delivery channel has changed, but the goal has not: stealing the secret recovery phrases used to restore wallets.

Posing as the companies' security teams, the letters tell recipients they must complete an urgent "Verification Step" or risk being locked out of their wallets. Scanning the enclosed QR code leads to a site that displays a countdown timer, pressuring victims to act quickly and follow the steps it lays out before supposedly immediate consequences take effect.

One letter impersonating Trezor warned of an "Authentication Check" required before February 15, 2026, after which access to Trezor Suite could supposedly be interrupted. A similar forged notice aimed at Ledger customers claimed a "Transaction Check" would become mandatory, with reduced features expected after October 15, 2025, unless the user acted. According to BleepingComputer's coverage, the QR codes in both letters redirect to phishing sites designed to look nearly identical to the companies' real setup portals.

Rather than offering guidance, the fake sites stack one urgent alert on another: accounts may be limited, transactions could fail, upgrades might stall without immediate action. Step by step, victims are steered toward entering their 12-, 20-, or 24-word recovery phrase, which the site claims is needed to confirm device control and enable protection.

Anything typed into those fields is sent straight to servers controlled by the criminals. With the recovery phrase in hand, the attackers can rebuild the wallet on their own hardware and drain its funds within minutes. Mail-based crypto scams remain far rarer than email phishing, but they are not unprecedented.

In past incidents, crooks have shipped tampered hardware wallets designed to capture recovery phrases at first use. This latest campaign shows attackers are still testing physical channels, particularly where past data leaks may have handed them home addresses. Both Trezor and Ledger have previously suffered leaks that exposed customer contact details, though there is no proof those events triggered this specific attack.

However the attackers found their targets, one rule holds: a recovery phrase is the wallet. That single line of words grants total power over the digital money it protects, and whoever learns it gains complete and immediate control. Companies that make secure crypto devices never ask customers to type these codes online or send them through messages.

No legitimate provider will ever ask you to scan, email, or physically mail a recovery phrase; any brand that demands such sharing should lose your trust instantly. Never type a recovery phrase anywhere except on the hardware wallet itself during setup. When an urgent request arrives, skip the QR code entirely and check the vendor's official website directly. A single mistake can expose everything; trust only what you confirm yourself.
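The "check the official site" advice can be made mechanical: before acting on a URL from a letter or QR code, compare its hostname against the vendor's known official domains. The sketch below is an illustrative check with a hypothetical allowlist; it is not a substitute for typing the official address yourself, but it shows why look-alike domains fail the test.

```python
from urllib.parse import urlparse

# Illustrative allowlist of official vendor domains (example entries).
OFFICIAL_DOMAINS = {"trezor.io", "ledger.com"}

def is_official(url: str) -> bool:
    """True only if the URL's host is an official domain or subdomain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(is_official("https://suite.trezor.io/web"))           # True
print(is_official("https://trezor.io.verify-now.example"))  # False (look-alike)
```

Note how the second URL embeds "trezor.io" as a prefix of a different registered domain, a common phishing trick that a suffix check correctly rejects.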

The campaign marks a shift in crypto-targeted fraud: as digital defenses improve, paper mail is becoming a delivery channel for attacks once confined to spam folders, exploiting the higher trust people still place in printed correspondence.

Publicly Exposed Google Cloud API Keys Gain Unintended Access to Gemini Services

A recent security analysis has revealed that thousands of Google Cloud API keys available on the public internet could be misused to interact with Google’s Gemini artificial intelligence platform, creating both data exposure and financial risks.

Google Cloud API keys, often recognizable by the prefix “AIza,” are typically used to connect websites and applications to Google services and to track usage for billing. They are not meant to function as high-level authentication credentials. However, researchers from Truffle Security discovered that these keys can be leveraged to access Gemini-related endpoints once the Generative Language API is enabled within a Google Cloud project.

During their investigation, the firm identified nearly 3,000 active API keys embedded directly in publicly accessible client-side code, including JavaScript used to power website features such as maps and other Google integrations. According to security researcher Joe Leon, possession of a valid key may allow an attacker to retrieve stored files, read cached content, and generate large volumes of AI-driven requests that would be billed to the project owner. He further noted that these keys can now authenticate to Gemini services, even though they were not originally designed for that purpose.
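Finding such keys in client-side code is usually done with pattern matching. The sketch below uses a commonly cited heuristic regex for the key format described above (the "AIza" prefix followed by 35 key characters); it is illustrative, not Truffle Security's actual tooling, and matches should be treated as candidates to verify rather than confirmed credentials.

```python
import re

# Heuristic pattern for Google Cloud API keys: "AIza" + 35 key characters.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_-]{35}")

def find_candidate_keys(source: str) -> list:
    """Return candidate API keys found in a blob of client-side code."""
    return KEY_PATTERN.findall(source)

# Simulated JavaScript with an embedded key (dummy value, not a real key).
sample = "const key = 'AIza" + "B" * 35 + "';"
matches = find_candidate_keys(sample)
print(len(matches), matches[0][:8])  # 1 AIzaBBBB
```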

The root of the problem lies in how permissions are applied when the Gemini API is activated. If a project owner enables the Generative Language API, all existing API keys tied to that project may automatically inherit access to Gemini endpoints. This includes keys that were previously embedded in publicly visible website code. Critically, there is no automatic alert notifying users that older keys have gained expanded capabilities.

As a result, attackers who routinely scan websites for exposed credentials could capture these keys and use them to access endpoints such as file storage or cached content interfaces. They could also submit repeated Gemini API requests, potentially generating substantial usage charges for victims through quota abuse.

The researchers also observed that when developers create a new API key within Google Cloud, the default configuration is set to “Unrestricted.” This means the key can interact with every enabled API within the same project, including Gemini, unless specific limitations are manually applied. In total, Truffle Security reported identifying 2,863 active keys accessible online, including one associated with a Google-related website.

Separately, Quokka published findings from a large-scale scan of 250,000 Android applications, uncovering more than 35,000 unique Google API keys embedded in mobile software. The company warned that beyond financial abuse through automated AI requests, organizations must consider broader implications. AI-enabled endpoints can interact with prompts, generated outputs, and integrated cloud services in ways that amplify the consequences of a compromised key.

Even in cases where direct customer records are not exposed, the combination of AI inference access, consumption of service quotas, and potential connectivity to other Google Cloud resources creates a substantially different risk profile than developers may have anticipated when treating API keys as simple billing identifiers.

Although the behavior was initially described as functioning as designed, Google later confirmed it had collaborated with researchers to mitigate the issue. A company spokesperson stated that measures have been implemented to detect and block leaked API keys attempting to access Gemini services. There is currently no confirmed evidence that the weakness has been exploited at scale. However, a recent online post described an incident in which a reportedly stolen API key generated over $82,000 in charges within a two-day period, compared to the account’s typical monthly expenditure of approximately $180.

The situation remains under review, and further updates are expected if additional details surface.

Security experts recommend that Google Cloud users audit their projects to determine whether AI-related APIs are enabled. If such services are active and associated API keys are publicly accessible through website code or open repositories, those keys should be rotated immediately. Researchers advise prioritizing older keys, as they are more likely to have been deployed publicly under earlier guidance suggesting limited risk.

Industry analysts emphasize that API security must be continuous. Changes in how APIs operate or what data they can access may not constitute traditional software vulnerabilities, yet they can materially increase exposure. As artificial intelligence becomes more tightly integrated with cloud services, organizations must move beyond periodic testing and instead monitor behavior, detect anomalies, and actively block suspicious activity to reduce evolving risk.

Phishing Campaign Abuses .arpa Domain and IPv6 Tunnels to Evade Enterprise Security Defenses


Cybersecurity experts at Infoblox Threat Intel have identified a sophisticated phishing operation that manipulates core internet infrastructure to slip past enterprise security mechanisms.

The campaign introduces an unusual evasion strategy: attackers are exploiting the .arpa top-level domain (TLD) while leveraging IPv6 tunnel services to host phishing pages. This method allows malicious actors to sidestep traditional domain reputation systems, posing a growing challenge for security teams.

Unlike public-facing domains such as .com or .net, the .arpa TLD is reserved strictly for internal internet functions. It primarily supports reverse DNS lookups, translating IP addresses into domain names, and was never intended to serve public web content.

Researchers found that attackers are capitalizing on weaknesses within DNS record management systems. By using free IPv6 tunnel providers, threat actors obtain control over certain IPv6 address ranges. Rather than configuring reverse DNS pointer (PTR) records as expected, they create standard A records under .arpa subdomains. This results in fully qualified domain names that appear to be legitimate infrastructure addresses—entities that security tools generally consider trustworthy and therefore seldom inspect closely.
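A defender-side version of this observation is straightforward: in passive DNS or resolver logs, names under .arpa that answer with A records are anomalous, since that namespace normally serves PTR records for reverse lookups. The sketch below assumes a hypothetical log of (name, record type) tuples and is a triage heuristic, not a complete detection rule.

```python
# Illustrative detection sketch: flag A records under the .arpa TLD,
# which normally serves PTR records rather than web content.

def flag_arpa_anomalies(records):
    """records: iterable of (name, rtype) tuples from DNS logs."""
    suspicious = []
    for name, rtype in records:
        if name.lower().rstrip(".").endswith(".arpa") and rtype == "A":
            suspicious.append((name, rtype))
    return suspicious

log = [
    ("4.3.2.1.in-addr.arpa.", "PTR"),          # normal reverse lookup
    ("login.example.1.0.0.2.ip6.arpa.", "A"),  # web content under .arpa
]
print(flag_arpa_anomalies(log))  # [('login.example.1.0.0.2.ip6.arpa.', 'A')]
```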

Attack Chain and CNAME Hijacking

According to Infoblox, the campaign often starts with malspam emails impersonating well-known consumer brands. The emails feature a single clickable image that either advertises a prize or warns about a disrupted subscription.

Once clicked, victims are routed through a sophisticated Traffic Distribution System (TDS). The TDS analyzes the incoming traffic, specifically filtering for mobile users on residential IP networks, before ultimately delivering the malicious content.

In addition to abusing the .arpa namespace, the attackers are also exploiting dangling CNAME records. They have taken control of outdated subdomains belonging to respected government bodies, media outlets, and academic institutions. By registering expired domains that abandoned CNAME records still reference, they effectively inherit the reputation of trusted organizations, allowing malicious traffic to blend in seamlessly.
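Organizations can audit their own zones for this class of takeover. The sketch below illustrates the core logic: given a zone's CNAME records and a resolver callback, flag aliases whose targets no longer resolve. The resolver is injected as a function so the logic stays testable offline; a real audit would plug in an actual DNS lookup and also check whether the target's registered domain is available for purchase.

```python
# Hedged sketch of a dangling-CNAME audit. "resolves" is an injected
# callback (hypothetical) standing in for a real DNS resolution check.

def find_dangling_cnames(cname_records, resolves):
    """cname_records: dict alias -> target; resolves: callable(name) -> bool."""
    return {alias: target
            for alias, target in cname_records.items()
            if not resolves(target)}

zone = {
    "old.agency.example": "retired-site.example.net",  # target gone: dangling
    "www.agency.example": "cdn.example.org",           # target still live
}
live = {"cdn.example.org"}
print(find_dangling_cnames(zone, lambda name: name in live))
# {'old.agency.example': 'retired-site.example.net'}
```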

Dr. Renée Burton, Vice President at Infoblox Threat Intel, emphasized the severity of this tactic, noting that "weaponizing the .arpa namespace effectively turns the core of the internet into a phishing delivery mechanism."

Because reverse DNS domains inherently carry a clean reputation and lack conventional registration details, security systems that depend on URL analysis and blocklists often fail to identify the threat.

Experts recommend that organizations begin viewing foundational DNS infrastructure as a potential attack surface. Proactive monitoring, particularly for unusual record creation within the .arpa namespace, along with specialized filtering controls, will be critical to defending against this evolving threat.

Microsoft AI Chief: 18 Months to Automate White-Collar Jobs


Mustafa Suleyman, CEO of Microsoft AI, has issued a stark warning about the future of white-collar work. In a recent Financial Times interview, he predicted that AI will reach human-level performance on most professional tasks within 18 months, automating computer-based work such as accounting, legal analysis, marketing, and project management. He likened the moment to early 2020, just before the pandemic, but expects the disruption to be far greater, attributing it to exponential growth in computational power that is enabling AI to outperform humans in coding and beyond.

Suleyman's forecast revives 2025 predictions from other tech executives. Anthropic's Dario Amodei warned AI could eliminate half of entry-level white-collar jobs, while Ford's Jim Farley foresaw a 50% cut in U.S. white-collar roles. Elon Musk recently suggested artificial general intelligence (AI surpassing human intelligence) could arrive this year. These alarms contrast with executives' earlier silence, which The Atlantic likened to ignoring a shark fin in the water, and the drumbeat of disruption is growing louder amid rapid AI advances.

So far, AI's impact on office work remains limited despite the hype. A 2025 Thomson Reuters report shows lawyers and accountants using AI for tasks like document review, yielding only marginal productivity gains without mass displacement. Some studies even indicate setbacks: a METR analysis found AI slowed software developers by 20%. Economic benefits remain concentrated in Big Tech, with profit margins up over 20% in Q4 2025, while broader indices like the Bloomberg 500 show no change.

Early job losses signal brewing changes. Challenger, Gray & Christmas reported 55,000 AI-related cuts in 2025, including Microsoft's 15,000 layoffs as CEO Satya Nadella pushed to "reimagine" for the AI era. Markets reacted sharply last week with a "SaaSpocalypse" selloff in software stocks after Anthropic and OpenAI launched agentic AI systems mimicking SaaS functions. Investors doubt AI will boost non-tech earnings, per Wall Street consensus.

Suleyman envisions customizable AI transforming every organization. He predicts users will create tailored models for any job as readily as they launch podcasts or blogs today, a vision driving his push for Microsoft "superintelligence" and independent foundation models that reduce reliance on partners like OpenAI. Calling AI the "most important technology of our time," he urges urgent preparation for a white-collar reckoning that could redefine an American Dream once fueled by MBAs and law degrees.

ClawJack Allows Malicious Sites to Control Local OpenClaw AI Agents


Peter Steinberger created OpenClaw, an AI tool that acts as a personal assistant for developers. It went viral, amassing 100,000 GitHub stars within a week, and even impressed OpenAI founder Sam Altman, who brought Steinberger on board and called him a "genius." But researchers at Oasis Security warned that the viral success concealed hidden threats.

OpenClaw has patched a high-severity vulnerability that could have allowed a malicious website to connect to a locally running AI agent and take control of it. As the Oasis Security report put it, "Our vulnerability lives in the core system itself – no plugins, no marketplace, no user-installed extensions – just the bare OpenClaw gateway, running exactly as documented."

ClawJack scare

The researchers codenamed the threat ClawJack, and it was assigned CVE-2026-25253. The flaw sat in the software's main gateway: because OpenClaw is built to trust connections originating from the user's own system, it could have allowed any website to hijack a person's AI agent.

The threat model

Consider a developer's laptop with OpenClaw installed and running. Its gateway, a local WebSocket server, is password-protected and bound to localhost. The attack begins when the developer, lured through social engineering or another method, visits a website the attacker controls. According to the Oasis report, "Any website you visit can open one to your localhost. Unlike regular HTTP requests, the browser doesn't block these cross-origin connections. So while you're browsing any website, JavaScript running on that page can silently open a connection to your local OpenClaw gateway. The user sees nothing."

Stealthy Attack Tactic 

The research highlights a clever use of WebSockets. Browsers normally enforce same-origin restrictions that keep one website from interacting with resources belonging to another, but WebSockets are an exception: they are designed as persistent, bidirectional channels, and a page can attempt to open one to any host, including localhost, without the cross-origin blocking applied to regular HTTP requests.

The OpenClaw gateway assumed such a connection must be safe because it originates from the user's own computer (localhost). That assumption is dangerous: if a developer running OpenClaw visits a malicious website, a hidden script embedded in the page can open a WebSocket connection and interact directly with the AI tool in the background, with no visible indication to the user.
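One standard defense against this pattern is for a localhost service to validate the Origin header that browsers attach to every WebSocket handshake, rejecting connections initiated by arbitrary web pages. The sketch below shows that check in isolation; the allowlist and header shape are illustrative, not OpenClaw's actual implementation.

```python
# Minimal sketch of Origin-header validation for a localhost WebSocket
# gateway. Browsers send the page's origin in the handshake, so a server
# can refuse connections from pages it does not recognize.
# The allowlist entries below are hypothetical examples.

ALLOWED_ORIGINS = {"http://localhost:3000", "app://trusted-client"}

def accept_handshake(headers: dict) -> bool:
    """Accept the WebSocket upgrade only from allowlisted origins."""
    origin = headers.get("Origin", "")
    return origin in ALLOWED_ORIGINS

print(accept_handshake({"Origin": "http://localhost:3000"}))  # True
print(accept_handshake({"Origin": "https://evil.example"}))   # False
```

A determined attacker can forge the header outside a browser, so this check guards specifically against the drive-by browser scenario described above and should complement, not replace, the gateway's authentication.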
