U.S. Blacklists Anthropic as Supply Chain Risk as OpenAI Secures Pentagon AI Deal

The Trump administration has designated AI startup Anthropic as a supply chain risk to national security, ordering federal agencies to immediately stop using its AI model Claude. 

The classification has historically been applied to foreign companies and marks a rare move against a U.S. technology firm. 

President Donald Trump announced that agencies must cease use of Anthropic’s technology, allowing a six-month phase-out for departments heavily reliant on its systems, including the Department of War. 

Defense Secretary Pete Hegseth later formalized the designation and said no contractor, supplier or partner doing business with the U.S. military may conduct commercial activity with Anthropic. 

At the center of the dispute is Anthropic’s refusal to grant the Pentagon unrestricted access to Claude for what officials described as lawful purposes. 

Chief executive Dario Amodei had sought two exceptions to that access: carve-outs barring use of the model for mass domestic surveillance and for the development of fully autonomous weapons. 

He argued that current AI systems are not reliable enough for autonomous weapons deployment and warned that mass surveillance could violate Americans’ civil rights. 

Anthropic has said a proposed compromise contract contained loopholes that could allow those safeguards to be bypassed. 

The company had been operating under a $200 million Department of War contract since June 2024 and was the first AI firm to deploy models on classified government networks. 

After negotiations broke down, the Pentagon issued an ultimatum that Anthropic declined, leading to the blacklist. 

The company plans to challenge the designation in court, arguing it may exceed the authority granted under federal law. 

While the restriction applies directly to Defense Department related work, legal analysts say the move could create broader uncertainty across the technology sector. 

Anthropic relies on cloud infrastructure from Amazon, Microsoft and Google, all of which maintain major defense contracts. 

A strict interpretation of the order could complicate those relationships. 

President Trump has warned of serious civil and criminal consequences if Anthropic does not cooperate during the transition. 

Even as Anthropic faces federal restrictions, OpenAI has moved ahead with its own classified agreement with the Pentagon. 

The company said Saturday that it had finalized a deal to deploy advanced AI systems within classified environments under a framework it describes as more restrictive than previous contracts. 

In its official blog post, OpenAI said, "Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies." It added, "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s." 

OpenAI outlined three red lines that prohibit the use of its technology for mass domestic surveillance, for directing autonomous weapons systems, and for high-stakes automated decision-making. 

The company said deployment will be cloud only and that it will retain control over its safety systems, with cleared engineers and researchers involved in oversight. 

"We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," the company wrote. 

The contract references existing U.S. laws governing surveillance and military use of AI, including requirements for human oversight in certain weapons systems and restrictions on monitoring Americans’ private information. 

OpenAI said it would not provide models without safety guardrails and could terminate the agreement if terms are violated, though it added that it does not expect that to happen. 

Despite its dispute with Washington, Anthropic appears to be gaining traction among consumers. 

Claude recently climbed to the top position in Apple’s U.S. App Store free rankings, overtaking OpenAI’s ChatGPT. 

Data from Sensor Tower shows the app was outside the top 100 at the end of January but rose steadily through February. 

A company spokesperson said daily signups have reached record levels this week, free users have increased more than 60 percent since January and paid subscriptions have more than doubled this year.

Infostealer Malware Targets OpenClaw AI Agent Files to Steal API Keys and Authentication Tokens

OpenClaw, a local AI assistant that runs directly on personal devices, has rapidly gained popularity and is now appearing in threat reports. Because it operates on users’ machines, attackers are shifting their focus to its configuration files. Recent malware infections have been caught stealing setup data containing API keys, login tokens, and other sensitive credentials, exposing private access points that were meant to remain local. 

Previously known as ClawdBot or MoltBot, OpenClaw functions as a persistent assistant that reads local files, logs into email and messaging apps, and interacts with web services. Since it stores memory and configuration details on the device itself, compromising it can expose deeply personal and professional data. As adoption grows across home and workplace environments, saved credentials are becoming attractive targets. 

Cybersecurity firm Hudson Rock identified what it believes is the first confirmed case of infostealer malware extracting OpenClaw configuration data. The incident marks a shift in tactics: instead of stealing only browser passwords, attackers are now targeting AI assistant environments that store powerful authentication tokens. According to co-founder and CTO Alon Gal, the infection likely involved a Vidar infostealer variant, with stolen data traced to February 13, 2026. 

Researchers say the malware did not specifically target OpenClaw. Instead, it scanned infected systems broadly for files containing keywords like “token” or “private key.” Because OpenClaw stores data in a hidden folder with those identifiers, its files were automatically captured. Among the compromised files, openclaw.json contained a masked email, workspace path, and a high-entropy gateway authentication token that could enable unauthorized access or API impersonation. 

The device.json file stored public and private encryption keys used for pairing and signing, meaning attackers with the private key could mimic the victim’s device and bypass security checks. Additional files such as soul.md, AGENTS.md, and MEMORY.md outlined the agent’s behavior and stored contextual data including logs, messages, and calendar entries. Hudson Rock concluded that the combination of stolen tokens, keys, and memory data could potentially allow near-total digital identity compromise.

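Since the reported sweep keyed on generic strings such as “token” and “private key,” users can run the same kind of search against their own machines to preview what a commodity stealer would harvest. Below is a minimal self-audit sketch in Python; the file patterns, keyword list, and scan scope are illustrative assumptions drawn from the reporting, not confirmed malware indicators.

```python
#!/usr/bin/env python3
# Self-audit sketch: list local JSON/Markdown files whose contents
# include credential-like keywords, roughly mirroring the broad
# keyword sweep described in the article. The keywords, patterns,
# and scope are illustrative assumptions, not confirmed indicators.
from pathlib import Path

KEYWORDS = (b"token", b"private key", b"api_key", b"apikey")
PATTERNS = ("*.json", "*.md")   # e.g. openclaw.json, MEMORY.md
MAX_SIZE = 1_000_000            # skip very large files for speed

def scan(root: Path):
    """Yield files under root whose bytes contain any keyword."""
    for pattern in PATTERNS:
        for path in root.rglob(pattern):
            try:
                if path.stat().st_size > MAX_SIZE:
                    continue
                data = path.read_bytes().lower()
            except OSError:
                continue        # unreadable file: skip it
            if any(k in data for k in KEYWORDS):
                yield path

if __name__ == "__main__":
    for hit in scan(Path.home()):
        print(f"[!] credential-like content: {hit}")
```

Anything such a sweep turns up is a candidate for moving into an OS keychain or encrypting at rest, since an infostealer running under a user’s account can read whatever that user can.
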
Experts expect infostealers to increasingly target AI systems as they become embedded in professional workflows. Separately, Tenable disclosed a critical flaw in Nanobot, an AI assistant inspired by OpenClaw. The vulnerability, tracked as CVE-2026-2577, allowed remote hijacking of exposed instances but was patched in version 0.13.post7. 

Security professionals warn that as AI tools gain deeper access to personal and corporate systems, protecting configuration files is now as critical as safeguarding passwords. Hidden setup files can carry risks equal to — or greater than — stolen login credentials.

Influencers Alarmed as New AI Rules Enforce Three-Hour Takedowns

India’s new three-hour takedown rule for online content has triggered unease among influencers, agencies, and brands, who fear it could disrupt campaigns and shrink creative freedom.

The rule, introduced through amendments to the IT Intermediary Rules on February 11, slashes the takedown window from 36 hours to just three, with the stated goal of curbing unlawful and AI-generated deepfake content. Creators argue that while tackling deepfakes and harmful material is essential, such a compressed deadline leaves almost no room to contest wrongful flags or provide context, especially when automated moderation tools make mistakes. They warn that legitimate posts could be penalised simply because systems misread nuance, humour, or sensitive but educational topics.

Influencer Ekta Makhijani described the deadline as “incredibly tight,” noting that if a brand campaign video is misflagged, an entire launch window could be lost in hours rather than days. She highlighted how parenting content around breastfeeding or toddler behaviour has previously been misinterpreted by moderation tools, and said the shorter window magnifies the risk of such false positives. Apparel brand founder Akanksha Kommirelly added that small creators lack round-the-clock legal and compliance teams, making it unrealistic for them to respond to takedown notices at all times.

Experts also worry about a chilling effect on speech, especially satire, political commentary, and advocacy. With platforms facing tighter liability, agencies fear an “act first, verify later” culture in which companies remove anything remotely borderline to stay safe. Raj Mishra of Chtrbox warned that, in practice, the incentive becomes to take down flagged content immediately, which could hit investigative work or edgy creative pieces hardest. India’s linguistic diversity further complicates moderation, as systems trained mainly on English may misinterpret regional content.

Alongside takedowns, mandatory AI labelling is reshaping creator workflows and brand strategies. Kommirelly noted that prominent AI tags on visual campaigns may weaken brand recall, while Mishra cautioned that platforms could quietly de-prioritise AI-labelled content in algorithms, reducing reach regardless of audience acceptance. This dual pressure of strict timelines and AI disclosure forces creators to rethink how they script, edit, and publish content.

Agencies like Kofluence and Chtrbox are responding by building compliance support systems for the creator economy. These include AI content guides, pre-upload checks, documentation protocols, legal support networks, and even insurance options to cover campaign disruptions. While most stakeholders accept that tougher rules are needed against deepfakes and abuse, they are urging the government to differentiate emergency takedowns for clearly illegal content from more contested speech so that speed does not entirely override fairness.

US Employs Anthropic’s Claude AI in High-Profile Venezuela Raid

The use of a commercially developed artificial intelligence system in a classified US military operation represents a significant technological shift in the design of modern defence strategy. What was once confined to research laboratories and enterprise software environments has now become integral to high-profile operational planning, signalling that the convergence of Silicon Valley innovation with national security doctrine has reached a new stage.

Nicolás Maduro’s capture was allegedly assisted by advanced AI tools. The news has intensified scrutiny of how emerging technologies are used in conflict scenarios and raised broader questions about accountability, oversight, and the shifting line between corporate governance frameworks and military necessity. 

The US military’s recent operation to seize former Venezuelan President Nicolás Maduro sits at a striking intersection of cutting-edge technology and modern warfare. It is not just a testament to traditional force; it also demonstrates the growing importance of artificial intelligence in high-stakes conflict situations. 

Reports citing The Wall Street Journal indicated that Anthropic’s Claude AI model was deployed in the operation that led to Maduro’s capture. The reporting suggests that advanced artificial intelligence is becoming a significant part of US defence infrastructure, while also highlighting the complex intersection between corporate AI security measures and military requirements. 

Deployed through a secure partnership with Palantir Technologies, Claude reportedly enabled high-level data synthesis, analytical modeling, and operational support. The report describes Claude as the first commercially developed artificial intelligence system to be utilized in a classified environment. 

Because Anthropic’s published usage policies expressly prohibit applications related to violence, weapon development, or surveillance, its reported involvement is significant. According to reports, however, defence officials leveraged the model to assist in key planning phases and intelligence coordination surrounding the mission that culminated in Maduro’s arrest and transfer to New York to face federal charges. 

The episode highlights both the operational utility of AI-enabled analytical systems and the legal and ethical challenges of deploying commercial technologies in sensitive national security settings. Reports further indicate that Claude’s capabilities may have been employed for processing complex intelligence datasets, supporting real-time decision workflows, and synthesizing multilingual information streams within compressed operational timeframes; however, specific implementation details remain confidential.

Following the raid, which involved coordinated military action in Caracas and the detention of the former Venezuelan leader, debate over the scope and limits of artificial intelligence within the US defence establishment has intensified. Several leading artificial intelligence developers, including Anthropic and OpenAI, have reportedly been encouraged to make their models available on classified networks with fewer operational restrictions than those imposed in civilian environments. 

As part of its strategic objectives, the Pentagon seeks to integrate advanced artificial intelligence into intelligence analysis, mission planning, and multi-domain operational coordination. Claude’s availability within classified environments, facilitated by third-party infrastructure partnerships, has become a source of institutional tension, particularly because Anthropic’s internal safeguards prohibit the model from being used for violent or surveillance-related tasks. 

The Department of Defense has argued that AI systems must be able to support “all lawful purposes,” including rapid, AI-assisted intelligence fusion across contested domains, a position it considers essential for future operational readiness. 

Because the company has been unwilling to erode those safeguards, senior defence leadership, including Pete Hegseth, has indicated that authorities such as the Defense Production Act or supply chain risk designations may be considered when evaluating future contractual relations.

As this technological convergence accelerates, it becomes increasingly challenging for governments and AI developers to reconcile national security imperatives with corporate governance obligations. At the center of the ethical and strategic challenge is a broader question: how should advanced artificial intelligence tools be governed in national security contexts? The discussion extends beyond single missions to the future architecture of defence technology and the safeguards placed on autonomous and semi-autonomous systems. 

At a time when defence institutions are deeply integrating artificial intelligence into operational command structures, this episode marks a pivotal point in the governance of dual-use technologies. Combining commercial AI innovation with classified military deployment demands robust contractual clarity, enforceable oversight mechanisms, independent review systems, and standardized compliance frameworks built into both software and procurement processes. 

Regulatory architecture must now harmonise strategic planning with operational effectiveness while preserving accountability, legal safeguards, and ethical restraint. 

Without such calibrated governance, advances in artificial intelligence risk outpacing the supervision mechanisms designed to keep them safe. The standards developed in response to this episode will significantly shape future national defence doctrines, as well as global norms governing artificial intelligence in conflict environments for years to come.

Botnet Moves to Blockchain, Evades Traditional Takedowns

A newly identified botnet loader is challenging long-standing methods used to dismantle cybercrime infrastructure. Security researchers have uncovered a tool known as Aeternum C2 that stores its command instructions on the Polygon blockchain rather than on traditional servers or domains. 

For years, investigators have disrupted major botnets by seizing command and control servers or suspending malicious domains. Operations targeting networks such as Emotet, TrickBot, and QakBot relied heavily on this approach. 

Aeternum C2 appears designed to bypass that model entirely by embedding instructions inside smart contracts on Polygon, a public blockchain replicated across thousands of nodes worldwide. 

According to researchers at Qrator Labs, the loader is written in native C++ and distributed in both 32-bit and 64-bit builds. Instead of connecting to a centralized server, infected systems retrieve commands by reading transactions recorded on the blockchain through public remote procedure call (RPC) endpoints. 

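To see why seizure-based takedowns fail here, it helps to look at the retrieval mechanism itself: any client can read contract state from Polygon with a single unauthenticated JSON-RPC call, and there is no operator-owned server to confiscate. Below is a minimal read-only sketch of such a call in Python; the endpoint, contract address, and function selector are illustrative placeholders, not indicators from the actual malware.

```python
#!/usr/bin/env python3
# Read-only eth_call against a Polygon contract via a public JSON-RPC
# endpoint, the same lookup mechanism the loader reportedly abuses.
# The address and selector are placeholders, not real IOCs.
import json
import urllib.request

RPC_URL = "https://polygon-rpc.com"   # one of many public endpoints
CONTRACT = "0x" + "00" * 20           # placeholder contract address
SELECTOR = "0x2e64cec1"               # selector of a hypothetical retrieve()

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_call",
    "params": [{"to": CONTRACT, "data": SELECTOR}, "latest"],
}

req = urllib.request.Request(
    RPC_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    # Prints hex-encoded ABI return data; a real client would decode
    # and act on it. Here it simply demonstrates the read path.
    print(json.load(resp))
```

Because thousands of nodes serve identical chain state, blocking any one endpoint simply pushes a client to the next, which is what makes this design resistant to conventional takedowns.
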
The seller claims that bots receive updates within two to three minutes of publication, offering relatively fast synchronization without peer-to-peer infrastructure. The malware is marketed on underground forums either as a lifetime-licensed build or as full source code with ongoing updates. Operating costs are minimal. 

Researchers observed that a small amount of MATIC, the Polygon network token, is sufficient to process a significant number of command transactions. With no need to rent servers or register domains, operators face fewer operational hurdles. 

Investigators also found that Aeternum includes anti-virtual-machine checks intended to avoid execution in sandboxed analysis environments. A bundled scanning feature reportedly measures detection rates across multiple antivirus engines, helping operators test payloads before deployment. 

Because commands are stored on-chain, they cannot be altered or removed without access to the controlling wallet. Even if infected devices are cleaned, the underlying smart contracts remain active, allowing operators to resume activity without rebuilding infrastructure. 

Researchers warn that this model could complicate takedown efforts and enable persistent campaigns involving distributed denial of service attacks, credential theft, and other abuse. 

As infrastructure seizures become less effective, defenders may need to focus more heavily on endpoint monitoring, behavioral detection, and careful oversight of outbound connections to blockchain related services.

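One practical way to act on that last point is to check which local processes currently hold connections to well-known public RPC endpoints. The sketch below is a hedged example using the third-party psutil package; the host list is a small illustrative sample and would need curation (and likely elevated privileges) in a real deployment.

```python
#!/usr/bin/env python3
# Flag processes with outbound TCP connections to known public
# blockchain RPC hosts. Requires: pip install psutil
# The host list is a small illustrative sample, not exhaustive.
import socket

import psutil

RPC_HOSTS = ["polygon-rpc.com", "rpc.ankr.com"]

# Resolve each hostname to its current set of IP addresses.
rpc_ips = set()
for host in RPC_HOSTS:
    try:
        rpc_ips.update(info[4][0] for info in socket.getaddrinfo(host, 443))
    except socket.gaierror:
        pass                    # host did not resolve; skip it

# Enumerating all connections may require admin/root on some systems.
for conn in psutil.net_connections(kind="tcp"):
    if conn.raddr and conn.raddr.ip in rpc_ips:
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.NoSuchProcess:
            name = "exited"
        print(f"[!] {name} (pid={conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")
```

A hit is not proof of infection, since wallets and browser extensions talk to the same endpoints, but an unfamiliar binary holding such a connection is worth investigating.
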
Trezor and Ledger Impersonated in Physical QR Code Phishing Scam Targeting Crypto Wallet Users

Criminals are now pushing fake crypto warnings through paper mail, copying real product packaging and branding from firms like Trezor and Ledger. These printed notices arrive at homes without any digital trace, which makes them feel more trustworthy than email scams; because the request comes in a stamped envelope, recipients may assume it is genuine. The goal, however, is the same as any phishing campaign: stealing the secret backup phrases used to restore wallets. Physical delivery lends an illusion of authenticity, but the intent remains theft. 

Pretending to come from company security units, the letters tell recipients they must complete an urgent “Verification Step” or risk being locked out of their wallets. Scanning the printed QR code opens a site that displays a countdown, pressuring people to act fast and walking them step by step through the process, with delays supposedly leading to immediate consequences. Under that pressure, following directions can seem logical, especially if trust in the sender feels justified. 

One letter impersonating Trezor warned of an upcoming “Authentication Check” required before February 15, 2026, after which access to Trezor Suite could supposedly be interrupted. A similar forged notice aimed at Ledger customers claimed a “Transaction Check” would become mandatory, with reduced features expected after October 15, 2025, unless acted upon. Both lead people to fake sites designed to look nearly identical to the real setup portals; BleepingComputer’s coverage shows the QR codes redirect to websites mimicking the companies’ systems. 

Instead of clear guidance, the fake sites display escalating alerts, claiming accounts may be limited, transactions could fail, or upgrades might stall without immediate action. One warning follows another, pulling users deeper into the trap until entering their crypto wallet recovery words seems like the only option left. The sites then prompt people to type in their 12-, 20-, or 24-word recovery phrase, claiming it is needed to confirm device control and turn on protection. 

Though entered privately, those words are sent straight to servers run by the criminals. With the recovery phrase in hand, attackers rebuild the wallet elsewhere without delay, and the money vanishes quickly. Postal crypto scams remain far rarer than the email tricks that happen daily, but real-world fraud attempts using paper mail have appeared before. 

In some past cases, crooks shipped altered hardware wallets designed to capture recovery words at first use. This latest effort shows attackers are still testing physical channels, especially if past data leaks handed them home addresses. Breaches at both Trezor and Ledger have exposed customer contact details, though there is no proof those incidents triggered this specific attack. However the scammers found their targets, one rule holds: a recovery phrase stays private, always. 

Those earlier lapses never required sharing keys, and neither does anything legitimate now; safety lives in secrecy. A recovery phrase is a single line of words holding total power over digital money: whoever learns it gains complete control of the funds, instantly. Companies that make secure crypto devices do not ask customers to type these codes online or send them through messages. 

No legitimate provider will ever ask for a seed phrase via QR code, email, or physical mail. Never type a recovery phrase anywhere except on the hardware wallet itself during setup or recovery. When a message arrives with an urgent request, skip the QR scan entirely and check the vendor’s official site first. A single mistake could expose everything; trust only what you confirm yourself. 

The campaign signals a shift in cyber threats: as crypto use rises, paper mail is becoming a tool for stealing digital assets, carrying risks once limited to spam folders. Fraud finds new paths wherever trust in printed words remains high.
