
How a Single Brick Helped Homeland Security Rescue an Abused Child from the Dark Web


A years-long investigation by the US Department of Homeland Security led to the dramatic rescue of a young girl whose abuse images had been circulating on the dark web — with a crucial clue hidden in the background of a photograph.

Specialist online investigator Greg Squire had nearly exhausted all leads while trying to identify and locate a 12-year-old girl his team had named Lucy. Explicit images of her were being distributed through encrypted networks designed to conceal users’ identities. The perpetrator had taken deliberate steps to erase identifying features, carefully cropping and altering images to avoid detection.

Despite those efforts, investigators found that the answer was concealed in plain sight.

Squire, part of an elite Homeland Security Investigations unit focused on identifying children in sexual abuse material, became deeply invested in Lucy’s case early in his career. The case struck him personally — Lucy was close in age to his own daughter, and new images of her abuse continued to surface online.

Initially, the team determined only that Lucy was likely somewhere in North America, based on visible electrical outlets and fixtures in the room. Attempts to seek assistance from Facebook proved unsuccessful. Although the company had facial recognition technology, it stated it "did not have the tools" to help with the search.

Investigators then scrutinized every visible detail in Lucy’s bedroom — bedding patterns, toys, clothing, and furniture. A breakthrough came when they realized that a sofa appearing in some images had only been sold regionally rather than nationwide, reducing the potential customer base to roughly 40,000 buyers.

"At that point in the investigation, we're [still] looking at 29 states here in the US. I mean, you're talking about tens of thousands of addresses, and that's a very, very daunting task," says Squire.

Still searching for more clues, Squire turned his attention to an exposed brick wall visible in the background of several photos. After researching brick manufacturers, he contacted the Brick Industry Association.

"And the woman on the phone was awesome. She was like, 'how can the brick industry help?'"

The association circulated the image among brick specialists nationwide. One expert, John Harp — a veteran in brick sales since 1981 — quickly identified the material.

"I noticed that the brick was a very pink-cast brick, and it had a little bit of a charcoal overlay on it. It was a modular eight-inch brick and it was square-edged," he says. "When I saw that, I knew exactly what the brick was," he adds.

Harp identified it as a "Flaming Alamo".

"[Our company] made that brick from the late 60s through about the middle part of the 80s, and I had sold millions of bricks from that plant."

Although sales records were not digitized and existed only as a "pile of notes", Harp shared a vital insight.

"He goes: 'Bricks are heavy.' And he said: 'So heavy bricks don't go very far.'"

That observation narrowed the search dramatically. Investigators filtered the sofa buyers list to those living within a 100-mile radius of the brick factory in the American Southwest.


From there, social media analysis uncovered a photograph of Lucy alongside an adult woman believed to be a relative. Tracking related addresses and household members eventually led authorities to a single residence.

Investigators discovered that Lucy lived there with her mother’s boyfriend — a convicted sex offender. Within hours, local Homeland Security agents arrested the man, who had abused Lucy for six years. He was later sentenced to more than 70 years in prison.

Harp, who has fostered over 150 children and adopted three, said the rescue resonated deeply with him.

"We've had over 150 different children in our home. We've adopted three. So, doing that over those years, we have a lot of children in our home that were [previously] abused," he said.

"What [Squire's team] do day in and day out, and what they see, is a magnification of hundreds of times of what I've seen or had to deal with."

The emotional toll of the work eventually affected Squire’s mental health. He admits that outside of work, "alcohol was a bigger part of my life than it should have been".

Reflecting on that period, he said:

"At that point my kids were a bit older… and, you know, that almost enables you to push harder. Like… 'I bet if I get up at three this morning, I can surprise [a perpetrator] online.'

"But meanwhile, personally… 'Who's Greg? I don't even know what he likes to do.' All of your friends… during the day, you know, they're criminals… All they do is talk about the most horrific things all day long."

After his marriage ended and he experienced suicidal thoughts, colleague Pete Manning urged him to seek help.

"It's hard when the thing that brings you so much energy and drive is also the thing that's slowly destroying you," Manning says.

Squire credits confronting his struggles openly as the turning point.

"I feel honoured to be part of the team that can make a difference instead of watching it on TV or hearing about it… I'd rather be right in there in the fight trying to stop it."

Years later, Squire met Lucy — now in her 20s — for the first time. She said healing and support have helped her speak openly about her past.

"I have more stability. I'm able to have the energy to talk to people [about the abuse], which I could not have done… even, like, a couple years ago."

She revealed that when authorities intervened, she had been "praying actively for it to end".

"Not to sound cliché, but it was a prayer answered."

Squire shared that he wished he could have reassured her during those years.

"You wish there was some telepathy and you could reach out and be like, 'listen, we're coming'."

When questioned about its earlier role, Facebook responded: "To protect user privacy, it's important that we follow the appropriate legal process, but we work to support law enforcement as much as we can."

Threat Actors Hit Iranian Sites and Apps After the US-Israel Strike


A series of cyberattacks struck Iran last week during the joint U.S.-Israeli strikes on targets throughout the country.

The attacks included hijacking several news sites to display messages, as well as hacking BadeSaba, a religious calendar application with over 5 million downloads, which showed messages warning users "It's time for reckoning" and urging members of the armed forces to surrender.

A U.S. Cyber Command spokesperson declined to comment.

Internet connectivity in Iran dropped significantly at 07:06 GMT, with only minimal connectivity remaining, according to Kentik's director of internet analysis. Targeting BadeSaba was a shrewd move, said Hamid Kashfi, a security expert and founder of the cybersecurity firm DarkCell, because the app's users tend to be pro-government and more religious.

Cyberattacks also hit various Iranian military targets and government services in an effort to hamper a coordinated Iranian response, according to the Jerusalem Post; Reuters has not verified those claims. "As Iran considers its options, the likelihood increases that proxy groups and hacktivists may take action, including cyberattacks, against Israeli and U.S.-affiliated military, commercial, or civilian targets," said Rafe Pilling, director of threat intelligence at the cybersecurity firm Sophos.

These operations may include old data breaches repackaged as new, opportunistic attempts to breach internet-exposed industrial systems, and redirected offensive cyber operations.

Cynthia Kaiser, a senior vice president at the anti-ransomware company Halcyon and a former top FBI cyber official, said that cyber activity has escalated in the Middle East.

According to Kaiser, there have also been calls to action from well-known pro-Iranian cyber personalities who have previously carried out ransomware attacks, hack-and-leak operations, and distributed denial-of-service (DDoS) attacks, which overload internet services to make them unavailable. "CrowdStrike is already seeing activity consistent with Iranian-aligned threat actors and hacktivist groups conducting reconnaissance and initiating DDoS attacks," she added.

Experts also believe that state-sponsored Iranian hacking groups had already launched "wiper" attacks, which erase data, against Israeli targets before the strikes.

Apart from a brief disruption of services in Tirana, the capital of Albania, there was little sign in June, following the U.S. strike on Iranian nuclear targets, of the disruptive cyberattacks frequently invoked in discussions of Iran's digital capabilities, according to media sources.

U.S. Blacklists Anthropic as Supply Chain Risk as OpenAI Secures Pentagon AI Deal


The Trump administration has designated AI startup Anthropic as a supply chain risk to national security, ordering federal agencies to immediately stop using its AI model Claude. 

The classification has historically been applied to foreign companies and marks a rare move against a U.S. technology firm. 

President Donald Trump announced that agencies must cease use of Anthropic's technology, allowing a six-month phase-out for departments heavily reliant on its systems, including the Department of War.

Defense Secretary Pete Hegseth later formalized the designation and said no contractor, supplier or partner doing business with the U.S. military may conduct commercial activity with Anthropic. 

At the center of the dispute is Anthropic’s refusal to grant the Pentagon unrestricted access to Claude for what officials described as lawful purposes. 

Chief executive Dario Amodei had sought two exceptions to that access, covering mass domestic surveillance and the development of fully autonomous weapons.

He argued that current AI systems are not reliable enough for autonomous weapons deployment and warned that mass surveillance could violate Americans’ civil rights. 

Anthropic has said a proposed compromise contract contained loopholes that could allow those safeguards to be bypassed. 

The company had been operating under a $200 million Department of War contract since June 2024 and was the first AI firm to deploy models on classified government networks.

After negotiations broke down, the Pentagon issued an ultimatum that Anthropic declined, leading to the blacklist. 

The company plans to challenge the designation in court, arguing it may exceed the authority granted under federal law. 

While the restriction applies directly to Defense Department related work, legal analysts say the move could create broader uncertainty across the technology sector. 

Anthropic relies on cloud infrastructure from Amazon, Microsoft and Google, all of which maintain major defense contracts. 

A strict interpretation of the order could complicate those relationships. 

President Trump has warned of serious civil and criminal consequences if Anthropic does not cooperate during the transition. 

Even as Anthropic faces federal restrictions, OpenAI has moved ahead with its own classified agreement with the Pentagon. 

The company said Saturday that it had finalized a deal to deploy advanced AI systems within classified environments under a framework it describes as more restrictive than previous contracts. 

In its official blog post, OpenAI said, "Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies." It added, "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s." 

OpenAI outlined three red lines that prohibit the use of its technology for mass domestic surveillance, for directing autonomous weapons systems, and for high-stakes automated decision-making.

The company said deployment will be cloud only and that it will retain control over its safety systems, with cleared engineers and researchers involved in oversight. 

"We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," the company wrote. 

The contract references existing U.S. laws governing surveillance and military use of AI, including requirements for human oversight in certain weapons systems and restrictions on monitoring Americans’ private information. 

OpenAI said it would not provide models without safety guardrails and could terminate the agreement if terms are violated, though it added that it does not expect that to happen. 

Despite its dispute with Washington, Anthropic appears to be gaining traction among consumers. 

Claude recently climbed to the top position in Apple’s U.S. App Store free rankings, overtaking OpenAI’s ChatGPT. 

Data from Sensor Tower shows the app was outside the top 100 at the end of January but steadily rose through February.

A company spokesperson said daily signups have reached record levels this week, free users have increased more than 60 percent since January and paid subscriptions have more than doubled this year.

Infostealer Malware Targets OpenClaw AI Agent Files to Steal API Keys and Authentication Tokens


Now appearing in threat reports, OpenClaw — a local AI assistant that runs directly on personal devices — has rapidly gained popularity. Because it operates on users’ machines, attackers are shifting focus to its configuration files. Recent malware infections have been caught stealing setup data containing API keys, login tokens, and other sensitive credentials, exposing private access points that were meant to remain local. 

Previously known as ClawdBot or MoltBot, OpenClaw functions as a persistent assistant that reads local files, logs into email and messaging apps, and interacts with web services. Since it stores memory and configuration details on the device itself, compromising it can expose deeply personal and professional data. As adoption grows across home and workplace environments, saved credentials are becoming attractive targets. 

Cybersecurity firm Hudson Rock identified what it believes is the first confirmed case of infostealer malware extracting OpenClaw configuration data. The incident marks a shift in tactics: instead of stealing only browser passwords, attackers are now targeting AI assistant environments that store powerful authentication tokens. According to co-founder and CTO Alon Gal, the infection likely involved a Vidar infostealer variant, with stolen data traced to February 13, 2026. 

Researchers say the malware did not specifically target OpenClaw. Instead, it scanned infected systems broadly for files containing keywords like “token” or “private key.” Because OpenClaw stores data in a hidden folder with those identifiers, its files were automatically captured. Among the compromised files, openclaw.json contained a masked email, workspace path, and a high-entropy gateway authentication token that could enable unauthorized access or API impersonation. 
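
As a concrete illustration of that kind of broad keyword sweep, here is a minimal defensive sketch in Python (a hypothetical audit script, not Vidar's actual code) that walks a directory tree and flags small files whose contents mention strings like "token" or "private key", so a user can find credential-bearing files before an infostealer does. The scan root, size cap, and keyword list are illustrative assumptions.

```python
# Illustrative defensive audit: find files an infostealer's keyword
# sweep would likely grab. Keywords, root, and size cap are assumptions.
import os

KEYWORDS = (b"token", b"private key", b"api_key")
SCAN_ROOT = os.path.expanduser("~")   # scan the user's home directory
MAX_BYTES = 256 * 1024                # skip large binaries for speed

def flag_sensitive_files(root):
    """Yield paths of small files whose contents match any keyword."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getsize(path) > MAX_BYTES:
                    continue
                with open(path, "rb") as fh:
                    data = fh.read().lower()
            except OSError:
                continue              # unreadable file: skip it
            if any(kw in data for kw in KEYWORDS):
                yield path

if __name__ == "__main__":
    for hit in flag_sensitive_files(SCAN_ROOT):
        print("review:", hit)
```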

The device.json file stored public and private encryption keys used for pairing and signing, meaning attackers with the private key could mimic the victim’s device and bypass security checks. Additional files such as soul.md, AGENTS.md, and MEMORY.md outlined the agent’s behavior and stored contextual data including logs, messages, and calendar entries. Hudson Rock concluded that the combination of stolen tokens, keys, and memory data could potentially allow near-total digital identity compromise.
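
To see why a leaked private key from a file like device.json is so damaging, consider the sketch below. It assumes Ed25519 pairing keys (the article does not specify OpenClaw's actual algorithm) and uses the third-party cryptography package: whoever holds the private key can sign a pairing challenge that verifies against the device's public key, which is precisely what lets an attacker pass as the victim's device.

```python
# Minimal sketch of challenge-response device pairing, assuming Ed25519 keys.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a scheme like the one described, this private key would sit in device.json.
device_private_key = Ed25519PrivateKey.generate()
device_public_key = device_private_key.public_key()  # registered with the service

# The service checks a signed challenge to confirm it is talking to the device.
challenge = b"pairing-challenge-42"
signature = device_private_key.sign(challenge)

# verify() raises InvalidSignature on a bad signature; silence means success.
device_public_key.verify(signature, challenge)
print("Verified: whoever holds the private key can pass as the device.")
```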

Experts expect infostealers to increasingly target AI systems as they become embedded in professional workflows. Separately, Tenable disclosed a critical flaw in Nanobot, an AI assistant inspired by OpenClaw. The vulnerability, tracked as CVE-2026-2577, allowed remote hijacking of exposed instances but was patched in version 0.13.post7. 

Security professionals warn that as AI tools gain deeper access to personal and corporate systems, protecting configuration files is now as critical as safeguarding passwords. Hidden setup files can carry risks equal to — or greater than — stolen login credentials.
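
One baseline mitigation, sketched below under the assumption of a hypothetical ~/.openclaw config directory on a POSIX system, is to restrict those files to the owning user. This will not stop malware already running under the same account, so moving tokens into an OS keychain remains the stronger fix, but it does close off casual reads by other local users and services.

```python
# A minimal hardening sketch: lock a config tree down to the owning user.
# The ~/.openclaw path is a hypothetical example, and chmod is POSIX-only;
# it will not defeat malware already running as the same user.
import os
import stat

CONFIG_DIR = os.path.expanduser("~/.openclaw")  # hypothetical location

def lock_down(config_dir: str) -> None:
    """Set directories to 700 and files to 600 throughout the tree."""
    if not os.path.isdir(config_dir):
        return
    os.chmod(config_dir, stat.S_IRWXU)  # rwx for owner only (700)
    for dirpath, dirnames, filenames in os.walk(config_dir):
        for d in dirnames:
            os.chmod(os.path.join(dirpath, d), stat.S_IRWXU)
        for f in filenames:
            os.chmod(os.path.join(dirpath, f), stat.S_IRUSR | stat.S_IWUSR)  # 600

if __name__ == "__main__":
    lock_down(CONFIG_DIR)
```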

Influencers Alarmed as New AI Rules Enforce Three-Hour Takedowns


India’s new three-hour takedown rule for online content has triggered unease among influencers, agencies, and brands, who fear it could disrupt campaigns and shrink creative freedom.

The rule, introduced through amendments to the IT Intermediary Rules on February 11, slashes the takedown window from 36 hours to just three, with the stated goal of curbing unlawful and AI-generated deepfake content. Creators argue that while tackling deepfakes and harmful material is essential, such a compressed deadline leaves almost no room to contest wrongful flags or provide context, especially when automated moderation tools make mistakes. They warn that legitimate posts could be penalised simply because systems misread nuance, humour, or sensitive but educational topics.

Influencer Ekta Makhijani described the deadline as “incredibly tight,” noting that if a brand campaign video is misflagged, an entire launch window could be lost in hours rather than days. She highlighted how parenting content around breastfeeding or toddler behaviour has previously been misinterpreted by moderation tools, and said the shorter window magnifies the risk of such false positives. Apparel brand founder Akanksha Kommirelly added that small creators lack round-the-clock legal and compliance teams, making it unrealistic for them to respond to takedown notices at all times.

Experts also worry about a chilling effect on speech, especially satire, political commentary, and advocacy. With platforms facing tighter liability, agencies fear an "act first, verify later" culture in which companies remove anything remotely borderline to stay safe. Raj Mishra of Chtrbox warned that, in practice, the incentive becomes to take down flagged content immediately, which could hit investigative work or edgy creative pieces hardest. India's linguistic diversity further complicates moderation, as systems trained mainly on English may misinterpret regional content.

Alongside takedowns, mandatory AI labelling is reshaping creator workflows and brand strategies. Kommirelly noted that prominent AI tags on visual campaigns may weaken brand recall, while Mishra cautioned that platforms could quietly de-prioritise AI-labelled content in algorithms, reducing reach regardless of audience acceptance. This dual pressure of strict timelines and AI disclosure forces creators to rethink how they script, edit, and publish content.

Agencies like Kofluence and Chtrbox are responding by building compliance support systems for the creator economy. These include AI content guides, pre-upload checks, documentation protocols, legal support networks, and even insurance options to cover campaign disruptions. While most stakeholders accept that tougher rules are needed against deepfakes and abuse, they are urging the government to differentiate emergency takedowns for clearly illegal content from more contested speech so that speed does not entirely override fairness.

US Employs Anthropic’s Claude AI in High-Profile Venezuela Raid



The use of a commercially developed artificial intelligence system in a classified US military operation represents a significant technological shift in the design of modern defence strategy. What was once confined to research laboratories and enterprise software environments has now become integral to high-profile operational planning, signalling that the convergence of Silicon Valley innovation with national security doctrine has reached a new stage.

Nicolás Maduro's capture was allegedly assisted by advanced AI tools, prompting intensified scrutiny of how emerging technologies are used in conflict scenarios and raising broader questions about accountability, oversight, and the evolving line between corporate governance frameworks and military necessity.

The US military's recent operation to seize former Venezuelan President Nicolás Maduro sits at a striking intersection of cutting-edge technology and modern warfare. Beyond demonstrating traditional force, the operation showed that artificial intelligence is becoming increasingly important in high-stakes conflict situations.

A number of reports citing The Wall Street Journal indicated that Anthropic's Claude AI model was deployed in the operation that led to Maduro's capture. This suggests that advanced artificial intelligence is becoming a significant part of US defence infrastructure, and it highlights the complex intersection between corporate AI security measures and military requirements.

According to the report, Claude was deployed through a secure collaboration with Palantir Technologies, enabling high-level data synthesis, analytical modeling, and operational support. The report describes Claude as the first commercially developed artificial intelligence system to be used in a classified environment.

As Anthropic's published usage policies expressly prohibit applications related to violence, weapon development, or surveillance, its reported involvement is significant. However, according to reports, the model was leveraged by defence officials to assist in key planning phases and intelligence coordination surrounding the mission that culminated in Maduro's arrest and transfer to New York to face federal charges. 

The episode highlights both the operational utility of AI-enabled analytical systems and the legal and ethical challenges of deploying commercial technologies in sensitive national security settings. Reports indicate that Claude's capabilities may have been used to process complex intelligence datasets, support real-time decision workflows, and synthesize multilingual information streams within compressed operational timeframes, though specific implementation details remain confidential.

Following the raid, which involved coordinated military action in Caracas and the detention of the former Venezuelan leader, debate has intensified over the scope and limits of artificial intelligence within the U.S. defence establishment. Several leading AI developers, including Anthropic and OpenAI, have reportedly been encouraged to make their models available on classified networks with fewer operational restrictions than those imposed in civilian environments.

As part of its strategic objectives, the Pentagon seeks to integrate advanced artificial intelligence into intelligence analysis, mission planning, and multi-domain operational coordination. Claude's availability within classified environments, facilitated by third-party infrastructure partnerships, has become a source of institutional tension, in particular because Anthropic's internal safeguards prohibit the model from being used for violent or surveillance-related tasks.

The Department of Defense has argued that AI systems must be able to support "all lawful purposes," including rapid, AI-assisted intelligence fusion across contested domains, a position it considers essential for future operational readiness.

Because of the company's refusal to relax certain safeguards, senior defence leadership, including Pete Hegseth, has indicated that authorities such as the Defense Production Act or supply-chain risk designations may be considered when evaluating future contractual relationships.

As this technological convergence accelerates, it becomes increasingly difficult for governments and AI developers to reconcile national security imperatives with corporate governance obligations. At the center of the ethical and strategic challenge is a broader question: how should advanced artificial intelligence tools be governed in national security contexts? That discussion extends beyond single missions to the future architecture of defence technology and the safeguards placed on autonomous and semi-autonomous systems.

In a time when defence institutions are deeply integrating artificial intelligence into operational command structures, this episode underscores a pivotal point in the governance of dual-use technologies. When commercial AI innovation is combined with classified military deployment, robust contractual clarity is necessary, as are enforceable oversight mechanisms, independent review systems and standardized compliance frameworks integrated into both software and procurement processes. 

Regulatory architecture must now harmonise strategic planning and operational effectiveness with accountability, legal safeguards, and ethical constraints.

Without such calibrated governance, advances in artificial intelligence risk outpacing the oversight mechanisms designed to keep them safe. The standards developed in response to this episode will significantly shape future national defence doctrines, as well as global norms governing artificial intelligence in conflict environments for years to come.

