

Madison Square Garden Notifies Victims of SSN Data Breach

 



The Madison Square Garden Family of Companies has disclosed that it recently alerted an undisclosed number of individuals about a cybersecurity incident that occurred in August 2025. The company confirmed that the exposed information includes names and Social Security numbers.

According to MSG’s notification letter, attackers exploited a previously unknown vulnerability in Oracle’s E-Business Suite, an enterprise software platform widely used for finance, human resources, and back-office operations. The affected system was hosted and managed by an unnamed third-party vendor, indicating the intrusion occurred through an externally maintained environment rather than MSG’s core internal network.

Oracle informed customers that an undisclosed condition in the application had been abused by an unauthorized party to obtain access to stored data. MSG stated that its investigation, completed in late November 2025, determined that unauthorized access had taken place in August 2025. The gap between compromise and confirmation reflects a common pattern in zero-day attacks, where flaws are exploited before vendors are aware of their existence or able to issue patches.

In November 2025, the ransomware group known as Clop, also stylized as Cl0p, publicly claimed responsibility for the breach. During the same period, the group carried out a broader campaign targeting hundreds of organizations by leveraging the same Oracle vulnerability. MSG has not acknowledged Clop’s claim, and independent verification of the group’s involvement has not been established. The company has not disclosed how many people were notified, whether a ransom demand was made, or whether any payment occurred. A request for further comment remains pending.

MSG is offering eligible individuals one year of complimentary credit monitoring through TransUnion. Affected recipients have 90 days from receiving the notice letter to enroll.

Clop first appeared in 2019 and has become known for exploiting zero-day flaws in enterprise software. Beyond Oracle’s E-Business Suite, the group has targeted Cleo file transfer software and, more recently, vulnerabilities in Gladinet CentreStack file servers. Unlike traditional ransomware operators that focus primarily on encrypting systems, Clop frequently prioritizes data theft. The group exfiltrates information and then threatens to publish or sell it if payment is not made.

In 2025, Clop claimed responsibility for 456 ransomware incidents. Of those, 31 targeted organizations publicly confirmed resulting data breaches, collectively exposing approximately 3.75 million personal records. Institutions reportedly affected by the Oracle zero-day campaign include Harvard University, GlobalLogic, SATO Corporation, and Dartmouth College.

So far in 2026, Clop has claimed another 123 victims, including the French labor union CFDT. Its most recent operations reportedly leverage a newer vulnerability in Gladinet CentreStack servers.

Ransomware activity across the United States remains extensive. In 2025, researchers recorded 646 confirmed ransomware attacks against U.S. organizations, along with 3,193 additional unverified claims made by ransomware groups. Confirmed incidents resulted in nearly 42 million exposed records. One of the largest cases linked to Clop involved exploitation of the Oracle vulnerability at the University of Phoenix, which later notified 3.5 million individuals. In 2026 to date, 17 confirmed attacks and 624 unconfirmed claims are under review.

Other incidents disclosed this week include a December 2024 breach affecting the City of Carthage, Texas, reportedly claimed by Rhysida; a March 2025 breach at Hennessy Advisors impacting 12,643 individuals and attributed to LockBit; an August 2025 breach at KCI Telecommunications linked to Akira; and a December 2025 incident at The Lewis Bear Company affecting 555 individuals and also claimed by Akira.

Ransomware attacks can both disable systems through encryption and involve large-scale data theft. In Clop’s case, data exfiltration appears to be the primary tactic. Organizations that refuse to meet ransom demands may face public disclosure of stolen data, extended operational disruption, and increased fraud risks for affected individuals.

The Madison Square Garden Family of Companies includes Madison Square Garden Sports Corp., Madison Square Garden Entertainment Corp., and Sphere Entertainment Co. The group owns and operates major venues such as Madison Square Garden, Radio City Music Hall, and the Las Vegas Sphere.



How a Single Brick Helped Homeland Security Rescue an Abused Child from the Dark Web

 

A years-long investigation by the US Department of Homeland Security led to the dramatic rescue of a young girl whose abuse images had been circulating on the dark web — with a crucial clue hidden in the background of a photograph.

Specialist online investigator Greg Squire had nearly exhausted all leads while trying to identify and locate a 12-year-old girl his team had named Lucy. Explicit images of her were being distributed through encrypted networks designed to conceal users’ identities. The perpetrator had taken deliberate steps to erase identifying features, carefully cropping and altering images to avoid detection.

Despite those efforts, investigators found that the answer was concealed in plain sight.

Squire, part of an elite Homeland Security Investigations unit focused on identifying children in sexual abuse material, became deeply invested in Lucy’s case early in his career. The case struck him personally — Lucy was close in age to his own daughter, and new images of her abuse continued to surface online.

Initially, the team determined only that Lucy was likely somewhere in North America, based on visible electrical outlets and fixtures in the room. Attempts to seek assistance from Facebook proved unsuccessful. Although the company had facial recognition technology, it stated it "did not have the tools" to help with the search.

Investigators then scrutinized every visible detail in Lucy’s bedroom — bedding patterns, toys, clothing, and furniture. A breakthrough came when they realized that a sofa appearing in some images had only been sold regionally rather than nationwide, reducing the potential customer base to roughly 40,000 buyers.

"At that point in the investigation, we're [still] looking at 29 states here in the US. I mean, you're talking about tens of thousands of addresses, and that's a very, very daunting task," says Squire.

Still searching for more clues, Squire turned his attention to an exposed brick wall visible in the background of several photos. He contacted the Brick Industry Association after researching brick manufacturers.

"And the woman on the phone was awesome. She was like, 'how can the brick industry help?'"

The association circulated the image among brick specialists nationwide. One expert, John Harp — a veteran in brick sales since 1981 — quickly identified the material.

"I noticed that the brick was a very pink-cast brick, and it had a little bit of a charcoal overlay on it. It was a modular eight-inch brick and it was square-edged," he says. "When I saw that, I knew exactly what the brick was," he adds.

Harp identified it as a "Flaming Alamo".

"[Our company] made that brick from the late 60s through about the middle part of the 80s, and I had sold millions of bricks from that plant."

Although sales records were not digitized and existed only as a "pile of notes", Harp shared a vital insight.

"He goes: 'Bricks are heavy.' And he said: 'So heavy bricks don't go very far.'"

That observation narrowed the search dramatically. Investigators filtered the sofa buyers list to those living within a 100-mile radius of the brick factory in the American Southwest.


From there, social media analysis uncovered a photograph of Lucy alongside an adult woman believed to be a relative. Tracking related addresses and household members eventually led authorities to a single residence.

Investigators discovered that Lucy lived there with her mother’s boyfriend — a convicted sex offender. Within hours, local Homeland Security agents arrested the man, who had abused Lucy for six years. He was later sentenced to more than 70 years in prison.

Harp, who has fostered over 150 children and adopted three, said the rescue resonated deeply with him.

"We've had over 150 different children in our home. We've adopted three. So, doing that over those years, we have a lot of children in our home that were [previously] abused," he said.

"What [Squire's team] do day in and day out, and what they see, is a magnification of hundreds of times of what I've seen or had to deal with."

The emotional toll of the work eventually affected Squire’s mental health. He admits that outside of work, "alcohol was a bigger part of my life than it should have been".

Reflecting on that period, he said:

"At that point my kids were a bit older… and, you know, that almost enables you to push harder. Like… 'I bet if I get up at three this morning, I can surprise [a perpetrator] online.'

"But meanwhile, personally… 'Who's Greg? I don't even know what he likes to do.' All of your friends… during the day, you know, they're criminals… All they do is talk about the most horrific things all day long."

After his marriage ended and he experienced suicidal thoughts, colleague Pete Manning urged him to seek help.

"It's hard when the thing that brings you so much energy and drive is also the thing that's slowly destroying you," Manning says.

Squire credits confronting his struggles openly as the turning point.

"I feel honoured to be part of the team that can make a difference instead of watching it on TV or hearing about it… I'd rather be right in there in the fight trying to stop it."

Years later, Squire met Lucy — now in her 20s — for the first time. She said healing and support have helped her speak openly about her past.

"I have more stability. I'm able to have the energy to talk to people [about the abuse], which I could not have done… even, like, a couple years ago."

She revealed that when authorities intervened, she had been "praying actively for it to end".

"Not to sound cliché, but it was a prayer answered."

Squire shared that he wished he could have reassured her during those years.

"You wish there was some telepathy and you could reach out and be like, 'listen, we're coming'."

When questioned about its earlier role, Facebook responded: "To protect user privacy, it's important that we follow the appropriate legal process, but we work to support law enforcement as much as we can."

Threat Actors Hit Iranian Sites and Apps After the US-Israel Strike


A series of cyberattacks took place last week during the U.S.-Israel strikes on targets throughout Iran.

The cyberattacks included hijacking several news sites to display messages, as well as hacking BadeSaba, a religious calendar application with over 5 million downloads, to show warnings telling users "It's time for reckoning" and urging the armed forces to surrender.

A U.S. Cyber Command spokesperson declined to comment on the issue.

Internet connectivity in Iran dropped sharply at 0706 GMT, with only minimal connectivity remaining, according to Kentik's director of internet analysis.

Targeting BadeSaba was a shrewd choice for a cyberattack because its users tend to be more religious and supportive of the government, said Hamid Kashfi, a security researcher and founder of the cybersecurity firm DarkCell.

Cyberattacks also hit various Iranian military targets and government services in an effort to hamper a coordinated Iranian response, according to the Jerusalem Post; Reuters has not verified those claims. "As Iran considers its options, the likelihood increases that proxy groups and hacktivists may take action, including cyberattacks, against Israeli and U.S.-affiliated military, commercial, or civilian targets," said Rafe Pilling, director of threat intelligence at the cybersecurity firm Sophos.

These cyber operations may include old data breaches recycled as new, opportunistic attempts to breach internet-exposed industrial systems, and the redirection of existing offensive cyber operations.

Cynthia Kaiser, a senior vice president at the anti-ransomware company Halcyon and a former top FBI cyber official, stated that activity has escalated in the Middle East. 

According to Kaiser, there have also been calls to action from well-known pro-Iranian cyber personas that have previously carried out ransomware attacks, hack-and-leak operations, and distributed denial-of-service (DDoS) attacks, which overload internet services and make them unavailable. She stated, "CrowdStrike is already seeing activity consistent with Iranian-aligned threat actors and hacktivist groups conducting reconnaissance and initiating DDoS attacks."

Experts also believe that state-sponsored Iranian hacking groups had already launched "wiper" attacks, which destroy data, against Israeli targets before the strikes.

Apart from a brief disruption of services in Tirana, the capital of Albania, media sources reported little sign of the disruptive cyberattacks frequently cited in discussions of Iran's digital capabilities in the aftermath of the June U.S. strike on Iranian nuclear targets.

U.S. Blacklists Anthropic as Supply Chain Risk as OpenAI Secures Pentagon AI Deal

 

The Trump administration has designated AI startup Anthropic as a supply chain risk to national security, ordering federal agencies to immediately stop using its AI model Claude. 

The classification has historically been applied to foreign companies and marks a rare move against a U.S. technology firm. 

President Donald Trump announced that agencies must cease use of Anthropic's technology, allowing a six-month phase-out for departments heavily reliant on its systems, including the Department of War.

Defense Secretary Pete Hegseth later formalized the designation and said no contractor, supplier or partner doing business with the U.S. military may conduct commercial activity with Anthropic. 

At the center of the dispute is Anthropic’s refusal to grant the Pentagon unrestricted access to Claude for what officials described as lawful purposes. 

Chief executive Dario Amodei sought two exceptions covering mass domestic surveillance and the development of fully autonomous weapons. 

He argued that current AI systems are not reliable enough for autonomous weapons deployment and warned that mass surveillance could violate Americans’ civil rights. 

Anthropic has said a proposed compromise contract contained loopholes that could allow those safeguards to be bypassed. 

The company had been operating under a $200 million Department of War contract since June 2024 and was the first AI firm to deploy models on classified government networks.

After negotiations broke down, the Pentagon issued an ultimatum that Anthropic declined, leading to the blacklist. 

The company plans to challenge the designation in court, arguing it may exceed the authority granted under federal law. 

While the restriction applies directly to Defense Department related work, legal analysts say the move could create broader uncertainty across the technology sector. 

Anthropic relies on cloud infrastructure from Amazon, Microsoft and Google, all of which maintain major defense contracts. 

A strict interpretation of the order could complicate those relationships. 

President Trump has warned of serious civil and criminal consequences if Anthropic does not cooperate during the transition. 

Even as Anthropic faces federal restrictions, OpenAI has moved ahead with its own classified agreement with the Pentagon. 

The company said Saturday that it had finalized a deal to deploy advanced AI systems within classified environments under a framework it describes as more restrictive than previous contracts. 

In its official blog post, OpenAI said, "Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies." It added, "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s." 

OpenAI outlined three red lines that prohibit the use of its technology for mass domestic surveillance, for directing autonomous weapons systems, and for high-stakes automated decision making.

The company said deployment will be cloud only and that it will retain control over its safety systems, with cleared engineers and researchers involved in oversight. 

"We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections," the company wrote. 

The contract references existing U.S. laws governing surveillance and military use of AI, including requirements for human oversight in certain weapons systems and restrictions on monitoring Americans’ private information. 

OpenAI said it would not provide models without safety guardrails and could terminate the agreement if terms are violated, though it added that it does not expect that to happen. 

Despite its dispute with Washington, Anthropic appears to be gaining traction among consumers. 

Claude recently climbed to the top position in Apple’s U.S. App Store free rankings, overtaking OpenAI’s ChatGPT. 

Data from Sensor Tower shows the app was outside the top 100 at the end of January but steadily rose through February.

A company spokesperson said daily signups have reached record levels this week, free users have increased more than 60 percent since January and paid subscriptions have more than doubled this year.

Infostealer Malware Targets OpenClaw AI Agent Files to Steal API Keys and Authentication Tokens

 

Now appearing in threat reports, OpenClaw — a local AI assistant that runs directly on personal devices — has rapidly gained popularity. Because it operates on users’ machines, attackers are shifting focus to its configuration files. Recent malware infections have been caught stealing setup data containing API keys, login tokens, and other sensitive credentials, exposing private access points that were meant to remain local. 

Previously known as ClawdBot or MoltBot, OpenClaw functions as a persistent assistant that reads local files, logs into email and messaging apps, and interacts with web services. Since it stores memory and configuration details on the device itself, compromising it can expose deeply personal and professional data. As adoption grows across home and workplace environments, saved credentials are becoming attractive targets. 

Cybersecurity firm Hudson Rock identified what it believes is the first confirmed case of infostealer malware extracting OpenClaw configuration data. The incident marks a shift in tactics: instead of stealing only browser passwords, attackers are now targeting AI assistant environments that store powerful authentication tokens. According to co-founder and CTO Alon Gal, the infection likely involved a Vidar infostealer variant, with stolen data traced to February 13, 2026. 

Researchers say the malware did not specifically target OpenClaw. Instead, it scanned infected systems broadly for files containing keywords like “token” or “private key.” Because OpenClaw stores data in a hidden folder with those identifiers, its files were automatically captured. Among the compromised files, openclaw.json contained a masked email, workspace path, and a high-entropy gateway authentication token that could enable unauthorized access or API impersonation. 
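The broad sweep described here, keyword matching followed by harvesting of long, high-entropy strings, can be illustrated with a short Python sketch. This is a defender-oriented illustration of the general technique, not code from the report; the keyword list, minimum length, and entropy threshold are illustrative assumptions:

```python
import math
import re

# Illustrative keywords an opportunistic sweep might match on (assumption,
# based on the article's mention of "token" and "private key").
KEYWORDS = ("token", "private key", "api_key")

def shannon_entropy(s: str) -> float:
    """Bits per character; a random 32-char base62 token scores near 5-6."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def looks_interesting(text: str) -> bool:
    """Mimic the broad keyword test: does the file mention credential terms?"""
    lower = text.lower()
    return any(k in lower for k in KEYWORDS)

def flag_secrets(text: str, min_len: int = 20, threshold: float = 4.0):
    """Return long, token-like character runs whose entropy suggests a secret."""
    candidates = re.findall(r"[A-Za-z0-9+/_\-]{%d,}" % min_len, text)
    return [c for c in candidates if shannon_entropy(c) >= threshold]
```

Run against a config file, `looks_interesting` would trigger on a field name like `gateway_token`, and `flag_secrets` would isolate the credential value itself while ignoring low-entropy filler, which is why files such as `openclaw.json` are captured even when no malware family targets them by name.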

The device.json file stored public and private encryption keys used for pairing and signing, meaning attackers with the private key could mimic the victim’s device and bypass security checks. Additional files such as soul.md, AGENTS.md, and MEMORY.md outlined the agent’s behavior and stored contextual data including logs, messages, and calendar entries. Hudson Rock concluded that the combination of stolen tokens, keys, and memory data could potentially allow near-total digital identity compromise.

Experts expect infostealers to increasingly target AI systems as they become embedded in professional workflows. Separately, Tenable disclosed a critical flaw in Nanobot, an AI assistant inspired by OpenClaw. The vulnerability, tracked as CVE-2026-2577, allowed remote hijacking of exposed instances but was patched in version 0.13.post7. 

Security professionals warn that as AI tools gain deeper access to personal and corporate systems, protecting configuration files is now as critical as safeguarding passwords. Hidden setup files can carry risks equal to — or greater than — stolen login credentials.
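As a minimal sketch of that advice (a POSIX-only illustration, not a vetted hardening tool; the file-extension list and directory layout are assumptions), one can audit an agent's configuration directory for files readable by other users and tighten them to owner-only access:

```python
import os
import stat
from pathlib import Path

def world_or_group_readable(path: Path) -> bool:
    """True if anyone besides the file's owner can read it."""
    mode = path.stat().st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))

def audit_config_dir(root: Path) -> list:
    """Return config-like files whose permissions are looser than 0600."""
    risky = []
    for path in sorted(root.rglob("*")):
        # Extensions chosen to match the kinds of files named in the article.
        if path.is_file() and path.suffix in {".json", ".md", ".env"}:
            if world_or_group_readable(path):
                risky.append(path)
    return risky

def lock_down(path: Path) -> None:
    """Restrict a sensitive file to owner read/write (chmod 600)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
```

File permissions do not stop malware running as the same user, which is how infostealers operate, but they do close off exposure to other local accounts and misconfigured services, and the audit habit generalizes to any tool that caches tokens on disk.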

Influencers Alarmed as New AI Rules Enforce Three-Hour Takedowns

 

India’s new three-hour takedown rule for online content has triggered unease among influencers, agencies, and brands, who fear it could disrupt campaigns and shrink creative freedom.

The rule, introduced through amendments to the IT Intermediary Rules on February 11, slashes the takedown window from 36 hours to just three, with the stated goal of curbing unlawful and AI-generated deepfake content. Creators argue that while tackling deepfakes and harmful material is essential, such a compressed deadline leaves almost no room to contest wrongful flags or provide context, especially when automated moderation tools make mistakes. They warn that legitimate posts could be penalised simply because systems misread nuance, humour, or sensitive but educational topics.

Influencer Ekta Makhijani described the deadline as “incredibly tight,” noting that if a brand campaign video is misflagged, an entire launch window could be lost in hours rather than days. She highlighted how parenting content around breastfeeding or toddler behaviour has previously been misinterpreted by moderation tools, and said the shorter window magnifies the risk of such false positives. Apparel brand founder Akanksha Kommirelly added that small creators lack round-the-clock legal and compliance teams, making it unrealistic for them to respond to takedown notices at all times.

Experts also worry about a chilling effect on speech, especially satire, political commentary, and advocacy. With platforms facing tighter liability, agencies fear an “act first, verify later” culture in which companies remove anything remotely borderline to stay safe. Raj Mishra of Chtrbox warned that, in practice, the incentive becomes to take down flagged content immediately, which could hit investigative work or edgy creative pieces hardest. India’s linguistic diversity further complicates moderation, as systems trained mainly on English may misinterpret regional content.

Alongside takedowns, mandatory AI labelling is reshaping creator workflows and brand strategies. Kommirelly noted that prominent AI tags on visual campaigns may weaken brand recall, while Mishra cautioned that platforms could quietly de-prioritise AI-labelled content in algorithms, reducing reach regardless of audience acceptance. This dual pressure of strict timelines and AI disclosure forces creators to rethink how they script, edit, and publish content.

Agencies like Kofluence and Chtrbox are responding by building compliance support systems for the creator economy. These include AI content guides, pre-upload checks, documentation protocols, legal support networks, and even insurance options to cover campaign disruptions. While most stakeholders accept that tougher rules are needed against deepfakes and abuse, they are urging the government to differentiate emergency takedowns for clearly illegal content from more contested speech so that speed does not entirely override fairness.
