
Quantum Cybersecurity Risks Rise as Organizations Prepare for Post-Quantum Cryptography

 

Encrypted data is generally treated as safe because modern cryptography is designed to keep unauthorized parties out. Security experts warn, however, that quantum computing may one day weaken the encryption techniques in common use. Even now, as quantum machines advance, that prospect is shaping strategies for what comes after today’s security models.

A rising worry among cybersecurity professionals is the attack pattern known as "harvest now, decrypt later." Rather than trying to crack secure transmissions immediately, attackers capture encrypted data today and store it until decryption becomes feasible. Once quantum computers reach sufficient strength, ciphers now considered unbreakable could fail, and data believed safe could be exposed years after it was stolen. Because the threat is delayed, preparation is hard to justify before the damage appears.

This threat weighs heavily on institutions that must protect sensitive records over long durations. Finance, public administration, health services, and digital infrastructure routinely manage information that requires protection for many years; encrypted messages captured and stored today could be unlocked by future quantum machines. The underlying issue is that current encryption depends on mathematical problems too hard for classical computers to solve quickly, the assumption on which systems like RSA and elliptic curve cryptography are built.

Quantum machines, however, can solve certain of those problems far faster than conventional computers; Shor's algorithm, for example, efficiently factors the large integers on which RSA relies. That capability would erode the security these common encryption methods now provide. In response, cybersecurity experts are pushing forward with post-quantum cryptography: algorithms designed to hold up even against quantum-scale computing power. A growing favorite is the hybrid setup, which pairs established ciphers with quantum-resistant ones so companies can strengthen protection without abandoning existing infrastructure.

Instead of full system swaps, quantum-resistant algorithms are mixed into present-day encryption layers. Gradual shifts like these ease operational strain while building stronger defenses against future threats. A recent addition is ML-KEM, a key encapsulation mechanism standardized by NIST and built to withstand attacks from future quantum machines. Though adoption is still early, ML-KEM typically works alongside existing encryption rather than replacing it outright, so organizations can layer new defenses on top of old ones and build long-term resilience without an immediate overhaul.
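
The hybrid idea can be sketched in a few lines. The snippet below is a conceptual illustration, not a production implementation: in a real deployment the classical secret would come from an ECDH exchange and the post-quantum secret from an ML-KEM encapsulation (for example via a library such as liboqs), and standardized combiners exist for TLS. Here both secrets are stand-in byte strings so only the combining step is visible.

```python
# Conceptual sketch of hybrid key derivation (NOT a real KEM).
# Both input secrets are placeholders; in practice one would come from
# ECDH and the other from an ML-KEM encapsulation.
import hashlib
import hmac

def hkdf_extract_expand(ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-256 and an all-zero salt."""
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Concatenating both secrets before derivation means the session key
    # remains safe as long as EITHER underlying scheme is unbroken.
    return hkdf_extract_expand(classical_secret + pq_secret, b"hybrid-kdf-demo")
```

Because the two secrets are combined before key derivation, an attacker would have to break both the classical and the post-quantum scheme to recover the session key, which is what makes the hybrid approach attractive during the transition period.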

Security specialists stress the need for methodical planning ahead of the quantum shift. The first step, often overlooked, is mapping which data must stay secure over many years. Next, reviewing the encryption methods in use across IT environments reveals gaps, and hybrid classical/post-quantum algorithms can be phased in where needed. Keeping an inventory of every cryptographic tool in use supports oversight down the line, and alignment with new regulations should be built into the process from the start.
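
As a sketch of the inventory step, the following illustrative triage flags entries whose public-key algorithms rest on factoring or discrete logarithms, the problems quantum computers threaten. The system names and inventory format are invented for the example; a real inventory would be far larger and would also track key sizes and protocol versions.

```python
# Illustrative cryptographic-inventory triage (system names are made up).
# Public-key schemes based on factoring or discrete logs are flagged as
# quantum-vulnerable; symmetric ciphers like AES largely are not.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "ECDH-P256", "DSA"}

inventory = [
    {"system": "vpn-gateway",    "algorithm": "RSA-2048"},
    {"system": "backup-archive", "algorithm": "AES-256-GCM"},
    {"system": "code-signing",   "algorithm": "ECDSA-P256"},
]

def triage(entries):
    """Return the entries that should be migrated to hybrid or PQ schemes."""
    return [e for e in entries if e["algorithm"] in QUANTUM_VULNERABLE]

for entry in triage(inventory):
    print(f'{entry["system"]}: {entry["algorithm"]} -> plan hybrid/PQ migration')
```

Even a simple pass like this makes the migration backlog concrete: the flagged systems are the ones whose captured traffic or signatures are exposed to "harvest now, decrypt later."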

Stronger encryption alone is not enough, however. Teams need ways to examine encrypted data streams for threats without weakening protection, along with consistent monitoring across complex network environments. Zero-trust designs, in which no connection is trusted by default and everything is verified, help sustain both access control and threat detection even when traffic is hidden by encryption.

As organizations begin tackling these issues, specialist guidance tends to emphasize realistic steps toward quantum-safe protection. Training programs spread those insights, engineers compare risk assessment methods, and growing collaboration across sectors gradually shapes approaches to safeguarding critical data.

A clearer path forward forms where knowledge exchange meets real-world testing. Quantum computing is widely expected to reshape cybersecurity in the years ahead, and those who adopt quantum-resistant methods sooner stand a better chance of safeguarding their information. Staying secure means adjusting before the change arrives, not after it disrupts. As the technology advances, protection strategies need constant review; forward-thinking steps today could define resilience tomorrow.

FBI Informant Allegedly Ran Most Operations on Incognito Market While Fentanyl-Laced Drugs Caused Overdose Deaths

 

An FBI informant reportedly handled the majority of activity on Incognito Market—one of the largest drug marketplaces on the dark web—for nearly two years, even as fentanyl-laced pills linked to the platform caused fatal overdoses across the United States. Court documents indicate that the unnamed confidential source managed roughly 95% of transactions on the site between 2022 and 2024, effectively helping operate the $100 million marketplace.

According to filings, the informant approved vendor listings, mediated disputes among users, and oversaw cryptocurrency payments on the platform. These activities allegedly continued even after buyers warned about near-fatal overdoses connected to certain suppliers.

Taiwanese national Rui-Siang Lin, who used the alias “Pharoah,” created Incognito Market and ran it from October 2020 until March 2024. The Tor-based platform hosted nearly 1,800 vendors who sold drugs such as cocaine, methamphetamine, MDMA, and opioids to hundreds of thousands of buyers worldwide.

In October, Judge Colleen McMahon sentenced Lin to 30 years in federal prison and ordered him to forfeit $105 million. The judge described him as a “drug kingpin,” despite the defense raising serious questions about the extent of FBI involvement in the operation.

During sentencing in Manhattan federal court, Arkansas physician David Churchill spoke about the death of his son Reed in September 2022. The 22-year-old died after taking fentanyl-laced oxycodone pills purchased through Incognito Market. The drugs were supplied by a vendor known as RedLightLabs, whose operators—Michael Ta and Raj Srinivasan—later pleaded guilty to charges tied to five overdose deaths.

Churchill asked Lin to remember his son’s face while serving his sentence. However, the revelation that the FBI’s own confidential asset was moderating the marketplace at the time of Reed’s death added another troubling dimension to the case.

When Law Enforcement Becomes the Accomplice

Lin’s defense team argued that the FBI informant functioned more like a partner than an undercover observer. According to defense filings, the government’s source did more than infiltrate the marketplace—the informant played a central operational role.

Documents suggest the informant approved vendors, handled user complaints, and processed transactions while allegedly overlooking warnings about fentanyl contamination in certain drug listings.

In November 2023, users reported severe overdoses and hospitalizations tied to a particular vendor who nevertheless continued fulfilling more than 1,000 orders. Court records also show the informant debated Lin about maintaining bans on fentanyl, reportedly advocating for “free markets” before Lin conducted a user poll—later described as rigged—that maintained the prohibition.

Defense attorney Noam Biale described the situation as a joint operation, saying: “The government had the ability to mitigate the harm—and didn’t do it.”

Judge McMahon also questioned the length of the investigation, asking why authorities allowed the marketplace to remain active for such an extended period after gaining access.

Prosecutors, however, argued that the informant was simply following Lin’s instructions as part of a broader strategy to identify “Pharoah.” Authorities ultimately traced Lin through blockchain analysis and seized servers tied to the marketplace.

While Lin’s 30-year sentence remains in place, his planned appeal and the debate surrounding the informant’s role indicate that the legal and ethical questions surrounding the Incognito Market investigation are far from resolved.

Coruna Exploit Kit Targets iPhones With 23 Vulnerabilities Across Multiple iOS Versions

 

Security researchers have identified a powerful exploit framework targeting Apple iPhones running older versions of the iOS operating system. 

The toolkit, called Coruna and also known as CryptoWaters, includes multiple exploit chains capable of targeting devices running iOS versions from 13.0 through 17.2.1, according to researchers from Google’s Threat Intelligence Group. 

The framework contains five full exploit chains and a total of 23 vulnerabilities. Researchers said the exploit kit is not effective against the most recent versions of iOS. 

“The core technical value of this exploit kit lies in its comprehensive collection of iOS exploits, with the most advanced ones using non-public exploitation techniques and mitigation bypasses,” Google researchers said. 

They added that the infrastructure supporting the kit is carefully designed and integrates several exploit components into a unified framework. 

“The framework surrounding the exploit kit is extremely well engineered. The exploit pieces are all connected naturally and combined together using common utility and exploitation frameworks.” 

According to researchers, the exploit kit has circulated among several types of threat actors since early 2025. 

The toolkit first appeared in a commercial surveillance operation before being used by a government-backed attacker. 

By late 2025, it had reached a financially motivated threat group operating from China. Investigators say the movement of the exploit kit between groups suggests a growing underground market in which previously developed zero-day tools are resold and reused. 

Security firm iVerify said the spread of Coruna demonstrates how advanced surveillance tools can move beyond their original operators. 

“Coruna is one of the most significant examples we’ve observed of sophisticated spyware-grade capabilities proliferating from commercial surveillance vendors into the hands of nation-state actors and ultimately mass-scale criminal operations,” the company said. 

Researchers first detected elements of the exploit chain in early 2025 when a surveillance customer used it within a JavaScript framework that had not been previously documented. 

The framework gathers information about the targeted device including the model and the iOS version running on it. Based on this fingerprinting data, the framework delivers a suitable WebKit remote code execution exploit. 

One of the vulnerabilities used in the chain was CVE-2024-23222, a type confusion flaw in Apple’s WebKit browser engine that was patched in January 2024. 

The framework appeared again in July 2025 when it was discovered on a domain used to deliver malicious content through hidden iframes on compromised websites in Ukraine. 

These sites included pages related to industrial tools, retail services, and e-commerce platforms. 

Researchers believe a suspected Russian espionage group tracked as UNC6353 was responsible for that activity. The exploit framework was delivered only to certain users based on their geographic location and device characteristics. 

A third wave of activity was identified in December 2025. In that campaign, attackers used a network of fake Chinese websites related to financial topics to distribute the exploit kit. 

Visitors were encouraged to access the sites from iPhones or iPads for a better browsing experience. Once accessed from an Apple device, the websites inserted a hidden iframe that triggered the Coruna exploit kit. This campaign has been linked to a threat cluster tracked as UNC6691. 

Further investigation uncovered a debug version of the exploit kit along with several exploit samples spanning five complete attack chains. 

Researchers said the kit includes vulnerabilities affecting several generations of iOS. These include exploits targeting iOS 13 through iOS 17.2.1 using vulnerabilities such as CVE-2020-27932, CVE-2022-48503, CVE-2023-32409 and CVE-2024-23222. 
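
As a rough defensive illustration, a fleet-management script could flag devices whose reported iOS version falls inside the targeted range. The 13.0–17.2.1 boundaries come from the reporting above; the helper itself is an assumption for the example, not part of Google's tooling.

```python
# Illustrative check: is a reported iOS version inside the range the
# Coruna chains are said to target (13.0 through 17.2.1)?
def parse_version(v: str) -> tuple:
    """Turn '17.2.1' into (17, 2, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

AFFECTED_MIN = parse_version("13.0")
AFFECTED_MAX = parse_version("17.2.1")

def in_affected_range(version: str) -> bool:
    return AFFECTED_MIN <= parse_version(version) <= AFFECTED_MAX

print(in_affected_range("16.5"))   # inside the targeted range
print(in_affected_range("17.3"))   # past the last affected release
```

Comparing version strings as integer tuples avoids the classic pitfall of lexicographic string comparison, where "13.0" would sort above "9.3".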

Some of the vulnerabilities in the toolkit had previously been used as zero day exploits in earlier operations. 

“Photon and Gallium are exploiting vulnerabilities that were also used as zero days as part of Operation Triangulation,” Google researchers said. 

Once a device is compromised, attackers can deploy additional malware components. In the case of the UNC6691 campaign, the exploit chain delivered a stager called PlasmaLoader. 

The program is designed to decode QR codes embedded in images and retrieve additional modules from external servers. These modules can then collect sensitive data from cryptocurrency wallet applications including Base, Bitget Wallet, Exodus and MetaMask. 

Researchers said the malware contains hard-coded command-and-control servers along with a fallback system that generates domain names automatically using a domain generation algorithm seeded with the word “lazarus.” 
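
The actual PlasmaLoader algorithm has not been published, so the following is only a generic sketch of how a seeded DGA works: the malware hashes a fixed seed word together with the current date, letting both the implant and its operator derive the same fallback domain list independently, without any communication. The seed, date format, and domain shape below are assumptions for illustration.

```python
# Generic seeded-DGA sketch (NOT the real PlasmaLoader algorithm).
# Both sides compute the same pseudo-random domains from a shared seed
# and the current date, giving the malware a fallback rendezvous point.
import hashlib

def generate_domains(seed: str, day: str, count: int = 3, tld: str = ".com"):
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}:{day}:{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)  # first 12 hex chars as the label
    return domains

# Malware and operator derive an identical list for a given day:
print(generate_domains("lazarus", "2025-12-01"))
```

Defenders exploit the same determinism in reverse: once a DGA's seed and logic are recovered, future domains can be pre-computed and sinkholed or blocklisted before the malware ever resolves them.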

A notable characteristic of the Coruna exploit kit is that it avoids running on devices using Apple’s Lockdown Mode or devices browsing in private mode. Security researchers recommend that iPhone users update their devices to the latest version of iOS and enable Lockdown Mode when additional protection is needed.

China Tightens Control Over Official Data Available to the Outside World


 

In the internet’s early years, the web was widely seen as a borderless repository of knowledge, one that let government documents, statistical records, and institutional disclosures travel freely across jurisdictions. 

Scholars, investors, journalists, and policymakers grew accustomed to treating publicly hosted websites as a reliable window into distant government administration. Recent observations, however, suggest that this assumption of digital openness is quietly breaking down in China’s online ecosystem. 

The international accessibility of Chinese government portals has declined steadily over the past few years: a growing number of official websites that once appeared regularly in global search results can no longer be reached from outside the country’s borders. 

Analysts interpret the emerging pattern as a deliberate recalibration of information governance rather than a series of isolated technical disruptions. On this reading, China is managing not only the flow of foreign content into the country but also how much of its own institutional data remains visible to the outside world.

Researchers who have examined the accessibility of official Chinese government websites describe the result as a subtle architectural shift: a system that restricts not only what the country’s own citizens are allowed to observe, but also what the outside world can see about China. 

A detailed analysis conducted in February 2025 indicates these interruptions are not mere technical inconsistencies. Researchers attribute approximately sixty percent of failed connections to Chinese government portals to deliberate policy restrictions, with the remaining cases explained by network congestion, legacy infrastructure, or fragmented hosting systems. 

This inverts the familiar logic of Chinese domestic internet controls. Where the original system limited what users inside the country could view of the outside world, the new configuration appears intended to restrict what audiences outside the country may see of China’s own administrative, economic, and regulatory landscape. These restrictions are unevenly distributed.

Rather than a uniform nationwide block, the geo-filtering tends to appear in clusters across specific provinces or prefectures. As a result, some municipal or regional data portals remain available to overseas users while neighboring jurisdictions are systematically unreachable from abroad. 

This fragmented pattern makes it increasingly difficult for foreign researchers and analysts to build consistent datasets, since availability varies with the level of administration and the technology behind each government website.

The tightening of external access has also extended beyond government portals into major commercial information services that have long served as research infrastructure for international observers of China’s economy. 

Several commonly used platforms - Qichacha, a corporate registry database; the China National Knowledge Infrastructure academic repository; and the financial data provider Wind - cut off foreign connectivity in 2022 and 2023. 

A wide range of multinational companies, consulting firms, and academic institutions used these tools for competitor analysis, regulatory monitoring, and market research within China. Their withdrawal from overseas networks leaves external stakeholders with significantly less verifiable public data. 

A similar episode occurred in May 2024, when the National People’s Congress website temporarily imposed geographic restrictions, blocking access from outside mainland China, Hong Kong, Macao, and Taiwan. 

Although the restriction was eventually lifted, the incident showed that even the country’s highest legislative portals can change accessibility suddenly and without notice. By early 2025, it was evident that an access gap was also growing within China’s own digital ecosystem.

Autocomplete suggestions for the Chinese phrase for “government website” increasingly included queries such as “cannot enter government website” and “cannot open government website.” The trend suggests the issue affects not just international analysts but also Chinese citizens living abroad, overseas scholars, and global business teams seeking official information from outside the country. 

For much of the modern internet era, Chinese digital governance has been closely linked to what is known as the Great Firewall, a layered system of network filtering and regulatory oversight designed to limit domestic access to foreign platforms. 

For years, the framework has made a wide range of international services largely inaccessible from mainland China, including major technology platforms and several prominent global news outlets. 

Some residents have historically used virtual private networks to circumvent these restrictions, but authorities have repeatedly tightened the rules around such tools, framing them as threats to national security and information sovereignty and pushing unauthorized circumvention technologies further underground. 

The emerging pattern of restricted access to Chinese government websites markedly inverts this long-established architecture. Rather than filtering only inbound information, the new evidence indicates that the outward visibility of Chinese public-sector data is also being limited. 

To gauge the extent of the phenomenon, Lennart Brussee recently conducted a technical assessment covering more than 13,000 websites operated by Chinese governments at every administrative level. In November, he tested their accessibility from more than a dozen locations outside China, using residential proxy infrastructure to simulate ordinary user connections. 

The results showed that a substantial share of these official websites could not be reached from overseas networks. While some failures were consistent with routine connectivity problems, a significant share bore the signatures of intentional filtering.

Approximately one in ten access attempts encountered mechanisms commonly associated with deliberate blocking, including server-side restrictions and domain name system filtering that prevented foreign queries from resolving. 
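
The measurement tooling itself is not described in detail, but the reported failure types suggest a classification along the following lines. The category names and the mapping from error to category are assumptions for illustration, not Brussee's actual methodology.

```python
# Illustrative classifier for remote-probe failures (categories are
# assumptions based on the reported findings, not the study's tooling).
import socket
import urllib.error

def classify_failure(exc: Exception) -> str:
    if isinstance(exc, socket.gaierror):
        return "dns-failure"        # name never resolves: consistent with DNS filtering
    if isinstance(exc, ConnectionResetError):
        return "connection-reset"   # mid-handshake reset: consistent with in-path blocking
    if isinstance(exc, socket.timeout):
        return "timeout"            # could be congestion OR a silent drop
    if isinstance(exc, urllib.error.HTTPError) and exc.code in (403, 451):
        return "server-side-block"  # server answers but refuses the foreign client
    return "other"
```

Separating DNS failures and resets from plain timeouts is what lets a study distinguish deliberate policy restrictions from ordinary congestion or legacy-infrastructure flakiness.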

Together, the findings indicate that limitations on external access are not confined to isolated platforms but occur across administrative websites of all types. For the researchers, investors, and policy analysts who use public government records to track regulatory developments, demographics, and economic indicators, the growing opacity of these digital sources makes China’s rapidly evolving information environment harder to interpret.

Policy researchers studying data opacity have warned that such restrictions carry long-term consequences. In 2023, analysts Dewey Murdick and Owen Daniels of Georgetown University’s Center for Security and Emerging Technology argued that limiting international access to publicly available Chinese data would undermine informed policy decisions.

The authors cautioned that the continued closure of official datasets would erode evidence-based analysis of China’s political and economic systems. When researchers cannot verify developments through open information, they observed, the resulting vacuum breeds speculative narratives and reinforces polarized interpretations. 

That is especially problematic at a time when geopolitical tensions between China and the United States are already shaping global policy debate. Declining access to public data, they argue, may unintentionally contribute to policy miscalculations, such as ill-designed economic decoupling strategies or protectionist responses grounded in uncertainty rather than verifiable evidence. 

The implications reach beyond academic research. Brussee suggests that selective geoblocking of government resources could hamper people-to-people exchanges and complicate foreign companies’ efforts to interpret regulatory signals, market conditions, and administrative guidance from official sources. 

Publicly accessible government portals have long been an essential layer of informational infrastructure for international firms operating in or studying the Chinese market. Reduced accessibility is likely to push them toward secondary interpretations rather than direct examination of primary data. 

The researchers caution, however, against treating the phenomenon as unique to China. In recent years, governments in several jurisdictions, including the United States and Russia, have explored ways of limiting the exposure of certain domestic information systems to the outside world. Within China, the geo-blocking does not appear to be uniformly distributed. 

Instead, the restrictions tend to occur in clusters at the provincial or prefectural level, which suggests local authorities are implementing technical controls in response to national policy signals. 

Researchers consequently describe the process as a gradual experiment in institutional design. Different agencies and regional governments appear to be adopting a range of technical approaches, potentially evaluating the effectiveness of external access controls before deciding whether to expand them more widely. 

Observers point out that China’s approach to digital governance has historically influenced internet management practices beyond its borders, and such experimentation could signal the development of a more comprehensive data governance strategy.

Countries such as Russia, Uganda, and Myanmar have often drawn on elements of the Chinese experience, sometimes with direct technical guidance, in building their own network filtering systems.

Meta and Apple Face Court Scrutiny Over Child Safety, Encryption, and Platform Responsibility

 

The child safety practices of major technology companies are coming under intense legal scrutiny. This week, court proceedings in California, New Mexico, and West Virginia have placed Meta CEO Mark Zuckerberg and Apple CEO Tim Cook at the center of debates surrounding user privacy, free speech, and platform safety—issues that technology firms weigh carefully when launching new features.

If courts rule against the companies in these cases, the outcomes could lead to significant product changes that affect billions of users worldwide.

During testimony in a Los Angeles courtroom on Wednesday, Zuckerberg defended his leadership decisions as attorneys questioned him about Instagram’s beauty filters and whether Meta’s push for growth overshadowed concerns about the mental health of younger users.

Documents disclosed in the New Mexico case reveal internal conversations among Meta employees about roughly 7.5 million annual reports related to child sexual abuse material that might no longer be reported after Zuckerberg’s 2019 decision to implement default end-to-end encryption in Facebook Messenger.

These messages were made public through a newly unsealed legal filing submitted by the state of New Mexico earlier this week.

“There goes our CSER [Community Standards Enforcement Report] numbers next year,” an employee wrote in a message dated Dec. 14, 2023, according to the filing. It was the same month that Meta said in a public blog post that it would begin “rolling out default end-to-end encryption for personal messages and calls on Messenger and Facebook.”

The employee added that it was as if the company “put a big rug down to cover the rocks” and said it was sending fewer child exploitation reports, the filing shows.

Addressing concerns during the Los Angeles hearing, Zuckerberg stated, “I care about the wellbeing of teens and kids who are using our services,” when asked about email exchanges he had with Cook.

Meanwhile, West Virginia filed a lawsuit on Thursday accusing Apple of failing to adequately address child sexual abuse material, commonly referred to as CSAM, on its platforms.

The New Mexico case, brought by Attorney General Raúl Torrez, began opening arguments on Feb. 9. Zuckerberg is not expected to take the stand during the trial.

Torrez alleges that Meta did not sufficiently protect platforms like Facebook and Instagram from online predators and misrepresented the overall safety of its services.

“Meta knew that E2EE would make its platforms less safe by preventing it from detecting and reporting child sexual exploitation and the solicitation and distribution of child exploitation images sent in encrypted messages,” lawyers said in the filing. “Meta further knew that its safety mitigations would be inadequate to address the risks.”

E2EE refers to end-to-end encryption.

In response to the unsealed documents, Meta said it continues to build safety features and tools designed to protect users. The company also noted that it can still review encrypted messages when they are reported for child safety issues.

Meta has previously rejected the allegations from the New Mexico Attorney General, stating it remains “focused on demonstrating our longstanding commitment to supporting young people.”

Court filings from the New Mexico case also reveal internal warnings from company staff about how encryption might affect its ability to detect and report harmful content.

A senior member of Meta’s Global Affairs team wrote in a note dated Feb. 25, 2019, that “Without robust mitigations, E2EE on Messenger will mean we are significantly less able to prevent harm against children.”

Another internal document from June 2019 warned, “We will never find all of the potential harm we do today on Messenger when our security systems can see the messages themselves.”

While privacy advocates have supported encryption as a vital tool that protects private conversations from third-party surveillance, many law enforcement officials argue that it can hinder investigations into certain criminal activities.

After Meta completed its encryption rollout for Facebook Messenger, attorneys involved in the case argued in the filing that “the fears conveyed by law enforcement and even its employees were born out.”

Alphabet-owned YouTube is also named as a defendant in the Los Angeles case. However, TikTok and Snap are no longer part of the proceedings after reaching settlements with a plaintiff before the trial began in January.

Apple is now facing similar scrutiny over encryption and privacy protections.

In the lawsuit filed Thursday, West Virginia Attorney General John “JB” McCuskey accused Apple of not doing enough to stop the storage and sharing of CSAM through iOS devices and iCloud services.

Like the allegations against Meta, the complaint points to Apple’s encryption systems as a potential obstacle for investigators.

“Fundamentally, E2E encryption is a barrier to law enforcement, including the identification and prosecution of CSAM offenders and abusers,” lawyers wrote in the Apple legal filing.

Apple responded by emphasizing that user safety remains a core priority. In a statement, the company said that “protecting the safety and privacy of our users, especially children, is central to what we do.”

The ongoing lawsuits against both companies—and communication between Zuckerberg and Cook regarding child safety—are intensifying debate over the responsibilities that tech companies have toward users and society.

“I thought there were opportunities that our company and Apple could be doing, and I wanted to talk to Tim about that,” Zuckerberg said of his emails with Cook.

As these cases continue through the courts, they are expected to shed more light on decisions made by tech giants that influence billions of people around the world.

Rhysida Claims Responsibility for November 2025 Ransomware Attack on Southold, New York

 

A ransomware gang known as Rhysida has claimed it was behind a cyberattack carried out in November 2025 against the local government of Southold, New York.

Town authorities first disclosed the incident on November 24, 2025, revealing that a ransomware attack had disrupted critical municipal services. Impacted systems included email communications, payroll processing, tax collection, permitting, and other essential operations. While most systems were restored within two weeks, some remained offline through mid-January.

On its data leak portal, Rhysida demanded a ransom payment of 10 bitcoin—valued at approximately $661,400 at the time of reporting. The group gave the town a seven-day deadline, threatening to auction the allegedly stolen data to other cybercriminal actors if the ransom was not paid. Southold Supervisor Al Krupski stated that the town does not plan to comply with the ransom demand.

Town officials have not confirmed Rhysida’s involvement, and independent verification of the gang’s claims has not been established. It remains unclear what specific data may have been compromised or how attackers gained access to the town’s network. Officials were contacted for further comment, and updates are expected if additional information becomes available.

Following the breach, the town allocated $500,000 toward cybersecurity enhancements.

“Please be advised that the Town of Southold is investigating a potential cyber incident affecting town servers, which affects our ability to communicate with residents via email,” said the town’s November 24 announcement. “During the course of this investigation, we regret to inform you that all town services will be limited.”

Rhysida emerged in May 2023 and operates a ransomware-as-a-service (RaaS) model. The group’s malware is capable of encrypting systems and exfiltrating sensitive data. Victims are typically pressured to pay for both a decryption key and assurances that stolen information will be deleted. Affiliates can lease Rhysida’s infrastructure to conduct attacks and share in ransom proceeds.

In 2025, the group claimed responsibility for 21 verified ransomware incidents and made an additional 70 unconfirmed claims. Several confirmed attacks targeted public-sector entities, including:
  • Oregon Department of Environmental Quality (April 2025 – $2.6 million ransom, unpaid)
  • Maryland Department of Transportation (August 2025 – $3.4 million ransom, unpaid)
  • Cleveland County Sheriff’s Office (November 2025 – $782,000 ransom)
  • Cheyenne and Arapaho Tribes (December 2025 – $682,000 ransom, unpaid)
So far in 2026, the group has claimed six additional breaches.

Security researchers documented 84 confirmed ransomware incidents targeting U.S. government entities in 2025, exposing roughly 639,000 personal records. The average ransom demand across these cases reached $987,000.

In 2026, confirmed government-sector victims include Midway, Florida, Winona County, Minnesota, New Britain, Connecticut, and Tulsa International Airport.

Ransomware attacks on public institutions often involve both data theft and system encryption, disrupting services such as bill payments, court records management, and emergency response operations. Governments that refuse to pay may face prolonged outages, data loss, and heightened risks of fraud for affected residents.

Southold is a town located on Long Island in New York, with a population of approximately 24,000 residents. It falls within Suffolk County, which experienced a significant ransomware incident in 2021 that exposed the personal data of around 470,000 residents and severely disrupted county services.

Rocket Software Research Highlights Data Security and AI Infrastructure Gaps in Enterprise IT Modernization

 

Stress is rising among IT decision-makers as organizations accelerate technology upgrades and introduce AI into hybrid infrastructure. Data security now leads modernization concerns, with nearly 70 percent identifying it as their primary pressure point. As transformation speeds up, safeguarding digital assets becomes more complex, especially as risks expand across both legacy systems and cloud environments. 

Aligning security improvements with system upgrades remains difficult. Close to seven in ten technology leaders rank data protection as their biggest modernization hurdle. Many rely on AI-based monitoring, stricter access controls, and stronger data governance frameworks to manage risk. However, confidence in these safeguards is limited. Fewer than one-third feel highly certain about passing upcoming regulatory audits. While 78 percent believe they can detect insider threats, only about a quarter express complete confidence in doing so. 

Hybrid IT environments add further strain. Just over half of respondents report difficulty integrating cloud platforms with on-premises infrastructure. Poor data quality emerges as the biggest obstacle to managing workloads effectively across these mixed systems. Secure data movement challenges affect half of those surveyed, while 52 percent cite access control issues and 46 percent point to inconsistent governance. Rising storage costs also weigh on 45 percent, slowing modernization and increasing operational risk. 

Workforce shortages compound these challenges. Nearly 48 percent of organizations continue to depend on legacy systems for critical operations, yet only 35 percent of IT leaders believe their teams have the necessary expertise to manage them effectively. Additionally, 52 percent struggle to recruit professionals skilled in older technologies, underscoring the need for reskilling to prevent operational vulnerabilities. 

AI remains a strategic priority, particularly in areas such as fraud detection, process optimization, and customer experience. Still, infrastructure readiness lags behind ambition. Only one-quarter of leaders feel fully confident their systems can support AI workloads. Meanwhile, 66 percent identify data accessibility as the most significant factor shaping future modernization plans. 

Looking ahead, organizations are prioritizing stronger data protection, closing infrastructure gaps to support AI, and improving data availability. Progress increasingly depends on integrated systems that securely connect applications and databases across hybrid environments. The findings are based on a survey conducted with 276 IT directors and vice presidents from companies with more than 1,000 employees across the United States, the United Kingdom, France, and Germany during October 2025.

Two AI Data Breaches Leak Over a Billion KYC Records


About the leaks

Cybersecurity researchers have discovered two significant data leaks connected to AI-related apps, exposing the private information and media files of millions of users worldwide. 

In two separate reports published by Cybernews, first covered by Forbes, the researchers cautioned that more than a billion records may have been exposed. The first leak has been attributed to an AI-powered Know Your Customer (KYC) system used by digital identity verification company IDMerit, which offers real-time verification tools to the fintech and financial services industries as part of its AI-powered identity verification solutions.

Attack tactic 

The researchers discovered the unprotected instance on November 11, 2025, and notified the company, which quickly secured the database. "Automated crawlers set up by threat actors constantly prowl the web for exposed instances, downloading them almost instantly once they appear, even though there is currently no evidence of malicious misuse," the researchers said. 

Leaked records

One billion private documents belonging to people in 26 different nations were compromised. With almost 203 million exposed records, the United States was the most affected, followed by Mexico (124 million) and the Philippines (72 million). Full names, residences, postcodes, dates of birth, national IDs, phone numbers, genders, email addresses, and telecom information were among the "core personal identifiers used for your financial and digital life" that were made public.

According to researchers, the downstream hazards of this leak include account takeovers, targeted phishing, credit fraud, SIM swaps, and long-term privacy losses. The second leak is connected to the Android app "Video AI Art Generator & Maker," which has over 500,000 downloads on Google Play and more than 11,000 reviews averaging 4.3 stars. The app was found to be leaking user data through an improperly configured Google Cloud Storage bucket that allowed anyone to access stored files without authentication. According to researchers, the app exposed millions of media assets created by users with AI, as well as more than 1.5 million user photos and 385,000 videos.

The app was created by Codeway Dijital Hizmetler Anonim Sirketi, a company registered in Turkey. Previously, the company's Chat & Ask AI app leaked around 300 million messages associated with over 25 million users.

Google Chrome Introduces Merkle Tree Certificates to Build Quantum-Resistant HTTPS

 

Google Chrome is making a new push to secure HTTPS connections for the long term against risks tied to quantum computing. Rather than adding standard X.509 certificates that use post-quantum algorithms directly to the Chrome Root Store, the team has chosen an alternative design intended to keep connections fast and let the system scale as the new protections roll out across the web. 

The decision comes from Chrome’s Secure Web and Networking Team: conventional post-quantum X.509 certificates will not enter the root program for now. Instead, Google is collaborating with others on a different approach, Merkle Tree Certificates (MTCs), being developed within the PLANTS working group, which could change how HTTPS verification works in the future. 

Cloudflare describes MTCs as an updated framework for how online trust systems operate. Instead of relying on long chains of verification, the design cuts down on overhead: fewer keys and fewer signatures are exchanged when devices connect securely. Its key feature is that a certification authority signs just one root structure, known as a Tree Head, which stands in for a vast batch of individual certificates. When a user visits a site, the browser receives a small cryptographic proof confirming that the site’s credentials are included in that larger authenticated structure; only this minimal evidence travels over the network, rather than multiple full certificates. 

This design also accommodates new quantum-resistant algorithms without much extra data on the wire. Post-quantum keys and signatures are typically far larger than their classical counterparts, so conventional certificates grow bulky under stronger encryption. Because MTCs transmit only a compact proof at the start of a connection, secure browsing stays fast even with upgraded protection. 
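The inclusion-proof mechanism can be sketched in a few lines. The following toy example is not Chrome's or the PLANTS group's actual encoding; the hash prefixes and function names are illustrative (loosely following the Certificate Transparency style). It builds a Merkle tree over eight placeholder certificates and shows that membership can be proven with just three hashes:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    # Hash each leaf, then hash pairs upward until a single root remains.
    # Domain-separation prefixes (0x00 leaf, 0x01 node) follow RFC 6962 style.
    level = [h(b"\x00" + leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd-sized levels
        level = [h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def inclusion_proof(levels, index):
    # The sibling hash at each level is all a verifier needs.
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    # Recompute the path from leaf to root using the supplied siblings.
    node = h(b"\x00" + leaf)
    for sibling in proof:
        node = h(b"\x01" + node + sibling) if index % 2 == 0 else h(b"\x01" + sibling + node)
        index //= 2
    return node == root

certs = [f"cert-{i}".encode() for i in range(8)]
levels = build_levels(certs)
root = levels[-1][0]                 # the signed "Tree Head" covers all 8 certificates
proof = inclusion_proof(levels, 5)   # 3 sibling hashes prove cert 5 is in the tree
assert verify(certs[5], 5, proof, root)
```

The proof size grows logarithmically: a tree covering a million certificates needs only about 20 hashes per proof, which is what keeps the handshake small.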

MTCs are now being tested against real internet traffic, with a phased rollout planned through 2027. The opening stage, a joint effort with Cloudflare, checks viability in live TLS environments. By early 2027, operators of Certificate Transparency logs may join efforts to broaden MTC availability, provided they had at least one log accepted by Chrome before February 1, 2026. Around late 2027, Google expects to finalize admission rules for CAs joining a new quantum-safe root store built exclusively for MTC certificates. 

This shift sits at the core of Google's approach to future-proofing online security. Rather than waiting, the team is rebuilding trust systems to handle both emerging risks and current efficiency needs, so that stronger defenses can spread quickly across services without sacrificing the performance users expect from their browsers.

How a Single Brick Helped Homeland Security Rescue an Abused Child from the Dark Web

 

A years-long investigation by the US Department of Homeland Security led to the dramatic rescue of a young girl whose abuse images had been circulating on the dark web — with a crucial clue hidden in the background of a photograph.

Specialist online investigator Greg Squire had nearly exhausted all leads while trying to identify and locate a 12-year-old girl his team had named Lucy. Explicit images of her were being distributed through encrypted networks designed to conceal users’ identities. The perpetrator had taken deliberate steps to erase identifying features, carefully cropping and altering images to avoid detection.

Despite those efforts, investigators found that the answer was concealed in plain sight.

Squire, part of an elite Homeland Security Investigations unit focused on identifying children in sexual abuse material, became deeply invested in Lucy’s case early in his career. The case struck him personally — Lucy was close in age to his own daughter, and new images of her abuse continued to surface online.

Initially, the team determined only that Lucy was likely somewhere in North America, based on visible electrical outlets and fixtures in the room. Attempts to seek assistance from Facebook proved unsuccessful. Although the company had facial recognition technology, it stated it "did not have the tools" to help with the search.

Investigators then scrutinized every visible detail in Lucy’s bedroom — bedding patterns, toys, clothing, and furniture. A breakthrough came when they realized that a sofa appearing in some images had only been sold regionally rather than nationwide, reducing the potential customer base to roughly 40,000 buyers.

"At that point in the investigation, we're [still] looking at 29 states here in the US. I mean, you're talking about tens of thousands of addresses, and that's a very, very daunting task," says Squire.

Still searching for more clues, Squire turned his attention to an exposed brick wall visible in the background of several photos. He contacted the Brick Industry Association after researching brick manufacturers.

"And the woman on the phone was awesome. She was like, 'how can the brick industry help?'"

The association circulated the image among brick specialists nationwide. One expert, John Harp — a veteran in brick sales since 1981 — quickly identified the material.

"I noticed that the brick was a very pink-cast brick, and it had a little bit of a charcoal overlay on it. It was a modular eight-inch brick and it was square-edged," he says. "When I saw that, I knew exactly what the brick was," he adds.

Harp identified it as a "Flaming Alamo".

"[Our company] made that brick from the late 60s through about the middle part of the 80s, and I had sold millions of bricks from that plant."

Although sales records were not digitized and existed only as a "pile of notes", Harp shared a vital insight.

"He goes: 'Bricks are heavy.' And he said: 'So heavy bricks don't go very far.'"

That observation narrowed the search dramatically. Investigators filtered the sofa buyers list to those living within a 100-mile radius of the brick factory in the American southwest.

From there, social media analysis uncovered a photograph of Lucy alongside an adult woman believed to be a relative. Tracking related addresses and household members eventually led authorities to a single residence.

Investigators discovered that Lucy lived there with her mother’s boyfriend — a convicted sex offender. Within hours, local Homeland Security agents arrested the man, who had abused Lucy for six years. He was later sentenced to more than 70 years in prison.

Harp, who has fostered over 150 children and adopted three, said the rescue resonated deeply with him.

"We've had over 150 different children in our home. We've adopted three. So, doing that over those years, we have a lot of children in our home that were [previously] abused," he said.

"What [Squire's team] do day in and day out, and what they see, is a magnification of hundreds of times of what I've seen or had to deal with."

The emotional toll of the work eventually affected Squire’s mental health. He admits that outside of work, "alcohol was a bigger part of my life than it should have been".

Reflecting on that period, he said:

"At that point my kids were a bit older… and, you know, that almost enables you to push harder. Like… 'I bet if I get up at three this morning, I can surprise [a perpetrator] online.'

"But meanwhile, personally… 'Who's Greg? I don't even know what he likes to do.' All of your friends… during the day, you know, they're criminals… All they do is talk about the most horrific things all day long."

After his marriage ended and he experienced suicidal thoughts, colleague Pete Manning urged him to seek help.

"It's hard when the thing that brings you so much energy and drive is also the thing that's slowly destroying you," Manning says.

Squire credits confronting his struggles openly as the turning point.

"I feel honoured to be part of the team that can make a difference instead of watching it on TV or hearing about it… I'd rather be right in there in the fight trying to stop it."

Years later, Squire met Lucy — now in her 20s — for the first time. She said healing and support have helped her speak openly about her past.

"I have more stability. I'm able to have the energy to talk to people [about the abuse], which I could not have done… even, like, a couple years ago."

She revealed that when authorities intervened, she had been "praying actively for it to end".

"Not to sound cliché, but it was a prayer answered."

Squire shared that he wished he could have reassured her during those years.

"You wish there was some telepathy and you could reach out and be like, 'listen, we're coming'."

When questioned about its earlier role, Facebook responded: "To protect user privacy, it's important that we follow the appropriate legal process, but we work to support law enforcement as much as we can."

Infostealer Malware Targets OpenClaw AI Agent Files to Steal API Keys and Authentication Tokens

 

Now appearing in threat reports, OpenClaw — a local AI assistant that runs directly on personal devices — has rapidly gained popularity. Because it operates on users’ machines, attackers are shifting focus to its configuration files. Recent malware infections have been caught stealing setup data containing API keys, login tokens, and other sensitive credentials, exposing private access points that were meant to remain local. 

Previously known as ClawdBot or MoltBot, OpenClaw functions as a persistent assistant that reads local files, logs into email and messaging apps, and interacts with web services. Since it stores memory and configuration details on the device itself, compromising it can expose deeply personal and professional data. As adoption grows across home and workplace environments, saved credentials are becoming attractive targets. 

Cybersecurity firm Hudson Rock identified what it believes is the first confirmed case of infostealer malware extracting OpenClaw configuration data. The incident marks a shift in tactics: instead of stealing only browser passwords, attackers are now targeting AI assistant environments that store powerful authentication tokens. According to co-founder and CTO Alon Gal, the infection likely involved a Vidar infostealer variant, with stolen data traced to February 13, 2026. 

Researchers say the malware did not specifically target OpenClaw. Instead, it scanned infected systems broadly for files containing keywords like “token” or “private key.” Because OpenClaw stores data in a hidden folder with those identifiers, its files were automatically captured. Among the compromised files, openclaw.json contained a masked email, workspace path, and a high-entropy gateway authentication token that could enable unauthorized access or API impersonation. 
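Defenders can anticipate this kind of broad keyword sweep by auditing their own machines for the same artifacts. The sketch below is a simplified illustration; the keyword list, entropy threshold, and file handling are assumptions for demonstration, not details from Hudson Rock's analysis. It walks a directory and flags files that contain credential-related keywords along with long, high-entropy strings resembling tokens:

```python
import math
import os
from collections import Counter

# Hypothetical keyword list mirroring a broad infostealer sweep.
KEYWORDS = ("token", "private key", "api_key", "secret")

def shannon_entropy(s: str) -> float:
    # Bits per character; random base64 tokens score near 6, English prose near 4.
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def audit(root_dir):
    # Flag files that a keyword-based sweep would also pick up.
    findings = []
    for dirpath, _dirs, files in os.walk(root_dir):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                text = open(path, encoding="utf-8", errors="ignore").read(65536)
            except OSError:
                continue
            if any(k in text.lower() for k in KEYWORDS):
                # A long high-entropy word next to a keyword suggests a live credential.
                suspicious = [w for w in text.split()
                              if len(w) > 32 and shannon_entropy(w) > 4.5]
                findings.append((path, bool(suspicious)))
    return findings
```

Running it over a home directory gives a rough inventory of the plaintext secrets that an infostealer using the same keywords would capture.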

The device.json file stored public and private encryption keys used for pairing and signing, meaning attackers with the private key could mimic the victim’s device and bypass security checks. Additional files such as soul.md, AGENTS.md, and MEMORY.md outlined the agent’s behavior and stored contextual data including logs, messages, and calendar entries. Hudson Rock concluded that the combination of stolen tokens, keys, and memory data could potentially allow near-total digital identity compromise.

Experts expect infostealers to increasingly target AI systems as they become embedded in professional workflows. Separately, Tenable disclosed a critical flaw in Nanobot, an AI assistant inspired by OpenClaw. The vulnerability, tracked as CVE-2026-2577, allowed remote hijacking of exposed instances but was patched in version 0.13.post7. 

Security professionals warn that as AI tools gain deeper access to personal and corporate systems, protecting configuration files is now as critical as safeguarding passwords. Hidden setup files can carry risks equal to — or greater than — stolen login credentials.

Influencers Alarmed as New AI Rules Enforce Three-Hour Takedowns

 

India’s new three-hour takedown rule for online content has triggered unease among influencers, agencies, and brands, who fear it could disrupt campaigns and shrink creative freedom.

The rule, introduced through amendments to the IT Intermediary Rules on February 11, slashes the takedown window from 36 hours to just three, with the stated goal of curbing unlawful and AI-generated deepfake content. Creators argue that while tackling deepfakes and harmful material is essential, such a compressed deadline leaves almost no room to contest wrongful flags or provide context, especially when automated moderation tools make mistakes. They warn that legitimate posts could be penalised simply because systems misread nuance, humour, or sensitive but educational topics.

Influencer Ekta Makhijani described the deadline as “incredibly tight,” noting that if a brand campaign video is misflagged, an entire launch window could be lost in hours rather than days. She highlighted how parenting content around breastfeeding or toddler behaviour has previously been misinterpreted by moderation tools, and said the shorter window magnifies the risk of such false positives. Apparel brand founder Akanksha Kommirelly added that small creators lack round-the-clock legal and compliance teams, making it unrealistic for them to respond to takedown notices at all times.

Experts also worry about a chilling effect on speech, especially satire, political commentary, and advocacy. With platforms facing tighter liability, agencies fear an “act first, verify later” culture in which companies remove anything remotely borderline to stay safe. Raj Mishra of Chtrbox warned that, in practice, the incentive becomes to take down flagged content immediately, which could hit investigative work or edgy creative pieces hardest. India’s linguistic diversity further complicates moderation, as systems trained mainly on English may misinterpret regional content.

Alongside takedowns, mandatory AI labelling is reshaping creator workflows and brand strategies. Kommirelly noted that prominent AI tags on visual campaigns may weaken brand recall, while Mishra cautioned that platforms could quietly de-prioritise AI-labelled content in algorithms, reducing reach regardless of audience acceptance. This dual pressure of strict timelines and AI disclosure forces creators to rethink how they script, edit, and publish content.

Agencies like Kofluence and Chtrbox are responding by building compliance support systems for the creator economy. These include AI content guides, pre-upload checks, documentation protocols, legal support networks, and even insurance options to cover campaign disruptions. While most stakeholders accept that tougher rules are needed against deepfakes and abuse, they are urging the government to differentiate emergency takedowns for clearly illegal content from more contested speech so that speed does not entirely override fairness.

Botnet Moves to Blockchain, Evades Traditional Takedowns

 

A newly identified botnet loader is challenging long-standing methods used to dismantle cybercrime infrastructure. Security researchers have uncovered a tool known as Aeternum C2 that stores its command instructions on the Polygon blockchain rather than on traditional servers or domains. 

For years, investigators have disrupted major botnets by seizing command and control servers or suspending malicious domains. Operations targeting networks such as Emotet, TrickBot, and QakBot relied heavily on this approach. 

Aeternum C2 appears designed to bypass that model entirely by embedding instructions inside smart contracts on Polygon, a public blockchain replicated across thousands of nodes worldwide. 

According to researchers at Qrator Labs, the loader is written in native C++ and distributed in both 32-bit and 64-bit builds. Instead of connecting to a centralized server, infected systems retrieve commands by reading transactions recorded on the blockchain through public remote procedure call (RPC) endpoints. 

The seller claims that bots receive updates within two to three minutes of publication, offering relatively fast synchronization without peer-to-peer infrastructure. The malware is marketed on underground forums either as a lifetime licensed build or as full source code with ongoing updates. Operating costs are minimal. 

Researchers observed that a small amount of MATIC, the Polygon network token, is sufficient to process a significant number of command transactions. With no need to rent servers or register domains, operators face fewer operational hurdles. 

Investigators also found that Aeternum includes anti-virtual-machine checks intended to avoid execution in sandboxed analysis environments. A bundled scanning feature reportedly measures detection rates across multiple antivirus engines, helping operators test payloads before deployment. 

Because commands are stored on-chain, they cannot be altered or removed without access to the controlling wallet. Even if infected devices are cleaned, the underlying smart contracts remain active, allowing operators to resume activity without rebuilding infrastructure. 

Researchers warn that this model could complicate takedown efforts and enable persistent campaigns involving distributed denial of service attacks, credential theft, and other abuse. 

As infrastructure seizures become less effective, defenders may need to focus more heavily on endpoint monitoring, behavioral detection, and careful oversight of outbound connections to blockchain related services.
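As a minimal sketch of that last point, outbound lookups can be screened against a watchlist of public RPC gateways. The hostnames and the one-line log format below are illustrative assumptions, not indicators from the Qrator Labs report:

```python
# Hypothetical watchlist: public RPC gateways for Polygon and similar chains.
RPC_INDICATORS = (
    "polygon-rpc.com",
    "rpc.ankr.com",
    "infura.io",
    "alchemy.com",
)

def flag_connections(dns_log_lines):
    """Return (process, host) pairs whose DNS lookups hit known RPC gateways.

    Expects simple 'process hostname' lines; the format is illustrative,
    not tied to any real EDR or logging product.
    """
    hits = []
    for line in dns_log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue
        process, host = parts
        # Match the gateway domain itself or any subdomain of it.
        if any(host == ind or host.endswith("." + ind) for ind in RPC_INDICATORS):
            hits.append((process, host))
    return hits
```

Browsers and wallet apps legitimately talk to these hosts, so the signal is not the destination alone but an unexpected process (a system service, an unknown binary) querying a blockchain gateway on a machine with no crypto software installed.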

Trezor and Ledger Impersonated in Physical QR Code Phishing Scam Targeting Crypto Wallet Users

 

Criminals are now pushing fake crypto security warnings through paper mail, copying real product packaging from firms like Trezor and Ledger. Because these printed notices arrive at homes with no digital trace, sealed in stamped envelopes that mimic official communication, they can feel more trustworthy than email scams. The goal, however, is the same: tricking recipients into revealing the secret recovery phrases used to restore their wallets. 

Claiming to come from company security teams, the letters tell recipients they must complete an urgent "Verification Step" or risk being locked out of their wallets. Scanning the enclosed QR code leads to a site that walks victims through the supposed process while an on-screen countdown pressures them to act before the deadline. 

One letter impersonating Trezor warned of an upcoming Authentication Check required before February 15, 2026, after which access to Trezor Suite could be interrupted. A similar forged notice aimed at Ledger customers claimed a Transaction Check would become mandatory, with reduced features expected after October 15, 2025. Both campaigns direct people to fake sites designed to look nearly identical to the real setup portals; BleepingComputer’s coverage shows the QR codes redirect to websites mimicking the companies' systems. 

Once there, users see escalating alerts claiming accounts may be limited, transactions could fail, or upgrades might stall without immediate action. Each warning pulls victims deeper into the trap, until entering their recovery words seems like the only option left. The fake sites then prompt people to type in their 12-, 20-, or 24-word recovery phrases, claiming this is needed to confirm device ownership and turn on protection. 

Any phrase entered is sent straight to servers run by the criminals, who can immediately recreate the wallet elsewhere and drain its funds. Crypto scams sent by post remain far rarer than the email tricks that happen daily, but real-world fraud attempts using paper mail have appeared before: at times, crooks have shipped altered hardware wallets designed to steal recovery words at first use. This latest effort shows attackers are still testing physical channels, especially where past data leaks have handed them home addresses. Breaches at both Trezor and Ledger have previously exposed customer contact details, though there is no proof those incidents triggered this specific campaign. 

However the attackers found their targets, one rule holds: a recovery phrase must stay private. A single line of words holds total power over digital money, and control shifts completely the moment someone else learns it. Companies that make secure crypto devices never ask customers to type these codes online, send them through messages, or mail them anywhere. A recovery phrase should only ever be entered on the hardware wallet itself during setup or recovery. When a message arrives with an urgent request, skip the QR code entirely and check the company's official site first; a single mistake could expose everything. 

The campaign marks a shift in cyber threats as crypto adoption rises: fraud now reaches mailboxes on paper as well as inboxes on screens, exploiting the higher trust people still place in printed correspondence.

Fake Go Crypto Package Caught Stealing Passwords and Spreading Linux Backdoor

 



Cybersecurity investigators have revealed a rogue Go module engineered to capture passwords, establish long-term SSH access, and deploy a Linux backdoor known as Rekoobe.

The package, published as github[.]com/xinfeisoft/crypto, imitates the legitimate Go cryptography repository widely imported by developers. Instead of delivering standard encryption utilities, the altered version embeds hidden instructions that intercept sensitive input entered in terminal password prompts. The stolen credentials are transmitted to a remote server, which then responds by delivering a shell script that the compromised system executes.

Researchers at Socket explained that the attack relies on namespace confusion. The authentic cryptography project identifies its canonical source as go.googlesource.com/crypto, while GitHub merely hosts a mirror copy. By exploiting this distinction, the threat actor made the counterfeit repository appear routine in dependency graphs, increasing the likelihood that developers would mistake it for the genuine library.
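A simple audit can surface this namespace-confusion pattern in a project's dependencies before it bites. The sketch below is deliberately naive: the canonical-path table is a small illustrative subset and the go.mod parsing is minimal. It flags required modules whose final path element matches a well-known library but whose full path does not:

```python
import re

# Canonical import paths for a few widely used Go libraries (illustrative subset).
CANONICAL = {
    "crypto": "golang.org/x/crypto",
    "net": "golang.org/x/net",
    "text": "golang.org/x/text",
}

# Matches 'module/path vX.Y.Z' lines in either single- or block-style require sections.
REQUIRE_RE = re.compile(r"^\s*(?:require\s+)?([\w./-]+)\s+v[\w.+-]+", re.MULTILINE)

def suspicious_requires(gomod_text):
    # Flag modules whose last path element shadows a canonical library
    # while the full path points somewhere else entirely.
    hits = []
    for path in REQUIRE_RE.findall(gomod_text):
        tail = path.rsplit("/", 1)[-1]
        canonical = CANONICAL.get(tail)
        if canonical and path != canonical:
            hits.append((path, canonical))
    return hits

gomod = """\
module example.com/app

require (
    golang.org/x/crypto v0.21.0
    github.com/xinfeisoft/crypto v1.0.0
)
"""
# github.com/xinfeisoft/crypto shadows golang.org/x/crypto and gets flagged.
```

A real check would pull the canonical list from curated data and also inspect go.sum, but even this rough pass catches the mirror-vs-canonical mix-up the attackers exploited.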

The malicious modification is embedded inside the ssh/terminal/terminal.go file. Each time an application calls the ReadPassword() function, which is designed to securely capture hidden input from a user, the manipulated code silently records the data. What should have been a secure input mechanism becomes a covert data collection point.

Once credentials are exfiltrated, the downloaded script functions as a Linux stager. It appends the attacker’s SSH public key to the /home/ubuntu/.ssh/authorized_keys file, enabling passwordless remote logins. It also changes default iptables policies to ACCEPT, reducing firewall restrictions and increasing exposure. The script proceeds to fetch further payloads from an external server, disguising them with a misleading .mp5 file extension to avoid suspicion.

Two additional components are retrieved. The first acts as a helper utility that checks internet connectivity and attempts to communicate with the IP address 154.84.63[.]184 over TCP port 443, commonly used for encrypted web traffic. Researchers believe this tool likely serves as reconnaissance or as a loader preparing the system for subsequent stages.

The second payload has been identified as Rekoobe, a Linux trojan active in the wild since at least 2015. Rekoobe allows remote operators to receive commands from a control server, download additional malware, extract files, and open reverse shell sessions that grant interactive system control. Security reporting as recently as August 2023 has linked the malware’s use to advanced threat groups, including APT31.

The malicious module was still listed on the Go package index at the time of analysis; the Go security team has since moved to flag it as harmful.

Researchers caution that this operation reflects a repeatable, low-effort strategy with outsized impact. By targeting high-value functions such as ReadPassword() and staging payloads on commonly trusted platforms, attackers can rotate infrastructure without republishing code. Defenders are advised to anticipate similar supply chain campaigns aimed at credential-handling libraries - SSH utilities, command-line authentication tools, and database connectors - with layered hosting services increasingly used to conceal malicious infrastructure.
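As a rough defensive measure along those lines, a build step can flag imports that mimic the x/crypto namespace but resolve to unexpected hosts. The heuristic below is a hedged sketch using only Go's standard library parser, not vetted tooling, and the flagged path is a placeholder rather than the actual counterfeit module:

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"strings"
)

// flagSuspiciousImports parses Go source and reports import paths that
// look like the x/crypto tree but live outside the canonical
// golang.org/x/ namespace -- the confusion the counterfeit module
// exploited. A simple heuristic sketch, not a substitute for checksum
// verification via go.sum.
func flagSuspiciousImports(src string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "src.go", src, parser.ImportsOnly)
	if err != nil {
		return nil
	}
	var flagged []string
	for _, imp := range f.Imports {
		path := strings.Trim(imp.Path.Value, `"`)
		// Ignore the stdlib "crypto/..." packages and the canonical
		// golang.org/x/ modules; flag everything else claiming "crypto".
		if strings.Contains(path, "crypto") &&
			!strings.HasPrefix(path, "golang.org/x/") &&
			!strings.HasPrefix(path, "crypto") {
			flagged = append(flagged, path)
		}
	}
	return flagged
}

func main() {
	src := `package demo

import (
	"golang.org/x/crypto/ssh/terminal"
	thirdparty "github.com/attacker-account/crypto/ssh/terminal"
)
`
	fmt.Println(flagSuspiciousImports(src))
}
```

Running `go mod verify`, which checks downloaded modules against their recorded go.sum hashes, complements a path heuristic like this one.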


Russia Blocks WhatsApp, Pushes State Surveillance App

 

Russia has effectively erased WhatsApp from its internet, impacting up to 100 million users in a bold move by regulator Roskomnadzor. On Wednesday, the app was removed from the national directory, severing access without prior slowdown warnings, as reported by the Financial Times and Gizmodo. WhatsApp condemned this as an attempt to force users onto a "state-owned surveillance app," highlighting the isolation of millions from secure communication. 

This crackdown escalates Russia's long-running battle against foreign messaging services amid its push for digital sovereignty. Restrictions began in August 2025 with blocks on voice and video calls, citing WhatsApp's failure to aid fraud and terrorism probes. Courts fined the Meta-owned app repeatedly for not removing banned content or opening a local office; by December, speeds dropped 70%, but full removal came after ongoing non-compliance. Telegram faced similar cuts this week, leaving Russians scrambling.

Enter Max, VK's 2025-launched "superapp" modeled on China's WeChat, now aggressively promoted as the national alternative. Preinstalled on devices and endorsed by celebrities and educators, it offers chats, video calls, file sharing up to 4GB, payments via Russia's Faster Payment System, and government services like digital IDs and e-signatures. Unlike WhatsApp, which encrypts messages end to end, Max mandates activity sharing with authorities and lacks apparent privacy safeguards, per The Insider. 

The Kremlin justifies the ban as protecting citizens from scams and terrorism while achieving tech independence under sanctions. Spokesman Dmitry Peskov cited Meta's refusal to follow Russian law, though WhatsApp could return through compliance talks. Critics see it as sweeping speech suppression, building on the post-2022 censorship of the Ukraine invasion era that Amnesty International labeled "unprecedented." Yet past attempts, like the failed 2018 Telegram block, exposed the limits of such crackdowns.

Users are turning to VPNs or rivals, but Max's rise could cement state surveillance in daily life. This mirrors global trends—France pushes local apps, and Meta faces U.S. spying claims—but Russia's unencrypted alternative raises alarms for privacy. As Putin eyes indefinite rule, such controls signal deepening authoritarianism, forcing 100 million into monitored chats.

Is Spyware Secretly Hiding on Your Phone? How to Detect It, Remove It, and Prevent It

 



If your phone has started behaving in ways you cannot explain, such as draining power unusually fast, heating up during minimal use, crashing, or displaying unfamiliar apps, it may be more than a routine technical fault. In some cases, these irregularities signal the presence of spyware, a type of malicious software designed to quietly monitor users and extract personal information.

Spyware typically enters smartphones through deceptive mobile applications, phishing emails, malicious attachments, fraudulent text messages, manipulated social media links, or unauthorized physical access. These programs are often disguised as legitimate utilities or helpful tools. Once installed, they operate discreetly in the background, avoiding obvious detection.

Depending on the variant, spyware can log incoming and outgoing calls, capture SMS and MMS messages, monitor conversations on platforms such as Facebook and WhatsApp, and intercept Voice over IP communications. Some strains are capable of taking screenshots, activating cameras or microphones, tracking location through GPS, copying clipboard data, recording keystrokes, and harvesting login credentials or cryptocurrency wallet details. The stolen information is transmitted to external servers controlled by unknown operators.

Not all spyware functions the same way. Some applications focus on aggressive advertising tactics, overwhelming users with pop-ups, altering browser settings, and collecting browsing data for revenue generation. Broader mobile surveillance tools extract system-level data and financial credentials, often distributed through mass phishing campaigns. More intrusive software, frequently described as stalkerware, is designed to monitor specific individuals and has been widely associated with domestic abuse cases. At the highest level, highly sophisticated commercial surveillance platforms such as Pegasus have been deployed in targeted operations, although these tools are costly and rarely directed at the general public.

Applications marketed as parental supervision or employee productivity tools also require caution. While such software may have legitimate oversight purposes, its monitoring capabilities mirror those of spyware if misused or installed without informed consent.

Identifying spyware can be difficult because it is engineered to remain hidden. However, several warning indicators may appear. These include sudden battery drain, overheating, sluggish performance, unexplained crashes, random restarts, increased mobile data consumption, distorted calls, persistent pop-up advertisements, modified search engine settings, unfamiliar applications, difficulty shutting down the device, or unexpected subscription charges. Receiving suspicious messages that prompt downloads or permission changes may also signal targeting attempts. If a device has been out of your possession and returns with altered settings, tampering should be considered.

On Android devices, reviewing whether installation from unofficial sources has been enabled is critical, as this setting allows apps outside the Google Play Store to be installed. Users should also inspect special app access and administrative permissions for unfamiliar entries. Malicious programs often disguise themselves with neutral names such as system utilities. Although iPhones are generally more resistant without jailbreaking or exploited vulnerabilities, they are not immune. Failing to install firmware updates increases exposure to known security flaws.

If spyware is suspected, measured action is necessary. Begin by installing reputable mobile security software from verified vendors and running a comprehensive scan. Manually review installed applications and remove anything unfamiliar. Examine permission settings and revoke excessive access. On Android, restarting the device in Safe Mode temporarily disables third-party apps, which may assist in removal. Updating the operating system can also disrupt malicious processes. If the issue persists, a factory reset may be required. Important data should be securely backed up before proceeding, as this step erases all stored content. In rare instances, professional technical assistance or device replacement may be needed.

Long-term protection depends on consistent preventive practices. Maintain strict physical control over your phone and secure it with a strong password or biometric authentication. Configure automatic screen locking to reduce the risk of unauthorized access. Install operating system updates promptly, as they contain critical security patches. Download applications only from official app stores and review developer credibility, ratings, and permission requests carefully before installation. Enable built-in security scanners and avoid disabling system warnings. Regularly audit app permissions, especially for access to location, camera, microphone, contacts, and messages.

Remain cautious when interacting with links or attachments received through email, SMS, or social media, as phishing remains a primary delivery method for spyware. Avoid jailbreaking or rooting devices, since doing so weakens built-in protections and increases vulnerability. Activate multi-factor authentication on essential accounts such as email, banking, and cloud storage services, and monitor login activity for irregular access. Periodically review mobile data usage and billing statements for unexplained charges. Maintain encrypted backups so decisive action, including a factory reset, can be taken without permanent data loss.

No mobile device can be guaranteed completely immune from surveillance threats. However, informed digital habits, timely updates, disciplined permission management, and layered account security significantly reduce the likelihood of covert monitoring. In an era where smartphones store personal, financial, and professional data, vigilance remains the strongest defense.