Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Meta Targets 150K Accounts in Southeast Asia Scam Operation

 



Meta announced that it has removed more than 150,000 accounts tied to organized scam centers operating in Southeast Asia, describing the move as part of a large international effort to disrupt coordinated online fraud networks.

The enforcement action was carried out with assistance from authorities in several countries. Law enforcement agencies and government partners involved in the operation included officials from Thailand, the United States, the United Kingdom, Canada, South Korea, Japan, Singapore, the Philippines, Australia, New Zealand, and Indonesia. According to Meta, the joint effort resulted in 21 individuals being arrested by the Royal Thai Police.

This latest crackdown builds on an earlier pilot initiative launched in December 2025. During that initial phase, Meta removed approximately 59,000 accounts, Pages, and Groups from its platforms that were connected to similar fraudulent activity. The earlier investigation also led to the issuance of six arrest warrants by authorities.

In a statement explaining the action, Meta said that online scams have grown increasingly complex and organized over recent years. Criminal networks, often operating from countries such as Cambodia, Myanmar, and Laos, have established large scam compounds that function in many ways like organized business operations. These groups typically use structured teams, scripted communication strategies, and digital tools designed to evade detection while targeting victims on a global scale. According to the company, the impact of such scams extends far beyond financial loss, as they can severely disrupt lives and weaken trust in digital communication platforms.

Alongside the enforcement action, Meta also announced several new safety features aimed at helping users identify and avoid scam attempts.

One of these tools introduces new warning messages on Facebook that notify users when they receive communication from accounts that display characteristics commonly linked to fraudulent activity. Another safeguard has been introduced on WhatsApp to address a tactic used by scammers who attempt to persuade users to scan a QR code. If successful, this method can link the attacker’s device to the victim’s WhatsApp account, allowing them to access messages and impersonate the account holder. Meta said its system will now notify users when suspicious device-linking requests are detected.

The company is also expanding scam detection on Messenger. When a conversation with a new contact begins to resemble known fraud patterns, such as questionable job opportunities or requests that appear unusual, the platform may prompt users to share recent messages so that an artificial intelligence system can evaluate whether the interaction matches known scam behavior.

Meta also disclosed broader enforcement statistics related to scams on its platforms. Throughout 2025, the company removed more than 159 million advertisements that violated its policies related to fraud and deception. In addition, it disabled approximately 10.9 million Facebook and Instagram accounts that investigators linked to organized scam centers.

To further address fraudulent activity, the company said it plans to expand its advertiser verification program. The goal of this measure is to increase transparency by confirming the identities of advertisers and reducing the ability of malicious actors to misrepresent themselves while running advertisements.

The announcement comes at a time when governments are intensifying efforts to address online fraud. The UK Government recently introduced a new Online Crime Centre designed to focus specifically on cybercrime, including scams connected to organized fraud operations operating in regions such as Southeast Asia, West Africa, Eastern Europe, India, and China.

The centre will bring together specialists from several sectors, including government agencies, law enforcement, intelligence services, financial institutions, mobile network providers, and major technology companies. The initiative is expected to begin operations next month.

The project forms part of the United Kingdom’s broader Fraud Strategy 2026–2029, a policy framework aimed at strengthening the country’s response to fraud and financial crime. As part of this strategy, authorities plan to use artificial intelligence to detect emerging scam patterns, identify suspicious bank transfers more quickly, and deploy “scam-baiting” chatbots designed to interact with fraudsters in order to gather intelligence.

Officials said the new centre, supported by more than £30 million in funding, will focus on identifying the digital infrastructure used by organized crime groups. This includes tracking fraudulent accounts, websites, and phone numbers used in scam operations. Authorities aim to shut down these resources at scale by blocking scam messages, freezing financial accounts linked to criminal activity, removing fraudulent social media profiles, and disrupting scam networks at their source.

Google API Keys Expose Gemini AI Data via Leaked Credentials

 

Google API keys, once considered harmless when embedded in public websites for services like Maps or YouTube, have turned into a serious security risk following the integration of Google's Gemini AI assistant. Security researchers at Truffle Security uncovered this issue, revealing that nearly 3,000 live API keys—prefixed with "AIza"—are exposed in client-side JavaScript code across popular sites.

Truffle Security's scan of the November 2025 Common Crawl dataset, which captures snapshots of major websites, identified 2,863 active keys from diverse sectors including finance, security firms, and even Google's own infrastructure. These keys, deployed sometimes years ago (one traced back to February 2023), were originally safe as mere billing identifiers but gained unauthorized access to Gemini endpoints without developers' knowledge.Attackers can simply copy a key from page source, authenticate to Gemini, and extract sensitive data like uploaded files, cached contexts, or datasets via simple prompts.

The danger extends beyond data theft to massive financial abuse, as Gemini API calls consume tokens that rack up charges—potentially thousands of dollars daily per compromised account, depending on the model and context window. Truffle Security demonstrated this by querying the /models endpoint with exposed keys, confirming access to private Gemini features. One reported case highlighted an $82,314 bill from a stolen key, underscoring the real-world impact.

Google acknowledged the flaw as "single-service privilege escalation" after Truffle's disclosure on November 21, 2025, and implemented fixes by January 2026, including blocking leaked keys from Gemini access, defaulting new AI Studio keys to Gemini-only scope, and sending proactive leak notifications. Despite these measures, the "retroactive privilege expansion" caught many off-guard, as enabling Gemini in projects silently empowered old keys.

Developers must immediately audit Google Cloud projects for Gemini API enablement, rotate all exposed keys, and restrict scopes to essentials—avoiding the default "unrestricted" setting. Tools like TruffleHog can scan code repositories for leaks, while regular monitoring prevents future exposures in an era where AI services amplify API risks. This incident highlights the need for vigilance as cloud features evolve.

Silent Scam Calls Used to Verify Active Phone Numbers, Cybersecurity Experts Warn

 

Many people have answered calls from unfamiliar numbers only to hear silence on the other end. In some cases, no one speaks at all. In others, there is a short delay before a caller finally responds. While this may appear to be a simple mistake or a wrong number, cybersecurity experts say these calls are often part of a deliberate scam tactic used to verify active phone numbers. 

According to security specialists, these silent calls function as a form of automated reconnaissance. Fraud operations run large-scale calling systems that dial thousands of numbers to determine which ones belong to real people. When someone answers, the system confirms that the number is active and marks it as a potential target for future scams. 

Keeper Security Chief Information Security Officer Shane Barney explained that such calls are rarely accidental. Instead, they help attackers filter out inactive numbers before investing more time and resources into scams. Verified contact information has value in modern cybercrime networks, where data about reachable individuals can be bought, sold, and reused across different fraud campaigns. 

Once a phone number is confirmed as active, it may be used in several ways. In some cases, scammers follow up with phishing calls or messages designed to trick victims into revealing personal or financial information. In more advanced attacks, a verified phone number could be combined with leaked email addresses from data breaches or used in schemes such as SIM-swap fraud, where attackers attempt to gain control of a victim’s mobile account. 

Another variation occurs when callers respond only after a brief pause. This delay is typically caused by predictive dialing systems that automatically place large volumes of calls. These systems detect when a human answers and then route the call to a live operator. The short silence represents the time it takes for the system to transfer the connection. 

Some people also worry that speaking during these calls could allow scammers to clone their voice using artificial intelligence. While voice cloning technology exists, experts say creating a convincing replica generally requires longer and clearer audio samples than a brief greeting. 

However, voice cloning could still become part of larger scams if criminals already possess other personal details about a victim. Security professionals recommend simple precautions when receiving suspicious calls. If an unknown number produces silence, hanging up immediately is usually the safest option. 

Another tactic is answering without speaking, which prevents automated systems from detecting a human voice. Spam-filtering tools can also help reduce nuisance calls. Applications such as Truecaller, RoboKiller, and Hiya identify numbers previously reported as spam. However, experts caution that no filtering system is perfect because scammers frequently change phone numbers. 

Ultimately, while call-blocking tools can reduce the volume of unwanted calls, maintaining strong account security and being cautious with unknown callers remain the most effective ways to avoid phone-based scams.

Perplexity's Comet AI Browser Tricked Into Phishing Scam Within Four Minutes


Agentic browser at risk

Agentic web browsers that use AI tools to autonomously do tasks across various websites for a user could be trained and fooled into phishing attacks. Hackers exploit the AI browsers’ tendency to assert their actions and deploy them against the same model to remove security checks. 

According to security expert Shaked Chen, “The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!,” the Hacker News reported. Agentic Blabbering is an AI browser that displays what it sees, thinks, and plans to do next, and what it deems safe or a threat. 

Tricking the browsers

By hacking the traffic between the AI services on the vendor’s servers and putting it as input to a Generative Adversarial Network (GAN), it made Perplexity’s Comet AI browser fall prey to a phishing attack within four minutes. 

The research is based on established tactics such as Scamlexity and VibeScamming, which revealed that vibe-coding platforms and AI browsers can be coerced into generating scam pages and performing malicious tasks via prompt injection. 

Attack tactic

There is a change in the attack surface as a result of the AI agent managing the tasks without frequent human oversight, meaning that a scammer no longer has to trick a user. Instead, it seeks to deceive the AI model itself. 

Chen said, “If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal.” Chen added that the “scam evolves until the AI Browser reliably walks into the trap another AI set for it."

End goal?

The aim is to make a “scamming machine” that improves and recreates a phishing page until the agentic browser accepts the commands and carries out the hacker’s command, like putting the victim’s passwords on a malicious web page built for refund scams. 

Guardio is concerned about the development, saying that, “This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact.”

ShinyHunters Threatens Data Leak After Alleged Salesforce Breach

 

The hacking group ShinyHunters has warned roughly 400 companies that it may publish stolen data online if ransom demands are not met. The group claims it accessed private records through websites built on Salesforce Experience Cloud, a platform companies use to create public portals and customer support sites. 

According to earlier findings by cybersecurity firm Mandiant, the attackers targeted organisations that used Salesforce’s Experience Cloud for external-facing services such as help centres and information portals. 

How the breach allegedly happened? The reported intrusion appears linked to the configuration of public access settings within these websites. 

Salesforce allows websites built on Experience Cloud to include a “guest user” profile so visitors can view limited information without logging in. 

If these settings are configured too broadly, however, the access permissions can expose internal data to the public internet. Investigations suggest the attackers used a modified version of a tool called Aura Inspector to scan websites for such weaknesses. 

Once vulnerabilities were identified, the hackers were able to extract information including names and phone numbers. Security experts say the stolen data may already be fueling vishing attacks. 

In such scams, attackers contact employees by phone and attempt to trick them into revealing additional confidential information. 

Dispute over the root cause There is disagreement over whether the problem stems from a software flaw or from how companies configured their systems. Salesforce has said the platform itself remains secure and that the issue is related to customer settings rather than a vulnerability in the product. 

“Our investigation to date confirms that this activity relates to a customer-configured guest user setting, not a platform security flaw,” the company said in a blog post. 

ShinyHunters disputes that explanation, claiming it discovered a previously unknown flaw that allows it to bypass certain protections even on sites that appear properly configured. 

Independent researchers have not yet verified that claim. Pressure tactics used by hackers ShinyHunters is known for using aggressive extortion strategies to pressure victims into paying ransom demands. The group often releases stolen data in stages to increase pressure on organisations that refuse to negotiate. 

A recent example involved Dutch telecommunications provider Odido and its brand Ben. After the company declined to pay a ransom reportedly worth one million euros, the hackers began publishing large quantities of customer data on the dark web. 

Security guidance for companies Salesforce is urging customers to review their portal configurations and tighten access controls. The company recommends applying a “least privilege” approach, meaning guest users should only have the minimum permissions required to use a site. 

Businesses are also advised to keep data private by default, disable settings that expose internal staff information, and turn off public application programming interfaces where possible. 

These interfaces can allow external systems to exchange data and may create additional entry points if left open. 

The incident highlights the growing risks associated with misconfigured cloud services, which security analysts say have become a common target for cybercriminal groups seeking large volumes of corporate data.

AI is Reshaping How Hackers Discover and Exploit Digital Weaknesses


 

Throughout history, artificial intelligence has been hailed as the engine of innovation, revolutionizing data analysis, automation of business processes, and strategic decision-making. However, the same capabilities that enable organizations to work more efficiently and efficiently are quietly transforming the cyber threat landscape in far less constructive ways. 

In the hands of threat actors, artificial intelligence becomes a force multiplier, lowering the barrier to sophisticated attacks dramatically. It is now possible to accomplish tasks once requiring extensive technical expertise, patience, and careful coordination at unprecedented speed and efficiency by utilizing AI-based tools for scanning vast digital environments, analyzing weaknesses, and refining attack strategies in real time. 

As a result of AI-driven tools, cybercriminals are reducing the length of the preparation process to a matter of minutes. Consequently, cyber risk is experiencing a new era in which traditional timelines for detecting, understanding, and responding to threats are rapidly disappearing, leaving organizations unable to keep up with adversaries that are increasingly automated, adaptive, and relentless. 

In recent years, threat intelligence has indicated that this acceleration has become measurable across the global attack landscape rather than merely theoretical. 

Researchers have observed that threats actors are increasingly incorporating generative AI tools into their operational workflows, thus facilitating the identification and exploitation of vulnerabilities in corporate infrastructure much faster and more consistently than they have in the past. 

In the IBM XForce Threat Intelligence Index 2026, which was released in 2026, the scale of this shift is evident. In comparison with the previous year, cyberattacks targeting public-facing applications increased by 44 percent, according to the report. 

Many applications, including corporate websites, ecommerce platforms, email gateways, financial portals, APIs, and other externally accessible services, have developed into attractive entry points because they often expose complex codebases directly to the Internet and are often easy to access. 

Based on the same analysis, vulnerability exploitation is one of the most prevalent methods of gaining access to modern networks. It has been estimated that approximately 40% of cyber incidents in 2025 have been the result of attackers successfully exploiting previously identified security vulnerabilities before their organizations have been able to correct them. 

Parallel trends indicate the expansion of the cybercrime ecosystem as a whole. It has been reported that the number of active ransomware groups operating globally has nearly doubled during the same period, whereas the number of attacks that have been publicly disclosed has increased by approximately 12 percent. 

As a consequence of these indicators, it appears that the convergence of automated discovery tools, readily available exploit frameworks, and artificial intelligence-assisted reconnaissance is accelerating the speed with which vulnerabilities are disclosed and exploited, increasing the amount of pressure on enterprise security teams already confronted with a complex threat environment. 

Artificial intelligence is rapidly becoming an integral part of cyber operations, and as such is altering the way vulnerabilities are discovered and addressed within legitimate security practices. These technological developments are accompanied by an evolution of ethical hacking, once considered a key component of modern defense strategies. 

Advanced machine learning models are increasingly being utilized by security researchers to speed up tasks which previously required painful manual analysis. The use of artificial intelligence-driven tools enables defenders to detect anomalies and potential security gaps at a scale traditional auditing methods are rarely able to attain by processing large volumes of application code, system logs, and network telemetry in seconds. 

Several experiments have already demonstrated the practical benefits of this capability. A controlled research environment has been demonstrated where AI-powered analysis systems can identify exploitable weaknesses in extensive code repositories by analyzing extensive code repositories. These systems significantly shorten the time required for vulnerability triage and remediation. 

It is becoming increasingly important for organizations operating complex digital infrastructure to perform automated security analysis. Threat actors are integrating AI-assisted techniques into their own reconnaissance and development workflows, enabling them to automate tasks that previously required experienced security researchers by leveraging the same technological advantages. 

Adversaries, however, have similar technological advantages. As a consequence of polymorphic malware, malicious code can evade signature-based detection systems by altering its structure each time it executes. A number of modified large language model toolkits have been observed in underground forums, marketed as resources to generate malware variants or scripts for exploiting vulnerabilities. 

A parallel development effort is underway to develop experimental attack frameworks that utilize artificial intelligence agents to scan open-source repositories, cloud environments, and embedded device firmware for exploitable vulnerabilities. In many ways, these approaches are similar to those employed by legitimate researchers to locate bugs, however the objective is to accelerate intrusion campaigns rather than prevent them. 

Another area which is receiving considerable attention is the security of artificial intelligence systems themselves. A growing number of organizations are incorporating AI copilots, automation agents, and data analysis models into their everyday operations, thereby creating new attack surfaces. 

In some cases, hidden instructions embedded within web content or metadata have been consumed by automated artificial intelligence systems without their knowledge, altering their behavior or triggering unauthorized actions. 

The occurrence of such incidents illustrates the potential risks associated with prompt injection and data poisoning, where malicious inputs influence the interpretation of information by AI models or the interaction between enterprise systems with them. 

In addition to exploiting weaknesses in the way AI models process context and instructions, these vulnerabilities are particularly concerning since they are not necessarily caused by traditional software vulnerabilities. In light of these developments, both industry and regulatory bodies are responding to them. 

Security frameworks and policy discussions are increasingly recognizing AI as a dual-purpose technology that can strengthen cyber defenses as well as enabling more sophisticated attack techniques. 

A number of government agencies, international policing organizations, and leading technology vendors have published guidance on addressing adversarial AI threats, emphasizing that stronger safeguards must be implemented around AI deployments, monitoring mechanisms need to be improved, and standards for model development need to be clearer. 

According to cybersecurity specialists, artificial intelligence should no longer be considered to be an unimportant or theoretical risk factor. In reality, it has already developed the tactics used by both defenders and attackers in real-world environments. 

To adapt to this environment, enterprise security teams must develop more proactive and automated defensive strategies. A growing number of organizations are evaluating artificial intelligence-assisted "red teaming" capabilities in order to simulate adversarial behavior within controlled environments and identify weaknesses in corporate infrastructure before they can be exploited by external parties. 

A key element of the security industry is the development of threat intelligence platforms that utilize machine learning to identify emerging malware patterns and accelerate incident response. Additionally, it is important to design AI systems with security considerations built in from the outset.

In order to ensure that these technologies strengthen digital resilience, rather than inadvertently expanding the attack surface, organizations are required to integrate rigorous auditing processes, secure-by-design development practices, and continuous monitoring into their automation platforms as AI-driven tools and automation platforms are increasingly used.

Increasingly, adversaries are utilizing artificial intelligence in offensive operations, which is expected to be refined and expanded as artificial intelligence matures. There is now no doubt that AI will be included in cyberattacks, but the question is whether defensive capabilities can evolve at a pace that is comparable to the evolution of AI. 

Organizations that are relying on a slow remediation cycle, fragmented monitoring, and manual investigative process risk falling behind attackers that have the capability to automate reconnaissance, vulnerability discovery, and exploit development processes.

Compared to this, security strategies that incorporate continuous visibility, automated analysis, and rapid response mechanisms have proven to be more resilient in a threat environment that is characterized by speed and scale. 

Identifying vulnerabilities and remediating them within a reasonable period of time has rapidly become a critical metric for cyber security. The security industry is responding to this challenge by introducing tools that provide more comprehensive and continuous insight into enterprise environments. 

VulnDetect, an integrated platform that helps IT and security teams stay up to date on vulnerabilities across endpoint infrastructures, is one example. Instead of tracking known or managed software with traditional asset management tools, the platform identifies obsolete, misconfigured, or unmanaged applications that often remain invisible within large enterprise networks. These overlooked assets frequently serve as attractive entry points for attackers conducting automated vulnerability scans.

A system such as VulnDetect is designed to bridge the gap between vulnerability discovery and mitigation by continuously monitoring endpoints and mapping software exposure across the network. By focusing remediation efforts on the weaknesses that present the greatest operational risk, security teams can prioritize actionable intelligence over static inventories. 

The reduction of this exposure window is becoming increasingly important in an environment where attackers are increasingly relying on artificial intelligence-assisted techniques for identifying and exploiting weaknesses.

In addition to improving incident response capabilities, the increased visibility across digital infrastructure also gives organizations a strategic control over their security posture as the cyber threat landscape becomes increasingly automated and unpredictable.

Due to this background, cybersecurity professionals are increasingly arguing that artificial intelligence should now be integrated into the defensive architecture as a whole rather than being treated as an experimental addition. Threat actors are increasingly utilizing automated reconnaissance, adaptive malware development, and artificial intelligence-assisted exploit discovery.

In order to compete effectively, defensive systems must operate at similar speeds. It is imperative that enterprise environments have greater control over how artificial intelligence models are accessed and integrated, as well as better safeguards to prevent model manipulation or jailbreaks. 

Additionally, behavioural analytics are becoming increasingly integrated into security platforms, allowing defenders to distinguish traditional threats from automated attack campaigns by identifying activity patterns that suggest machine-driven intrusion attempts. 

Furthermore, it is becoming increasingly apparent that no single organization can address these challenges alone. Cybersecurity specialists emphasize that collaboration between private corporations, government agencies, academic researchers, and international security alliances is necessary. 

It is still being actively studied how artificial intelligence introduces layers of technical complexity, and effective responses to its misuse require rapid information sharing and coordinated strategies that cross national boundaries. 

In order to counter highly automated threats, defenders can construct adaptive and responsive security postures combining the contextual judgment of experienced security professionals with the analytical capabilities of advanced artificial intelligence systems. 

While AI-assisted cybercrime is becoming increasingly sophisticated, security experts warn that organizations do not have all that is necessary to protect themselves. There are many defensive principles already in existence within established cybersecurity frameworks that can mitigate these risks.

Rather than finding entirely new defenses, enterprise leaders must strengthen visibility, governance, and operational discipline around the tools already in place in order to strengthen the visibility, governance, and operational discipline.

Organizations' resilience in an era where cyberattacks are increasingly characterized by intelligent and autonomous technologies may be determined by understanding the extent of the evolving threat landscape and taking proactive measures to enhance modern defensive capabilities.

KadNap Malware Compromises Over 14,000 Edge Devices to Operate Hidden Proxy Botnet

 


Cybersecurity researchers have identified a previously undocumented malware strain called KadNap that is primarily infecting Asus routers and other internet-facing networking devices. The attackers are using these compromised systems to form a botnet that routes malicious traffic through residential connections, effectively turning infected hardware into anonymous proxy nodes.

The threat was first observed in real-world attacks in August 2025. Since that time, the number of affected devices has grown to more than 14,000, according to investigators at Black Lotus Labs. A large share of infections, exceeding 60 percent, has been detected within the United States. Smaller groups of compromised devices have also been identified across Taiwan, Hong Kong, Russia, the United Kingdom, Australia, Brazil, France, Italy, and Spain.

Researchers report that the malware uses a modified version of the Kademlia Distributed Hash Table (DHT) protocol. This peer-to-peer networking technology enables the attackers to conceal the true location of their infrastructure by distributing communication across multiple nodes. By embedding command traffic inside decentralized peer-to-peer activity, the operators can evade traditional network monitoring systems that rely on detecting centralized servers.

Within this architecture, infected devices communicate with one another using the DHT network to discover and establish connections with command-and-control servers. This design improves the botnet’s resilience, as it reduces the chances that defenders can disable operations by shutting down a single control point.

Once a router or other edge device has been compromised, the system can be sold or rented through a proxy platform known as Doppelgänger. Investigators believe this service is a rebranded version of another proxy operation called Faceless, which previously had links to TheMoon router malware. According to information published on the Doppelgänger website, the service launched around May or June 2025 and advertises access to residential proxy connections in more than 50 countries, promoting what it claims is complete anonymity for users.

Although many of the observed infections involve Asus routers, researchers found that the malware operators are also capable of targeting a wider range of edge networking equipment.

The attack chain begins with the download of a shell script named aic.sh, retrieved from a command server located at 212.104.141[.]140. This script initiates the infection process by connecting the compromised device to the botnet’s peer-to-peer network.

To ensure the malware remains active, the script establishes persistence by creating a cron task that downloads the same script again at the 55-minute mark of every hour. During this process, the file is renamed “.asusrouter” and executed automatically.

After persistence is secured, the script downloads an ELF executable, renames it “kad,” and runs it on the device. This program installs the KadNap malware itself. The malware is capable of operating on hardware that uses ARM and MIPS processor architectures, which are commonly found in routers and networking appliances.

KadNap also contacts a Network Time Protocol (NTP) server to retrieve the current system time and store it along with the device’s uptime. These values are combined to produce a hash that allows the malware to identify and connect with other peers within the decentralized network, enabling it to receive commands or download additional components.

Two additional files used during the infection process, fwr.sh and /tmp/.sose, contain instructions that close port 22, which is the default port used by Secure Shell (SSH). These files also extract lists of command server addresses in IP-address-and-port format, which the malware uses to establish communication with control infrastructure.

According to researchers, the use of the DHT protocol provides the botnet with durable communication channels that are difficult to shut down because its traffic blends with legitimate peer-to-peer network activity.

Further examination revealed that not every infected device communicates with every command server. This suggests the attackers are segmenting their infrastructure, possibly grouping devices based on hardware type or model.

Investigators also noted that routers infected with KadNap may sometimes contain multiple malware infections simultaneously. Because of this overlap, it can be challenging to determine which threat actor is responsible for particular malicious activity originating from those systems.

Security experts recommend that individuals and organizations operating small-office or home-office (SOHO) routers take several precautions. These include installing firmware updates, restarting devices periodically, replacing default administrator credentials, restricting management access, and replacing routers that have reached end-of-life status and no longer receive security patches.

Researchers concluded that KadNap’s reliance on a peer-to-peer command structure distinguishes it from many other proxy-based botnets designed to provide anonymity services. The decentralized approach allows operators to remain hidden while making it significantly harder for defenders to detect and block the network.

In a separate report, security analysts at Cyble disclosed a new Linux malware threat named ClipXDaemon.

The malware targets cryptocurrency users by intercepting wallet addresses that victims copy to their clipboard and secretly replacing them with addresses controlled by attackers. This type of threat is commonly known as clipper malware.

ClipXDaemon is distributed through a Linux post-exploitation framework called ShadowHS and has been described as an automated clipboard-hijacking tool designed specifically for systems running Linux X11 graphical environments.

The malware operates entirely in memory, which reduces traces on disk and improves its ability to remain undetected. It also employs several stealth techniques, including disguising its process names and deliberately avoiding execution in Wayland sessions.

This design choice is intentional because Wayland’s security architecture introduces stricter restrictions on clipboard access. Applications must usually involve explicit user interaction before they can read clipboard contents. By disabling itself when Wayland is detected, the malware avoids triggering errors or suspicious behavior.

Once active in an X11 session, ClipXDaemon continuously checks the system clipboard every 200 milliseconds. If it detects a copied cryptocurrency wallet address, it immediately substitutes it with an attacker-controlled address before the victim pastes the information.

The malware currently targets a wide range of digital currencies, including Bitcoin, Ethereum, Litecoin, Monero, Tron, Dogecoin, Ripple, and TON.

Researchers noted that ClipXDaemon differs significantly from traditional Linux malware families. It does not include command-and-control communication, does not send beaconing signals to remote servers, and does not rely on external instructions to operate.

Instead, the malware generates profits directly by manipulating cryptocurrency transactions in real time, silently redirecting funds when victims paste compromised wallet addresses during transfers.

Commercial Spy Trackers Breach U.S. Army Networks, Jeopardizing National Security

 

U.S. Army networks face a hidden invasion from commercial spy technology, compromising soldier data and national security in alarming ways. A groundbreaking study by the Army Cyber Institute at West Point analyzed traffic on military networks, discovering that 21.2% of the most frequently visited websites host tracker domains. These trackers relentlessly collect sensitive information like geolocation, email addresses, and detailed browsing histories from troops during routine online activities.

The infiltration stems from ubiquitous commercial tools embedded in popular sites. Companies such as Adobe, Microsoft, Akamai, and even the banned TikTok deploy these trackers, funneling harvested data to brokers who resell it without regard for buyers' intentions. This surveillance capitalism mirrors civilian web tracking but strikes deeper when targeting military personnel, turning everyday internet use into a potential intelligence leak.

Researchers from Duke University exposed the severity by purchasing dossiers on active-duty service members from data brokers with ease. They acquired names, home addresses, personal emails, and military branch details, often from non-U.S. domains, highlighting how adversaries could exploit this for blackmail, targeting installations, or cyber campaigns . One expert called the process "disturbingly simple," underscoring the broker market's indifference to national security risks.

Persistent vulnerabilities echo the 2018 Strava fitness app scandal, where heatmap data revealed covert base locations worldwide. The latest findings show trackers in 42% of network requests and 10.4% of sites, exceeding privacy safeguards on mainstream streaming platforms. Cybersecurity professor Alan Woodward of the University of Surrey warns, "If you’re not paying, you are the product," a harsh reality for soldiers navigating the open web.

The Pentagon is responding aggressively through its 2023 Cyber Strategy, implementing Zero Trust architecture, enhanced endpoint detection, and widespread tracker blocking . The National Defense Authorization Act bolsters these efforts with mandates for spyware mitigation and stricter social media vetting. The Army Cyber Institute advocates quantifying trackers and extending blocks to personal devices, elevating data privacy to a core element of force protection in the digital age.

AI Agents Boost Productivity but Introduce New Cybersecurity Risks for Organizations

 

Artificial Intelligence is rapidly evolving from a conversational tool into a system capable of performing real-world tasks independently. Known as AI Agents, these systems can carry out activities such as sending emails, transferring data, and managing software workflows without constant human supervision.

While this automation significantly improves efficiency, it also creates a new entry point for cyber threats.

AI agents can be compared to a new employee who has access to every room in a company building but lacks proper identification. Because these digital systems operate autonomously, they often hold permissions to sensitive resources and information, sometimes without sufficient monitoring.

Cybercriminals have begun exploiting this reality. Instead of attempting to steal passwords or break into systems directly, attackers may manipulate AI agents into performing malicious actions on their behalf.

Organizations that rely on AI-driven automation could therefore face new risks. Many conventional cybersecurity systems were originally designed to protect human users rather than automated digital workers, leaving a potential gap in defense.

To address these concerns, an upcoming webinar titled “Beyond the Model: The Expanded Attack Surface of AI Agents” will explore how this evolving technology is being targeted by threat actors.

During the session, Rahul Parwani, Head of Product for AI Security at Airia, will explain how attackers exploit AI agents and what organizations can do to strengthen their defenses.

What You Will Learn
  • The "Dark Matter" of Identity: Why AI agents are often invisible to your security team and how to find them.
  • How Agents Get Tricked: Learn how a simple "bad idea" hidden in a document can make an AI agent leak your company secrets.
  • The Safety Blueprint: Simple steps to give your AI agents the power they need without giving them "God Mode" over your data.
This session is aimed at business leaders, IT professionals, and anyone responsible for safeguarding corporate data. The discussion will break down complex security concepts in a way that does not require deep coding expertise.

As organizations continue adopting AI-driven automation, understanding the security implications of AI agents is becoming increasingly important. Without proper safeguards, the same tools designed to improve productivity could also become unexpected vulnerabilities.

Hackers Exploit FortiGate Devices to Hack Networks and Credentials


Exploiting network points to hack victims 

Cybersecurity experts have warned about a new campaign where hackers are exploiting FortiGate Next-Gen Firewall (NGFW) devices as entry points to hack target networks. 

The campaign involves abusing the recently revealed security flaws or weak password to take out configuration files. The activity has singled out class linked to government, healthcare, and managed service providers. 

Attack tactic 

According to experts, “FortiGate network appliances have considerable access to the environments they were installed to protect. In many configurations, this includes service accounts which are connected to the authentication infrastructure, such as Active Directory (AD) and Lightweight Directory Access Protocol (LDAP).”

"This setup can enable the appliance to map roles to specific users by fetching attributes about the connection that’s being analyzed and correlating with the Directory information, which is useful in cases where role-based policies are set or for increasing response speed for network security alerts detected by the device,” the experts added. 

Misconfigurations opening doors for hackers 

But the experts noticed that this access could be compromised by hackers who hack into FortiGate devices via flaws or misconfigurations.

In one attack, the hackers breached a FortiGate appliance last year in November to make a new local admin account “support” and built four new firewall policies that let the account to travel across all zones without any limitations. 

The hacker then routinely checked device access. “Evidence demonstrates the attacker authenticated to the AD using clear text credentials from the fortidcagent service account, suggesting the attacker decrypted the configuration file and extracted the service account credentials,” SentinelOne reported. 

How was the account used?

After this, hacker leveraged the service account to verify the target's environment and put rogue workstations in the AD for further access. Following this, network scanning started and the breach was found, and lateral movement was stopped. 

The contents of the NTDS.dit file and SYSTEM registry hive were exfiltrated to an external server ("172.67.196[.]232") over port 443 by the Java malware, which was triggered via DLL side-loading.

SentinelOne said that “While the actor may have attempted to crack passwords from the data, no such credential usage was identified between the time of credential harvesting and incident containment.”

Iran-Linked Handala Hackers Claim Breach of Israel’s Clalit Healthcare Network

 

A breach at Israel’s biggest health provider has been tied to an Iranian-affiliated hacking collective, which posted stolen patient records online. Claiming credit, a network calling itself Handala detailed the intrusion via public posts. Access reportedly reached Clalit Health Services’ core data stores. That institution cares for around fifty percent of the country’s residents. 

More than ten thousand people saw their medical files exposed, the hackers stated. Samples of what they say is real data now sit on public servers - names, test results, health scans tucked inside. Handala issued a statement saying Israel's hospital networks were left reeling after the breach, calling defenses weak and slow. What followed was not subtle: laughter at how easily systems gave way.  

Not just an attack, but positioned as resistance - this action followed claims of long-standing control and abuse. Echoing past messages, the announcement carried familiar tones seen when digital strikes hit Israeli bodies before. 

A strange post appeared online just hours before the reveal - hinting at something unfolding within Israel’s medical system. By next morning, reports confirmed a possible leak of sensitive information. Right after hearing about it, Clalit's cyber defense units started looking into what happened. Government agencies got updates right away, since detection tools kicked in under standard procedures. 

While checks are still underway, hospital networks remain stable and running without disruption. A fresh incident highlights ongoing digital operations tied to Iran, aimed at entities and people in Israel. In recent years, outfits connected to Tehran have faced claims of seeking information, interfering with key bodies, while also trying to pull in collaborators using internet exchanges along with money offers. 

Now known for bold statements, Handala has taken credit for multiple major cyber events, experts note. While Check Point Research points out that some assertions appear inflated, a few of those declarations align with verified breaches. Unexpected overlaps between claim and evidence keep scrutiny alive. 

In December, hackers revealed they had gained access to ex-Prime Minister Naftali Bennett’s Telegram messages. Confirmation came from Bennett's team - yes, the account was reached, yet his device remained untouched. 

Later, these attackers stated they went after more individuals in politics. Among them: ex-minister Ayelet Shaked and Tzachi Braverman, a close associate of Netanyahu. Earlier, Israel's medical system dealt with digital attacks. Last October, hackers targeted Assaf Harofeh Medical Center using ransomware linked to Qilin. Patient records were at risk when the criminals asked for 70,000 dollars. Threats to expose sensitive information followed if payment failed. 

Later, officials pointed to Iran’s likely involvement in that incident too - showing how digital attacks are becoming a key part of the strain between these nations.

Hackers Exploit Claude to Target Multiple Mexican Government Agencies

 


As generative artificial intelligence emerges, digital innovation is evolving at an unprecedented rate, but it is also quietly reshaping cybercrime in a subtle way. Tools originally designed for the purpose of research, coding, and problem-solving are now being explored for a variety of less benign purposes as well. 

This fact has been illustrated in a troubling fashion by recent revelations that threat actors have exploited the capabilities of Claude in order to support a large-scale intrusion targeting Mexican government networks. 

A security researcher at Gambit Security reported that attackers extracted approximately 150 gigabytes of sensitive information from multiple Mexican government agencies, demonstrating how widely accessible artificial intelligence systems can be manipulated to assist sophisticated cyber operations despite built-in safeguards despite their ease of use. 

It has been determined that the intrusion was not limited to passive reconnaissance. The attacker is believed to have used Claude throughout the campaign as an interactive tool for research and development. 

Gambit Security has released an analysis that indicates that the activity began in December, and continued for approximately a month, during which the chatbot was repeatedly instructed to identify potential vulnerabilities within government networks and to create scripts for exploiting those vulnerabilities. 

Using the same AI model, methods were also outlined for automating sensitive information extraction, effectively turning the model into an assistant for data extraction. In a series of carefully structured prompts, the operator gradually weakened the built-in safeguards of the model, thereby manipulating it slowly. 

There have been reports that the system has rejected initial requests, but subsequent iterations seem to have bypassed the platform's guardrails and generated increasingly more actionable material. The extent of the assistance presented by the model raised particular concerns among analysts. 

According to Curtis Simpson, the system produced thousands of analytical outputs which detailed potential attack paths, internal network targets, and credential-related strategies, thereby providing guidance on how to proceed within compromised environments. These outputs were more structured operational guidance for the campaign's human operator than casual responses. 

According to Anthropic, an internal investigation had been initiated following the disclosure and that the activity had been disrupted and the accounts associated with the misuse were permanently banned. According to a company representative, safeguards are continuing to develop. 

For example, the Claude Opus 4.6 model incorporates additional mechanisms to detect and block similar forms of abuse in the latest iteration. In the time of publishing, it had not been officially determined that the individuals responsible for the intrusion were part of any advanced persistent threat group that had been publicly identified.

Nonetheless, analysts examining the operation noted several similarities with tactics historically associated with espionage campaigns involving Chinese actors. As a result of intelligence gathered by Gambit Security and corroborated by SecurityAffairs, the tradecraft demonstrated in the operation - particularly disciplined operational security and systematic reconnaissance - appears to resemble patterns previously observed in state-aligned cyber espionage. 

A separate disclosure from Anthropic confirmed that state-sponsored actors have misused its AI programming tools to benefit dozens of organizations worldwide. It has been determined that investigators at this incident heavily relied on artificial intelligence-assisted workflows to accelerate the exploit development process, effectively reducing the technical barrier to assembling complex multi-stage intrusion chains while retaining high levels of operational secrecy. 

Technical analysis indicates that the campaign aimed at weaponizing Claude Code, by utilizing prompt engineering techniques in order to circumvent the system's built-in security measures. Over 1,000 prompts were submitted to the artificial intelligence environment, some of which were presented as legitimate bug bounty testing scenarios to bypass ethical restrictions embedded within the model by the researchers. 

In this iterative process, attackers were reported to have developed customized exploit scripts, lateral movement tooling, and operational playbooks tailored to the architecture of compromised networks through this iterative interaction. 

Following the generation of AI-generated material, successive phases of the intrusion chain, including privilege escalation, credential harvesting, and automated data extraction, were carried out. According to reports, the operators began shifting portions of their workflow to GPT-4.1 to continue developing credential handling utilities and refine network traversal techniques when restrictions began limiting output from Claude's environment. 

It was possible for the attackers to maintain a workflow that was largely automated and able to quickly adapt to defensive obstacles within the targeted infrastructure by chaining outputs from both AI systems. As a result of this approach, investigators identified behavioural indicators that stood out during forensic examination.

Among them were unusually large amounts of automated scripting activity, repeated instances of AI-generated code fragments appearing within attack tools, and the presence of AI-aided development processes operating from compromised government infrastructures. 

A series of stages has been involved in the intrusion, which began with compromising systems related to the Mexican tax authority before spreading to other public infrastructures. The attacker, according to investigators, then moved through a network of interconnected systems involving several regional government environments, municipal systems in Mexico City, public utility infrastructure in Monterrey, as well as at least one major financial institution, as well as the national electoral institute. 

As a result of the operation, approximately 150 gigabytes of sensitive data - including administrative information and individually identifiable information - were exfiltrated from these environments. MITER ATT&CK knowledge base analysis revealed a familiar sequence of intrusion techniques based on the observed activity. There is evidence that the initial access was obtained through valid accounts, followed by lateral movement with remote services, credential acquisition through operating system credential dump mechanisms, and large-scale data exfiltration. 

The researchers also observed additional measures intended to undermine defensive monitoring by interfering with security controls within the targeted environments in order to weaken defensive monitoring. 

Researchers noted that each of these tactics has been observed in conventional cyberespionage operations; however, the distinctive feature of the campaign was the systematic integration of generative artificial intelligence into the attack process. 

It is possible for attackers to coordinate complex intrusion chains at a speed and scale that is not possible with traditional manual methods, as they were able to automate reconnaissance, exploit development, and operational planning. This incident underscores how generative artificial intelligence systems are rapidly becoming a new layer within the cyber threat landscape that can enhance both defensive and offensive capabilities. 

In response to the threat of AI-aided attacks, security experts recommend that organizations, particularly those operating critical public infrastructure, adapt their defensive strategies accordingly. A number of measures are being taken to strengthen identity and access controls, identify anomalous automation patterns, and implement advanced behavioral analytics to identify tooling and scripting generated by AI. 

It is also recommended that AI developers, cybersecurity firms, and government agencies collaborate continuously so that safeguards can be refined to ensure that large language models are not manipulated for malicious purposes. 

It is becoming increasingly important for the cybersecurity community to ensure that innovations in artificial intelligence do not inadvertently become a force multiplier for sophisticated digital intrusions as platforms such as Claude and other generative AI systems continue to evolve.

Fake Google Meet Update Can Give Attackers Control of Your Windows PC

 



Cybersecurity analysts have identified a phishing campaign that can quietly hand control of a Windows computer to attackers after a single click. The scam appears as a routine update notice for Google Meet, but the prompt is fraudulent and redirects victims into a device management system controlled by threat actors.

Unlike many phishing schemes, the technique does not steal passwords, download obvious malware, or display clear warning signs. Instead, the attack relies on convincing users to interact with a page that imitates a standard software update message.


A convincing but fake update message

The deceptive webpage tells visitors they must install the latest version of Meet in order to continue using the service. The design closely resembles a legitimate update notification and uses familiar colors and branding that many users associate with Google products.

However, both the “Update now” button and the “Learn more” link do not connect to any official Google resource. Instead, they activate a special Windows deep link known as ms-device-enrollment:.

This feature is a built-in Windows mechanism designed for corporate environments. IT administrators commonly use it to send employees a link that allows a computer to be enrolled in a company’s device management system with minimal effort. In the attack campaign, the same capability is redirected to infrastructure operated by the attacker.


How the enrollment process begins

Windows enrollment links such as ms-device-enrollment: are commonly used in corporate environments where organizations need to configure large numbers of laptops quickly. The link automatically opens Windows settings and connects the device to an enterprise management server.

Once enrolled, the device becomes part of a management framework that allows administrators to deploy software updates, enforce security policies, and manage system configurations remotely.

Attackers exploit this workflow because users are accustomed to seeing this setup process when joining corporate networks, making it appear legitimate.

When a victim clicks the link, Windows immediately bypasses the browser and opens the operating system’s “Set up a work or school account” dialog. This is the same interface that appears when an organization configures a new employee laptop.

The enrollment request arrives with several fields already filled in. The username displayed is collinsmckleen@sunlife-finance.com, a domain designed to resemble the financial services firm Sun Life Financial. Meanwhile, the server connection is preconfigured to an endpoint hosted at tnrmuv-api.esper[.]cloud, which is part of infrastructure operated by Esper.

The attacker’s objective is not to impersonate the victim’s account perfectly. Instead, the goal is to persuade the user to continue through the legitimate Windows enrollment process. Even if only a small portion of targeted users proceed, that is enough for attackers to gain access to some systems.


What attackers gain after enrollment

If the victim clicks Next and completes the setup wizard, the computer becomes registered with a remote Mobile Device Management (MDM) server.

MDM platforms are commonly used by organizations to manage employee devices. Once a device joins such a system, administrators can remotely install or remove applications, modify operating system settings, access stored files, lock the device, or completely erase its contents.

Because the commands come from a legitimate management platform rather than a malicious program, the operating system performs the actions itself. As a result, there may be no suspicious malware process running on the machine.

The infrastructure used in this campaign relies on Esper, a legitimate enterprise management service that many companies use to control corporate hardware.

Further analysis of the malicious link shows encoded configuration data embedded in the server address. When decoded, the data reveals two identifiers associated with the Esper platform: a blueprint ID that determines which management configuration will be applied and a group ID that specifies the device group the computer will join once enrolled.


Abuse of legitimate features

Both the Windows enrollment handler and the Esper management service are functioning exactly as designed. The attacker’s tactic simply redirects these legitimate tools toward unsuspecting users.

Because no malicious software is delivered and no login credentials are requested, the attack can be difficult for security tools to detect. The enrollment prompt displayed to the user is an authentic Windows system dialog rather than a fake webpage. This means typical browser warnings or email filters that look for credential-stealing forms may not flag the activity.

Additionally, the command infrastructure operates on a trusted cloud-based platform, making domain reputation filtering less effective. Security specialists warn that many traditional detection tools are not designed to recognize situations where legitimate operating system features are misused to gain control of a system.

This technique reflects a broader trend in cybercrime. Increasingly, attackers are abandoning conventional malware and instead exploiting built-in operating system capabilities or legitimate cloud services to carry out their operations.


Steps to take if you interacted with the page

Users who believe they may have clicked the fake update prompt should first check whether their device has been enrolled in an unfamiliar management system.

On Windows computers, this can be done by navigating to Settings → Accounts → Access work or school. If an unfamiliar entry appears, particularly one associated with domains such as sunlife-finance or esper, it should be selected and disconnected immediately.

Anyone who clicked the “Update now” link on the malicious site and proceeded through the enrollment wizard should treat the computer as potentially compromised. Running a current anti-malware scan is recommended to determine whether the management server deployed additional software after enrollment.

For organizations, administrators may also want to review device management policies. Endpoint management platforms such as Microsoft Intune allow companies to restrict which MDM servers corporate devices are permitted to join. Implementing such restrictions can reduce the risk of unauthorized device enrollment in similar attacks.

Security researchers have warned that misuse of device management systems can be particularly dangerous because they grant deep administrative control over enrolled devices.

According to analysts from Gartner, enterprise device management platforms often have privileged system access comparable to local administrators, allowing them to modify system policies, install applications, and control security settings remotely.

When such privileges fall into the wrong hands, attackers can effectively operate the device as if they were legitimate administrators.

Apple Rolls Out Global Age-Verification System to Protect Kids Online

 

Apple has rolled out a new global age-verification system across its platforms, aimed at keeping kids safer online while helping developers comply with tightening child safety laws worldwide. The move targets both app downloads and in‑app experiences, with a particular focus on blocking underage access to adult‑rated content without sacrificing user privacy.

Under the new rules, users in countries such as Brazil, Australia and Singapore will be blocked from downloading apps rated 18+ unless Apple can confirm they are adults. Similar protections are being extended to parts of the United States, where states like Utah and Louisiana are introducing strict online age‑assurance laws, pushing platforms to verify whether users are children, teens or adults before allowing access to certain apps or features.This marks one of Apple’s strongest steps yet to align its App Store with regional regulations on children’s digital safety.

At the heart of the initiative is Apple’s privacy‑focused Declared Age Range API, which lets apps learn a user’s age category instead of their exact birthdate. Developers can use this signal to tailor content, enable or disable features, or trigger parental consent flows for younger users, while never seeing sensitive identity details. Apple says this design is meant to minimize data collection and reduce the risk of intrusive ID checks or third‑party age‑verification databases.

For parents, the age‑verification push builds on Apple’s existing child account system and content restrictions.Parents can already set up child profiles, choose age ranges and apply web content filters, and now those settings can flow through to third‑party apps via the new tools.This means a game, social app or streaming service can automatically recognize that a user is a child or teen and adjust what they can see or do without asking for new personal information.

For developers, Apple is introducing an expanded toolkit that includes the updated Declared Age Range API, new age‑rating properties in StoreKit, and improved server notifications to track compliance. These tools will be essential in regions where apps must prove they are screening out underage users from adult content or obtaining parental consent for significant changes. As more governments pass online safety laws, Apple’s global age‑verification framework is likely to become a key part of how the App Store balances regulatory demands with user privacy.

Conduent Leak: One of the Largest Breaches in The U.S


Conduent, a business that offers printing, payment, and document processing services to some of the biggest health insurance companies in the nation, has had at least 25 million people's personal information stolen. Addresses, social security numbers, and health information were exposed to ransomware hackers in what some have already dubbed one of the biggest data breaches in American history. 

According to a letter the business issued online, Conduent initially learned it was the victim of a "cyber incident" more than a year ago on January 13, 2025. The actual breach occurred between October 21, 2024, and January 13, 2025, and it included Conduent's data because the company offers services to health plans.

Names, social security numbers, health insurance details, and unspecified medical information were among the data. In its notice, the business stressed that "not every data element was present for every individual," which implies that some individuals may have had their health insurance information taken but not their social security number, or vice versa. 

According to Bleeping Computer, the Safepay ransomware organization claimed responsibility for the attack, which allegedly captured more than 8 gigabytes of data. Conduent stated online, "Presently, we are unaware of any attempted or actual misuse of any information involved in this incident," while it is unclear if Safepay has demanded payment for the information's recovery.

10.5 million people were affected by the incident, according to Oregon's consumer protection website, although it's unknown how many people in Oregon alone were affected. According to Wisconsin, the national total is more than 25 million. 

Notifications have also been sent to residents of other states, such as California, Delaware, Massachusetts, New Hampshire, and New Mexico. According to the state's attorney general, just 374 people's data was compromised in Maine, one of the states with very tiny numbers. Conduent, a New Jersey-based company, did not reply to emails on Tuesday inquiring about the full extent of the incident and what victims could do about it.

Conduent is providing free credit monitoring and identity restoration services through Epiq to certain individuals, but those affected must join before April 30, 2026, according to a letter given to victims in California.

Age Verification Laws for Social Media Raise Privacy Concerns and Enforcement Challenges

 

Across nations, governments push tighter rules limiting young users’ access to social media. Because of worries over endless scrolling, disturbing material online, or growing emotional struggles in teens, officials demand change. Minimum entry ages - often 13 or 16 - are now common in draft laws shaping platform duties. While debates continue, one thing holds: unrestricted teenage access faces mounting resistance. 

Still, putting such policies into practice stirs up both technological hurdles and concerns about personal privacy. To make sure people are old enough, services need proof - yet proving age typically means gathering private details. Meanwhile, current regulations push firms to keep data collection minimal. That tension forms what specialists call an “age-verification trap,” where tighter control over access can weaken safeguards meant to protect individual information. 

While many rules about age limits demand that services make "reasonable efforts" to block young users, clear guidance on checking someone's actual age is almost never included. One way firms handle this gap: they lean heavily on just two methods when deciding what to do. Starting off, identity checks require people to show their age using official ID or online identity tools. 

Although more reliable, keeping such data creates worries over privacy breaches. Handling vast collections of private details increases exposure to cyber threats. Security weakens when too much sensitive material gathers in one place. Age guesses shape the next method. By watching how someone uses a device, or analyzing video selfies with face-scanning tech, systems try to judge their years without asking for ID cards. 

Still, since these outcomes depend on likelihoods instead of confirmed proof, doubt remains part of the process. Some big tech firms now run these kinds of tools. While Meta applies face-based age checks on Instagram in select regions - asking certain users to send brief video clips if they seem underage - TikTok examines openly shared videos to guess how old someone might be. 

Elsewhere, Google and its platform YouTube lean on activity patterns; yet when doubt remains, they can ask for official identification or payment details. These steps aim at confirming ages without relying solely on stated information. Mistakes happen within these systems. Though meant to protect, they occasionally misidentify adults as children - leading to sudden account access issues. 

At times, underage individuals slip through gaps, using borrowed IDs or setting up more than one profile. Restrictions fail when shared credentials enter the picture. A single appeal can expose personal details when systems retain proof materials past their immediate need. Stored face scans, ID photos, or validation logs may linger just to satisfy legal checks. These files attract digital intrusions simply by existing. Every extra day they remain increases the chance of breach. 

Where identity infrastructure is weak, the difficulty grows. Biometrics might step in when official systems fall short. Oversight tends to be sparse, even as outside verifiers take on bigger roles. Still, shielding kids on the web without losing grip on private information is far from simple. When authorities roll out tighter rules for confirming age, the tools built to follow these laws could change how identities and personal details move through digital spaces.