Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Windows Telemetry Explained: What Diagnostic Data Microsoft Collects and Why It Matters

 

Years after Windows 10 arrived, a single aspect keeps stirring conversation - telemetry. This data gathering, labeled diagnostic info by Microsoft, pulls details from machines without manual input. Its purpose? Keeping systems stable, secure, running smoothly. Yet reactions split sharply between everyday users and those watching privacy trends. 

Early on, after Windows 10 arrived, observers questioned whether its telemetry might double as monitoring. A few writers argued it collected large amounts of user detail while transmitting data to Microsoft machines. Still, analysts inspecting how the OS handles information report minimal proof backing such suspicions. 

Beginning in 2017, scrutiny from the Dutch Data Protection Authority revealed shortcomings in how Windows presented telemetry consent choices. Although designed to gather system performance details, the setup failed to align with regional privacy expectations due to unclear user permissions. 

Instead of defending the original design, Microsoft adjusted both interface wording and backend configurations. Following these updates, oversight bodies acknowledged improvements, noting no evidence emerged suggesting private information was gathered unlawfully. Independent analysts alongside regulatory teams had previously flagged the configuration, yet after revisions, compliance concerns faded gradually. 

What runs behind the scenes in Windows includes a mix of telemetry types - mainly split into essential and extra reporting layers. Most personal computers, especially those outside corporate control, turn on the basic tier automatically; there exists no standard menu option to switch it off entirely. This baseline layer gathers only what Microsoft claims is vital for stability and core operations. 

Though hidden from typical adjustments, its presence supports ongoing performance checks across devices. Basic troubleshooting relies on specific diagnostics tied to functions like Windows Update. Information might cover simple fault summaries, setup traits of hardware, software plus driver footprints, along with records tracking how updates succeed or fail. 

As noted by Microsoft, insights drawn support better stability fixes, safety patches, app alignment, and smoother running systems. Some diagnostic details go beyond basics, capturing patterns in app use or web habits. These insights might involve deeper system errors, performance signs, or hardware traits. 

While such data helps refine functionality, access remains under user control via Windows options. Those cautious about personal information often choose to turn this off. Control sits within settings, letting choices match comfort levels. Occasionally, memory dumps taken during system failures form part of Optional diagnostic data, according to experts. 

When a crash happens, pieces of active files might get saved inside these records. Because of this risk, certain groups managing confidential material prefer disabling the setting altogether. In 2018, Microsoft rolled out a feature named Diagnostic Data Viewer to boost openness. This tool gives people access to review what information their machine shares with the company, revealing specifics found in diagnostics and system summaries. 

One billion devices now operate on Windows 11 across the globe. Because of countless variations in hardware and software setups, Microsoft relies on telemetry data - this information reveals issues, shapes update improvements, yet supports consistent performance. While tracking user interactions might sound intrusive, it actually guides fixes without exposing personal details; instead, patterns emerge that steer engineering decisions behind the scenes. 

Even though some diagnostic details are essential for basic operations, those worried about personal data might choose to limit what gets sent by turning off non-essential diagnostics in device preferences. Still, full function depends on keeping certain reporting active.

Experts Warn About AI-assisted Malwares Used For Extortion


AI-based Slopoly malware

Cybersecurity experts have disclosed info about a suspected AI-based malware named “Slopoly” used by threat actor Hive0163 for financial motives. 

IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” according to the Hacker News.

Hive0163 malware campaign 

Hive0163's attacks are motivated by extortion via large-scale data theft and ransomware. The gang is linked with various malicious tools like Interlock RAT, NodeSnake, Interlock ransomware, and Junk fiction loader. 

In a ransomware incident found in early 2026, the gang was found installing Slopoly during the post-exploit phase to build access to gain persistent access to the compromised server. 

Slopoly’s detection can be tracked back to PowerShell script that may be installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder via a builder. Persistence is made via a scheduled task called “Runtime Broker”. 

Experts believe that that malware was made with an LLM as it contains extensive comments, accurately named variables, error handling, and logging. 

There are signs that the malware was developed with the help of an as-yet-undetermined large language model (LLM). This includes the presence of extensive comments, logging, error handling, and accurately named variables. 

The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework. 

According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”

The PowerShell script works as a backdoor comprising system details to a C2 server. There has been a rise in AI-assisted malware in recent times. Slopoly, PromptSpy, and VoidLink show how hackers are using the tool to speed up malware creation and expand their operations. 

IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”

French FICOBA Breach Exposes 1.2M Bank Accounts

 

A major cyberattack struck France's national bank account registry, FICOBA, exposing sensitive data from over 1.2 million accounts.The breach occurred in late January 2026 when hackers stole login credentials from a civil servant and impersonated an authorized user to access the database. This incident highlights vulnerabilities in government systems handling financial records.

FICOBA serves as France's central repository for all bank accounts opened in domestic institutions, storing identifiers like RIB and IBAN numbers, holder names, and postal addresses. Attackers extracted this information but could not access balances or perform transactions, according to officials. The French Ministry of Finance confirmed tax IDs were not compromised, though early reports varied.

Authorities detected the intrusion swiftly, immediately restricting access and taking the database offline temporarily.It was restored with enhanced security measures after collaboration with the National Cybersecurity Agency (ANSSI). A formal complaint was filed with the National Commission for Information Technology and Civil Liberties (CNIL), and notifications are underway to affected individuals and banks.

The exposure raises alarms for phishing scams and SEPA direct debit fraud, with banks already noting increased suspicious SMS and emails.Criminals could exploit IBANs and personal details for identity theft or unauthorized payments. French tax authorities warn they never request banking info via unsolicited messages.

Safety recommendations 

To protect yourself post-breach, monitor bank statements daily for unauthorized activity and enable transaction alerts. Change passwords on financial accounts, using unique strong ones via a password manager, and activate multi-factor authentication (MFA) everywhere possible. Avoid clicking links in unsolicited emails or texts claiming breach notifications—contact your bank directly through official apps or sites.

Further, freeze credit reports if available in your country to block new accounts in your name, and consider credit monitoring services. Report suspicious activity to your bank and local cyber police immediately.Regularly update software and use antivirus tools to prevent credential theft, emphasizing least-privilege access in organizations. These steps minimize risks from exposed data like in the FICOBA incident.

The Global Cyber Fraud Wave Is Being Supercharged by Artificial Intelligence


 

It is becoming increasingly common for organizations to rethink how security operations are structured and managed as the digital threat landscape continues to evolve. Artificial intelligence is increasingly becoming an integral part of modern cyber defense strategies due to its increasing complexity. 

As networks, endpoints, and cloud infrastructures generate large quantities of telemetry, security teams are turning to advanced machine learning models and intelligent analytics to process those data. As a result, these systems are able to identify subtle anomalies and behavioral patterns which would otherwise be hidden by conventional monitoring frameworks, allowing for earlier detection of malicious behavior. 

In addition to improving cybersecurity workflow efficiency, AI is also transforming cybersecurity operations. With adaptive algorithms that continually refine their analytical models, tasks that previously required extensive manual oversight can now be automated, such as log correlation, threat triage, and vulnerability assessment. 

Artificial intelligence allows security professionals to concentrate on more strategic and investigative activities, such as threat hunting and incident response planning, by reducing the operational burden on human analysts. Organizations are facing increasingly sophisticated adversaries who utilize automation and advanced techniques in order to circumvent traditional defenses. 

The shift is particularly important as adversaries become increasingly sophisticated. Additionally, AI can strengthen proactive defense mechanisms by analyzing historical attacks and behavioral indicators. 

Using AI-driven platforms, organizations can detect phishing campaigns in real time using linguistic and contextual analysis as well as flag suspicious activity across distributed environments in advance of emerging attack vectors. This continuous learning capability allows these systems to adapt to changes in the threat landscape, enhancing their accuracy and resilience as new patterns of malicious activity emerge. 

Therefore, artificial intelligence is becoming a strategic asset as well as a defensive necessity, enabling organizations to deal with cyber threats more effectively, efficiently, and adaptably while ensuring the security of critical data and digital infrastructure. 

In the telecommunications sector, fraud has been a persistent operational and security concern for many years, resulting in considerable financial losses and reputational consequences. In order to identify irregular usage patterns and protect subscriber accounts, telecom operators traditionally rely on multilayered monitoring controls and rule-based fraud management systems.

Although the industry is rapidly expanding into adjacent digital services, including mobile payments, digital wallets, and payment service banking, conventional boundaries that once separated the telecom industry from the financial sector have begun to become blurred. Increasingly, telecom networks serve as foundational infrastructure for digital transactions, identity verification, and financial connectivity, rather than merely serving as communication channels. 

By resulting in this structural shift, the attack surface has been significantly increased, resulting in a more complex and interconnected fraud environment, where threats are capable of propagating across multiple digital platforms. At the same time, artificial intelligence is rapidly transforming the way fraud risks are managed and emergence occurs. 

With the use of artificial intelligence-driven automation, sophisticated threats actors are orchestrating highly scalable fraud campaigns, generating convincing phishing messages, utilizing social engineering tactics, and analyzing network vulnerabilities more quickly than ever before. This capability enables fraudulent schemes to evolve dynamically, adapting more rapidly than traditional detection mechanisms. 

In spite of this, technological advances are equipping telecommunications providers with more advanced defensive tools as well. A fraud detection platform based on artificial intelligence can analyze huge volumes of network telemetry and transaction data, analyzing signals across communication and payment systems in real time to identify subtle indicators of compromise.

By analyzing behavior patterns, detecting anomalies, and modeling predictive patterns, security teams are able to detect suspicious activities earlier and respond more precisely. Additionally, the economic implications of telecom-related fraud emphasize the need to strengthen these defenses. The telecommunications industry has been estimated to have suffered tens of billions of dollars in losses in recent years as a result of digital exploitation on a grand scale.

In emerging digital economies, this issue is particularly acute, since mobile connectivity is increasingly serving as a bridge to financial inclusion. Fraud incidents that occur on telecommunications networks that support digital banking, mobile money transfers, and online commerce can have consequences that go beyond the service providers themselves.

Interconnected platforms may be subject to a variety of regulatory exposures, operational disruptions, or declining consumer confidence at the same time, affecting both telecommunications and financial services simultaneously. Increasing convergence between communication networks and financial services is shifting telecom operators' responsibilities in light of their role in the digital payment ecosystem. 

In addition to ensuring network reliability, providers are also expected to safeguard financial transactions occurring across their infrastructure as digital payment ecosystems grow. In light of the significant interrelationship between mobile and online banking ecosystems, a number of scams target these populations. 

As a consequence of fraudulent activity occurring in such interconnected systems, it can have cascading effects across multiple organizations, leading to regulatory scrutiny and eroding trust within the entire digital economy. 

The challenge for telecommunications companies is therefore no longer limited to managing network abuse alone; they must build resilient, intelligence-driven fraud prevention frameworks capable of protecting a complex digital environment that is becoming increasingly complex. Several studies conducted by the industry indicates that cyber threat operations are in the process of undergoing a significant transformation. 

Attackers are increasingly orchestrating coordinated campaigns that incorporate traditional social engineering techniques with the speed and scale of automated technology. The use of artificial intelligence is now integral to the entire attack lifecycle, from early reconnaissance and target profiling to deceptive communication strategies and operational decision-making.

In the context of everyday business environments, organizations encounter increasingly high-risk interactions with automated systems as AI-powered tools become more accessible. Based on data collected in recent months, it appears that a substantial percentage of enterprise AI interactions involve prompts or requests that raise potential security concerns, demonstrating how the rapid integration of artificial intelligence into corporate workflows presents new opportunities for misappropriation. 

Along with this trend, ransomware ecosystems are also maturing into fragmented and scalable models. It has been observed that the landscape is becoming more characterized by loosely connected networks of specialized operators rather than a few centralized threat groups. 

As a consequence of decentralization, cybercriminals have been able to expand their operations at an exponential rate, increasing both the number of victims targeted and the speed with which campaigns can be executed. 

Moreover, artificial intelligence is helping to streamline target identification, optimize extortion strategies, and automate negotiation and infrastructure management functions. Consequently, a more adaptive and resilient criminal ecosystem has been created that is capable of sustaining persistent global campaigns. 

Social engineering tactics are also embracing a broader array of communication channels than traditional phishing emails. Deception is increasingly coordinated by threat actors across email, web platforms, enterprise collaboration tools, and voice communication channels. Security experts have observed a sharp increase in methods for manipulating user trust by issuing seemingly legitimate technical prompts or support instructions, often encouraging individuals to provide sensitive information or execute commands. 

As a result, phone-based impersonation attacks have evolved into structured intrusion attempts targeted at corporate help desks and internal support functions, resulting in more targeted intrusion attempts. In the age of cloud-based computing, browsers, software-as-a-service environments, and collaborative digital workspaces, artificial intelligence will become an integral part of critical trust layers which adversaries will attempt to exploit. 

Besides user-focused attacks, infrastructure-based vulnerabilities are also expanding the threat surface, enabling hackers to blend malicious activity into legitimate network traffic as covert entry points. Edge devices, virtual private network gateways, and internet-connected systems are increasingly being used as covert entry points by attackers. 

The lack of oversight of these devices can result in persistent access routes that remain undetected within complex enterprise architectures. There are also additional risks associated with the infrastructure that supports artificial intelligence. As machine learning models, automated agents, and supporting services become integrated into enterprise technology stacks, significant configuration weaknesses have been identified across a wide number of deployments, highlighting potential exposures. 

As a result of these developments, cybersecurity leaders are reconsidering the structure of defensive strategies in an era marked by machine-speed attacks. Analysts have increasingly emphasized that responding to incidents after they occur is no longer sufficient; organizations must design security frameworks that prioritize prevention and resilience from the very beginning. 

To ensure these foundational controls can withstand automated and coordinated attacks, security teams need to reevaluate them across networks, endpoints, cloud platforms, communication systems, and secure access environments. 

Security teams face the challenge of facilitating artificial intelligence adoption without introducing unmanaged risks as it becomes incorporated into daily business processes. Keeping a clear picture of the use of artificial intelligence, both sanctioned and unsanctioned, as well as enforcing policies, is essential to reducing the potential for data leakage and misuse. 

In addition, protecting modern digital workspaces, where human decision-making increasingly intersects with automated technologies, is imperative. Several tools, including email platforms, web browsers, collaboration tools, and voice systems, form an integrated operation environment that needs to be secured as a single trust domain. 

In addition to strengthening the protection of edge infrastructure, maintaining an accurate inventory of connected devices can assist in reducing the possibility of attackers exploiting hidden entry points. A key component of maintaining resilience against artificial intelligence-driven cyber threats is consistent visibility across hybrid environments that encompass both on-premises infrastructures and cloud platforms along with distributed edge systems. 

By integrating oversight across these layers and prioritizing prevention-focused security models, organizations can reduce operational blind spots and enhance their defenses against rapidly evolving cyber threats. Industry observers emphasize that, under these circumstances, the ability to defend against AI-enabled cyber fraud will be less dependent upon isolated tools and more dependent upon coordinated security architectures. 

The telecommunications and digital service providers are expected to strengthen collaboration across the technological, financial, and regulatory ecosystems, as well as embed intelligence-driven monitoring into every layer of their infrastructure. It is essential to continually model fraud threats, use adaptive security analytics, and tighten up governance of emerging technologies to anticipate how fraud tactics evolve as innovations progress. 

By emphasizing proactive risk management and strengthening trust across interconnected digital platforms, organizations can be better prepared to address increasingly automated threats while maintaining the integrity of the rapidly expanding digital economy.

Meta Targets 150K Accounts in Southeast Asia Scam Operation

 



Meta announced that it has removed more than 150,000 accounts tied to organized scam centers operating in Southeast Asia, describing the move as part of a large international effort to disrupt coordinated online fraud networks.

The enforcement action was carried out with assistance from authorities in several countries. Law enforcement agencies and government partners involved in the operation included officials from Thailand, the United States, the United Kingdom, Canada, South Korea, Japan, Singapore, the Philippines, Australia, New Zealand, and Indonesia. According to Meta, the joint effort resulted in 21 individuals being arrested by the Royal Thai Police.

This latest crackdown builds on an earlier pilot initiative launched in December 2025. During that initial phase, Meta removed approximately 59,000 accounts, Pages, and Groups from its platforms that were connected to similar fraudulent activity. The earlier investigation also led to the issuance of six arrest warrants by authorities.

In a statement explaining the action, Meta said that online scams have grown increasingly complex and organized over recent years. Criminal networks, often operating from countries such as Cambodia, Myanmar, and Laos, have established large scam compounds that function in many ways like organized business operations. These groups typically use structured teams, scripted communication strategies, and digital tools designed to evade detection while targeting victims on a global scale. According to the company, the impact of such scams extends far beyond financial loss, as they can severely disrupt lives and weaken trust in digital communication platforms.

Alongside the enforcement action, Meta also announced several new safety features aimed at helping users identify and avoid scam attempts.

One of these tools introduces new warning messages on Facebook that notify users when they receive communication from accounts that display characteristics commonly linked to fraudulent activity. Another safeguard has been introduced on WhatsApp to address a tactic used by scammers who attempt to persuade users to scan a QR code. If successful, this method can link the attacker’s device to the victim’s WhatsApp account, allowing them to access messages and impersonate the account holder. Meta said its system will now notify users when suspicious device-linking requests are detected.

The company is also expanding scam detection on Messenger. When a conversation with a new contact begins to resemble known fraud patterns, such as questionable job opportunities or requests that appear unusual, the platform may prompt users to share recent messages so that an artificial intelligence system can evaluate whether the interaction matches known scam behavior.

Meta also disclosed broader enforcement statistics related to scams on its platforms. Throughout 2025, the company removed more than 159 million advertisements that violated its policies related to fraud and deception. In addition, it disabled approximately 10.9 million Facebook and Instagram accounts that investigators linked to organized scam centers.

To further address fraudulent activity, the company said it plans to expand its advertiser verification program. The goal of this measure is to increase transparency by confirming the identities of advertisers and reducing the ability of malicious actors to misrepresent themselves while running advertisements.

The announcement comes at a time when governments are intensifying efforts to address online fraud. The UK Government recently introduced a new Online Crime Centre designed to focus specifically on cybercrime, including scams connected to organized fraud operations operating in regions such as Southeast Asia, West Africa, Eastern Europe, India, and China.

The centre will bring together specialists from several sectors, including government agencies, law enforcement, intelligence services, financial institutions, mobile network providers, and major technology companies. The initiative is expected to begin operations next month.

The project forms part of the United Kingdom’s broader Fraud Strategy 2026–2029, a policy framework aimed at strengthening the country’s response to fraud and financial crime. As part of this strategy, authorities plan to use artificial intelligence to detect emerging scam patterns, identify suspicious bank transfers more quickly, and deploy “scam-baiting” chatbots designed to interact with fraudsters in order to gather intelligence.

Officials said the new centre, supported by more than £30 million in funding, will focus on identifying the digital infrastructure used by organized crime groups. This includes tracking fraudulent accounts, websites, and phone numbers used in scam operations. Authorities aim to shut down these resources at scale by blocking scam messages, freezing financial accounts linked to criminal activity, removing fraudulent social media profiles, and disrupting scam networks at their source.

Google API Keys Expose Gemini AI Data via Leaked Credentials

 

Google API keys, once considered harmless when embedded in public websites for services like Maps or YouTube, have turned into a serious security risk following the integration of Google's Gemini AI assistant. Security researchers at Truffle Security uncovered this issue, revealing that nearly 3,000 live API keys—prefixed with "AIza"—are exposed in client-side JavaScript code across popular sites.

Truffle Security's scan of the November 2025 Common Crawl dataset, which captures snapshots of major websites, identified 2,863 active keys from diverse sectors including finance, security firms, and even Google's own infrastructure. These keys, deployed sometimes years ago (one traced back to February 2023), were originally safe as mere billing identifiers but gained unauthorized access to Gemini endpoints without developers' knowledge.Attackers can simply copy a key from page source, authenticate to Gemini, and extract sensitive data like uploaded files, cached contexts, or datasets via simple prompts.

The danger extends beyond data theft to massive financial abuse, as Gemini API calls consume tokens that rack up charges—potentially thousands of dollars daily per compromised account, depending on the model and context window. Truffle Security demonstrated this by querying the /models endpoint with exposed keys, confirming access to private Gemini features. One reported case highlighted an $82,314 bill from a stolen key, underscoring the real-world impact.

Google acknowledged the flaw as "single-service privilege escalation" after Truffle's disclosure on November 21, 2025, and implemented fixes by January 2026, including blocking leaked keys from Gemini access, defaulting new AI Studio keys to Gemini-only scope, and sending proactive leak notifications. Despite these measures, the "retroactive privilege expansion" caught many off-guard, as enabling Gemini in projects silently empowered old keys.

Developers must immediately audit Google Cloud projects for Gemini API enablement, rotate all exposed keys, and restrict scopes to essentials—avoiding the default "unrestricted" setting. Tools like TruffleHog can scan code repositories for leaks, while regular monitoring prevents future exposures in an era where AI services amplify API risks. This incident highlights the need for vigilance as cloud features evolve.

Silent Scam Calls Used to Verify Active Phone Numbers, Cybersecurity Experts Warn

 

Many people have answered calls from unfamiliar numbers only to hear silence on the other end. In some cases, no one speaks at all. In others, there is a short delay before a caller finally responds. While this may appear to be a simple mistake or a wrong number, cybersecurity experts say these calls are often part of a deliberate scam tactic used to verify active phone numbers. 

According to security specialists, these silent calls function as a form of automated reconnaissance. Fraud operations run large-scale calling systems that dial thousands of numbers to determine which ones belong to real people. When someone answers, the system confirms that the number is active and marks it as a potential target for future scams. 

Keeper Security Chief Information Security Officer Shane Barney explained that such calls are rarely accidental. Instead, they help attackers filter out inactive numbers before investing more time and resources into scams. Verified contact information has value in modern cybercrime networks, where data about reachable individuals can be bought, sold, and reused across different fraud campaigns. 

Once a phone number is confirmed as active, it may be used in several ways. In some cases, scammers follow up with phishing calls or messages designed to trick victims into revealing personal or financial information. In more advanced attacks, a verified phone number could be combined with leaked email addresses from data breaches or used in schemes such as SIM-swap fraud, where attackers attempt to gain control of a victim’s mobile account. 

Another variation occurs when callers respond only after a brief pause. This delay is typically caused by predictive dialing systems that automatically place large volumes of calls. These systems detect when a human answers and then route the call to a live operator. The short silence represents the time it takes for the system to transfer the connection. 

Some people also worry that speaking during these calls could allow scammers to clone their voice using artificial intelligence. While voice cloning technology exists, experts say creating a convincing replica generally requires longer and clearer audio samples than a brief greeting. 

However, voice cloning could still become part of larger scams if criminals already possess other personal details about a victim. Security professionals recommend simple precautions when receiving suspicious calls. If an unknown number produces silence, hanging up immediately is usually the safest option. 

Another tactic is answering without speaking, which prevents automated systems from detecting a human voice. Spam-filtering tools can also help reduce nuisance calls. Applications such as Truecaller, RoboKiller, and Hiya identify numbers previously reported as spam. However, experts caution that no filtering system is perfect because scammers frequently change phone numbers. 

Ultimately, while call-blocking tools can reduce the volume of unwanted calls, maintaining strong account security and being cautious with unknown callers remain the most effective ways to avoid phone-based scams.

Perplexity's Comet AI Browser Tricked Into Phishing Scam Within Four Minutes


Agentic browser at risk

Agentic web browsers that use AI tools to autonomously do tasks across various websites for a user could be trained and fooled into phishing attacks. Hackers exploit the AI browsers’ tendency to assert their actions and deploy them against the same model to remove security checks. 

According to security expert Shaked Chen, “The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!,” the Hacker News reported. Agentic Blabbering is an AI browser that displays what it sees, thinks, and plans to do next, and what it deems safe or a threat. 

Tricking the browsers

By hacking the traffic between the AI services on the vendor’s servers and putting it as input to a Generative Adversarial Network (GAN), it made Perplexity’s Comet AI browser fall prey to a phishing attack within four minutes. 

The research is based on established tactics such as Scamlexity and VibeScamming, which revealed that vibe-coding platforms and AI browsers can be coerced into generating scam pages and performing malicious tasks via prompt injection. 

Attack tactic

There is a change in the attack surface as a result of the AI agent managing the tasks without frequent human oversight, meaning that a scammer no longer has to trick a user. Instead, it seeks to deceive the AI model itself. 

Chen said, “If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal.” Chen added that the “scam evolves until the AI Browser reliably walks into the trap another AI set for it."

End goal?

The aim is to make a “scamming machine” that improves and recreates a phishing page until the agentic browser accepts the commands and carries out the hacker’s command, like putting the victim’s passwords on a malicious web page built for refund scams. 

Guardio is concerned about the development, saying that, “This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact.”

ShinyHunters Threatens Data Leak After Alleged Salesforce Breach

 

The hacking group ShinyHunters has warned roughly 400 companies that it may publish stolen data online if ransom demands are not met. The group claims it accessed private records through websites built on Salesforce Experience Cloud, a platform companies use to create public portals and customer support sites. 

According to earlier findings by cybersecurity firm Mandiant, the attackers targeted organisations that used Salesforce’s Experience Cloud for external-facing services such as help centres and information portals. 

How the breach allegedly happened? The reported intrusion appears linked to the configuration of public access settings within these websites. 

Salesforce allows websites built on Experience Cloud to include a “guest user” profile so visitors can view limited information without logging in. 

If these settings are configured too broadly, however, the access permissions can expose internal data to the public internet. Investigations suggest the attackers used a modified version of a tool called Aura Inspector to scan websites for such weaknesses. 

Once vulnerabilities were identified, the hackers were able to extract information including names and phone numbers. Security experts say the stolen data may already be fueling vishing attacks. 

In such scams, attackers contact employees by phone and attempt to trick them into revealing additional confidential information. 

Dispute over the root cause There is disagreement over whether the problem stems from a software flaw or from how companies configured their systems. Salesforce has said the platform itself remains secure and that the issue is related to customer settings rather than a vulnerability in the product. 

“Our investigation to date confirms that this activity relates to a customer-configured guest user setting, not a platform security flaw,” the company said in a blog post. 

ShinyHunters disputes that explanation, claiming it discovered a previously unknown flaw that allows it to bypass certain protections even on sites that appear properly configured. 

Independent researchers have not yet verified that claim. Pressure tactics used by hackers ShinyHunters is known for using aggressive extortion strategies to pressure victims into paying ransom demands. The group often releases stolen data in stages to increase pressure on organisations that refuse to negotiate. 

A recent example involved Dutch telecommunications provider Odido and its brand Ben. After the company declined to pay a ransom reportedly worth one million euros, the hackers began publishing large quantities of customer data on the dark web. 

Security guidance for companies Salesforce is urging customers to review their portal configurations and tighten access controls. The company recommends applying a “least privilege” approach, meaning guest users should only have the minimum permissions required to use a site. 

Businesses are also advised to keep data private by default, disable settings that expose internal staff information, and turn off public application programming interfaces where possible. 

These interfaces can allow external systems to exchange data and may create additional entry points if left open. 

The incident highlights the growing risks associated with misconfigured cloud services, which security analysts say have become a common target for cybercriminal groups seeking large volumes of corporate data.

AI is Reshaping How Hackers Discover and Exploit Digital Weaknesses


 

Throughout history, artificial intelligence has been hailed as the engine of innovation, revolutionizing data analysis, automation of business processes, and strategic decision-making. However, the same capabilities that enable organizations to work more efficiently and efficiently are quietly transforming the cyber threat landscape in far less constructive ways. 

In the hands of threat actors, artificial intelligence becomes a force multiplier, lowering the barrier to sophisticated attacks dramatically. It is now possible to accomplish tasks once requiring extensive technical expertise, patience, and careful coordination at unprecedented speed and efficiency by utilizing AI-based tools for scanning vast digital environments, analyzing weaknesses, and refining attack strategies in real time. 

As a result of AI-driven tools, cybercriminals are reducing the length of the preparation process to a matter of minutes. Consequently, cyber risk is experiencing a new era in which traditional timelines for detecting, understanding, and responding to threats are rapidly disappearing, leaving organizations unable to keep up with adversaries that are increasingly automated, adaptive, and relentless. 

In recent years, threat intelligence has indicated that this acceleration has become measurable across the global attack landscape rather than merely theoretical. 

Researchers have observed that threats actors are increasingly incorporating generative AI tools into their operational workflows, thus facilitating the identification and exploitation of vulnerabilities in corporate infrastructure much faster and more consistently than they have in the past. 

In the IBM XForce Threat Intelligence Index 2026, which was released in 2026, the scale of this shift is evident. In comparison with the previous year, cyberattacks targeting public-facing applications increased by 44 percent, according to the report. 

Many applications, including corporate websites, ecommerce platforms, email gateways, financial portals, APIs, and other externally accessible services, have developed into attractive entry points because they often expose complex codebases directly to the Internet and are often easy to access. 

Based on the same analysis, vulnerability exploitation is one of the most prevalent methods of gaining access to modern networks. It has been estimated that approximately 40% of cyber incidents in 2025 have been the result of attackers successfully exploiting previously identified security vulnerabilities before their organizations have been able to correct them. 

Parallel trends indicate the expansion of the cybercrime ecosystem as a whole. It has been reported that the number of active ransomware groups operating globally has nearly doubled during the same period, whereas the number of attacks that have been publicly disclosed has increased by approximately 12 percent. 

As a consequence of these indicators, it appears that the convergence of automated discovery tools, readily available exploit frameworks, and artificial intelligence-assisted reconnaissance is accelerating the speed with which vulnerabilities are disclosed and exploited, increasing the amount of pressure on enterprise security teams already confronted with a complex threat environment. 

Artificial intelligence is rapidly becoming an integral part of cyber operations, and as such is altering the way vulnerabilities are discovered and addressed within legitimate security practices. These technological developments are accompanied by an evolution of ethical hacking, once considered a key component of modern defense strategies. 

Advanced machine learning models are increasingly being utilized by security researchers to speed up tasks which previously required painful manual analysis. The use of artificial intelligence-driven tools enables defenders to detect anomalies and potential security gaps at a scale traditional auditing methods are rarely able to attain by processing large volumes of application code, system logs, and network telemetry in seconds. 

Several experiments have already demonstrated the practical benefits of this capability. A controlled research environment has been demonstrated where AI-powered analysis systems can identify exploitable weaknesses in extensive code repositories by analyzing extensive code repositories. These systems significantly shorten the time required for vulnerability triage and remediation. 

It is becoming increasingly important for organizations operating complex digital infrastructure to perform automated security analysis. Threat actors are integrating AI-assisted techniques into their own reconnaissance and development workflows, enabling them to automate tasks that previously required experienced security researchers by leveraging the same technological advantages. 

Adversaries, however, have similar technological advantages. As a consequence of polymorphic malware, malicious code can evade signature-based detection systems by altering its structure each time it executes. A number of modified large language model toolkits have been observed in underground forums, marketed as resources to generate malware variants or scripts for exploiting vulnerabilities. 

A parallel development effort is underway to develop experimental attack frameworks that utilize artificial intelligence agents to scan open-source repositories, cloud environments, and embedded device firmware for exploitable vulnerabilities. In many ways, these approaches are similar to those employed by legitimate researchers to locate bugs, however the objective is to accelerate intrusion campaigns rather than prevent them. 

Another area which is receiving considerable attention is the security of artificial intelligence systems themselves. A growing number of organizations are incorporating AI copilots, automation agents, and data analysis models into their everyday operations, thereby creating new attack surfaces. 

In some cases, hidden instructions embedded within web content or metadata have been consumed by automated artificial intelligence systems without their knowledge, altering their behavior or triggering unauthorized actions. 

The occurrence of such incidents illustrates the potential risks associated with prompt injection and data poisoning, where malicious inputs influence the interpretation of information by AI models or the interaction between enterprise systems with them. 

In addition to exploiting weaknesses in the way AI models process context and instructions, these vulnerabilities are particularly concerning since they are not necessarily caused by traditional software vulnerabilities. In light of these developments, both industry and regulatory bodies are responding to them. 

Security frameworks and policy discussions are increasingly recognizing AI as a dual-purpose technology that can strengthen cyber defenses as well as enabling more sophisticated attack techniques. 

A number of government agencies, international policing organizations, and leading technology vendors have published guidance on addressing adversarial AI threats, emphasizing that stronger safeguards must be implemented around AI deployments, monitoring mechanisms need to be improved, and standards for model development need to be clearer. 

According to cybersecurity specialists, artificial intelligence should no longer be considered to be an unimportant or theoretical risk factor. In reality, it has already developed the tactics used by both defenders and attackers in real-world environments. 

To adapt to this environment, enterprise security teams must develop more proactive and automated defensive strategies. A growing number of organizations are evaluating artificial intelligence-assisted "red teaming" capabilities in order to simulate adversarial behavior within controlled environments and identify weaknesses in corporate infrastructure before they can be exploited by external parties. 

A key element of the security industry is the development of threat intelligence platforms that utilize machine learning to identify emerging malware patterns and accelerate incident response. Additionally, it is important to design AI systems with security considerations built in from the outset.

In order to ensure that these technologies strengthen digital resilience, rather than inadvertently expanding the attack surface, organizations are required to integrate rigorous auditing processes, secure-by-design development practices, and continuous monitoring into their automation platforms as AI-driven tools and automation platforms are increasingly used.

Increasingly, adversaries are utilizing artificial intelligence in offensive operations, which is expected to be refined and expanded as artificial intelligence matures. There is now no doubt that AI will be included in cyberattacks, but the question is whether defensive capabilities can evolve at a pace that is comparable to the evolution of AI. 

Organizations that are relying on a slow remediation cycle, fragmented monitoring, and manual investigative process risk falling behind attackers that have the capability to automate reconnaissance, vulnerability discovery, and exploit development processes.

Compared to this, security strategies that incorporate continuous visibility, automated analysis, and rapid response mechanisms have proven to be more resilient in a threat environment that is characterized by speed and scale. 

Identifying vulnerabilities and remediating them within a reasonable period of time has rapidly become a critical metric for cyber security. The security industry is responding to this challenge by introducing tools that provide more comprehensive and continuous insight into enterprise environments. 

VulnDetect, an integrated platform that helps IT and security teams stay up to date on vulnerabilities across endpoint infrastructures, is one example. Instead of tracking known or managed software with traditional asset management tools, the platform identifies obsolete, misconfigured, or unmanaged applications that often remain invisible within large enterprise networks. These overlooked assets frequently serve as attractive entry points for attackers conducting automated vulnerability scans.

A system such as VulnDetect is designed to bridge the gap between vulnerability discovery and mitigation by continuously monitoring endpoints and mapping software exposure across the network. By focusing remediation efforts on the weaknesses that present the greatest operational risk, security teams can prioritize actionable intelligence over static inventories. 

The reduction of this exposure window is becoming increasingly important in an environment where attackers are increasingly relying on artificial intelligence-assisted techniques for identifying and exploiting weaknesses.

In addition to improving incident response capabilities, the increased visibility across digital infrastructure also gives organizations a strategic control over their security posture as the cyber threat landscape becomes increasingly automated and unpredictable.

Due to this background, cybersecurity professionals are increasingly arguing that artificial intelligence should now be integrated into the defensive architecture as a whole rather than being treated as an experimental addition. Threat actors are increasingly utilizing automated reconnaissance, adaptive malware development, and artificial intelligence-assisted exploit discovery.

In order to compete effectively, defensive systems must operate at similar speeds. It is imperative that enterprise environments have greater control over how artificial intelligence models are accessed and integrated, as well as better safeguards to prevent model manipulation or jailbreaks. 

Additionally, behavioural analytics are becoming increasingly integrated into security platforms, allowing defenders to distinguish traditional threats from automated attack campaigns by identifying activity patterns that suggest machine-driven intrusion attempts. 

Furthermore, it is becoming increasingly apparent that no single organization can address these challenges alone. Cybersecurity specialists emphasize that collaboration between private corporations, government agencies, academic researchers, and international security alliances is necessary. 

It is still being actively studied how artificial intelligence introduces layers of technical complexity, and effective responses to its misuse require rapid information sharing and coordinated strategies that cross national boundaries. 

In order to counter highly automated threats, defenders can construct adaptive and responsive security postures combining the contextual judgment of experienced security professionals with the analytical capabilities of advanced artificial intelligence systems. 

While AI-assisted cybercrime is becoming increasingly sophisticated, security experts warn that organizations do not have all that is necessary to protect themselves. There are many defensive principles already in existence within established cybersecurity frameworks that can mitigate these risks.

Rather than finding entirely new defenses, enterprise leaders must strengthen visibility, governance, and operational discipline around the tools already in place in order to strengthen the visibility, governance, and operational discipline.

Organizations' resilience in an era where cyberattacks are increasingly characterized by intelligent and autonomous technologies may be determined by understanding the extent of the evolving threat landscape and taking proactive measures to enhance modern defensive capabilities.

KadNap Malware Compromises Over 14,000 Edge Devices to Operate Hidden Proxy Botnet

 


Cybersecurity researchers have identified a previously undocumented malware strain called KadNap that is primarily infecting Asus routers and other internet-facing networking devices. The attackers are using these compromised systems to form a botnet that routes malicious traffic through residential connections, effectively turning infected hardware into anonymous proxy nodes.

The threat was first observed in real-world attacks in August 2025. Since that time, the number of affected devices has grown to more than 14,000, according to investigators at Black Lotus Labs. A large share of infections, exceeding 60 percent, has been detected within the United States. Smaller groups of compromised devices have also been identified across Taiwan, Hong Kong, Russia, the United Kingdom, Australia, Brazil, France, Italy, and Spain.

Researchers report that the malware uses a modified version of the Kademlia Distributed Hash Table (DHT) protocol. This peer-to-peer networking technology enables the attackers to conceal the true location of their infrastructure by distributing communication across multiple nodes. By embedding command traffic inside decentralized peer-to-peer activity, the operators can evade traditional network monitoring systems that rely on detecting centralized servers.

Within this architecture, infected devices communicate with one another using the DHT network to discover and establish connections with command-and-control servers. This design improves the botnet’s resilience, as it reduces the chances that defenders can disable operations by shutting down a single control point.

Once a router or other edge device has been compromised, the system can be sold or rented through a proxy platform known as Doppelgänger. Investigators believe this service is a rebranded version of another proxy operation called Faceless, which previously had links to TheMoon router malware. According to information published on the Doppelgänger website, the service launched around May or June 2025 and advertises access to residential proxy connections in more than 50 countries, promoting what it claims is complete anonymity for users.

Although many of the observed infections involve Asus routers, researchers found that the malware operators are also capable of targeting a wider range of edge networking equipment.

The attack chain begins with the download of a shell script named aic.sh, retrieved from a command server located at 212.104.141[.]140. This script initiates the infection process by connecting the compromised device to the botnet’s peer-to-peer network.

To ensure the malware remains active, the script establishes persistence by creating a cron task that downloads the same script again at the 55-minute mark of every hour. During this process, the file is renamed “.asusrouter” and executed automatically.

After persistence is secured, the script downloads an ELF executable, renames it “kad,” and runs it on the device. This program installs the KadNap malware itself. The malware is capable of operating on hardware that uses ARM and MIPS processor architectures, which are commonly found in routers and networking appliances.

KadNap also contacts a Network Time Protocol (NTP) server to retrieve the current system time and store it along with the device’s uptime. These values are combined to produce a hash that allows the malware to identify and connect with other peers within the decentralized network, enabling it to receive commands or download additional components.

Two additional files used during the infection process, fwr.sh and /tmp/.sose, contain instructions that close port 22, which is the default port used by Secure Shell (SSH). These files also extract lists of command server addresses in IP-address-and-port format, which the malware uses to establish communication with control infrastructure.

According to researchers, the use of the DHT protocol provides the botnet with durable communication channels that are difficult to shut down because its traffic blends with legitimate peer-to-peer network activity.

Further examination revealed that not every infected device communicates with every command server. This suggests the attackers are segmenting their infrastructure, possibly grouping devices based on hardware type or model.

Investigators also noted that routers infected with KadNap may sometimes contain multiple malware infections simultaneously. Because of this overlap, it can be challenging to determine which threat actor is responsible for particular malicious activity originating from those systems.

Security experts recommend that individuals and organizations operating small-office or home-office (SOHO) routers take several precautions. These include installing firmware updates, restarting devices periodically, replacing default administrator credentials, restricting management access, and replacing routers that have reached end-of-life status and no longer receive security patches.

Researchers concluded that KadNap’s reliance on a peer-to-peer command structure distinguishes it from many other proxy-based botnets designed to provide anonymity services. The decentralized approach allows operators to remain hidden while making it significantly harder for defenders to detect and block the network.

In a separate report, security analysts at Cyble disclosed a new Linux malware threat named ClipXDaemon.

The malware targets cryptocurrency users by intercepting wallet addresses that victims copy to their clipboard and secretly replacing them with addresses controlled by attackers. This type of threat is commonly known as clipper malware.

ClipXDaemon is distributed through a Linux post-exploitation framework called ShadowHS and has been described as an automated clipboard-hijacking tool designed specifically for systems running Linux X11 graphical environments.

The malware operates entirely in memory, which reduces traces on disk and improves its ability to remain undetected. It also employs several stealth techniques, including disguising its process names and deliberately avoiding execution in Wayland sessions.

This design choice is intentional because Wayland’s security architecture introduces stricter restrictions on clipboard access. Applications must usually involve explicit user interaction before they can read clipboard contents. By disabling itself when Wayland is detected, the malware avoids triggering errors or suspicious behavior.

Once active in an X11 session, ClipXDaemon continuously checks the system clipboard every 200 milliseconds. If it detects a copied cryptocurrency wallet address, it immediately substitutes it with an attacker-controlled address before the victim pastes the information.

The malware currently targets a wide range of digital currencies, including Bitcoin, Ethereum, Litecoin, Monero, Tron, Dogecoin, Ripple, and TON.

Researchers noted that ClipXDaemon differs significantly from traditional Linux malware families. It does not include command-and-control communication, does not send beaconing signals to remote servers, and does not rely on external instructions to operate.

Instead, the malware generates profits directly by manipulating cryptocurrency transactions in real time, silently redirecting funds when victims paste compromised wallet addresses during transfers.

Commercial Spy Trackers Breach U.S. Army Networks, Jeopardizing National Security

 

U.S. Army networks face a hidden invasion from commercial spy technology, compromising soldier data and national security in alarming ways. A groundbreaking study by the Army Cyber Institute at West Point analyzed traffic on military networks, discovering that 21.2% of the most frequently visited websites host tracker domains. These trackers relentlessly collect sensitive information like geolocation, email addresses, and detailed browsing histories from troops during routine online activities.

The infiltration stems from ubiquitous commercial tools embedded in popular sites. Companies such as Adobe, Microsoft, Akamai, and even the banned TikTok deploy these trackers, funneling harvested data to brokers who resell it without regard for buyers' intentions. This surveillance capitalism mirrors civilian web tracking but strikes deeper when targeting military personnel, turning everyday internet use into a potential intelligence leak.

Researchers from Duke University exposed the severity by purchasing dossiers on active-duty service members from data brokers with ease. They acquired names, home addresses, personal emails, and military branch details, often from non-U.S. domains, highlighting how adversaries could exploit this for blackmail, targeting installations, or cyber campaigns . One expert called the process "disturbingly simple," underscoring the broker market's indifference to national security risks.

Persistent vulnerabilities echo the 2018 Strava fitness app scandal, where heatmap data revealed covert base locations worldwide. The latest findings show trackers in 42% of network requests and 10.4% of sites, exceeding privacy safeguards on mainstream streaming platforms. Cybersecurity professor Alan Woodward of the University of Surrey warns, "If you’re not paying, you are the product," a harsh reality for soldiers navigating the open web.

The Pentagon is responding aggressively through its 2023 Cyber Strategy, implementing Zero Trust architecture, enhanced endpoint detection, and widespread tracker blocking . The National Defense Authorization Act bolsters these efforts with mandates for spyware mitigation and stricter social media vetting. The Army Cyber Institute advocates quantifying trackers and extending blocks to personal devices, elevating data privacy to a core element of force protection in the digital age.

AI Agents Boost Productivity but Introduce New Cybersecurity Risks for Organizations

 

Artificial Intelligence is rapidly evolving from a conversational tool into a system capable of performing real-world tasks independently. Known as AI Agents, these systems can carry out activities such as sending emails, transferring data, and managing software workflows without constant human supervision.

While this automation significantly improves efficiency, it also creates a new entry point for cyber threats.

AI agents can be compared to a new employee who has access to every room in a company building but lacks proper identification. Because these digital systems operate autonomously, they often hold permissions to sensitive resources and information, sometimes without sufficient monitoring.

Cybercriminals have begun exploiting this reality. Instead of attempting to steal passwords or break into systems directly, attackers may manipulate AI agents into performing malicious actions on their behalf.

Organizations that rely on AI-driven automation could therefore face new risks. Many conventional cybersecurity systems were originally designed to protect human users rather than automated digital workers, leaving a potential gap in defense.

To address these concerns, an upcoming webinar titled “Beyond the Model: The Expanded Attack Surface of AI Agents” will explore how this evolving technology is being targeted by threat actors.

During the session, Rahul Parwani, Head of Product for AI Security at Airia, will explain how attackers exploit AI agents and what organizations can do to strengthen their defenses.

What You Will Learn
  • The "Dark Matter" of Identity: Why AI agents are often invisible to your security team and how to find them.
  • How Agents Get Tricked: Learn how a simple "bad idea" hidden in a document can make an AI agent leak your company secrets.
  • The Safety Blueprint: Simple steps to give your AI agents the power they need without giving them "God Mode" over your data.
This session is aimed at business leaders, IT professionals, and anyone responsible for safeguarding corporate data. The discussion will break down complex security concepts in a way that does not require deep coding expertise.

As organizations continue adopting AI-driven automation, understanding the security implications of AI agents is becoming increasingly important. Without proper safeguards, the same tools designed to improve productivity could also become unexpected vulnerabilities.

Hackers Exploit FortiGate Devices to Hack Networks and Credentials


Exploiting network points to hack victims 

Cybersecurity experts have warned about a new campaign where hackers are exploiting FortiGate Next-Gen Firewall (NGFW) devices as entry points to hack target networks. 

The campaign involves abusing the recently revealed security flaws or weak password to take out configuration files. The activity has singled out class linked to government, healthcare, and managed service providers. 

Attack tactic 

According to experts, “FortiGate network appliances have considerable access to the environments they were installed to protect. In many configurations, this includes service accounts which are connected to the authentication infrastructure, such as Active Directory (AD) and Lightweight Directory Access Protocol (LDAP).”

"This setup can enable the appliance to map roles to specific users by fetching attributes about the connection that’s being analyzed and correlating with the Directory information, which is useful in cases where role-based policies are set or for increasing response speed for network security alerts detected by the device,” the experts added. 

Misconfigurations opening doors for hackers 

But the experts noticed that this access could be compromised by hackers who hack into FortiGate devices via flaws or misconfigurations.

In one attack, the hackers breached a FortiGate appliance last year in November to make a new local admin account “support” and built four new firewall policies that let the account to travel across all zones without any limitations. 

The hacker then routinely checked device access. “Evidence demonstrates the attacker authenticated to the AD using clear text credentials from the fortidcagent service account, suggesting the attacker decrypted the configuration file and extracted the service account credentials,” SentinelOne reported. 

How was the account used?

After this, hacker leveraged the service account to verify the target's environment and put rogue workstations in the AD for further access. Following this, network scanning started and the breach was found, and lateral movement was stopped. 

The contents of the NTDS.dit file and SYSTEM registry hive were exfiltrated to an external server ("172.67.196[.]232") over port 443 by the Java malware, which was triggered via DLL side-loading.

SentinelOne said that “While the actor may have attempted to crack passwords from the data, no such credential usage was identified between the time of credential harvesting and incident containment.”

Iran-Linked Handala Hackers Claim Breach of Israel’s Clalit Healthcare Network

 

A breach at Israel’s biggest health provider has been tied to an Iranian-affiliated hacking collective, which posted stolen patient records online. Claiming credit, a network calling itself Handala detailed the intrusion via public posts. Access reportedly reached Clalit Health Services’ core data stores. That institution cares for around fifty percent of the country’s residents. 

More than ten thousand people saw their medical files exposed, the hackers stated. Samples of what they say is real data now sit on public servers - names, test results, health scans tucked inside. Handala issued a statement saying Israel's hospital networks were left reeling after the breach, calling defenses weak and slow. What followed was not subtle: laughter at how easily systems gave way.  

Not just an attack, but positioned as resistance - this action followed claims of long-standing control and abuse. Echoing past messages, the announcement carried familiar tones seen when digital strikes hit Israeli bodies before. 

A strange post appeared online just hours before the reveal - hinting at something unfolding within Israel’s medical system. By next morning, reports confirmed a possible leak of sensitive information. Right after hearing about it, Clalit's cyber defense units started looking into what happened. Government agencies got updates right away, since detection tools kicked in under standard procedures. 

While checks are still underway, hospital networks remain stable and running without disruption. A fresh incident highlights ongoing digital operations tied to Iran, aimed at entities and people in Israel. In recent years, outfits connected to Tehran have faced claims of seeking information, interfering with key bodies, while also trying to pull in collaborators using internet exchanges along with money offers. 

Now known for bold statements, Handala has taken credit for multiple major cyber events, experts note. While Check Point Research points out that some assertions appear inflated, a few of those declarations align with verified breaches. Unexpected overlaps between claim and evidence keep scrutiny alive. 

In December, hackers revealed they had gained access to ex-Prime Minister Naftali Bennett’s Telegram messages. Confirmation came from Bennett's team - yes, the account was reached, yet his device remained untouched. 

Later, these attackers stated they went after more individuals in politics. Among them: ex-minister Ayelet Shaked and Tzachi Braverman, a close associate of Netanyahu. Earlier, Israel's medical system dealt with digital attacks. Last October, hackers targeted Assaf Harofeh Medical Center using ransomware linked to Qilin. Patient records were at risk when the criminals asked for 70,000 dollars. Threats to expose sensitive information followed if payment failed. 

Later, officials pointed to Iran’s likely involvement in that incident too - showing how digital attacks are becoming a key part of the strain between these nations.

Hackers Exploit Claude to Target Multiple Mexican Government Agencies

 


As generative artificial intelligence emerges, digital innovation is evolving at an unprecedented rate, but it is also quietly reshaping cybercrime in a subtle way. Tools originally designed for the purpose of research, coding, and problem-solving are now being explored for a variety of less benign purposes as well. 

This fact has been illustrated in a troubling fashion by recent revelations that threat actors have exploited the capabilities of Claude in order to support a large-scale intrusion targeting Mexican government networks. 

A security researcher at Gambit Security reported that attackers extracted approximately 150 gigabytes of sensitive information from multiple Mexican government agencies, demonstrating how widely accessible artificial intelligence systems can be manipulated to assist sophisticated cyber operations despite built-in safeguards despite their ease of use. 

It has been determined that the intrusion was not limited to passive reconnaissance. The attacker is believed to have used Claude throughout the campaign as an interactive tool for research and development. 

Gambit Security has released an analysis that indicates that the activity began in December, and continued for approximately a month, during which the chatbot was repeatedly instructed to identify potential vulnerabilities within government networks and to create scripts for exploiting those vulnerabilities. 

Using the same AI model, methods were also outlined for automating sensitive information extraction, effectively turning the model into an assistant for data extraction. In a series of carefully structured prompts, the operator gradually weakened the built-in safeguards of the model, thereby manipulating it slowly. 

There have been reports that the system has rejected initial requests, but subsequent iterations seem to have bypassed the platform's guardrails and generated increasingly more actionable material. The extent of the assistance presented by the model raised particular concerns among analysts. 

According to Curtis Simpson, the system produced thousands of analytical outputs which detailed potential attack paths, internal network targets, and credential-related strategies, thereby providing guidance on how to proceed within compromised environments. These outputs were more structured operational guidance for the campaign's human operator than casual responses. 

According to Anthropic, an internal investigation had been initiated following the disclosure and that the activity had been disrupted and the accounts associated with the misuse were permanently banned. According to a company representative, safeguards are continuing to develop. 

For example, the Claude Opus 4.6 model incorporates additional mechanisms to detect and block similar forms of abuse in the latest iteration. In the time of publishing, it had not been officially determined that the individuals responsible for the intrusion were part of any advanced persistent threat group that had been publicly identified.

Nonetheless, analysts examining the operation noted several similarities with tactics historically associated with espionage campaigns involving Chinese actors. As a result of intelligence gathered by Gambit Security and corroborated by SecurityAffairs, the tradecraft demonstrated in the operation - particularly disciplined operational security and systematic reconnaissance - appears to resemble patterns previously observed in state-aligned cyber espionage. 

A separate disclosure from Anthropic confirmed that state-sponsored actors have misused its AI programming tools to benefit dozens of organizations worldwide. It has been determined that investigators at this incident heavily relied on artificial intelligence-assisted workflows to accelerate the exploit development process, effectively reducing the technical barrier to assembling complex multi-stage intrusion chains while retaining high levels of operational secrecy. 

Technical analysis indicates that the campaign aimed at weaponizing Claude Code, by utilizing prompt engineering techniques in order to circumvent the system's built-in security measures. Over 1,000 prompts were submitted to the artificial intelligence environment, some of which were presented as legitimate bug bounty testing scenarios to bypass ethical restrictions embedded within the model by the researchers. 

In this iterative process, attackers were reported to have developed customized exploit scripts, lateral movement tooling, and operational playbooks tailored to the architecture of compromised networks through this iterative interaction. 

Following the generation of AI-generated material, successive phases of the intrusion chain, including privilege escalation, credential harvesting, and automated data extraction, were carried out. According to reports, the operators began shifting portions of their workflow to GPT-4.1 to continue developing credential handling utilities and refine network traversal techniques when restrictions began limiting output from Claude's environment. 

It was possible for the attackers to maintain a workflow that was largely automated and able to quickly adapt to defensive obstacles within the targeted infrastructure by chaining outputs from both AI systems. As a result of this approach, investigators identified behavioural indicators that stood out during forensic examination.

Among them were unusually large amounts of automated scripting activity, repeated instances of AI-generated code fragments appearing within attack tools, and the presence of AI-aided development processes operating from compromised government infrastructures. 

A series of stages has been involved in the intrusion, which began with compromising systems related to the Mexican tax authority before spreading to other public infrastructures. The attacker, according to investigators, then moved through a network of interconnected systems involving several regional government environments, municipal systems in Mexico City, public utility infrastructure in Monterrey, as well as at least one major financial institution, as well as the national electoral institute. 

As a result of the operation, approximately 150 gigabytes of sensitive data - including administrative information and individually identifiable information - were exfiltrated from these environments. MITER ATT&CK knowledge base analysis revealed a familiar sequence of intrusion techniques based on the observed activity. There is evidence that the initial access was obtained through valid accounts, followed by lateral movement with remote services, credential acquisition through operating system credential dump mechanisms, and large-scale data exfiltration. 

The researchers also observed additional measures intended to undermine defensive monitoring by interfering with security controls within the targeted environments in order to weaken defensive monitoring. 

Researchers noted that each of these tactics has been observed in conventional cyberespionage operations; however, the distinctive feature of the campaign was the systematic integration of generative artificial intelligence into the attack process. 

It is possible for attackers to coordinate complex intrusion chains at a speed and scale that is not possible with traditional manual methods, as they were able to automate reconnaissance, exploit development, and operational planning. This incident underscores how generative artificial intelligence systems are rapidly becoming a new layer within the cyber threat landscape that can enhance both defensive and offensive capabilities. 

In response to the threat of AI-aided attacks, security experts recommend that organizations, particularly those operating critical public infrastructure, adapt their defensive strategies accordingly. A number of measures are being taken to strengthen identity and access controls, identify anomalous automation patterns, and implement advanced behavioral analytics to identify tooling and scripting generated by AI. 

It is also recommended that AI developers, cybersecurity firms, and government agencies collaborate continuously so that safeguards can be refined to ensure that large language models are not manipulated for malicious purposes. 

It is becoming increasingly important for the cybersecurity community to ensure that innovations in artificial intelligence do not inadvertently become a force multiplier for sophisticated digital intrusions as platforms such as Claude and other generative AI systems continue to evolve.