

Google API Keys Expose Gemini AI Data via Leaked Credentials


Google API keys, once considered harmless when embedded in public websites for services like Maps or YouTube, have turned into a serious security risk following the integration of Google's Gemini AI assistant. Security researchers at Truffle Security uncovered this issue, revealing that nearly 3,000 live API keys—prefixed with "AIza"—are exposed in client-side JavaScript code across popular sites.

Truffle Security's scan of the November 2025 Common Crawl dataset, which captures snapshots of major websites, identified 2,863 active keys from diverse sectors including finance, security firms, and even Google's own infrastructure. These keys, sometimes deployed years ago (one traced back to February 2023), were originally safe as mere billing identifiers but gained unauthorized access to Gemini endpoints without developers' knowledge. Attackers can simply copy a key from page source, authenticate to Gemini, and extract sensitive data such as uploaded files, cached contexts, or datasets via simple prompts.

The danger extends beyond data theft to massive financial abuse, as Gemini API calls consume tokens that rack up charges—potentially thousands of dollars daily per compromised account, depending on the model and context window. Truffle Security demonstrated this by querying the /models endpoint with exposed keys, confirming access to private Gemini features. One reported case highlighted an $82,314 bill from a stolen key, underscoring the real-world impact.

Google acknowledged the flaw as "single-service privilege escalation" after Truffle's disclosure on November 21, 2025, and implemented fixes by January 2026, including blocking leaked keys from Gemini access, defaulting new AI Studio keys to Gemini-only scope, and sending proactive leak notifications. Despite these measures, the "retroactive privilege expansion" caught many off-guard, as enabling Gemini in projects silently empowered old keys.

Developers must immediately audit Google Cloud projects for Gemini API enablement, rotate all exposed keys, and restrict scopes to essentials—avoiding the default "unrestricted" setting. Tools like TruffleHog can scan code repositories for leaks, while regular monitoring prevents future exposures in an era where AI services amplify API risks. This incident highlights the need for vigilance as cloud features evolve.
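Key-leak scanning of the kind Truffle Security performed can be approximated with a simple pattern match, since Google API keys follow a well-known format: the prefix "AIza" followed by 35 URL-safe characters. The sketch below is illustrative (the function name is made up); dedicated tools like TruffleHog do far more validation.

```python
import re

# Google API keys start with "AIza" followed by 35 URL-safe characters
# (39 characters total). Scanners use this pattern to flag leaks in
# client-side JavaScript, repositories, and crawl datasets.
AIZA_KEY_RE = re.compile(r"AIza[0-9A-Za-z\-_]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return any substrings of `text` that look like Google API keys."""
    return AIZA_KEY_RE.findall(text)
```

Running this over saved page sources or bundled JavaScript gives a quick first pass; any hit should then be rotated and scope-restricted rather than merely deleted from the source.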

Silent Scam Calls Used to Verify Active Phone Numbers, Cybersecurity Experts Warn


Many people have answered calls from unfamiliar numbers only to hear silence on the other end. In some cases, no one speaks at all. In others, there is a short delay before a caller finally responds. While this may appear to be a simple mistake or a wrong number, cybersecurity experts say these calls are often part of a deliberate scam tactic used to verify active phone numbers. 

According to security specialists, these silent calls function as a form of automated reconnaissance. Fraud operations run large-scale calling systems that dial thousands of numbers to determine which ones belong to real people. When someone answers, the system confirms that the number is active and marks it as a potential target for future scams. 

Keeper Security Chief Information Security Officer Shane Barney explained that such calls are rarely accidental. Instead, they help attackers filter out inactive numbers before investing more time and resources into scams. Verified contact information has value in modern cybercrime networks, where data about reachable individuals can be bought, sold, and reused across different fraud campaigns. 

Once a phone number is confirmed as active, it may be used in several ways. In some cases, scammers follow up with phishing calls or messages designed to trick victims into revealing personal or financial information. In more advanced attacks, a verified phone number could be combined with leaked email addresses from data breaches or used in schemes such as SIM-swap fraud, where attackers attempt to gain control of a victim’s mobile account. 

Another variation occurs when callers respond only after a brief pause. This delay is typically caused by predictive dialing systems that automatically place large volumes of calls. These systems detect when a human answers and then route the call to a live operator. The short silence represents the time it takes for the system to transfer the connection. 

Some people also worry that speaking during these calls could allow scammers to clone their voice using artificial intelligence. While voice cloning technology exists, experts say creating a convincing replica generally requires longer and clearer audio samples than a brief greeting. 

However, voice cloning could still become part of larger scams if criminals already possess other personal details about a victim. Security professionals recommend simple precautions when receiving suspicious calls. If an unknown number produces silence, hanging up immediately is usually the safest option. 

Another tactic is answering without speaking, which prevents automated systems from detecting a human voice. Spam-filtering tools can also help reduce nuisance calls. Applications such as Truecaller, RoboKiller, and Hiya identify numbers previously reported as spam. However, experts caution that no filtering system is perfect because scammers frequently change phone numbers. 

Ultimately, while call-blocking tools can reduce the volume of unwanted calls, maintaining strong account security and being cautious with unknown callers remain the most effective ways to avoid phone-based scams.

Perplexity's Comet AI Browser Tricked Into Phishing Scam Within Four Minutes


Agentic browser at risk

Agentic web browsers, which use AI tools to autonomously carry out tasks across websites on a user's behalf, can be trained against and fooled into phishing attacks. Attackers exploit the browsers' tendency to narrate their actions and turn that output against the same model to defeat its security checks.

According to security expert Shaked Chen, "The AI now operates in real time, inside messy and dynamic pages, while continuously requesting information, making decisions, and narrating its actions along the way. Well, 'narrating' is quite an understatement - It blabbers, and way too much!," The Hacker News reported. Chen calls this behavior "agentic blabbering": the AI browser continuously reveals what it sees, thinks, and plans to do next, and what it deems safe or a threat.

Tricking the browsers

By intercepting the traffic between the browser and the AI services on the vendor's servers and feeding it as input to a Generative Adversarial Network (GAN)-style loop, the researchers made Perplexity's Comet AI browser fall prey to a phishing attack within four minutes.

The research is based on established tactics such as Scamlexity and VibeScamming, which revealed that vibe-coding platforms and AI browsers can be coerced into generating scam pages and performing malicious tasks via prompt injection. 

Attack tactic

The attack surface changes because the AI agent manages tasks without frequent human oversight: a scammer no longer has to trick a user, but instead seeks to deceive the AI model itself.

Chen said, “If you can observe what the agent flags as suspicious, hesitates on, and more importantly, what it thinks and blabbers about the page, you can use that as a training signal.” Chen added that the “scam evolves until the AI Browser reliably walks into the trap another AI set for it."
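The feedback loop Chen describes can be sketched in miniature: the agent's narration is scored for suspicion, and the page is mutated until the agent stops flagging it. Everything here is an illustrative stand-in (the narration stub, the suspicion keywords, the mutation rule), not the researchers' actual tooling.

```python
# Toy model of the adversarial loop: use the agent's "blabbering" as a
# training signal and evolve a page until the agent deems it safe.
SUSPICION_WORDS = ("suspicious", "phishing", "unverified")

def agent_narration(page: str) -> str:
    """Stand-in for the agentic browser's narrated assessment of a page."""
    return "looks suspicious" if "urgent" in page else "looks safe"

def mutate(page: str) -> str:
    """Remove one cue the agent flagged (purely illustrative)."""
    return page.replace("urgent", "friendly reminder", 1)

def train_scam_page(page: str, max_rounds: int = 10) -> str:
    """Iterate until the narration no longer contains suspicion words."""
    for _ in range(max_rounds):
        narration = agent_narration(page)
        if not any(word in narration for word in SUSPICION_WORDS):
            return page      # the agent now accepts the page
        page = mutate(page)  # evolve against the training signal
    return page
```

The point of the sketch is the architecture, not the specifics: any observable self-assessment from the agent can serve as the fitness function for an attacker's optimization loop.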

End goal?

The aim is to build a "scamming machine" that iteratively refines a phishing page until the agentic browser accepts its instructions and carries out the attacker's commands, such as entering the victim's passwords into a malicious web page built for refund scams.

Guardio is concerned about the development, saying that, “This reveals the unfortunate near future we are facing: scams will not just be launched and adjusted in the wild, they will be trained offline, against the exact model millions rely on, until they work flawlessly on first contact.”

ShinyHunters Threatens Data Leak After Alleged Salesforce Breach


The hacking group ShinyHunters has warned roughly 400 companies that it may publish stolen data online if ransom demands are not met. The group claims it accessed private records through websites built on Salesforce Experience Cloud, a platform companies use to create public portals and customer support sites. 

According to earlier findings by cybersecurity firm Mandiant, the attackers targeted organisations that used Salesforce’s Experience Cloud for external-facing services such as help centres and information portals. 

How the breach allegedly happened

The reported intrusion appears linked to the configuration of public access settings within these websites.

Salesforce allows websites built on Experience Cloud to include a “guest user” profile so visitors can view limited information without logging in. 

If these settings are configured too broadly, however, the access permissions can expose internal data to the public internet. Investigations suggest the attackers used a modified version of a tool called Aura Inspector to scan websites for such weaknesses. 

Once vulnerabilities were identified, the hackers were able to extract information including names and phone numbers. Security experts say the stolen data may already be fueling vishing attacks. 

In such scams, attackers contact employees by phone and attempt to trick them into revealing additional confidential information. 

Dispute over the root cause

There is disagreement over whether the problem stems from a software flaw or from how companies configured their systems. Salesforce has said the platform itself remains secure and that the issue is related to customer settings rather than a vulnerability in the product.

“Our investigation to date confirms that this activity relates to a customer-configured guest user setting, not a platform security flaw,” the company said in a blog post. 

ShinyHunters disputes that explanation, claiming it discovered a previously unknown flaw that allows it to bypass certain protections even on sites that appear properly configured. 

Independent researchers have not yet verified that claim.

Pressure tactics used by hackers

ShinyHunters is known for using aggressive extortion strategies to pressure victims into paying ransom demands. The group often releases stolen data in stages to increase pressure on organisations that refuse to negotiate.

A recent example involved Dutch telecommunications provider Odido and its brand Ben. After the company declined to pay a ransom reportedly worth one million euros, the hackers began publishing large quantities of customer data on the dark web. 

Security guidance for companies

Salesforce is urging customers to review their portal configurations and tighten access controls. The company recommends applying a "least privilege" approach, meaning guest users should only have the minimum permissions required to use a site.

Businesses are also advised to keep data private by default, disable settings that expose internal staff information, and turn off public application programming interfaces where possible. 

These interfaces can allow external systems to exchange data and may create additional entry points if left open. 

The incident highlights the growing risks associated with misconfigured cloud services, which security analysts say have become a common target for cybercriminal groups seeking large volumes of corporate data.

AI is Reshaping How Hackers Discover and Exploit Digital Weaknesses



For years, artificial intelligence has been hailed as an engine of innovation, revolutionizing data analysis, business process automation, and strategic decision-making. However, the same capabilities that enable organizations to work more efficiently and effectively are quietly transforming the cyber threat landscape in far less constructive ways.

In the hands of threat actors, artificial intelligence becomes a force multiplier, lowering the barrier to sophisticated attacks dramatically. It is now possible to accomplish tasks once requiring extensive technical expertise, patience, and careful coordination at unprecedented speed and efficiency by utilizing AI-based tools for scanning vast digital environments, analyzing weaknesses, and refining attack strategies in real time. 

As a result of AI-driven tools, cybercriminals are reducing the length of the preparation process to a matter of minutes. Consequently, cyber risk is experiencing a new era in which traditional timelines for detecting, understanding, and responding to threats are rapidly disappearing, leaving organizations unable to keep up with adversaries that are increasingly automated, adaptive, and relentless. 

In recent years, threat intelligence has indicated that this acceleration has become measurable across the global attack landscape rather than merely theoretical. 

Researchers have observed that threat actors are increasingly incorporating generative AI tools into their operational workflows, allowing them to identify and exploit vulnerabilities in corporate infrastructure much faster and more consistently than in the past.

The IBM X-Force Threat Intelligence Index 2026 makes the scale of this shift evident: cyberattacks targeting public-facing applications increased by 44 percent compared with the previous year, according to the report.

Applications such as corporate websites, ecommerce platforms, email gateways, financial portals, APIs, and other externally accessible services have become attractive entry points because they expose complex codebases directly to the Internet and are easy to reach.

Based on the same analysis, vulnerability exploitation is one of the most prevalent methods of gaining access to modern networks. Approximately 40 percent of cyber incidents in 2025 resulted from attackers successfully exploiting previously identified security vulnerabilities before organizations were able to patch them.

Parallel trends indicate the expansion of the cybercrime ecosystem as a whole. It has been reported that the number of active ransomware groups operating globally has nearly doubled during the same period, whereas the number of attacks that have been publicly disclosed has increased by approximately 12 percent. 

As a consequence of these indicators, it appears that the convergence of automated discovery tools, readily available exploit frameworks, and artificial intelligence-assisted reconnaissance is accelerating the speed with which vulnerabilities are disclosed and exploited, increasing the amount of pressure on enterprise security teams already confronted with a complex threat environment. 

Artificial intelligence is rapidly becoming an integral part of cyber operations, altering how vulnerabilities are discovered and addressed within legitimate security practices. These developments are accompanied by an evolution of ethical hacking, long considered a key component of modern defense strategies.

Security researchers are increasingly using advanced machine learning models to speed up tasks that previously required painstaking manual analysis. By processing large volumes of application code, system logs, and network telemetry in seconds, AI-driven tools let defenders detect anomalies and potential security gaps at a scale traditional auditing methods rarely attain.

Several experiments have already demonstrated the practical benefits of this capability. In controlled research environments, AI-powered analysis systems have identified exploitable weaknesses by scanning extensive code repositories, significantly shortening the time required for vulnerability triage and remediation.

Automated security analysis is becoming increasingly important for organizations operating complex digital infrastructure, because threat actors are integrating AI-assisted techniques into their own reconnaissance and development workflows, automating tasks that previously required experienced security researchers.

Polymorphic malware, for example, evades signature-based detection systems by altering its structure each time it executes. Modified large language model toolkits have also been observed in underground forums, marketed as resources for generating malware variants or vulnerability-exploitation scripts.

A parallel effort is underway to build experimental attack frameworks that use artificial intelligence agents to scan open-source repositories, cloud environments, and embedded device firmware for exploitable vulnerabilities. These approaches resemble those employed by legitimate researchers to locate bugs, but the objective is to accelerate intrusion campaigns rather than prevent them.

Another area which is receiving considerable attention is the security of artificial intelligence systems themselves. A growing number of organizations are incorporating AI copilots, automation agents, and data analysis models into their everyday operations, thereby creating new attack surfaces. 

In some cases, hidden instructions embedded within web content or metadata have been consumed by automated AI systems without their operators' knowledge, altering the systems' behavior or triggering unauthorized actions.

Such incidents illustrate the risks of prompt injection and data poisoning, where malicious inputs influence how AI models interpret information or how enterprise systems interact with them.

These vulnerabilities are particularly concerning because they are not necessarily caused by traditional software flaws; instead, they exploit weaknesses in the way AI models process context and instructions. Both industry and regulatory bodies are responding to these developments.
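A first line of defense against prompt injection in web content is screening untrusted text for instruction-like phrases before it reaches an agent. The sketch below is a minimal, illustrative filter; the patterns and function name are assumptions, not a complete or vetted ruleset.

```python
import re

# Illustrative prompt-injection screen: flag phrases in fetched content
# that read like instructions aimed at an AI agent rather than a human.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def flag_suspicious_content(text: str) -> list[str]:
    """Return matched phrases so a human or policy layer can review them."""
    hits = []
    for pattern in INJECTION_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits
```

Pattern matching alone cannot catch paraphrased or obfuscated injections, which is why defenders pair filters like this with privilege limits on what the agent is allowed to do.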

Security frameworks and policy discussions increasingly recognize AI as a dual-purpose technology that can strengthen cyber defenses while also enabling more sophisticated attack techniques.

A number of government agencies, international policing organizations, and leading technology vendors have published guidance on addressing adversarial AI threats, emphasizing that stronger safeguards must be implemented around AI deployments, monitoring mechanisms need to be improved, and standards for model development need to be clearer. 

According to cybersecurity specialists, artificial intelligence should no longer be treated as a marginal or theoretical risk factor. It has already reshaped the tactics used by both defenders and attackers in real-world environments.

To adapt to this environment, enterprise security teams must develop more proactive and automated defensive strategies. A growing number of organizations are evaluating artificial intelligence-assisted "red teaming" capabilities in order to simulate adversarial behavior within controlled environments and identify weaknesses in corporate infrastructure before they can be exploited by external parties. 

A key element of the industry's response is the development of threat intelligence platforms that use machine learning to identify emerging malware patterns and accelerate incident response. It is equally important to design AI systems with security considerations built in from the outset.

As AI-driven tools and automation platforms see wider use, organizations must integrate rigorous auditing processes, secure-by-design development practices, and continuous monitoring to ensure these technologies strengthen digital resilience rather than inadvertently expand the attack surface.

Adversaries are increasingly using artificial intelligence in offensive operations, a practice expected to be refined and expanded as the technology matures. There is now little doubt that AI will feature in cyberattacks; the question is whether defensive capabilities can evolve at a comparable pace.

Organizations that rely on slow remediation cycles, fragmented monitoring, and manual investigative processes risk falling behind attackers capable of automating reconnaissance, vulnerability discovery, and exploit development.

By contrast, security strategies that incorporate continuous visibility, automated analysis, and rapid response mechanisms have proven more resilient in a threat environment defined by speed and scale.

Identifying vulnerabilities and remediating them within a reasonable period of time has rapidly become a critical metric for cyber security. The security industry is responding to this challenge by introducing tools that provide more comprehensive and continuous insight into enterprise environments. 

VulnDetect, an integrated platform that helps IT and security teams stay current on vulnerabilities across endpoint infrastructure, is one example. Unlike traditional asset management tools that track only known or managed software, the platform identifies obsolete, misconfigured, or unmanaged applications that often remain invisible within large enterprise networks. These overlooked assets frequently serve as attractive entry points for attackers conducting automated vulnerability scans.

A system such as VulnDetect is designed to bridge the gap between vulnerability discovery and mitigation by continuously monitoring endpoints and mapping software exposure across the network. By focusing remediation efforts on the weaknesses that present the greatest operational risk, security teams can prioritize actionable intelligence over static inventories. 

Reducing this exposure window matters all the more in an environment where attackers increasingly rely on AI-assisted techniques to identify and exploit weaknesses.

In addition to improving incident response capabilities, this increased visibility across digital infrastructure gives organizations strategic control over their security posture as the cyber threat landscape becomes increasingly automated and unpredictable.

Against this background, cybersecurity professionals increasingly argue that artificial intelligence should be integrated into the defensive architecture as a whole rather than treated as an experimental addition, given that threat actors are adopting automated reconnaissance, adaptive malware development, and AI-assisted exploit discovery.

In order to compete effectively, defensive systems must operate at similar speeds. It is imperative that enterprise environments have greater control over how artificial intelligence models are accessed and integrated, as well as better safeguards to prevent model manipulation or jailbreaks. 

Additionally, behavioural analytics are becoming increasingly integrated into security platforms, allowing defenders to distinguish traditional threats from automated attack campaigns by identifying activity patterns that suggest machine-driven intrusion attempts. 

Furthermore, it is becoming increasingly apparent that no single organization can address these challenges alone. Cybersecurity specialists emphasize that collaboration between private corporations, government agencies, academic researchers, and international security alliances is necessary. 

The layers of technical complexity that artificial intelligence introduces are still being actively studied, and effective responses to its misuse require rapid information sharing and coordinated strategies that cross national boundaries.

In order to counter highly automated threats, defenders can construct adaptive and responsive security postures combining the contextual judgment of experienced security professionals with the analytical capabilities of advanced artificial intelligence systems. 

While AI-assisted cybercrime is becoming increasingly sophisticated, security experts stress that organizations are not without defenses. Many defensive principles already embedded in established cybersecurity frameworks can mitigate these risks.

Rather than seeking entirely new defenses, enterprise leaders must strengthen visibility, governance, and operational discipline around the tools already in place.

Organizations' resilience in an era where cyberattacks are increasingly characterized by intelligent and autonomous technologies may be determined by understanding the extent of the evolving threat landscape and taking proactive measures to enhance modern defensive capabilities.

KadNap Malware Compromises Over 14,000 Edge Devices to Operate Hidden Proxy Botnet

Cybersecurity researchers have identified a previously undocumented malware strain called KadNap that is primarily infecting Asus routers and other internet-facing networking devices. The attackers are using these compromised systems to form a botnet that routes malicious traffic through residential connections, effectively turning infected hardware into anonymous proxy nodes.

The threat was first observed in real-world attacks in August 2025. Since that time, the number of affected devices has grown to more than 14,000, according to investigators at Black Lotus Labs. A large share of infections, exceeding 60 percent, has been detected within the United States. Smaller groups of compromised devices have also been identified across Taiwan, Hong Kong, Russia, the United Kingdom, Australia, Brazil, France, Italy, and Spain.

Researchers report that the malware uses a modified version of the Kademlia Distributed Hash Table (DHT) protocol. This peer-to-peer networking technology enables the attackers to conceal the true location of their infrastructure by distributing communication across multiple nodes. By embedding command traffic inside decentralized peer-to-peer activity, the operators can evade traditional network monitoring systems that rely on detecting centralized servers.

Within this architecture, infected devices communicate with one another using the DHT network to discover and establish connections with command-and-control servers. This design improves the botnet’s resilience, as it reduces the chances that defenders can disable operations by shutting down a single control point.

Once a router or other edge device has been compromised, the system can be sold or rented through a proxy platform known as Doppelgänger. Investigators believe this service is a rebranded version of another proxy operation called Faceless, which previously had links to TheMoon router malware. According to information published on the Doppelgänger website, the service launched around May or June 2025 and advertises access to residential proxy connections in more than 50 countries, promoting what it claims is complete anonymity for users.

Although many of the observed infections involve Asus routers, researchers found that the malware operators are also capable of targeting a wider range of edge networking equipment.

The attack chain begins with the download of a shell script named aic.sh, retrieved from a command server located at 212.104.141[.]140. This script initiates the infection process by connecting the compromised device to the botnet’s peer-to-peer network.

To ensure the malware remains active, the script establishes persistence by creating a cron task that downloads the same script again at the 55-minute mark of every hour. During this process, the file is renamed “.asusrouter” and executed automatically.
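A defender with shell access to a suspect router could check its crontab for the persistence indicators reported above. A minimal sketch, using indicator strings taken from the article (the helper name is illustrative, and a real check would cover system-wide cron locations too):

```python
# Indicators of the KadNap persistence mechanism described in the report:
# an hourly cron job (minute 55) re-downloading aic.sh from the command
# server and running it under the name ".asusrouter".
INDICATORS = (".asusrouter", "aic.sh", "212.104.141.140")

def cron_lines_matching_kadnap(crontab_text: str) -> list[str]:
    """Return crontab lines containing any known KadNap indicator."""
    return [
        line for line in crontab_text.splitlines()
        if any(indicator in line for indicator in INDICATORS)
    ]
```

On most Linux-based routers the relevant text can be pulled with `crontab -l` or read from `/etc/crontabs/`; any hit warrants a full device reset and firmware update rather than just deleting the cron entry.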

After persistence is secured, the script downloads an ELF executable, renames it “kad,” and runs it on the device. This program installs the KadNap malware itself. The malware is capable of operating on hardware that uses ARM and MIPS processor architectures, which are commonly found in routers and networking appliances.

KadNap also contacts a Network Time Protocol (NTP) server to retrieve the current system time and store it along with the device’s uptime. These values are combined to produce a hash that allows the malware to identify and connect with other peers within the decentralized network, enabling it to receive commands or download additional components.

Two additional files used during the infection process, fwr.sh and /tmp/.sose, contain instructions that close port 22, which is the default port used by Secure Shell (SSH). These files also extract lists of command server addresses in IP-address-and-port format, which the malware uses to establish communication with control infrastructure.

According to researchers, the use of the DHT protocol provides the botnet with durable communication channels that are difficult to shut down because its traffic blends with legitimate peer-to-peer network activity.

Further examination revealed that not every infected device communicates with every command server. This suggests the attackers are segmenting their infrastructure, possibly grouping devices based on hardware type or model.

Investigators also noted that routers infected with KadNap may sometimes contain multiple malware infections simultaneously. Because of this overlap, it can be challenging to determine which threat actor is responsible for particular malicious activity originating from those systems.

Security experts recommend that individuals and organizations operating small-office or home-office (SOHO) routers take several precautions. These include installing firmware updates, restarting devices periodically, replacing default administrator credentials, restricting management access, and replacing routers that have reached end-of-life status and no longer receive security patches.

Researchers concluded that KadNap’s reliance on a peer-to-peer command structure distinguishes it from many other proxy-based botnets designed to provide anonymity services. The decentralized approach allows operators to remain hidden while making it significantly harder for defenders to detect and block the network.

In a separate report, security analysts at Cyble disclosed a new Linux malware threat named ClipXDaemon.

The malware targets cryptocurrency users by intercepting wallet addresses that victims copy to their clipboard and secretly replacing them with addresses controlled by attackers. This type of threat is commonly known as clipper malware.

ClipXDaemon is distributed through a Linux post-exploitation framework called ShadowHS and has been described as an automated clipboard-hijacking tool designed specifically for systems running Linux X11 graphical environments.

The malware operates entirely in memory, which reduces traces on disk and improves its ability to remain undetected. It also employs several stealth techniques, including disguising its process names and deliberately avoiding execution in Wayland sessions.

This design choice is intentional: Wayland's security architecture imposes stricter restrictions on clipboard access, and applications usually require explicit user interaction before they can read clipboard contents. By disabling itself when Wayland is detected, the malware avoids triggering errors or suspicious behavior.

Once active in an X11 session, ClipXDaemon continuously checks the system clipboard every 200 milliseconds. If it detects a copied cryptocurrency wallet address, it immediately substitutes it with an attacker-controlled address before the victim pastes the information.

The malware currently targets a wide range of digital currencies, including Bitcoin, Ethereum, Litecoin, Monero, Tron, Dogecoin, Ripple, and TON.
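The copy/paste swap can be caught on the user's side by re-checking the clipboard just before a transaction. A minimal sketch of that check (the address regexes are deliberately simplified and not full validation; function names are illustrative):

```python
import re

# Defensive check against clipper malware: verify the pasted address
# matches what was originally copied. Clippers like ClipXDaemon swap
# wallet addresses in the clipboard between copy and paste.
BTC_RE = re.compile(r"^(bc1|[13])[a-zA-Z0-9]{25,59}$")  # simplified Bitcoin
ETH_RE = re.compile(r"^0x[a-fA-F0-9]{40}$")             # simplified Ethereum

def looks_like_wallet_address(s: str) -> bool:
    """Rough heuristic for Bitcoin- or Ethereum-style addresses."""
    return bool(BTC_RE.match(s) or ETH_RE.match(s))

def clipboard_was_tampered(copied: str, pasted: str) -> bool:
    """True if a wallet address changed between copy and paste."""
    return looks_like_wallet_address(copied) and copied != pasted
```

The same logic is why experts advise manually comparing the first and last several characters of a pasted address against the intended one before confirming any transfer.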

Researchers noted that ClipXDaemon differs significantly from traditional Linux malware families. It does not include command-and-control communication, does not send beaconing signals to remote servers, and does not rely on external instructions to operate.

Instead, the malware generates profits directly by manipulating cryptocurrency transactions in real time, silently redirecting funds when victims paste compromised wallet addresses during transfers.
