
Experts Warn About AI-Assisted Malware Used for Extortion


AI-based Slopoly malware

Cybersecurity experts have disclosed details about a suspected AI-generated malware strain named “Slopoly,” used by the threat actor Hive0163 for financial gain. 

IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” according to The Hacker News.

Hive0163 malware campaign 

Hive0163's attacks are motivated by extortion via large-scale data theft and ransomware. The gang is linked to various malicious tools, including Interlock RAT, NodeSnake, Interlock ransomware, and the Junk fiction loader. 

In a ransomware incident observed in early 2026, the gang was found installing Slopoly during the post-exploitation phase to gain persistent access to the compromised server. 

Slopoly can be traced back to a PowerShell script that may be installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder via a builder. Persistence is achieved via a scheduled task called “Runtime Broker”. 
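For defenders, the reported persistence indicators lend themselves to a simple IOC match. The sketch below is illustrative only: the task name and install path come from the report above, while the function name and input format are hypothetical (e.g., pairs parsed from `schtasks /query /v /fo csv` output on Windows).

```python
# Illustrative IOC check for the persistence indicators reported above.
# Only the task name and install path come from the article; the rest
# of this helper is a hypothetical sketch.

SUSPICIOUS_TASK_NAMES = {"runtime broker"}
SUSPICIOUS_PATH_FRAGMENTS = [r"c:\programdata\microsoft\windows\runtime"]

def flag_scheduled_tasks(tasks):
    """Return tasks whose name or command line matches a known indicator.

    `tasks` is an iterable of (task_name, command_line) pairs.
    """
    hits = []
    for name, command in tasks:
        name_l = name.strip().lower()
        cmd_l = command.lower()
        if name_l in SUSPICIOUS_TASK_NAMES or any(
            frag in cmd_l for frag in SUSPICIOUS_PATH_FRAGMENTS
        ):
            hits.append((name, command))
    return hits
```

Note that “Runtime Broker” is also the name of a legitimate Windows process, which is exactly why the attackers chose it; a real hunt would combine the name match with the script path rather than rely on either alone.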

Experts believe the malware was developed with the help of an as-yet-undetermined large language model (LLM), citing its extensive comments, logging, error handling, and accurately named variables. 

The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework. 

According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”

The PowerShell script works as a backdoor, transmitting system details to a C2 server. There has been a rise in AI-assisted malware in recent times: Slopoly, PromptSpy, and VoidLink all show how threat actors are using AI tools to speed up malware creation and expand their operations. 

IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”

French FICOBA Breach Exposes 1.2M Bank Accounts

 

A major cyberattack struck France's national bank account registry, FICOBA, exposing sensitive data from over 1.2 million accounts. The breach occurred in late January 2026, when hackers stole login credentials from a civil servant and impersonated an authorized user to access the database. This incident highlights vulnerabilities in government systems handling financial records.

FICOBA serves as France's central repository for all bank accounts opened in domestic institutions, storing identifiers like RIB and IBAN numbers, holder names, and postal addresses. Attackers extracted this information but could not access balances or perform transactions, according to officials. The French Ministry of Finance confirmed tax IDs were not compromised, though early reports varied.

Authorities detected the intrusion swiftly, immediately restricting access and taking the database offline temporarily. It was restored with enhanced security measures after collaboration with the National Cybersecurity Agency (ANSSI). A formal complaint was filed with the National Commission for Information Technology and Civil Liberties (CNIL), and notifications are underway to affected individuals and banks.

The exposure raises alarms for phishing scams and SEPA direct debit fraud, with banks already noting increased suspicious SMS and emails. Criminals could exploit IBANs and personal details for identity theft or unauthorized payments. French tax authorities warn they never request banking info via unsolicited messages.

Safety recommendations 

To protect yourself post-breach, monitor bank statements daily for unauthorized activity and enable transaction alerts. Change passwords on financial accounts, using unique strong ones via a password manager, and activate multi-factor authentication (MFA) everywhere possible. Avoid clicking links in unsolicited emails or texts claiming breach notifications—contact your bank directly through official apps or sites.
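If you prefer to generate replacement passwords yourself rather than rely on a manager's built-in generator, a few lines of standard-library Python suffice. This is a generic sketch, not tied to any particular bank or service, and the character set and length are arbitrary choices.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password using the cryptographically
    secure `secrets` module rather than `random`."""
    symbols = "!@#$%^&*-_"
    alphabet = string.ascii_letters + string.digits + symbols
    # Retry until the result contains at least one character of each
    # class, satisfying typical complexity rules on financial sites.
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in symbols for c in pw)):
            return pw
```

The key point is the use of `secrets` (a CSPRNG) instead of the `random` module, whose output is predictable and unsuitable for credentials.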

Further, freeze credit reports if available in your country to block new accounts in your name, and consider credit monitoring services. Report suspicious activity to your bank and local cyber police immediately. Regularly update software and use antivirus tools to prevent credential theft; organizations should also enforce least-privilege access. These steps minimize the risks from exposed data like that in the FICOBA incident.

The Global Cyber Fraud Wave Is Being Supercharged by Artificial Intelligence


 

As the digital threat landscape continues to evolve, organizations are increasingly rethinking how security operations are structured and managed. Given the growing complexity of threats, artificial intelligence is becoming an integral part of modern cyber defense strategies. 

As networks, endpoints, and cloud infrastructures generate large quantities of telemetry, security teams are turning to advanced machine learning models and intelligent analytics to process that data. These systems are able to identify subtle anomalies and behavioral patterns that would otherwise be hidden by conventional monitoring frameworks, allowing for earlier detection of malicious behavior. 
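As a toy illustration of the anomaly-detection idea (not any specific vendor's model), even a simple statistical baseline can flag telemetry values that deviate sharply from recent history:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean.

    A deliberately simple stand-in for the ML models described above:
    real systems learn per-entity baselines over many features, not a
    single global mean. Returns a list of (index, value) pairs.
    """
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mu) > threshold * sigma]
```

For example, twenty quiet readings followed by one huge spike would flag only the spike; production systems extend the same principle with seasonality, per-user baselines, and multivariate features.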

AI is also transforming the efficiency of day-to-day cybersecurity operations. With adaptive algorithms that continually refine their analytical models, tasks that previously required extensive manual oversight, such as log correlation, threat triage, and vulnerability assessment, can now be automated. 

Artificial intelligence allows security professionals to concentrate on more strategic and investigative activities, such as threat hunting and incident response planning, by reducing the operational burden on human analysts. Organizations are facing increasingly sophisticated adversaries who utilize automation and advanced techniques in order to circumvent traditional defenses. 

Additionally, AI can strengthen proactive defense mechanisms by analyzing historical attacks and behavioral indicators. 

Using AI-driven platforms, organizations can detect phishing campaigns in real time using linguistic and contextual analysis as well as flag suspicious activity across distributed environments in advance of emerging attack vectors. This continuous learning capability allows these systems to adapt to changes in the threat landscape, enhancing their accuracy and resilience as new patterns of malicious activity emerge. 

Therefore, artificial intelligence is becoming a strategic asset as well as a defensive necessity, enabling organizations to deal with cyber threats more effectively, efficiently, and adaptably while ensuring the security of critical data and digital infrastructure. 

In the telecommunications sector, fraud has been a persistent operational and security concern for many years, resulting in considerable financial losses and reputational consequences. In order to identify irregular usage patterns and protect subscriber accounts, telecom operators traditionally rely on multilayered monitoring controls and rule-based fraud management systems.

As the industry rapidly expands into adjacent digital services, including mobile payments, digital wallets, and payment service banking, the conventional boundaries that once separated the telecom industry from the financial sector have begun to blur. Increasingly, telecom networks serve as foundational infrastructure for digital transactions, identity verification, and financial connectivity, rather than merely as communication channels. 

This structural shift has significantly increased the attack surface, creating a more complex and interconnected fraud environment in which threats can propagate across multiple digital platforms. At the same time, artificial intelligence is rapidly transforming both how fraud risks emerge and how they are managed. 

Using artificial intelligence-driven automation, sophisticated threat actors are orchestrating highly scalable fraud campaigns, generating convincing phishing messages, deploying social engineering tactics, and probing network vulnerabilities faster than ever before. This capability enables fraudulent schemes to evolve dynamically, adapting more rapidly than traditional detection mechanisms. 

Even so, technological advances are equipping telecommunications providers with more capable defensive tools as well. AI-based fraud detection platforms can process huge volumes of network telemetry and transaction data, correlating signals across communication and payment systems in real time to identify subtle indicators of compromise.

By analyzing behavior patterns, detecting anomalies, and building predictive models, security teams can detect suspicious activity earlier and respond more precisely. The economic implications of telecom-related fraud underscore the need to strengthen these defenses: the telecommunications industry is estimated to have suffered tens of billions of dollars in losses in recent years as a result of large-scale digital exploitation.

In emerging digital economies, this issue is particularly acute, since mobile connectivity is increasingly serving as a bridge to financial inclusion. Fraud incidents that occur on telecommunications networks that support digital banking, mobile money transfers, and online commerce can have consequences that go beyond the service providers themselves.

Interconnected platforms may be subject to a variety of regulatory exposures, operational disruptions, or declining consumer confidence at the same time, affecting both telecommunications and financial services simultaneously. Increasing convergence between communication networks and financial services is shifting telecom operators' responsibilities in light of their role in the digital payment ecosystem. 

In addition to ensuring network reliability, providers are also expected to safeguard financial transactions occurring across their infrastructure as digital payment ecosystems grow. Given how tightly mobile and online banking ecosystems are intertwined, many scams now target the users of these services. 

As a consequence of fraudulent activity occurring in such interconnected systems, it can have cascading effects across multiple organizations, leading to regulatory scrutiny and eroding trust within the entire digital economy. 

The challenge for telecommunications companies is therefore no longer limited to managing network abuse alone; they must build resilient, intelligence-driven fraud prevention frameworks capable of protecting an increasingly complex digital environment. Several industry studies indicate that cyber threat operations are undergoing a significant transformation. 

Attackers are increasingly orchestrating coordinated campaigns that incorporate traditional social engineering techniques with the speed and scale of automated technology. The use of artificial intelligence is now integral to the entire attack lifecycle, from early reconnaissance and target profiling to deceptive communication strategies and operational decision-making.

In the context of everyday business environments, organizations encounter increasingly high-risk interactions with automated systems as AI-powered tools become more accessible. Based on data collected in recent months, it appears that a substantial percentage of enterprise AI interactions involve prompts or requests that raise potential security concerns, demonstrating how the rapid integration of artificial intelligence into corporate workflows presents new opportunities for misappropriation. 

Along with this trend, ransomware ecosystems are also maturing into fragmented and scalable models. It has been observed that the landscape is becoming more characterized by loosely connected networks of specialized operators rather than a few centralized threat groups. 

As a consequence of decentralization, cybercriminals have been able to expand their operations at an exponential rate, increasing both the number of victims targeted and the speed with which campaigns can be executed. 

Moreover, artificial intelligence is helping to streamline target identification, optimize extortion strategies, and automate negotiation and infrastructure management functions. Consequently, a more adaptive and resilient criminal ecosystem has been created that is capable of sustaining persistent global campaigns. 

Social engineering tactics are also embracing a broader array of communication channels than traditional phishing emails. Deception is increasingly coordinated by threat actors across email, web platforms, enterprise collaboration tools, and voice communication channels. Security experts have observed a sharp increase in methods for manipulating user trust by issuing seemingly legitimate technical prompts or support instructions, often encouraging individuals to provide sensitive information or execute commands. 

As a result, phone-based impersonation attacks have evolved into structured intrusion attempts targeting corporate help desks and internal support functions. In the age of cloud-based computing, browsers, software-as-a-service environments, and collaborative digital workspaces, artificial intelligence is becoming part of the critical trust layers that adversaries will attempt to exploit. 

Beyond user-focused attacks, infrastructure-based vulnerabilities are also expanding the threat surface, allowing attackers to blend malicious activity into legitimate network traffic. Edge devices, virtual private network gateways, and internet-connected systems are increasingly being used as covert entry points. 

The lack of oversight of these devices can result in persistent access routes that remain undetected within complex enterprise architectures. Additional risks arise from the infrastructure that supports artificial intelligence itself. As machine learning models, automated agents, and supporting services become integrated into enterprise technology stacks, significant configuration weaknesses have been identified across a wide range of deployments, highlighting potential exposures. 

As a result of these developments, cybersecurity leaders are reconsidering the structure of defensive strategies in an era marked by machine-speed attacks. Analysts have increasingly emphasized that responding to incidents after they occur is no longer sufficient; organizations must design security frameworks that prioritize prevention and resilience from the very beginning. 

To ensure these foundational controls can withstand automated and coordinated attacks, security teams need to reevaluate them across networks, endpoints, cloud platforms, communication systems, and secure access environments. 

Security teams face the challenge of facilitating artificial intelligence adoption without introducing unmanaged risks as it becomes incorporated into daily business processes. Keeping a clear picture of the use of artificial intelligence, both sanctioned and unsanctioned, as well as enforcing policies, is essential to reducing the potential for data leakage and misuse. 

In addition, protecting modern digital workspaces, where human decision-making increasingly intersects with automated technologies, is imperative. Email platforms, web browsers, collaboration tools, and voice systems together form an integrated operating environment that needs to be secured as a single trust domain. 

In addition to strengthening the protection of edge infrastructure, maintaining an accurate inventory of connected devices can assist in reducing the possibility of attackers exploiting hidden entry points. A key component of maintaining resilience against artificial intelligence-driven cyber threats is consistent visibility across hybrid environments that encompass both on-premises infrastructures and cloud platforms along with distributed edge systems. 

By integrating oversight across these layers and prioritizing prevention-focused security models, organizations can reduce operational blind spots and enhance their defenses against rapidly evolving cyber threats. Industry observers emphasize that, under these circumstances, the ability to defend against AI-enabled cyber fraud will be less dependent upon isolated tools and more dependent upon coordinated security architectures. 

Telecommunications and digital service providers are expected to strengthen collaboration across the technological, financial, and regulatory ecosystems, as well as embed intelligence-driven monitoring into every layer of their infrastructure. It is essential to continually model fraud threats, use adaptive security analytics, and tighten governance of emerging technologies to anticipate how fraud tactics evolve as innovations progress. 

By emphasizing proactive risk management and strengthening trust across interconnected digital platforms, organizations can be better prepared to address increasingly automated threats while maintaining the integrity of the rapidly expanding digital economy.

Meta Targets 150K Accounts in Southeast Asia Scam Operation

 



Meta announced that it has removed more than 150,000 accounts tied to organized scam centers operating in Southeast Asia, describing the move as part of a large international effort to disrupt coordinated online fraud networks.

The enforcement action was carried out with assistance from authorities in several countries. Law enforcement agencies and government partners involved in the operation included officials from Thailand, the United States, the United Kingdom, Canada, South Korea, Japan, Singapore, the Philippines, Australia, New Zealand, and Indonesia. According to Meta, the joint effort resulted in 21 individuals being arrested by the Royal Thai Police.

This latest crackdown builds on an earlier pilot initiative launched in December 2025. During that initial phase, Meta removed approximately 59,000 accounts, Pages, and Groups from its platforms that were connected to similar fraudulent activity. The earlier investigation also led to the issuance of six arrest warrants by authorities.

In a statement explaining the action, Meta said that online scams have grown increasingly complex and organized over recent years. Criminal networks, often operating from countries such as Cambodia, Myanmar, and Laos, have established large scam compounds that function in many ways like organized business operations. These groups typically use structured teams, scripted communication strategies, and digital tools designed to evade detection while targeting victims on a global scale. According to the company, the impact of such scams extends far beyond financial loss, as they can severely disrupt lives and weaken trust in digital communication platforms.

Alongside the enforcement action, Meta also announced several new safety features aimed at helping users identify and avoid scam attempts.

One of these tools introduces new warning messages on Facebook that notify users when they receive communication from accounts that display characteristics commonly linked to fraudulent activity. Another safeguard has been introduced on WhatsApp to address a tactic used by scammers who attempt to persuade users to scan a QR code. If successful, this method can link the attacker’s device to the victim’s WhatsApp account, allowing them to access messages and impersonate the account holder. Meta said its system will now notify users when suspicious device-linking requests are detected.

The company is also expanding scam detection on Messenger. When a conversation with a new contact begins to resemble known fraud patterns, such as questionable job opportunities or requests that appear unusual, the platform may prompt users to share recent messages so that an artificial intelligence system can evaluate whether the interaction matches known scam behavior.

Meta also disclosed broader enforcement statistics related to scams on its platforms. Throughout 2025, the company removed more than 159 million advertisements that violated its policies related to fraud and deception. In addition, it disabled approximately 10.9 million Facebook and Instagram accounts that investigators linked to organized scam centers.

To further address fraudulent activity, the company said it plans to expand its advertiser verification program. The goal of this measure is to increase transparency by confirming the identities of advertisers and reducing the ability of malicious actors to misrepresent themselves while running advertisements.

The announcement comes at a time when governments are intensifying efforts to address online fraud. The UK Government recently introduced a new Online Crime Centre designed to focus specifically on cybercrime, including scams connected to organized fraud operations operating in regions such as Southeast Asia, West Africa, Eastern Europe, India, and China.

The centre will bring together specialists from several sectors, including government agencies, law enforcement, intelligence services, financial institutions, mobile network providers, and major technology companies. The initiative is expected to begin operations next month.

The project forms part of the United Kingdom’s broader Fraud Strategy 2026–2029, a policy framework aimed at strengthening the country’s response to fraud and financial crime. As part of this strategy, authorities plan to use artificial intelligence to detect emerging scam patterns, identify suspicious bank transfers more quickly, and deploy “scam-baiting” chatbots designed to interact with fraudsters in order to gather intelligence.

Officials said the new centre, supported by more than £30 million in funding, will focus on identifying the digital infrastructure used by organized crime groups. This includes tracking fraudulent accounts, websites, and phone numbers used in scam operations. Authorities aim to shut down these resources at scale by blocking scam messages, freezing financial accounts linked to criminal activity, removing fraudulent social media profiles, and disrupting scam networks at their source.

Google API Keys Expose Gemini AI Data via Leaked Credentials

 

Google API keys, once considered harmless when embedded in public websites for services like Maps or YouTube, have turned into a serious security risk following the integration of Google's Gemini AI assistant. Security researchers at Truffle Security uncovered this issue, revealing that nearly 3,000 live API keys—prefixed with "AIza"—are exposed in client-side JavaScript code across popular sites.

Truffle Security's scan of the November 2025 Common Crawl dataset, which captures snapshots of major websites, identified 2,863 active keys from diverse sectors including finance, security firms, and even Google's own infrastructure. These keys, deployed sometimes years ago (one traced back to February 2023), were originally safe as mere billing identifiers but gained unauthorized access to Gemini endpoints without developers' knowledge. Attackers can simply copy a key from page source, authenticate to Gemini, and extract sensitive data like uploaded files, cached contexts, or datasets via simple prompts.
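The "AIza" prefix makes such keys easy to find with a pattern scan, which is essentially what secret scanners like TruffleHog do. The sketch below uses the widely documented Google API key shape; treating the exact 39-character pattern as a fixed rule is a simplifying assumption, and the function name is hypothetical.

```python
import re

# Commonly used detection heuristic for Google API keys: the literal
# prefix "AIza" followed by 35 URL-safe characters (39 total).
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list[str]:
    """Return candidate Google API keys found in a blob of text,
    e.g. a page's client-side JavaScript."""
    return GOOGLE_KEY_RE.findall(text)
```

Finding a match only tells you a key is exposed, not what it can reach; auditing the key's enabled APIs and restrictions in the Google Cloud console is the necessary follow-up.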

The danger extends beyond data theft to massive financial abuse, as Gemini API calls consume tokens that rack up charges—potentially thousands of dollars daily per compromised account, depending on the model and context window. Truffle Security demonstrated this by querying the /models endpoint with exposed keys, confirming access to private Gemini features. One reported case highlighted an $82,314 bill from a stolen key, underscoring the real-world impact.

Google acknowledged the flaw as "single-service privilege escalation" after Truffle's disclosure on November 21, 2025, and implemented fixes by January 2026, including blocking leaked keys from Gemini access, defaulting new AI Studio keys to Gemini-only scope, and sending proactive leak notifications. Despite these measures, the "retroactive privilege expansion" caught many off-guard, as enabling Gemini in projects silently empowered old keys.

Developers must immediately audit Google Cloud projects for Gemini API enablement, rotate all exposed keys, and restrict scopes to essentials—avoiding the default "unrestricted" setting. Tools like TruffleHog can scan code repositories for leaks, while regular monitoring prevents future exposures in an era where AI services amplify API risks. This incident highlights the need for vigilance as cloud features evolve.

Silent Scam Calls Used to Verify Active Phone Numbers, Cybersecurity Experts Warn

 

Many people have answered calls from unfamiliar numbers only to hear silence on the other end. In some cases, no one speaks at all. In others, there is a short delay before a caller finally responds. While this may appear to be a simple mistake or a wrong number, cybersecurity experts say these calls are often part of a deliberate scam tactic used to verify active phone numbers. 

According to security specialists, these silent calls function as a form of automated reconnaissance. Fraud operations run large-scale calling systems that dial thousands of numbers to determine which ones belong to real people. When someone answers, the system confirms that the number is active and marks it as a potential target for future scams. 

Keeper Security Chief Information Security Officer Shane Barney explained that such calls are rarely accidental. Instead, they help attackers filter out inactive numbers before investing more time and resources into scams. Verified contact information has value in modern cybercrime networks, where data about reachable individuals can be bought, sold, and reused across different fraud campaigns. 

Once a phone number is confirmed as active, it may be used in several ways. In some cases, scammers follow up with phishing calls or messages designed to trick victims into revealing personal or financial information. In more advanced attacks, a verified phone number could be combined with leaked email addresses from data breaches or used in schemes such as SIM-swap fraud, where attackers attempt to gain control of a victim’s mobile account. 

Another variation occurs when callers respond only after a brief pause. This delay is typically caused by predictive dialing systems that automatically place large volumes of calls. These systems detect when a human answers and then route the call to a live operator. The short silence represents the time it takes for the system to transfer the connection. 

Some people also worry that speaking during these calls could allow scammers to clone their voice using artificial intelligence. While voice cloning technology exists, experts say creating a convincing replica generally requires longer and clearer audio samples than a brief greeting. 

However, voice cloning could still become part of larger scams if criminals already possess other personal details about a victim. Security professionals recommend simple precautions when receiving suspicious calls. If an unknown number produces silence, hanging up immediately is usually the safest option. 

Another tactic is answering without speaking, which prevents automated systems from detecting a human voice. Spam-filtering tools can also help reduce nuisance calls. Applications such as Truecaller, RoboKiller, and Hiya identify numbers previously reported as spam. However, experts caution that no filtering system is perfect because scammers frequently change phone numbers. 

Ultimately, while call-blocking tools can reduce the volume of unwanted calls, maintaining strong account security and being cautious with unknown callers remain the most effective ways to avoid phone-based scams.
