Meta to Discontinue End-to-End Encrypted Chats on Instagram Come May 2026

 



Meta Platforms has confirmed that it will remove support for end-to-end encrypted messaging in Instagram direct messages beginning May 8, 2026. After this date, conversations that previously relied on this encryption feature will no longer be protected by the same privacy mechanism.

According to guidance published in the platform’s support documentation, users whose conversations are affected will receive instructions explaining how to download messages or media files they want to retain. In some situations, individuals may also need to install the latest version of the Instagram application before they can export their chat history.  

When asked about the decision, Meta stated that encrypted messaging on Instagram saw limited adoption. The company explained that only a small percentage of users chose to enable end-to-end encryption within Instagram direct messages. Meta also pointed out that people who want encrypted communication can still use the feature on WhatsApp, where end-to-end encryption is already widely used.


How Instagram Encryption Was Introduced

Instagram’s encrypted messaging capability was originally introduced as part of a broader push by Meta to transform its messaging ecosystem. In 2021, Meta CEO Mark Zuckerberg outlined a “privacy-focused” strategy for social networking that aimed to shift communication toward private and secure messaging environments. 

Within that initiative, Meta began experimenting with encrypted direct messages on Instagram. However, the feature never became the default setting for users. Instead, it remained an optional capability available only in certain regions and had to be manually activated within specific conversations.

The tool also gained relevance during geopolitical tensions. Shortly after the outbreak of the Russia-Ukraine conflict in early 2022, Meta expanded access to encrypted direct messages for adult users in both Russia and Ukraine. The company said the move was intended to provide safer communication channels during the early phase of the war.


Industry Debate Over Encrypted Messaging

The decision to discontinue Instagram’s encrypted chats comes amid a broader debate in the technology sector about whether strong encryption improves or complicates online safety.

Recently, the social media platform TikTok said it currently has no plans to introduce end-to-end encryption for its messaging system. The company told the BBC that such technology could reduce its ability to monitor harmful activity and protect younger users from abuse.

End-to-end encryption is widely regarded by cybersecurity experts as one of the strongest ways to secure digital communication. When this technology is used, messages are encrypted on the sender’s device and can only be decrypted by the recipient. This means that even the platform hosting the conversation cannot read the message contents during transmission. 

Because of this design, encrypted systems can protect users from surveillance, data interception, or unauthorized access by third parties. Many messaging services, including WhatsApp and Signal, rely on similar encryption models to secure billions of conversations globally.
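The mechanics can be illustrated with a short Python sketch using the PyNaCl library. This is a toy example of the general public-key model described above, not how Instagram's feature was implemented; production messengers layer far more machinery, such as Signal-style ratcheting, on top of primitives like this.

```python
# Toy illustration of end-to-end encryption with PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; private keys never
# leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts on her device using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at noon")

# The platform relaying `ciphertext` sees only opaque bytes; it cannot
# read the message. Only Bob's private key can open it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"see you at noon"
```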


Law Enforcement Concerns

Despite its privacy advantages, encryption has long been controversial among law enforcement agencies and child-safety advocates. Critics argue that encrypted messaging makes it harder for technology companies to detect criminal behavior such as terrorism recruitment or the distribution of child sexual abuse material.

Authorities describe this challenge as the “Going Dark” problem, referring to situations where investigators cannot access message content even when they obtain legal warrants. Policymakers have repeatedly warned that widespread encryption could reduce the ability of platforms to cooperate with criminal investigations.

Internal documents previously reported by Reuters indicated that some Meta executives had raised similar concerns internally. In discussions dating back to 2019, company officials warned that widespread encryption could limit the company’s ability to identify and report illegal activity to law enforcement authorities. 


Regulatory Pressure and Future Policy

The global policy debate around encryption is still evolving. The European Commission is expected to release a technology roadmap on encryption later this year. The initiative aims to explore ways to give investigators lawful access to encrypted data while preserving cybersecurity protections and civil liberties.


A Changing Messaging Strategy

Meta’s decision to remove encrypted messaging from Instagram highlights the complex trade-offs technology companies face when balancing privacy protections with safety monitoring and regulatory expectations.

While encryption remains a cornerstone of messaging on WhatsApp and has expanded across other platforms, the rollback on Instagram suggests that adoption rates, platform design, and policy pressures can influence whether such security features remain viable.

For Instagram users who relied on encrypted chats, the upcoming change means reviewing conversations before May 2026 and exporting any information they wish to keep before the feature is officially retired.

CISA Reveals New Details on RESURGE Malware Exploiting Ivanti Zero-Day Vulnerability

 

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has published fresh technical insights into RESURGE, a malicious implant leveraged in zero-day attacks targeting Ivanti Connect Secure appliances through the vulnerability tracked as CVE-2025-0282.

The latest advisory highlights the implant’s ability to remain undetected on affected systems for extended periods. According to CISA, the malware employs advanced network-level evasion and authentication mechanisms that allow attackers to maintain hidden communication channels with compromised devices.

CISA first reported the malware on March 28 last year, noting that it can persist even after system reboots. The implant is capable of creating web shells to harvest credentials, generating new accounts, resetting passwords, and escalating privileges on affected systems.

Security researchers at incident response firm Mandiant revealed that the critical CVE-2025-0282 flaw had been actively exploited as a zero-day vulnerability since mid-December 2024. The campaign has been linked to a China-associated threat actor identified internally as UNC5221.

Network-level evasion techniques

In the updated bulletin, CISA shared additional technical details about the implant. The malware is a 32-bit Linux shared object file named libdsupgrade.so that was recovered from a compromised Ivanti device.

RESURGE functions as a passive command-and-control (C2) implant with multiple capabilities, including rootkit, bootkit, backdoor, dropper, proxying, and tunneling functions.

Unlike typical malware that regularly sends signals to its command server, RESURGE remains idle until it receives a specific inbound TLS connection from an attacker. This behavior helps it avoid detection by traditional network monitoring systems.

When loaded within the ‘web’ process, the implant intercepts the ‘accept()’ function to inspect incoming TLS packets before they reach the web server. It searches for particular connection patterns originating from remote attackers using a CRC32 TLS fingerprint hashing method.

If the fingerprint does not match the expected pattern, the traffic is redirected to the legitimate Ivanti server. CISA also explained that the attackers rely on a fake Ivanti certificate to confirm that they are interacting with the malware implant rather than the genuine web server.
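CISA's advisory does not publish the implant's exact matching logic, but the gating behavior it describes can be sketched conceptually in Python. The fingerprint constant, byte window, and handler stubs below are hypothetical placeholders, not real indicators.

```python
import socket
import zlib

EXPECTED_FINGERPRINT = 0x1A2B3C4D  # hypothetical hard-coded CRC32 value

def looks_like_operator(first_bytes: bytes) -> bool:
    # Hash a slice of the inbound TLS ClientHello and compare it against
    # the hard-coded value; anything else is treated as normal traffic.
    return zlib.crc32(first_bytes) == EXPECTED_FINGERPRINT

def dispatch(conn: socket.socket) -> None:
    # Peek at the handshake without consuming it, mirroring how the
    # implant inspects packets before the real web server sees them.
    hello = conn.recv(64, socket.MSG_PEEK)
    if looks_like_operator(hello):
        ...  # implant takes over: certificate check, then mutual TLS
    else:
        ...  # hand the connection to the legitimate Ivanti web server
```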

The agency noted that the forged certificate is used strictly for authentication and verification purposes and does not encrypt communication. However, it also helps attackers evade detection by impersonating the legitimate Ivanti service.

Because the fake certificate is transmitted over the internet without encryption, CISA said defenders can potentially use it as a network signature to identify ongoing compromises.

Once the fingerprint verification and authentication steps are completed, attackers establish encrypted remote access to the implant through a Mutual TLS session secured with elliptic curve cryptography.

"Static analysis indicates the RESURGE implant will request the remote actors' EC key to utilize for encryption, and will also verify it with a hard-coded EC Certificate Authority (CA) key," CISA says.

By disguising its traffic to resemble legitimate TLS or SSH communications, the implant maintains stealth while ensuring long-term persistence on compromised systems.

Additional malicious components

CISA also examined another file, a variant of the SpawnSloth malware named liblogblock.so, which is embedded within the RESURGE implant. Its primary role is to manipulate system logs to conceal malicious activities on infected devices.

A third analyzed component, called dsmain, is a kernel extraction script that incorporates the open-source script extract_vmlinux.sh along with the BusyBox collection of Unix/Linux utilities.

The script enables the malware to decrypt, alter, and re-encrypt coreboot firmware images while modifying filesystem contents to maintain persistence at the boot level.

“CISA’s updated analysis shows that RESURGE can remain latent on systems until a remote actor attempts to connect to the compromised device,” the agency notes. Because of this, the malicious implant "may be dormant and undetected on Ivanti Connect Secure devices and remains an active threat."

To address the risk, CISA recommends that administrators review the updated indicators of compromise (IoCs) provided in the advisory to identify potential RESURGE infections and remove the malware from affected Ivanti systems.

Researchers Investigate AI Models That Can Interpret Fragmented Cognitive Signals


 

For decades, the human brain has remained one of the most complex and least understood systems in science. Advances in brain-imaging technology have enabled researchers to observe neural activity in stunning detail, showing how different areas of the brain light up when a person listens, speaks, or processes information. The causes of these patterns, however, are not yet fully understood.

Intricate waves of electrical signals and shifting clusters of activity show that the brain is working, but the deeper question of how those signals translate into meaning remains largely unresolved. Historically, neuroscientists, linguists, and psychologists have struggled to explain how the brain transforms words into coherent thoughts.

Recent developments at the intersection of neuroscience and artificial intelligence are beginning to alter this picture. As detailed recordings of brain activity are analyzed with advanced deep learning techniques, researchers are uncovering patterns suggesting that the human brain may interpret language in a manner similar to modern artificial intelligence models.

Rather than applying rigid grammatical rules alone, the brain appears to build meaning gradually as speech unfolds, layering context and interpretation along the way. This emerging perspective offers insight into the mechanisms of human comprehension and may ultimately alter how scientists study language, cognition, and the neural foundations of thought.

The implications of this emerging understanding are already being explored in experimental clinical settings. In one such study, researchers worked with a participant who had lived with severe speech impairments for nearly two decades following a stroke. Although she remained physically still, with her subtle breathing rhythm the only visible movement, complex neural activity was unfolding beneath the surface.

As she imagined speaking, words appeared on a nearby screen, gradually combining into complete sentences she could not convey aloud. The participant, a 52-year-old identified as T16, had been implanted with a small array of electrodes in the frontal regions of her brain responsible for language planning and motor speech control.

A deep-learning system analyzed these signals and translated them into written text in near-real time as she mentally rehearsed words. As part of a broader investigation conducted by Stanford University, the same experimental framework was applied to additional volunteers with amyotrophic lateral sclerosis, a neurodegenerative condition.

By integrating high-resolution neural recordings with machine learning models capable of recognizing complex activity patterns, the system attempted to reconstruct intended speech directly from the recorded brain signals.
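The Stanford system's actual architecture is not described here, but conceptually the decoding step can be framed as a sequence model mapping windows of multi-electrode features to per-timestep character probabilities. The PyTorch sketch below is an illustrative assumption from top to bottom: the channel count, layer sizes, and character set are all invented for the example.

```python
import torch
import torch.nn as nn

N_CHANNELS = 128  # hypothetical electrode-derived features per time step
N_CHARS = 29      # e.g. a-z, space, apostrophe, CTC blank (assumed)

class NeuralSpeechDecoder(nn.Module):
    """Maps windows of neural features to per-timestep character logits."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, 256, num_layers=2, batch_first=True)
        self.head = nn.Linear(256, N_CHARS)

    def forward(self, x):          # x: (batch, time, channels)
        hidden, _ = self.rnn(x)
        return self.head(hidden)   # (batch, time, N_CHARS)

decoder = NeuralSpeechDecoder()
window = torch.randn(1, 50, N_CHANNELS)   # 50 time steps of fake features
print(decoder(window).shape)              # torch.Size([1, 50, 29])
```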

Even though the approach is still at an experimental stage, it represents a significant step in brain-computer interface research aimed at converting internal speech into readable language, bringing researchers closer to technologies that may one day allow individuals who have lost the ability to speak to communicate again.

Neural decoding research is also advancing beyond speech reconstruction. A recent experiment at the Communication Science Laboratories of NTT, Inc. in Japan demonstrated that visual thoughts can be converted into written descriptions using a technique known as “mind captioning”. Unlike earlier brain-computer interfaces that required participants to attempt or imagine speaking, this approach interprets neural activity related to perception and memory.

The system can produce textual descriptions from patterns in brain signals, offering a glimpse into how internal visual experiences can be translated into language without any physical communication. The method combines functional magnetic resonance imaging with advanced language modeling techniques.

Functional MRI measures subtle changes in blood flow throughout the brain, enabling researchers to map neural responses as participants watch video footage and later recall those same scenes. A pretrained language model is then used to generate semantic representations from these neural patterns: numerical structures that encode relationships between concepts, objects, and actions.

This semantic layer acts as an intermediary between raw brain activity and linguistic expression. The decoding model aligns observed neural signals with these semantic structures, and an artificial intelligence language model gradually refines the resulting text so that it reflects the meaning implicit in the recorded brain activity.
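A drastically simplified sketch of such a pipeline, with random arrays standing in for real fMRI recordings and caption embeddings, might look like the following. All shapes and candidate captions are invented for illustration; NTT's actual models are far more elaborate.

```python
# Sketch: map fMRI voxel patterns into a semantic embedding space with a
# linear model, then pick the nearest candidate caption.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_trials, n_voxels, embed_dim = 200, 5000, 64

# Training data: voxel responses paired with the caption embeddings a
# pretrained language model would produce for the viewed clips.
X_train = rng.normal(size=(n_trials, n_voxels))
Y_train = rng.normal(size=(n_trials, embed_dim))

decoder = Ridge(alpha=10.0).fit(X_train, Y_train)

# Decoding: project a new brain pattern into embedding space, then select
# the candidate description whose embedding lies closest.
candidates = ["a person opens a door", "a dog runs on a beach"]
candidate_embeddings = rng.normal(size=(len(candidates), embed_dim))

predicted = decoder.predict(rng.normal(size=(1, n_voxels)))
best = cosine_similarity(predicted, candidate_embeddings).argmax()
print(candidates[best])
```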

Experimental trials showed that the system often described short video clips in a way that captured the overall context, including interactions between individuals, objects, and environments. Even when it misidentified a specific object, it frequently preserved the relationships or actions occurring in the scene, indicating that the model was interpreting conceptual patterns rather than simply retrieving memorized phrases.

Notably, the process does not depend primarily on the brain's conventional language-processing regions. Instead, it constructs meaningful descriptions from neural signals originating in areas involved in visual perception and conceptual understanding. The implications of this technology extend well beyond experimental neuroscience.

Systems that can translate perceptual or imagined experiences into language could open new modes of communication for people with severe neurological conditions such as paralysis, aphasia, or degenerative diseases affecting speech. At the same time, the possibility of using technology to deduce internal mental content from neural data raises complex ethical issues.

In the future, when it becomes easier to interpret brain activity, researchers and policymakers will need to consider how privacy, consent, and cognitive autonomy can be protected in an environment in which thoughts can, under certain conditions, be decoded. 

Increasingly sophisticated systems that can interpret neural signals and restore aspects of human thought are presenting researchers and ethicists with broader questions about how artificial intelligence may change the nature of human knowledge. 

According to scholars, if algorithmic systems are increasingly used as default intermediaries for information, understanding could gradually shift from direct human reasoning to automated interpretation.

In this scenario, the traditional qualities of human judgement - context awareness, critical doubt, ethical reflection, and interpretive nuance - may be eclipsed by the efficiency and speed of machine-generated responses. Some analysts worry that this shift could create a new form of epistemic divide.

On one side would be individuals who continue to cultivate the cognitive discipline needed to build knowledge through sustained attention, reflection, and analysis. On the other would be those whose thinking processes are increasingly mediated by digital systems that provide answers on demand.

The latter approach is beneficial in many contexts: it can improve productivity and speed up problem solving. Over time, however, overreliance on external computational tools may weaken the underlying habits of independent inquiry.

The implications would likely extend far beyond academic environments, influencing who is capable of managing complex decisions, evaluating conflicting information, or generating truly original ideas rather than relying on pattern predictions generated by algorithms.

Despite these concerns, experts emphasize that the most appropriate response to artificial intelligence is not rejection, but carefully designed social and systemic practices that maintain human cognitive agency. Educators, institutions, and policymakers will likely need to deliberately reintroduce the intellectual effort that sustains deep thinking, even as automated information retrieval and analytical tools strip friction from the process.

Learning environments, for example, can encourage individuals to exercise independent problem-solving skills before consulting digital tools, and can evaluate performance using methods that emphasize reasoning, revision, and reflection. The distinction between genuine understanding and mere information retrieval is particularly relevant here.

Retrieval systems can deliver information instantly, but true understanding requires explaining concepts, applying them to unfamiliar situations, and critically examining the assumptions they rest on. These implications are particularly significant for younger generations, whose cognitive habits are still developing.

Researchers increasingly emphasize the importance of activities that build concentration and independent thought: reading for sustained periods, writing without assistance, solving complex problems, and composing creative works that require patience and focus. In an environment where information is almost effortless to access, such activities serve as essential forms of cognitive training.

As neural decoding technologies and artificial intelligence-assisted cognition progress, preserving the human capacity for deliberate thought may ultimately prove just as important as the technological breakthroughs themselves. Without that balance, the question is not whether intelligence will diminish, but whether individuals will gradually lose control over the process by which their own thoughts are formed.

The future trajectory of neural decoding and AI-assisted cognition will be shaped both by technological advancement and by the frameworks that guide its application.

As the ability to interpret brain activity becomes more refined, researchers, clinicians, and policymakers will be required to develop clear safeguards that protect mental privacy while ensuring the technology serves a legitimate scientific or medical purpose. 

A comprehensive governance system, transparent research standards, and ethical oversight will play a central role in determining the integration of such tools into society. If neural interfaces and artificial intelligence-driven interpretation systems are developed responsibly, they can transform communication for patients with severe neurological impairments and provide greater insight into human behavior. 

In addition, it remains essential to maintain a clear boundary between assistance and intrusion, to ensure that advancements in decoding the brain ultimately enhance human autonomy rather than compromise it.

Chinese Threat Actors Attack Southeast Asian Military Targets via Malware


A China-linked cyber espionage campaign has been targeting military organizations in Southeast Asia. The state-sponsored campaign started in 2020.

Palo Alto Networks Unit 42 has been tracking the campaign under the name CL-STA-1087, where CL stands for cluster and STA for suspected state-backed motivation.

According to security experts Yoav Zemah and Lior Rochberger, “The activity demonstrated strategic operational patience and a focus on highly targeted intelligence collection, rather than bulk data theft. The attackers behind this cluster actively searched for and collected highly specific files concerning military capabilities, organizational structures, and collaborative efforts with Western armed forces.”

About the campaign

The campaign shows traces commonly linked with APT campaigns, such as defense evasion tactics, tailored delivery methods, custom payload deployment, and stable operational infrastructure to sustain access to hacked systems.

MemFun and AppleChris

Threat actors used tools such as backdoors called MemFun and AppleChris, and a credential harvester called Getpass. Experts found the hacking tools after observing malicious PowerShell execution in which a script entered a sleep state and then spawned reverse shells to an attacker-controlled C2 server. The exact initial access vector remains unknown.

About the attack sequence

The compromise sequence deploys different versions of AppleChris across victim endpoints and moves laterally to avoid detection. The hackers were also found searching for joint military activities, detailed assessments of operational capabilities, and official meeting records. The experts said that the “attackers showed particular interest in files related to military organizational structures and strategy, including command, control, communications, computers, and intelligence (C4I) systems.”

MemFun and AppleChris are designed to access a shared Pastebin account that serves as a dead-drop resolver, retrieving the real C2 address in Base64-encoded form. One AppleChris variant also depends on Dropbox, kept as a backup option, to fetch the C2 details in the same way. Installed via DLL hijacking, AppleChris contacts the C2 server to receive commands for drive enumeration and related tasks.
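The dead-drop pattern Unit 42 describes is straightforward to illustrate. In the hedged Python sketch below, the paste URL is a hypothetical placeholder; the point is that the binary ships with no hard-coded C2 address, only a pointer to a public service.

```python
import base64
import urllib.request

# Illustrative sketch of a dead-drop resolver: fetch a public paste and
# Base64-decode its body to recover the real C2 address. The paste ID is
# a made-up placeholder, not an actual indicator from this campaign.
PASTE_URL = "https://pastebin.com/raw/XXXXXXXX"  # hypothetical

def resolve_c2(url: str = PASTE_URL) -> str:
    # Download the paste body exactly as a client would.
    with urllib.request.urlopen(url) as resp:
        encoded = resp.read().strip()
    # The paste holds only an encoded address, keeping the real
    # infrastructure out of the malware binary itself.
    return base64.b64decode(encoded).decode("utf-8")
```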

According to Unit 42, “To bypass automated security systems, some of the malware variants employ sandbox evasion tactics at runtime. These variants trigger delayed execution through sleep timers of 30 seconds (EXE) and 120 seconds (DLL), effectively outlasting the typical monitoring windows of automated sandboxes.”

Debunking the Myth of “Military‑Grade” Encryption

 

Military-grade encryption sounds impressive, but in reality it is mostly a marketing phrase used by VPN providers to describe widely available, well‑tested encryption standards like AES‑256 rather than some secret military‑only technology. The term usually refers to the Advanced Encryption Standard with a 256‑bit key (AES‑256), a symmetric cipher adopted as a US federal standard in 2001 to replace the older Data Encryption Standard. 

AES turns readable data into random‑looking ciphertext using a shared key, and the 256‑bit key length makes brute‑force attacks computationally infeasible for any realistic adversary. Because the same key is used for both encryption and decryption, AES is paired with slower asymmetric algorithms such as RSA during the VPN handshake so the symmetric key can be exchanged securely over an untrusted network. Once that key is agreed, your traffic flows efficiently using AES while still benefiting from the secure key exchange provided by public‑key cryptography.
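The division of labor can be sketched with Python's third-party cryptography package. Real VPN protocols typically negotiate the session key with Diffie-Hellman-style exchanges rather than direct RSA key wrapping, but the hybrid principle, an asymmetric handshake plus fast symmetric bulk encryption, is the same.

```python
# Minimal hybrid-encryption sketch (pip install cryptography): RSA protects
# the exchange of a symmetric key, AES-256-GCM encrypts the actual traffic.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term RSA key pair (generated once).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: create a fresh 256-bit AES key and wrap it with RSA-OAEP.
aes_key = AESGCM.generate_key(bit_length=256)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(aes_key, oaep)

# Sender: bulk data flows through fast symmetric AES-GCM.
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"example VPN payload", None)

# Recipient: unwrap the AES key with the RSA private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"example VPN payload"
```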

Calling this setup “military‑grade” is misleading because it implies special, restricted technology, when in fact AES‑256 is an open, publicly documented standard used by governments, banks, corporations, and everyday internet services alike. Any competent developer can implement AES‑256, and your browser and many apps already rely on it to protect logins and other sensitive data as it traverses the internet. In practical terms, the same class of algorithm that safeguards classified government communications also secures routine tasks like online banking or cloud storage. VPN marketing leans on the phrase because “AES‑256 with a 256‑bit key” means little to non‑experts, while “military‑grade” instantly conveys strength and trustworthiness.

Strong encryption is not overkill reserved for spies; it matters for everyday users whose online activity constantly generates data trails across sites and apps. That information is monetized for targeted advertising and exposed in breaches that can enable phishing, identity theft, or other fraud, even if you believe you have nothing to hide. Location histories, financial records, and health details are all highly sensitive, and the risks are even greater for journalists, activists, or people living under repressive regimes where surveillance and censorship are common. For them, robust encryption is essential, often combined with obfuscation and multi‑hop VPN chains to conceal VPN usage and add layers of protection if an exit server is compromised.

Ultimately, a VPN without strong encryption offers little real security, whether you are using public Wi‑Fi or simply trying to keep your ISP and advertisers from building detailed profiles about you. AES‑256 remains a widely trusted choice, but modern VPNs may also use alternatives like ChaCha20 in protocols such as WireGuard, which, although not a NIST standard, has been thoroughly audited and is considered secure. The important point is not the “military‑grade” label but whether the service implements proven, well‑reviewed cryptography correctly and combines it with privacy‑preserving features that match your threat model.

Shadow AI Risks Rise as Employees Use Generative AI Tools at Work Without Oversight

 

Artificial intelligence, once limited to labs, now appears routinely inside office software, with a speed that has surprised even experts. Because uptake grows faster than oversight, the question for companies is less who uses AI and more how safely it runs.

Research referenced by security specialists suggests that roughly 83 percent of UK workers frequently use generative artificial intelligence for everyday duties - finding data, condensing reports, creating written material. Because tools including ChatGPT simplify repetitive work, efficiency gains emerge across fast-paced departments. While automation reshapes daily workflows, practical advantages become visible where speed matters most. 

Still, quick uptake of artificial intelligence brings fresh risks to digital security. More staff now introduce personal AI software at work, bypassing official organizational consent. Experts label this shift "shadow AI," meaning unapproved systems run inside business environments. 

These tools handle internal information unseen by IT teams. Oversight gaps grow when such platforms function outside monitored channels. Almost three out of four people using artificial intelligence at work introduce outside tools without approval. 

Meanwhile, close to half rely on personal accounts instead of official platforms when working with generative models. Security groups often remain unaware - this gap leaves sensitive information exposed. What stands out most is the nature of details staff share with artificial intelligence platforms. Because generative models depend on what users feed them, workers frequently insert written content, programming scripts, or files straight into the interface. 

Often, such inputs include sensitive company records, proprietary knowledge, personal client data, and sometimes segments of private software code. According to research, almost every worker - around 93 percent - has fed work details into unofficial AI systems, and roughly a third admit that confidential client material made its way into those inputs.

After such data lands on external servers, companies often lose influence over storage methods, handling practices, or future applications. One real event showed just how fast things can go wrong. Back in 2023, workers at Samsung shared private code along with confidential meeting details by sending them into ChatGPT. That slip revealed data meant to stay inside the company. 

What slipped out was not hacked - just handed over during routine work. Without strong rules in place, such tools become quiet exits for secrets. Trusting outside software too quickly opens gaps even careful firms miss. Compromised AI accounts might not only leak data - security specialists stress they may also unlock wider company networks through exposed chat logs. 

While financial firms worry about breaking GDPR rules, hospitals fear HIPAA violations when staff misuse artificial intelligence tools unexpectedly. One slip with these systems can trigger audits far beyond IT departments’ control. Bypassing restrictions tends to happen anyway, even when companies try to ban AI outright. 

Experts argue complete blocks usually fail because staff seek workarounds if they think a tool helps them get things done faster. Organizations might shift attention toward AI oversight methods that reveal how these tools get applied across teams. 

By watching how systems are accessed and spotting unapproved software, organizations can build clarity around acceptable use. Clear rules tend to be more effective than outright bans when risk control matters, especially if workers continue using innovative tools quietly. Guidance like this supports balance: safety improves without blocking progress.
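As a rough illustration of what that visibility could look like in practice, the sketch below scans simplified web-proxy records for requests to well-known generative-AI domains. The log format and the domain list are assumptions made for the example, not a standard.

```python
# Flag proxy-log entries that point at generative-AI services so security
# teams can see unsanctioned usage. Assumes a toy "user domain" log format.
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

def flag_shadow_ai(log_lines):
    for line in log_lines:
        user, _, domain = line.partition(" ")  # assumed "user domain" format
        if domain.strip() in GENAI_DOMAINS:
            yield user, domain.strip()

sample = ["alice chatgpt.com", "bob intranet.local", "carol gemini.google.com"]
for user, domain in flag_shadow_ai(sample):
    print(f"unsanctioned AI access: {user} -> {domain}")
```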

Rust-Based VENON Malware Targets 33 Brazilian Banks

 


A newly identified banking malware strain called VENON is targeting users in Brazil and stands out for an unusual technical choice. Instead of relying on the Delphi programming language used by many long-running Latin American banking trojans, the new threat is written in Rust, a modern systems language that is increasingly appearing in sophisticated cyber operations.

The malware infects Windows machines and was first detected in February 2026. Researchers at the Brazilian cybersecurity firm ZenoX assigned the malware the name VENON after analyzing the threat.

Although it is written in a different programming language, the malware behaves similarly to several well-known banking trojans that have historically targeted financial institutions in Latin America. Analysts say the threat shares operational patterns with malware families such as Grandoreiro, Mekotio, and Coyote. These similarities include techniques like monitoring the active window on a victim’s computer, launching fake login overlays when banking applications open, and hijacking Windows shortcut files to redirect users.

At the moment, investigators have not linked VENON to any previously identified cybercriminal operation. However, forensic examination of an earlier version of the malware dating back to January 2026 revealed traces from the developer’s workstation. File paths embedded in the code repeatedly referenced a Windows user account named “byst4,” which may indicate the environment used during development.

Researchers believe the developer appears to be familiar with how Latin American banking trojans typically operate. However, the implementation in Rust suggests a higher level of technical expertise compared with many traditional banking malware campaigns. Analysts also noted that generative artificial intelligence tools may have been used to help reproduce and expand existing malware capabilities while rewriting them in Rust.

The infection process relies on a multi-stage delivery chain designed to avoid detection. VENON is executed through a technique known as DLL side-loading, where a malicious dynamic-link library runs when a legitimate application loads it. Investigators suspect the campaign may rely on social-engineering tactics similar to the ClickFix method. In this scenario, victims are persuaded to download a ZIP archive that contains the malicious components. A PowerShell script within the archive then launches the malware.

Before performing any harmful actions, the malicious DLL runs several checks designed to evade security tools. Researchers documented nine separate evasion methods. These include detecting whether the malware is running inside a security sandbox, using indirect system calls to avoid monitoring, and bypassing both Event Tracing for Windows (ETW) logging and the Antimalware Scan Interface (AMSI).

After completing these checks, the malware contacts a configuration file hosted on Google Cloud Storage. It then installs a scheduled task on the compromised machine to maintain persistence and establishes a WebSocket connection with a command-and-control server operated by the attackers.

Investigators also identified two Visual Basic Script components embedded in the DLL. These scripts implement a shortcut hijacking mechanism aimed specifically at the Itaú banking application. The technique replaces legitimate shortcuts with manipulated versions that redirect victims to a fraudulent webpage controlled by the threat actor.

The malware even includes an uninstall routine that can reverse these shortcut changes. This feature allows operators to restore the original system configuration, which could help remove evidence of the compromise after an attack.

VENON is configured to monitor activity related to 33 financial institutions and cryptocurrency services. The malware constantly checks the titles of open windows and the domains visited in web browsers. It activates only when a user accesses one of the targeted banking platforms. When triggered, the malware displays fake login overlays designed to capture credentials.
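For defenders, the trigger logic described above is easy to picture. This Python sketch reads the active window title through the Windows API and checks it against a placeholder target list; the keywords are illustrative stand-ins, not VENON's real configuration.

```python
# Windows-only illustration: read the foreground window title via user32
# and test it against a list of monitored banking keywords.
import ctypes

TARGET_KEYWORDS = ["Banco Exemplo", "Internet Banking"]  # placeholders

def foreground_window_title() -> str:
    user32 = ctypes.windll.user32
    hwnd = user32.GetForegroundWindow()
    length = user32.GetWindowTextLengthW(hwnd)
    buf = ctypes.create_unicode_buffer(length + 1)
    user32.GetWindowTextW(hwnd, buf, length + 1)
    return buf.value

def matches_target(title: str) -> bool:
    # The malware reportedly activates only when a monitored banking
    # platform appears in the window title or browser tab.
    return any(k.lower() in title.lower() for k in TARGET_KEYWORDS)

print(matches_target(foreground_window_title()))
```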

The discovery comes amid a broader wave of campaigns targeting Brazilian users through messaging platforms. Researchers recently observed threat actors exploiting the widespread popularity of WhatsApp in the country to spread a worm known as SORVEPOTEL. The worm spreads through the desktop web version of the messaging service by abusing already authenticated chat sessions to send malicious messages directly to contacts.

According to analysts at Blackpoint Cyber, a single malicious message sent from a compromised SORVEPOTEL session can initiate a multi-stage infection chain. In one observed scenario, the attack eventually deployed the Astaroth threat entirely in system memory.

The researchers noted that the combination of local automation tools, browser drivers operating without supervision, and runtime environments that allow users to write files locally created an environment that made it easier for both the worm and the final malware payload to install themselves with minimal resistance.

Windows Telemetry Explained: What Diagnostic Data Microsoft Collects and Why It Matters

 

Years after Windows 10 arrived, a single aspect keeps stirring conversation - telemetry. This data gathering, labeled diagnostic info by Microsoft, pulls details from machines without manual input. Its purpose? Keeping systems stable, secure, running smoothly. Yet reactions split sharply between everyday users and those watching privacy trends. 

Early on, observers questioned whether Windows 10's telemetry might double as monitoring. A few writers argued that it collected large amounts of user detail while transmitting data to Microsoft servers. Still, analysts who have inspected how the OS handles information report minimal proof backing such suspicions.

Beginning in 2017, scrutiny from the Dutch Data Protection Authority revealed shortcomings in how Windows presented telemetry consent choices. Although designed to gather system performance details, the setup failed to align with regional privacy expectations due to unclear user permissions. 

Instead of defending the original design, Microsoft adjusted both interface wording and backend configurations. Following these updates, oversight bodies acknowledged improvements, noting no evidence emerged suggesting private information was gathered unlawfully. Independent analysts alongside regulatory teams had previously flagged the configuration, yet after revisions, compliance concerns faded gradually. 

Windows telemetry behind the scenes comes in two main tiers: required (basic) and optional diagnostic data. Most personal computers, especially those outside corporate control, enable the basic tier automatically; there is no standard menu option to switch it off entirely. This baseline layer gathers only what Microsoft says is vital for stability and core operations.

Though hidden from typical adjustments, its presence supports ongoing performance checks across devices. Basic troubleshooting relies on specific diagnostics tied to functions like Windows Update. Information might cover simple fault summaries, setup traits of hardware, software plus driver footprints, along with records tracking how updates succeed or fail. 

As noted by Microsoft, insights drawn support better stability fixes, safety patches, app alignment, and smoother running systems. Some diagnostic details go beyond basics, capturing patterns in app use or web habits. These insights might involve deeper system errors, performance signs, or hardware traits. 

While such data helps refine functionality, access remains under user control via Windows options. Those cautious about personal information often choose to turn this off. Control sits within settings, letting choices match comfort levels. Occasionally, memory dumps taken during system failures form part of Optional diagnostic data, according to experts. 

When a crash happens, pieces of active files might get saved inside these records. Because of this risk, certain groups managing confidential material prefer disabling the setting altogether. In 2018, Microsoft rolled out a feature named Diagnostic Data Viewer to boost openness. This tool gives people access to review what information their machine shares with the company, revealing specifics found in diagnostics and system summaries. 

One billion devices now operate on Windows 11 across the globe. Because of countless variations in hardware and software setups, Microsoft relies on telemetry data: it reveals issues, shapes update improvements, and supports consistent performance. While tracking user interactions might sound intrusive, the data guides fixes without exposing personal details; patterns emerge that steer engineering decisions behind the scenes.

Even though some diagnostic details are essential for basic operations, those worried about personal data might choose to limit what gets sent by turning off non-essential diagnostics in device preferences. Still, full function depends on keeping certain reporting active.
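On managed machines, administrators can also check whether a diagnostic-data cap has been set by policy. The Python sketch below reads the well-known AllowTelemetry policy value (0 = Security, 1 = Required/Basic, 3 = Full/Optional); on most home PCs the policy key simply will not exist.

```python
# Read the Windows diagnostic-data policy level from the registry.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\DataCollection"

def telemetry_policy_level():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "AllowTelemetry")
            return value
    except FileNotFoundError:
        return None  # no policy set; the Settings app controls the level

print(telemetry_policy_level())
```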

Experts Warn About AI-Assisted Malware Used for Extortion


AI-based Slopoly malware

Cybersecurity experts have disclosed details about a suspected AI-assisted malware strain named “Slopoly,” used by the financially motivated threat actor Hive0163.

IBM X-Force researcher Golo Mühr said, “Although still relatively unspectacular, AI-generated malware such as Slopoly shows how easily threat actors can weaponize AI to develop new malware frameworks in a fraction of the time it used to take,” according to The Hacker News.

Hive0163 malware campaign 

Hive0163's attacks are motivated by extortion via large-scale data theft and ransomware. The gang is linked with various malicious tools like Interlock RAT, NodeSnake, Interlock ransomware, and Junk fiction loader. 

In a ransomware incident identified in early 2026, the gang was found installing Slopoly during the post-exploitation phase to gain persistent access to the compromised server.

Slopoly’s detection traces back to a PowerShell script that may be installed in the “C:\ProgramData\Microsoft\Windows\Runtime” folder via a builder. Persistence is established through a scheduled task called “Runtime Broker”.
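That persistence detail suggests a simple triage check. The sketch below lists scheduled tasks with the built-in schtasks utility and flags names mimicking the legitimate Runtime Broker process; it is a starting point for review, not a complete detection.

```python
# List Windows scheduled tasks and flag suspicious "Runtime Broker" names.
import csv
import io
import subprocess

def suspicious_tasks():
    out = subprocess.run(
        ["schtasks", "/query", "/fo", "csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.DictReader(io.StringIO(out)):
        name = row.get("TaskName", "")
        # Legitimate Runtime Broker is a process, not a scheduled task,
        # so any task carrying this name deserves a closer look.
        if "runtime broker" in name.lower():
            yield name

for task in suspicious_tasks():
    print("review:", task)
```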

There are signs that the malware was developed with the help of an as-yet-undetermined large language model (LLM): it contains extensive comments, logging, error handling, and accurately named variables.

The comments also describe the script as a "Polymorphic C2 Persistence Client," indicating that it's part of a command-and-control (C2) framework. 

According to Mühr, “The script does not possess any advanced techniques and can hardly be considered polymorphic, since it's unable to modify its own code during execution. The builder may, however, generate new clients with different randomized configuration values and function names, which is standard practice among malware builders.”

The PowerShell script works as a backdoor, sending system details to a C2 server. AI-assisted malware has been on the rise recently: Slopoly, PromptSpy, and VoidLink show how threat actors are using the technology to speed up malware creation and expand their operations.

IBM X-Force says the “introduction of AI-generated malware does not pose a new or sophisticated threat from a technical standpoint. It disproportionately enables threat actors by reducing the time an operator needs to develop and execute an attack.”

French FICOBA Breach Exposes 1.2M Bank Accounts

 

A major cyberattack struck France's national bank account registry, FICOBA, exposing sensitive data from over 1.2 million accounts. The breach occurred in late January 2026 when hackers stole login credentials from a civil servant and impersonated an authorized user to access the database. This incident highlights vulnerabilities in government systems handling financial records.

FICOBA serves as France's central repository for all bank accounts opened in domestic institutions, storing identifiers like RIB and IBAN numbers, holder names, and postal addresses. Attackers extracted this information but could not access balances or perform transactions, according to officials. The French Ministry of Finance confirmed tax IDs were not compromised, though early reports varied.

Authorities detected the intrusion swiftly, immediately restricting access and temporarily taking the database offline. It was restored with enhanced security measures after collaboration with the National Cybersecurity Agency (ANSSI). A formal complaint was filed with the National Commission for Information Technology and Civil Liberties (CNIL), and notifications are underway to affected individuals and banks.

The exposure raises alarms about phishing scams and SEPA direct debit fraud, with banks already noting an increase in suspicious SMS messages and emails. Criminals could exploit IBANs and personal details for identity theft or unauthorized payments. French tax authorities warn that they never request banking information via unsolicited messages.

Safety recommendations 

To protect yourself post-breach, monitor bank statements daily for unauthorized activity and enable transaction alerts. Change passwords on financial accounts, using unique strong ones via a password manager, and activate multi-factor authentication (MFA) everywhere possible. Avoid clicking links in unsolicited emails or texts claiming breach notifications—contact your bank directly through official apps or sites.

Further, freeze credit reports if available in your country to block new accounts in your name, and consider credit monitoring services. Report suspicious activity to your bank and local cyber police immediately. Regularly update software and use antivirus tools to prevent credential theft, and, in organizations, emphasize least-privilege access. These steps minimize the risks posed by exposed data like that in the FICOBA incident.

The Global Cyber Fraud Wave Is Being Supercharged by Artificial Intelligence


 

As the digital threat landscape continues to evolve and grow more complex, organizations are rethinking how security operations are structured and managed, and artificial intelligence is becoming an integral part of modern cyber defense strategies.

As networks, endpoints, and cloud infrastructures generate large quantities of telemetry, security teams are turning to advanced machine learning models and intelligent analytics to process that data. These systems can identify subtle anomalies and behavioral patterns that would otherwise be hidden from conventional monitoring frameworks, allowing earlier detection of malicious behavior.
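One common pattern behind such systems is unsupervised anomaly detection over numeric telemetry features. The scikit-learn sketch below is a minimal illustration with made-up features (bytes sent, connection count, hour of activity), not a production detector.

```python
# Fit an Isolation Forest on "normal" telemetry and flag outlying events.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Hypothetical features per event: bytes sent, connection count, login hour.
normal = rng.normal(loc=[5_000, 20, 10], scale=[1_000, 5, 2], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

events = np.array([
    [5_200, 22, 11],      # ordinary activity
    [900_000, 400, 3],    # exfiltration-like spike at 3 a.m.
])
print(model.predict(events))  # 1 = normal, -1 = anomaly
```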

AI is also transforming the efficiency of cybersecurity workflows. With adaptive algorithms that continually refine their analytical models, tasks that previously required extensive manual oversight, such as log correlation, threat triage, and vulnerability assessment, can now be automated.

By reducing the operational burden on human analysts, artificial intelligence lets security professionals concentrate on more strategic and investigative activities such as threat hunting and incident response planning. This shift is particularly important as organizations face increasingly sophisticated adversaries who use automation and advanced techniques to circumvent traditional defenses.

AI can also strengthen proactive defense mechanisms by analyzing historical attacks and behavioral indicators.

Using AI-driven platforms, organizations can detect phishing campaigns in real time through linguistic and contextual analysis, and can flag suspicious activity across distributed environments before new attack vectors take hold. This continuous learning capability allows the systems to adapt to changes in the threat landscape, improving accuracy and resilience as new patterns of malicious activity emerge.
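The linguistic-analysis component can be illustrated with a tiny text-classification pipeline; the four training messages below are fabricated solely to show the shape of the approach.

```python
# Toy phishing classifier: TF-IDF features plus logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "urgent verify your account now or it will be suspended",
    "click here to confirm your password immediately",
    "agenda attached for tomorrow's project meeting",
    "quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["please verify your password urgently"]))  # likely [1]
```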

Therefore, artificial intelligence is becoming a strategic asset as well as a defensive necessity, enabling organizations to deal with cyber threats more effectively, efficiently, and adaptably while ensuring the security of critical data and digital infrastructure. 

In the telecommunications sector, fraud has been a persistent operational and security concern for many years, resulting in considerable financial losses and reputational consequences. In order to identify irregular usage patterns and protect subscriber accounts, telecom operators traditionally rely on multilayered monitoring controls and rule-based fraud management systems.

As the industry rapidly expands into adjacent digital services, including mobile payments, digital wallets, and payment service banking, the conventional boundaries that once separated the telecom industry from the financial sector have begun to blur. Telecom networks increasingly serve as foundational infrastructure for digital transactions, identity verification, and financial connectivity rather than merely as communication channels.

This structural shift has significantly increased the attack surface, producing a more complex and interconnected fraud environment in which threats can propagate across multiple digital platforms. At the same time, artificial intelligence is rapidly transforming both how fraud risks emerge and how they are managed.

Using AI-driven automation, sophisticated threat actors are orchestrating highly scalable fraud campaigns, generating convincing phishing messages, deploying social engineering tactics, and probing network vulnerabilities faster than ever before. This capability allows fraudulent schemes to evolve dynamically, adapting more rapidly than traditional detection mechanisms.

Even so, technological advances are equipping telecommunications providers with more capable defensive tools. AI-based fraud detection platforms can process huge volumes of network telemetry and transaction data, analyzing signals across communication and payment systems in real time to identify subtle indicators of compromise.

By analyzing behavior patterns, detecting anomalies, and modeling predictive patterns, security teams are able to detect suspicious activities earlier and respond more precisely. Additionally, the economic implications of telecom-related fraud emphasize the need to strengthen these defenses. The telecommunications industry has been estimated to have suffered tens of billions of dollars in losses in recent years as a result of digital exploitation on a grand scale.

In emerging digital economies, this issue is particularly acute, since mobile connectivity is increasingly serving as a bridge to financial inclusion. Fraud incidents that occur on telecommunications networks that support digital banking, mobile money transfers, and online commerce can have consequences that go beyond the service providers themselves.

Interconnected platforms may face regulatory exposure, operational disruption, and declining consumer confidence all at once, affecting telecommunications and financial services simultaneously. The growing convergence between communication networks and financial services is also shifting telecom operators' responsibilities, given their role in the digital payment ecosystem.

Beyond ensuring network reliability, providers are now expected to safeguard the financial transactions occurring across their infrastructure as digital payment ecosystems grow. Because mobile and online banking ecosystems are so tightly interlinked, many scams target the users who depend on them.

Fraudulent activity in such interconnected systems can have cascading effects across multiple organizations, inviting regulatory scrutiny and eroding trust across the entire digital economy.

The challenge for telecommunications companies is therefore no longer limited to managing network abuse; they must build resilient, intelligence-driven fraud prevention frameworks capable of protecting an increasingly complex digital environment. Industry studies indicate that cyber threat operations are undergoing a significant transformation.

Attackers are increasingly orchestrating coordinated campaigns that incorporate traditional social engineering techniques with the speed and scale of automated technology. The use of artificial intelligence is now integral to the entire attack lifecycle, from early reconnaissance and target profiling to deceptive communication strategies and operational decision-making.

In everyday business environments, organizations encounter increasingly high-risk interactions with automated systems as AI-powered tools become more accessible. Data collected in recent months suggests that a substantial percentage of enterprise AI interactions involve prompts or requests that raise potential security concerns, demonstrating how the rapid integration of artificial intelligence into corporate workflows creates new opportunities for misuse.

Alongside this trend, ransomware ecosystems are maturing into fragmented, scalable models. The landscape is increasingly characterized by loosely connected networks of specialized operators rather than a few centralized threat groups.

As a consequence of decentralization, cybercriminals have been able to expand their operations at an exponential rate, increasing both the number of victims targeted and the speed with which campaigns can be executed. 

Moreover, artificial intelligence is helping to streamline target identification, optimize extortion strategies, and automate negotiation and infrastructure management functions. Consequently, a more adaptive and resilient criminal ecosystem has been created that is capable of sustaining persistent global campaigns. 

Social engineering tactics are also embracing a broader array of communication channels than traditional phishing emails. Deception is increasingly coordinated by threat actors across email, web platforms, enterprise collaboration tools, and voice communication channels. Security experts have observed a sharp increase in methods for manipulating user trust by issuing seemingly legitimate technical prompts or support instructions, often encouraging individuals to provide sensitive information or execute commands. 

As a result, phone-based impersonation attacks have evolved into structured intrusion attempts aimed at corporate help desks and internal support functions. In the age of cloud computing, browsers, software-as-a-service environments, and collaborative digital workspaces form critical trust layers, with artificial intelligence woven through them, that adversaries will increasingly attempt to exploit.

Beyond user-focused attacks, infrastructure-based vulnerabilities are also expanding the threat surface. Edge devices, virtual private network gateways, and internet-connected systems are increasingly used by attackers as covert entry points, allowing them to blend malicious activity into legitimate network traffic.

Left without oversight, these devices can become persistent access routes that remain undetected within complex enterprise architectures. The infrastructure that supports artificial intelligence itself carries additional risks: as machine learning models, automated agents, and supporting services are integrated into enterprise technology stacks, significant configuration weaknesses have been identified across a large number of deployments, highlighting potential exposures.

As a result of these developments, cybersecurity leaders are reconsidering the structure of defensive strategies in an era marked by machine-speed attacks. Analysts have increasingly emphasized that responding to incidents after they occur is no longer sufficient; organizations must design security frameworks that prioritize prevention and resilience from the very beginning. 

To ensure these foundational controls can withstand automated and coordinated attacks, security teams need to reevaluate them across networks, endpoints, cloud platforms, communication systems, and secure access environments. 

Security teams face the challenge of facilitating artificial intelligence adoption without introducing unmanaged risks as it becomes incorporated into daily business processes. Keeping a clear picture of the use of artificial intelligence, both sanctioned and unsanctioned, as well as enforcing policies, is essential to reducing the potential for data leakage and misuse. 

In addition, protecting modern digital workspaces, where human decision-making increasingly intersects with automated technologies, is imperative. Email platforms, web browsers, collaboration tools, and voice systems together form an integrated operating environment that needs to be secured as a single trust domain.

Alongside stronger protection of edge infrastructure, maintaining an accurate inventory of connected devices reduces the chances of attackers exploiting hidden entry points. Consistent visibility across hybrid environments, spanning on-premises infrastructure, cloud platforms, and distributed edge systems, is a key component of resilience against artificial intelligence-driven cyber threats.

By integrating oversight across these layers and prioritizing prevention-focused security models, organizations can reduce operational blind spots and strengthen their defenses against rapidly evolving cyber threats. Industry observers emphasize that, under these circumstances, defending against AI-enabled cyber fraud will depend less on isolated tools and more on coordinated security architectures.

Telecommunications and digital service providers are expected to strengthen collaboration across the technological, financial, and regulatory ecosystems and to embed intelligence-driven monitoring into every layer of their infrastructure. Continually modeling fraud threats, applying adaptive security analytics, and tightening governance of emerging technologies are essential to anticipating how fraud tactics evolve as innovation progresses.

By emphasizing proactive risk management and strengthening trust across interconnected digital platforms, organizations can be better prepared to address increasingly automated threats while maintaining the integrity of the rapidly expanding digital economy.

Why VPNs Can’t Guarantee Complete Online Anonymity: Understanding the Limits of Digital Privacy


The modern internet constantly collects and analyzes information about users. Nearly every action online—browsing websites, clicking links, watching videos or making purchases—creates digital traces that are monitored, stored and often traded. As a result, maintaining privacy on the internet has become increasingly difficult.

Faced with this reality, many people attempt to shield themselves by using tools designed to protect their identity online. Virtual Private Networks (VPNs) have become one of the most popular solutions, often marketed as a way to achieve complete anonymity. However, experts emphasize that true anonymity on the internet is largely unrealistic.

Some VPN providers are transparent about what their services can and cannot do. However, several companies continue to promote exaggerated claims suggesting that their services can make users entirely anonymous online.

For instance, VPN provider CyberGhost states on its website that users can “go completely anonymous and surf the internet without privacy worries,” and promises they can “enjoy complete anonymity & protection online” through its service. Although the company acknowledges in an FAQ section that “no VPN service can make you 100% anonymous online,” the conflicting messaging can still mislead users.

Experts warn that believing VPNs provide absolute anonymity can be risky. Relying solely on a VPN may create a false sense of security, especially when sharing sensitive information or operating in regions with strict digital surveillance. Even journalists, activists or individuals communicating confidential information may remain exposed despite using a VPN.

Widespread Data Collection Online

Online surveillance has existed for decades. Governments have used digital tools to monitor citizens and foreign actors, while technology companies collect user data to support advertising and other business operations.

Public awareness of large-scale digital surveillance increased significantly after former NSA contractor and whistleblower Edward Snowden revealed classified surveillance programs in 2013. Later, the 2018 Cambridge Analytica scandal further highlighted how massive amounts of user data could be harvested and used without clear consent.

Major online platforms such as Google, Facebook, TikTok, Instagram, X, Amazon and Netflix collect extensive information about user activity when individuals are logged in. This includes search queries, clicked links, watched videos, purchased items, ads interacted with and shared content. These details help companies build detailed profiles of user interests and behaviors.

In addition, personal data such as names, email addresses, physical addresses, payment information and usernames can be tracked. Technical identifiers—including IP addresses, browser types, device models and operating systems—also provide valuable data points.

Internet service providers can monitor browsing activity, location data, application usage and metadata. Meanwhile, websites employ technologies such as cookies and device fingerprinting, while social media platforms use tracking pixels to follow users across the web.

The collected data is often sold to data brokers, who treat personal information as a valuable commodity.

Privacy regulations such as Europe’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CCPA) give individuals greater control over how their information is handled. Still, experts note that these laws can only address part of the problem, as data collection practices remain deeply embedded within the digital economy.

How VPNs Improve Privacy — and Where They Fall Short

A VPN can still play an important role in protecting online privacy. The technology encrypts internet traffic and routes it through a secure server located elsewhere. This process hides browsing activity from internet providers, network administrators and other potential observers.

It also replaces the user’s real IP address with the address of the VPN server, making it harder for websites to identify a user’s exact location or track them directly.

These features allow VPNs to help limit certain types of tracking, bypass geographic restrictions and evade network firewalls at workplaces or schools.
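The IP substitution in particular is easy to observe. The minimal sketch below, which assumes the public echo service api.ipify.org, simply asks an outside server what address it sees; run it once with the VPN disconnected and once with it connected, and the two answers should differ:

```python
import requests

def public_ip() -> str:
    """Ask an external echo service (api.ipify.org, assumed here)
    which IP address the wider internet sees for this machine."""
    return requests.get("https://api.ipify.org", timeout=10).text

# Run once with the VPN disconnected and once connected: the two
# addresses should differ, because websites see the VPN server's
# IP rather than the user's real one.
print("Public IP as seen by websites:", public_ip())
```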

However, VPNs cannot eliminate all tracking mechanisms. Many services include basic protections such as ad or tracker blocking, but most cannot fully defend against browser fingerprinting. This technique gathers information like screen resolution, language preferences, browser type, extensions and operating system to uniquely identify users.
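To see why fingerprinting survives an IP change, consider how such scripts work in principle. The Python sketch below is purely illustrative (real fingerprinting runs as JavaScript in the browser, and the attribute values here are hypothetical): stable device attributes are combined and hashed into an identifier that stays the same no matter which VPN server the traffic exits from.

```python
import hashlib

# Hypothetical attribute values; a real fingerprinting script reads
# these from browser APIs (navigator, screen), not from Python.
attributes = {
    "screen_resolution": "2560x1440",
    "language": "en-US",
    "user_agent": "Mozilla/5.0 (example)",
    "timezone": "Europe/Berlin",
    "fonts": "Arial;Helvetica;Times New Roman",
}

# Combine the attributes deterministically and hash them: the digest
# acts as a stable identifier even when the IP address changes.
canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
fingerprint = hashlib.sha256(canonical.encode()).hexdigest()
print("Fingerprint:", fingerprint[:16])
```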

Even with a VPN active, online services such as Amazon, Google or Facebook can still recognize users when they log into their accounts. These platforms continue collecting data linked directly to the individual.

VPNs also cannot prevent users from downloading malicious files or entering personal information into phishing websites. While antivirus tools may help mitigate these risks, VPNs alone cannot.

Another important consideration is that using a VPN shifts visibility of internet activity from an internet service provider to the VPN provider itself. If the provider maintains strong privacy policies—such as audited no-logs practices and secure infrastructure—this risk is minimized. However, some VPN services, particularly free ones, have been criticized for misusing or mishandling user data.

Additional Tools for Stronger Privacy

Specialists emphasize that VPNs should be viewed as just one component of a broader cybersecurity strategy.

Tools like Tor, which uses “onion routing” to send traffic through multiple encrypted relays, can further obscure user activity. Operating systems such as Tails run independently from a computer’s main system and automatically erase data after each session.
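The layering behind onion routing can be sketched in a few lines. The toy example below, assuming the `cryptography` package and using symmetric Fernet keys in place of Tor's real asymmetric handshakes, wraps a message in one encryption layer per relay, so each relay can peel exactly one layer and never sees both the sender and the final plaintext:

```python
from cryptography.fernet import Fernet

# Three hypothetical relays, each holding its own symmetric key.
# (Real Tor negotiates keys with asymmetric cryptography; Fernet
# simply keeps this sketch short.)
relay_keys = [Fernet.generate_key() for _ in range(3)]

message = b"request for example.com"

# The client wraps the message innermost-first, so the first relay
# holds the key to the outermost layer.
onion = message
for key in reversed(relay_keys):
    onion = Fernet(key).encrypt(onion)

# Each relay peels exactly one layer as the packet travels.
for key in relay_keys:
    onion = Fernet(key).decrypt(onion)

assert onion == message  # the exit relay recovers the plaintext
```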

Other privacy-enhancing technologies include ad-blocking browser extensions, encrypted messaging platforms like Signal, secure email services such as Proton Mail, and privacy-focused browsers designed to block trackers and resist fingerprinting.

Private search engines such as DuckDuckGo or Brave Search also help reduce data collection compared to mainstream search platforms.

Beyond software tools, experts recommend adopting safer online habits. Limiting social media use, creating temporary accounts with aliases, paying in cash or cryptocurrency when possible, and avoiding suspicious downloads can help reduce exposure.

Users are also encouraged to adjust device privacy settings, restrict application permissions, enable encryption, disable unnecessary tracking features and exercise caution when connecting to public Wi-Fi networks.

Regularly clearing browser cookies and cache can further limit tracking activity.

Ultimately, no single tool can guarantee anonymity on the internet. However, combining multiple privacy technologies with careful online behavior can significantly strengthen personal data protection.

Meta Targets 150K Accounts in Southeast Asia Scam Operation




Meta announced that it has removed more than 150,000 accounts tied to organized scam centers operating in Southeast Asia, describing the move as part of a large international effort to disrupt coordinated online fraud networks.

The enforcement action was carried out with assistance from authorities in several countries. Law enforcement agencies and government partners involved in the operation included officials from Thailand, the United States, the United Kingdom, Canada, South Korea, Japan, Singapore, the Philippines, Australia, New Zealand, and Indonesia. According to Meta, the joint effort resulted in 21 individuals being arrested by the Royal Thai Police.

This latest crackdown builds on an earlier pilot initiative launched in December 2025. During that initial phase, Meta removed approximately 59,000 accounts, Pages, and Groups from its platforms that were connected to similar fraudulent activity. The earlier investigation also led to the issuance of six arrest warrants by authorities.

In a statement explaining the action, Meta said that online scams have grown increasingly complex and organized over recent years. Criminal networks, often operating from countries such as Cambodia, Myanmar, and Laos, have established large scam compounds that function in many ways like organized business operations. These groups typically use structured teams, scripted communication strategies, and digital tools designed to evade detection while targeting victims on a global scale. According to the company, the impact of such scams extends far beyond financial loss, as they can severely disrupt lives and weaken trust in digital communication platforms.

Alongside the enforcement action, Meta also announced several new safety features aimed at helping users identify and avoid scam attempts.

One of these tools introduces new warning messages on Facebook that notify users when they receive communication from accounts that display characteristics commonly linked to fraudulent activity. Another safeguard has been introduced on WhatsApp to address a tactic used by scammers who attempt to persuade users to scan a QR code. If successful, this method can link the attacker’s device to the victim’s WhatsApp account, allowing them to access messages and impersonate the account holder. Meta said its system will now notify users when suspicious device-linking requests are detected.

The company is also expanding scam detection on Messenger. When a conversation with a new contact begins to resemble known fraud patterns, such as questionable job opportunities or requests that appear unusual, the platform may prompt users to share recent messages so that an artificial intelligence system can evaluate whether the interaction matches known scam behavior.
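Meta has not published its detection logic, but the general shape of pattern-based flagging is straightforward. The toy sketch below, with entirely hypothetical patterns, flags messages that match common scam archetypes such as too-good-to-be-true job offers or unusual payment requests:

```python
import re

# Toy heuristics only; Meta's actual classifiers are not public.
SCAM_PATTERNS = {
    "job_scam": re.compile(r"work from home|easy money|hiring now", re.I),
    "payment_request": re.compile(r"gift card|wire transfer|crypto wallet", re.I),
}

def flag_message(text: str) -> list[str]:
    """Return the names of any scam patterns the message matches."""
    return [name for name, pattern in SCAM_PATTERNS.items()
            if pattern.search(text)]

print(flag_message("Hiring now! Easy money, just buy a gift card first."))
# -> ['job_scam', 'payment_request']
```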

Meta also disclosed broader enforcement statistics related to scams on its platforms. Throughout 2025, the company removed more than 159 million advertisements that violated its policies related to fraud and deception. In addition, it disabled approximately 10.9 million Facebook and Instagram accounts that investigators linked to organized scam centers.

To further address fraudulent activity, the company said it plans to expand its advertiser verification program. The goal of this measure is to increase transparency by confirming the identities of advertisers and reducing the ability of malicious actors to misrepresent themselves while running advertisements.

The announcement comes at a time when governments are intensifying efforts to address online fraud. The UK Government recently introduced a new Online Crime Centre designed to focus specifically on cybercrime, including scams connected to organized fraud operations based in regions such as Southeast Asia, West Africa, Eastern Europe, India, and China.

The centre will bring together specialists from several sectors, including government agencies, law enforcement, intelligence services, financial institutions, mobile network providers, and major technology companies. The initiative is expected to begin operations next month.

The project forms part of the United Kingdom’s broader Fraud Strategy 2026–2029, a policy framework aimed at strengthening the country’s response to fraud and financial crime. As part of this strategy, authorities plan to use artificial intelligence to detect emerging scam patterns, identify suspicious bank transfers more quickly, and deploy “scam-baiting” chatbots designed to interact with fraudsters in order to gather intelligence.

Officials said the new centre, supported by more than £30 million in funding, will focus on identifying the digital infrastructure used by organized crime groups. This includes tracking fraudulent accounts, websites, and phone numbers used in scam operations. Authorities aim to shut down these resources at scale by blocking scam messages, freezing financial accounts linked to criminal activity, removing fraudulent social media profiles, and disrupting scam networks at their source.

Google API Keys Expose Gemini AI Data via Leaked Credentials


Google API keys, once considered harmless when embedded in public websites for services like Maps or YouTube, have turned into a serious security risk following the integration of Google's Gemini AI assistant. Security researchers at Truffle Security uncovered this issue, revealing that nearly 3,000 live API keys—prefixed with "AIza"—are exposed in client-side JavaScript code across popular sites.

Truffle Security's scan of the November 2025 Common Crawl dataset, which captures snapshots of major websites, identified 2,863 active keys from diverse sectors including finance, security firms, and even Google's own infrastructure. These keys, sometimes deployed years ago (one traced back to February 2023), were originally safe as mere billing identifiers but gained unauthorized access to Gemini endpoints without developers' knowledge. Attackers can simply copy a key from page source, authenticate to Gemini, and extract sensitive data such as uploaded files, cached contexts, or datasets via simple prompts.
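Finding such keys requires nothing more sophisticated than a pattern match over fetched page source. A minimal scanner in the spirit of Truffle Security's approach might look like the sketch below, which assumes the widely cited key format of "AIza" followed by 35 URL-safe characters:

```python
import re
import sys

# Google API keys are commonly described as "AIza" followed by 35
# characters from [0-9A-Za-z_-]; treat the exact format as an assumption.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan_file(path: str) -> set[str]:
    """Scan a saved HTML/JavaScript file for candidate API keys."""
    with open(path, encoding="utf-8", errors="ignore") as fh:
        return set(KEY_PATTERN.findall(fh.read()))

for path in sys.argv[1:]:
    for key in scan_file(path):
        print(f"{path}: possible exposed key {key[:8]}...")
```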

The danger extends beyond data theft to massive financial abuse, as Gemini API calls consume tokens that rack up charges—potentially thousands of dollars daily per compromised account, depending on the model and context window. Truffle Security demonstrated this by querying the /models endpoint with exposed keys, confirming access to private Gemini features. One reported case highlighted an $82,314 bill from a stolen key, underscoring the real-world impact.
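Checking whether a given key reaches Gemini mirrors Truffle Security's /models probe. A minimal sketch, assuming the public ListModels endpoint of the Gemini API and intended only for keys you own or are authorized to assess:

```python
import requests

# Public Gemini ListModels endpoint (assumed here); a key that gets
# an HTTP 200 back can reach Gemini and should be rotated if exposed.
GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def key_reaches_gemini(api_key: str) -> bool:
    """Return True if the Gemini API accepts this key.
    Only probe keys you own or are authorized to test."""
    resp = requests.get(GEMINI_MODELS_URL, params={"key": api_key}, timeout=10)
    return resp.status_code == 200

# Usage with a hypothetical key: key_reaches_gemini("AIza...")
```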

Google acknowledged the flaw as "single-service privilege escalation" after Truffle's disclosure on November 21, 2025, and implemented fixes by January 2026, including blocking leaked keys from Gemini access, defaulting new AI Studio keys to Gemini-only scope, and sending proactive leak notifications. Despite these measures, the "retroactive privilege expansion" caught many off guard, as enabling Gemini in a project silently empowered old keys.

Developers must immediately audit Google Cloud projects for Gemini API enablement, rotate all exposed keys, and restrict scopes to essentials—avoiding the default "unrestricted" setting. Tools like TruffleHog can scan code repositories for leaks, while regular monitoring prevents future exposures in an era where AI services amplify API risks. This incident highlights the need for vigilance as cloud features evolve.