Google Testing ‘Contextual Suggestions’ Feature for Wider Android Rollout

 



Google is reportedly preparing to extend a smart assistance feature beyond its Pixel smartphones to the wider Android ecosystem. The functionality, referred to as Contextual Suggestions, closely resembles Magic Cue, a software feature currently limited to Google’s Pixel 10 lineup. Early signs suggest the company is testing whether this experience can work reliably across a broader range of Android devices.

Contextual Suggestions is designed to make everyday phone interactions more efficient by offering timely prompts based on a user’s regular habits. Instead of requiring users to manually open apps or repeat the same steps, the system aims to anticipate what action might be useful at a given moment. For example, if someone regularly listens to a specific playlist during workouts, their phone may suggest that music when they arrive at the gym. Similarly, users who cast sports content to a television at the same time every week may receive an automatic casting suggestion at that familiar hour.

According to Google’s feature description, these suggestions are generated using activity patterns and location signals collected directly on the device. This information is stored within a protected, encrypted environment on the phone itself. Google states that the data never leaves the device, is not shared with apps, and is not accessible to the company unless the user explicitly chooses to share it for purposes such as submitting a bug report.

Within this encrypted space, on-device artificial intelligence analyzes usage behavior to identify recurring routines and predict actions that may be helpful. While apps and system services can present the resulting suggestions, they do not gain access to the underlying data used to produce them. Only the prediction is exposed, not the personal information behind it.

Privacy controls are a central part of the feature’s design. Contextual data is automatically deleted after 60 days by default, and users can remove it sooner through a “Manage your data” option. The entire feature can also be disabled for those who prefer not to receive contextual prompts at all.
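A minimal sketch of what a 60-day retention rule like this looks like in practice; the record layout and the `prune` helper are illustrative assumptions, not Google's implementation:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=60)  # default window described by Google

def prune(records, now):
    """Drop contextual records older than the retention window.

    `records` is a list of (timestamp, payload) tuples; the layout and
    this helper are illustrative, not Google's actual implementation.
    """
    cutoff = now - RETENTION
    return [(ts, data) for ts, data in records if ts >= cutoff]

kept = prune(
    [
        (datetime(2025, 10, 1), "gym playlist pattern"),   # older than 60 days
        (datetime(2025, 12, 20), "weekly casting habit"),  # within the window
    ],
    now=datetime(2026, 1, 1),
)
print([data for _, data in kept])  # ['weekly casting habit']
```

The "Manage your data" option described above corresponds to deleting records ahead of the cutoff rather than waiting for the automatic sweep.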

Contextual Suggestions has begun appearing for a limited number of users running the latest beta version of Google Play Services, although access remains inconsistent even among beta testers. This indicates that the feature is still under controlled testing rather than a full rollout. When available, it appears under Settings > Google or Google Services > All Services > Others.

Google has not yet clarified which apps support Contextual Suggestions. Based on current observations, functionality may be restricted to system-level or Google-owned apps, though this has not been confirmed. The company also mentions the use of artificial intelligence but has not specified whether older or less powerful devices will be excluded due to hardware limitations.

As testing continues, further details are expected to emerge regarding compatibility, app support, and wider availability. For now, Contextual Suggestions reflects Google’s effort to balance convenience with on-device privacy, while cautiously evaluating how such features perform across the diverse Android ecosystem.

Microsoft Introduces Hardware-Accelerated BitLocker to Boost Windows 11 Security and Performance

 

Microsoft is updating Windows 11 with hardware-accelerated BitLocker to improve both data security and system performance. The change enhances full-disk encryption by shifting cryptographic work from the CPU to dedicated hardware components within modern processors, helping systems run more efficiently while keeping data protected. 

BitLocker is Windows’ built-in encryption feature that prevents unauthorized access to stored data. During startup, it uses the Trusted Platform Module to manage encryption keys and unlock drives after verifying system integrity. While this method has been effective, Microsoft says faster storage technologies have made the performance impact of software-based encryption more noticeable, especially during demanding tasks. 

As storage speeds increase, BitLocker’s encryption overhead can slow down activities like gaming and video editing. To address this, Microsoft is offloading encryption tasks to specialized hardware within the processor that is designed for secure and high-speed cryptographic operations. This reduces reliance on the CPU and improves overall system responsiveness. 

With hardware acceleration enabled, large encryption workloads no longer heavily tax the CPU. Microsoft reports that testing showed about 70% fewer CPU cycles per input-output operation compared to software-based BitLocker, although actual gains depend on hardware configurations. 
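To put the reported figure in perspective, a quick back-of-envelope calculation; the absolute baseline is a made-up number, since Microsoft reports only the relative saving:

```python
# The ~70% figure is Microsoft's reported reduction; the absolute baseline
# below is a hypothetical number used purely for illustration.
software_cycles_per_io = 10_000           # assumed software-BitLocker cost
reduction_pct = 70                        # reported saving per I/O operation
hardware_cycles_per_io = software_cycles_per_io * (100 - reduction_pct) // 100

print(hardware_cycles_per_io)  # 3000
```

On fast NVMe drives pushing hundreds of thousands of I/O operations per second, a per-operation saving of this order frees substantial CPU time for the foreground workload.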

On supported devices with NVMe drives and compatible processors, BitLocker will default to hardware-accelerated encryption using the XTS-AES-256 algorithm. This applies to automatic device encryption, manual activation, policy-based deployment, and script-driven setups, with some exceptions. 

The update also strengthens security by keeping encryption keys protected within hardware, reducing exposure to memory or CPU-based attacks. Combined with TPM protections, this moves BitLocker closer to eliminating key handling in general system memory.  

Hardware-accelerated BitLocker is available in Windows 11 version 24H2 with September updates installed and will also be included in version 25H2. Initial support is limited to Intel vPro systems with Intel Core Ultra Series 3 (Panther Lake) processors, with broader system-on-a-chip support planned. 

Users can confirm whether hardware acceleration is active by running the “manage-bde -status” command. Microsoft notes BitLocker will revert to software encryption if unsupported algorithms or key sizes are used, certain enterprise policies apply, or FIPS mode is enabled on hardware without certified cryptographic offloading.

Personal and Health Information of 22.6 Million Aflac Clients Stolen in Cyberattack

 


A cybersecurity breach disclosed at the start of 2026 has heightened awareness of digital vulnerabilities in the American insurance industry, after Aflac, one of the largest supplemental insurance providers in the country, confirmed that a sophisticated cyberattack in June 2025 compromised the personal and protected health information of approximately 22.65 million individuals.

The intrusion, which took place during the summer of 2025, has since been regarded as one of the biggest healthcare-related data breaches of the year. It also illustrates a noticeable shift in the attack patterns of advanced cybercriminals, away from low-value targets and toward sectors that handle large volumes of sensitive consumer data.

Investigators and threat analysts have attributed the breach to the Scattered Spider cybercriminal collective, also tracked as UNC3944, a group widely known for its evolving campaign strategies and earlier compromises of retailers in the United States and United Kingdom.

Aflac reportedly contained the incident within hours of detection and confirmed that no ransomware payload was deployed. The attackers nevertheless managed to extract a wide range of sensitive information, including Social Security numbers, government-issued identification numbers, medical and insurance records, policyholder claims data, and protected health information.

The disclosure has sparked rare bipartisan concern among lawmakers, triggered multiple class-action lawsuits, and intensified debate about the cyber resilience of the insurance industry, whose large stores of sensitive data make it a prime target for highly coordinated attacks.

Aflac has submitted further details on the scope of the exposed information to the Texas and Iowa attorneys general's offices, confirming that the compromised data includes both sensitive and non-sensitive personal identifying information belonging to a wide range of individuals.

A company disclosure stated that the stolen records included customer names, dates of birth, home addresses, passports and state identification cards, driver's licenses, and Social Security numbers, along with detailed medical and health insurance information, as well as information about the company's employees.

In its submission to Iowa authorities, Aflac noted that the perpetrators may have connections to a known cybercrime organization and might have been engaged in a broader campaign against multiple insurance firms, an assessment shared by government and external cybersecurity experts.

Scattered Spider itself, an informal collective of mainly young, English-speaking threat actors, has not been publicly confirmed as responsible, but some cybersecurity analysts consider it an obvious candidate based on overlapping tactics and timing.

Aflac, which serves approximately 50 million customers, did not immediately respond to news outlets' requests for comment. The company is now dealing with the fallout from what could be one of the largest data breaches in recent memory, one that unfolded amid an intensifying cyber threat aimed directly at the insurance sector.

Around the time of the attack, Google's Threat Intelligence Group released a security advisory suggesting that Scattered Spider had shifted its targeting from retail companies to insurers, indicating a significant change in the group's operational focus.

During the same period, Erie Insurance and Philadelphia Insurance both confirmed significant network interruptions, raising concerns about a coordinated probe across the entire industry. As of July 2025, Erie reported that business operations had been fully restored and emphasized that internal reviews found no evidence of data loss.

Philadelphia likewise reported recovering its network and confirmed that it had not experienced a ransomware incident. For its part, Aflac said it initiated a comprehensive forensic investigation within hours of discovering the breach, engaged external cyber specialists, and informed federal law enforcement and other relevant authorities.

The insurer said the incident affected its entire ecosystem, including customers, beneficiaries, employees, licensed agents, and other associated individuals. Exposed records included names, contact information, insurance and health claims, health information, Social Security numbers, and other protected personal identifiers.

Aflac has pointed to the speed of its response, reiterating that the breach was contained within hours and that no ransomware payload was deployed. Despite these assurances, the scale of the compromise has resulted in legal action.

A class-action lawsuit was filed in Georgia federal court in June 2025, and two similar suits have been filed against Erie Insurance over its own cyber incident, reflecting the growing pressure on insurers to strengthen their defenses against agile and persistent cybercriminals.

As insurers struggle with the expanding threat surface of an increasingly digitalized industry, the Aflac incident offers a vital lesson in both breach response and sectoral risk exposure. Swift containment prevented a system-wide paralysis, but the breach underscores a larger truth: security is no longer a matter of scale alone.

Industry experts say proactive reinforcement, rather than reactive repair, is the key to reducing vulnerability. Firms need to place a strong emphasis on real-time threat monitoring, identity-based access controls, and multilayered encryption of policyholder information.

This is especially pertinent as attackers shift toward socially engineered entry points and credential-based compromises. The incident has also sparked discussions about mandatory breach transparency, faster consumer notification frameworks, and tighter regulatory alignment across US states, whose reporting requirements remain fragmented.

Analysts note that incidents of this magnitude, even without ransomware deployment, can have long-term reputational and financial effects that outlast the technical intrusion itself. Cyber resilience must go beyond firewalls: it requires an organizational culture of security, sound vendor governance, and proactive early anomaly detection.

For the public, monitoring identities and account activity remains crucial, and consumers should stay vigilant. Although the breach appears to have been contained, it leaves a lasting mark on an insurance sector that must now be more cautious and better prepared.

Shinhan Card Probes Internal Data Leak Affecting About 190,000 Merchants

 

Shinhan Card, South Korea’s largest credit card issuer, said on December 23 that personal data linked to about 190,000 merchant representatives was improperly accessed and shared by employees over a three-year period, highlighting ongoing concerns around internal data controls in the country’s financial sector. 

The company said roughly 192,000 records were leaked between March 2022 and May 2025. The exposed information included names, mobile phone numbers, dates of birth and gender details of franchise owners. 

Shinhan Card said no resident registration numbers, card details or bank account information were involved and that the incident did not affect general customers. According to the company, the breach was uncovered after a whistleblower submitted evidence to South Korea’s Personal Information Protection Commission, prompting an investigation. 

Shinhan Card began an internal review after receiving a request for information from the regulator in mid-November. Investigators found that 12 employees across regional branches in the Chungcheong and Jeolla areas had taken screenshots or photos of merchant data and shared them via mobile messaging apps with external sales agents. 

The information was allegedly used to solicit new card applications from recently registered merchants, including restaurants and pharmacies. Shinhan Card said verifying the scale of the leak took several weeks because the data was spread across more than 2,200 image files containing about 280,000 merchant entries in varying formats. 

Each file had to be checked against internal systems to confirm what information was exposed. Chief Executive Park Chang-hoon issued a public apology, saying the leak was caused by unauthorized employee actions rather than a cyberattack. 

He said the company had blocked further access, completed internal audits and strengthened access controls. Shinhan Card said the employees involved would be held accountable. The company added that affected merchants are being notified individually and can check their status through an online portal. 

It said compensation would be provided if any damage is confirmed. The incident adds to a series of internal data misuse cases in South Korea’s financial industry. Regulators said they are assessing whether the breach violates national data protection laws and what penalties may apply. 

The Financial Supervisory Service said it has so far found no evidence that credit information was leaked but will continue to monitor the case. 

Analysts say the Shinhan Card case underscores the growing risk posed by insider misuse as financial institutions expand digital services and data-driven operations, putting renewed focus on employee oversight and internal governance.

Darknet AI Tool DIG AI Fuels Automated Cybercrime, Researchers Warn

 

Cybersecurity researchers have identified a new darknet-based artificial intelligence tool that allows threat actors to automate cyberattacks, generate malicious code and produce illegal content, raising concerns about the growing criminal misuse of AI. 

The tool, known as DIG AI, was uncovered by researchers at Resecurity and first detected on September 29, 2025. Investigators said its use expanded rapidly during the fourth quarter, particularly over the holiday season, as cybercriminals sought to exploit reduced vigilance and higher online activity. 

DIG AI operates on the Tor network and does not require user registration, enabling anonymous access. Unlike mainstream AI platforms, it has no content restrictions or safety controls, researchers said. 

The service offers multiple models, including an uncensored text generator, a text model believed to be based on a modified version of ChatGPT Turbo, and an image generation model built on Stable Diffusion. 

Resecurity said the platform is promoted by a threat actor using the alias “Pitch” on underground marketplaces, alongside listings for drugs and stolen financial data. The tool is offered for free with optional paid tiers that provide faster processing, a structure researchers described as a crime-as-a-service model. 

Analysts said DIG AI can generate functional malicious code, including obfuscated JavaScript backdoors that act as web shells. Such code can be used to steal user data, redirect traffic to phishing sites or deploy additional malware. 

While more complex tasks can take several minutes due to limited computing resources, paid options are designed to reduce delays. Beyond cybercrime, researchers warned the tool has been used to produce instructions for making explosives and illegal drugs. 

The image generation model, known as DIG Vision, was found capable of creating synthetic child sexual abuse material or altering real images, posing serious challenges for law enforcement and child protection efforts. 

Resecurity said DIG AI reflects a broader rise in so-called dark or jailbroken large language models, following earlier tools such as FraudGPT and WormGPT. 

Mentions of malicious AI tools on cybercrime forums increased by more than 200% between 2024 and 2025, the firm said. 

Researchers warned that as AI-driven attack tools become easier to access, they could be used to support large-scale cyber operations and real-world harm, particularly ahead of major global events scheduled for 2026.

Google Launches Emergency Location Services in India for Android Devices



Google recently announced the launch of its Emergency Location Service (ELS) in India for compatible Android smartphones. When users call or text emergency service providers such as police, firefighters, and healthcare professionals, ELS can immediately share the user's accurate location. 

Uttar Pradesh (UP) has become the first Indian state to operationalise ELS for Android devices. ELS has been available on devices running Android 6 or newer, but it only becomes active once state authorities integrate it with their emergency services. 

More about ELS

According to Google, the ELS function on Android handsets has been activated in India. The built-in emergency service enables Android users to share their location by call or SMS when seeking assistance from emergency service providers such as firefighters, police, and medical personnel. 

ELS on Android collects information from the device's GPS, Wi-Fi, and cellular networks in order to pinpoint the user's exact location, with an accuracy of up to 50 meters.
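An accuracy figure like 50 meters is a distance on the Earth's surface; a small self-contained sketch of checking whether a reported fix falls within that bound (the coordinates are arbitrary illustrative points, not from Google):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two arbitrary points roughly 40 m apart: inside the ~50 m bound Google cites.
print(haversine_m(26.8467, 80.9462, 26.8467, 80.9466) <= 50)  # True
```

In practice a fused fix like ELS's blends GPS, Wi-Fi, and cell signals, so the error against ground truth varies; the 50 m figure is the upper bound Google quotes.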

Implementation details

However, local wireless and emergency infrastructure operators must enable support for the ELS capability. The first state in India to "fully" operationalize the service for Android devices is Uttar Pradesh. 

ELS assistance has been integrated with the emergency number 112 by the state police in partnership with Pert Telecom Solutions. The free service accesses a user's position only when an Android phone dials 112. 

Google added that all suitable handsets running Android 6.0 and later versions now have access to the ELS functionality. 

Google says that ELS in Android has enabled over 20 million calls and SMS messages to date, and that it can deliver a location even if a call is dropped within seconds of being answered. ELS is supported by the Android Fused Location Provider, Google's machine-learning-based location tool.

Promising safety?

According to Google, the location data is available only to emergency service providers, and Google itself never collects or retains it. ELS data is sent directly to the concerned authority alone.

Recently, Google also launched the Emergency Live Video feature for Android devices. It lets users share their camera feed with a responder during an emergency via a call or SMS. The responder must first request video access, and the request appears on the user's screen immediately; the user can accept it and provide a visual feed, or reject it.

Critical n8n Vulnerability Enables Arbitrary Code Execution, Over 100,000 Instances at Risk

 


A severe security flaw has been identified in the n8n workflow automation platform that could allow attackers to run arbitrary code in specific scenarios. The vulnerability, assigned CVE-2025-68613, has been rated 9.9 on the CVSS scale, highlighting its critical severity. 

The issue was discovered and responsibly disclosed by security researcher Fatih Çelik. According to npm data, the affected package sees approximately 57,000 downloads each week.

"Under certain conditions, expressions supplied by authenticated users during workflow configuration may be evaluated in an execution context that is not sufficiently isolated from the underlying runtime," the maintainers of the npm package said.

"An authenticated attacker could abuse this behavior to execute arbitrary code with the privileges of the n8n process. Successful exploitation may lead to full compromise of the affected instance, including unauthorized access to sensitive data, modification of workflows, and execution of system-level operations."

The vulnerability impacts all n8n versions starting from 0.211.0 up to, but not including, 1.120.4. The issue has been resolved in releases 1.120.4, 1.121.1, and 1.122.0. Data from attack surface management firm Censys indicates that as of December 22, 2025, around 103,476 n8n instances could still be exposed. Most of these potentially vulnerable deployments are based in the United States, Germany, France, Brazil, and Singapore.
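The affected range can be checked mechanically; a small sketch (simplified: it ignores pre-release suffixes, and the later releases 1.121.1 and 1.122.0 also contain the fix):

```python
def parse(version):
    """Turn a plain 'major.minor.patch' string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

VULN_START = parse("0.211.0")  # first affected release
FIRST_FIX = parse("1.120.4")   # first patched release

def is_vulnerable(version):
    """True when an n8n version falls in the affected range [0.211.0, 1.120.4)."""
    return VULN_START <= parse(version) < FIRST_FIX

print(is_vulnerable("1.119.0"))  # True
print(is_vulnerable("1.122.0"))  # False
```

Tuple comparison makes the range check read exactly like the advisory's wording: at or above the first affected release, strictly below the first fix.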

Given the seriousness of the flaw, users are strongly urged to update their installations immediately. For environments where patching cannot be carried out right away, security experts recommend restricting workflow creation and editing rights to trusted users only. Additionally, deploying n8n within a hardened setup with limited operating system privileges and controlled network access can help reduce the risk of exploitation.

Phantom Shuttle Chrome Extensions Caught Stealing Credentials

 

Two malicious Chrome extensions, dubbed Phantom Shuttle, have been discovered posing as proxy and network testing tools while stealing browsing data and private information from users' browsers without their knowledge.

According to security researchers from Socket, these extensions have been around since at least 2017 and were present in the Chrome Web Store until the time of writing. This raises serious concerns regarding the dangers associated with browser extensions even from reputable sources. 

Analysis carried out by Socket indicates that the Phantom Shuttle extensions direct victims' online traffic to an attacker-controlled proxy setup using hardcoded credentials. The attackers hid the malicious code by prepending it to a legitimate jQuery library. 

The hardcoded proxy credentials are also obfuscated with a custom character-index-based encoding scheme, which hampers detection and reverse engineering. A built-in traffic listener in the extensions can intercept HTTP authentication challenges on multiple websites.

Modus operandi 

To force traffic through its infrastructure, Phantom Shuttle dynamically modifies Chrome’s proxy configuration using an auto-configuration script. In a default mode labeled “smarty,” the extensions allegedly route more than 170 “high-value” domains through the proxy network, including developer platforms, cloud consoles, social media services, and adult sites. Additionally, to avoid breaking environments that could expose the operation, the extensions maintain an exclusion list that includes local network addresses and the command-and-control domain. 
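A PAC script is just a JavaScript function mapping a hostname to a proxy directive; the routing logic described above looks roughly like this Python re-creation, where every domain and the proxy address are hypothetical placeholders, not the actual Phantom Shuttle infrastructure:

```python
# Hypothetical Python re-creation of the routing decision a malicious PAC
# script encodes; all domains and the proxy address here are placeholders.
TARGET_DOMAINS = {"console.example-cloud.com", "git.example-dev.io"}
EXCLUDED_HOSTS = {"localhost", "127.0.0.1", "c2.example-attacker.net"}

def find_proxy(host):
    """Mimic PAC's FindProxyForURL: return a proxy directive for a hostname."""
    if host in EXCLUDED_HOSTS:
        return "DIRECT"  # local traffic and the C2 domain bypass the proxy
    if any(host == d or host.endswith("." + d) for d in TARGET_DOMAINS):
        return "PROXY proxy.example-attacker.net:8080"  # high-value domains
    return "DIRECT"

print(find_proxy("console.example-cloud.com"))  # PROXY proxy.example-attacker.net:8080
print(find_proxy("example.org"))                # DIRECT
```

The exclusion list is the telling detail: routing local addresses or the command-and-control domain through the proxy would break things and risk exposing the operation.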

Because the extensions operate as a man-in-the-middle, they can seize data submitted through forms, such as credentials, payment card data, and other personal information. Socket claims the extensions can also steal session cookies from HTTP headers and parse API tokens from requests, potentially enabling account takeover even if passwords aren't directly harvested. 

Mitigation tips 

Chrome users are warned to download extensions only from trusted developers, to verify multiple user reviews and to be attentive to the permissions asked for when installing. In sensitive workload environments (cloud admin, developer portals, finance tools), minimizing extensions and removing those not in use can also dramatically reduce exposure to similar proxy-based credential heists.

Government Flags WhatsApp Account Bans as Indian Number Misuse Raises Cyber Fraud Concerns

 

The Indian government has expressed concern over WhatsApp banning an average of nearly 9.8 million Indian accounts every month until October, amid fears that Indian mobile numbers are being widely misused for scams and cybercrime. Officials familiar with the discussions said the government is engaging with the Meta-owned messaging platform to understand how such large-scale misuse can be prevented and how enforcement efforts can be strengthened. 

Authorities believe WhatsApp’s current approach of not sharing details of the mobile numbers linked to banned accounts is limiting the government’s ability to track spam, impersonation, and cyber fraud. While WhatsApp publishes monthly compliance reports disclosing the number of accounts it removes for policy violations, officials said the lack of information about the specific numbers involved reduces transparency and weakens enforcement efforts. 

India is WhatsApp’s largest market, and the platform identifies Indian accounts through the +91 country code. Government officials noted that in several cases, numbers banned on WhatsApp later reappear on other messaging platforms such as Telegram, where they continue to be used for fraudulent activities. The misuse of Indian phone numbers by scammers operating both within and outside the country remains a persistent issue, despite multiple measures taken to combat digital fraud. 

According to officials, over-the-top messaging platforms are frequently used for scams because once an account is registered using a mobile number, it can function without an active SIM card. This makes it extremely difficult for law enforcement agencies to trace perpetrators. Authorities estimate that nearly 95% of cases involving digital arrest scams and impersonation fraud currently originate on WhatsApp. 

Government representatives said identifying when a SIM card was issued and verifying the authenticity of its know-your-customer details are critical steps in tackling such crimes. Discussions are ongoing with WhatsApp and other OTT platforms to find mechanisms that balance user privacy with national security and fraud prevention. 

The government also issues direct requests to platforms to disable accounts linked to illegal activities. Data from the Department of Telecommunications shows that by November this year, around 2.9 million WhatsApp profiles and groups had been disabled following government directives. However, officials pointed out that while these removals are documented, there is little clarity around accounts banned independently by WhatsApp.  

Former Ministry of Electronics and IT official Rakesh Maheshwari said the purpose of monthly compliance reports was to improve platform accountability. He added that if emerging patterns raise security concerns, authorities are justified in seeking additional information.  

WhatsApp has maintained that due to end-to-end encryption, its enforcement actions rely on behavioural indicators rather than message content. The company has also stated that sharing detailed account data involves complex legal and cross-border challenges. However, government officials argue that limited disclosure, even at the level of mobile numbers, poses a security risk when large-scale fraud is involved.

Spotify Data Scraping Incident Raises Questions on Copyright, Security, and Digital Preservation

 



A large collection of data reportedly taken from Spotify has surfaced online, drawing attention to serious issues around copyright protection, digital security, and large-scale data misuse. The dataset, which is estimated to be close to 300 terabytes in size, is already being distributed through public torrent networks.

The claim comes from Anna’s Archive, a group previously known for archiving books and academic research. According to information shared by the group, it collected metadata for roughly 256 million tracks and audio files for about 86 million songs from Spotify. Anna’s Archive alleges that this archive represents nearly all listening activity on the platform, estimating coverage at around 99.6 percent.

Anna’s Archive has framed the project as a cultural preservation effort. The group argues that while mainstream music is often stored in multiple locations, lesser-known songs are vulnerable to disappearing if streaming platforms remove content, lose licensing agreements, or shut down services. From this perspective, Spotify was described as a practical starting point for documenting modern music history.

The archive is reportedly organised by popularity and shared through bulk torrent files. Anna’s Archive claims that the total size of the collection makes it one of the largest publicly accessible music metadata databases ever assembled.

Details released by the group suggest that highly streamed tracks were stored in their original 160 kbps format, while less popular songs were compressed into smaller files to reduce storage demands. Music released after July 2025 may not be included. At present, full access is limited to metadata, with audio files being released gradually, beginning with the most popular tracks.
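The claimed ~300 TB figure is roughly consistent with these numbers; a back-of-envelope check, assuming an average track length of about 3.5 minutes (an assumption not stated in the reports):

```python
# Back-of-envelope check of the reported ~300 TB archive size.
# Assumption not stated in the reports: an average track runs ~3.5 minutes.
tracks = 86_000_000       # audio files reportedly collected
bitrate_bps = 160_000     # the 160 kbps format cited for popular tracks
avg_seconds = 3.5 * 60

bytes_total = tracks * bitrate_bps / 8 * avg_seconds
terabytes = bytes_total / 1e12
print(round(terabytes))   # ~361 TB at full bitrate, broadly consistent with
                          # ~300 TB once less popular tracks are compressed
```

The gap between the naive estimate and the reported size matches the group's own description of shrinking less popular songs to save storage.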

Spotify has since issued an updated statement addressing the situation. The company confirmed it identified and disabled the user accounts involved in what it described as unlawful scraping activity. Spotify said it has introduced additional safeguards to prevent similar incidents and is actively monitoring for suspicious behaviour.

The company reiterated its long-standing position against piracy, stating that it works closely with industry partners to protect artists and copyright holders. In an earlier clarification, Spotify explained that the incident did not involve a direct breach of its internal systems. Instead, it said a third party collected public metadata and used illicit methods to bypass digital rights protections in order to access some audio files.

Spotify has not confirmed the scale of the data collection claimed by Anna’s Archive. While the group asserts that almost the entire platform was archived, Spotify has only acknowledged that a portion of its audio content may have been affected.

At this stage, it remains unclear how much of Spotify’s library was actually accessed or whether legal action will be taken to remove the data from torrent networks. Copyright experts note that redistributing licensed music without permission violates copyright laws in many jurisdictions, regardless of whether it is presented as preservation.

Whether the archive can be effectively taken down or contained remains uncertain.

Amazon Busts DPRK Hacker on Tiny Typing Delay

 

Amazon recently uncovered a North Korean IT worker infiltrating its corporate network by tracking a tiny 110ms delay in keystrokes, highlighting a growing threat in remote hiring and cybersecurity. The anomaly, revealed by Amazon’s Chief Security Officer Stephen Schmidt, pointed to a worker supposedly based in the U.S. but actually operating from thousands of miles away.

The infiltration occurred when a contractor hired by Amazon shipped a company laptop to an individual later found to be a North Korean operative. Commands sent from the laptop to Amazon’s Seattle headquarters typically take less than 100 milliseconds, but these commands took over 110 milliseconds, a subtle clue that the user was located far from the U.S. This delay suggested that the operator was likely in Asia, prompting further investigation.
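The idea behind this kind of detection can be illustrated with a minimal sketch. This is not Amazon's actual tooling; the baseline and tolerance values below are illustrative assumptions, and a real system would model latency per region and per network path.

```python
import statistics

BASELINE_MS = 100.0   # assumed typical round-trip time for a U.S.-based worker
TOLERANCE_MS = 5.0    # assumed allowance for ordinary network jitter

def is_latency_anomalous(round_trips_ms, baseline_ms=BASELINE_MS,
                         tolerance_ms=TOLERANCE_MS):
    """Flag a session whose median round-trip time exceeds the baseline.

    Using the median rather than individual samples avoids false alarms
    from one-off network spikes.
    """
    return statistics.median(round_trips_ms) > baseline_ms + tolerance_ms

# A worker claiming to be in the U.S. but routing traffic from Asia shows
# consistently elevated latency, as in the reported 110 ms case.
print(is_latency_anomalous([112, 110, 115, 111, 113]))  # True
print(is_latency_anomalous([92, 96, 88, 101, 95]))      # False
```

The key design choice is aggregating many samples: a single slow command proves nothing, but a median that sits persistently above the geographic baseline is hard to explain away.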

Since April 2024, Amazon’s security team has blocked more than 1,800 attempts by North Korean workers to infiltrate its workforce, with attempts rising by 27% quarter-over-quarter in 2025. The North Korean operatives often use proxies and forged identities to access remote IT jobs, funneling earnings into the DPRK’s weapons programs and circumventing international sanctions.

Security monitoring revealed that the compromised laptop was being remotely controlled from China, though it did not have access to sensitive data. Investigators cross-referenced the suspect’s resume with system activity and identified a pattern consistent with previous North Korean fraud attempts. Schmidt noted that these operatives often fabricate employment histories tied to obscure consultancies, reuse the same feeder schools and firms, and display telltale signs such as mangled English idioms.

The front in this case was an Arizona woman who was sentenced to multiple years in prison for her role in a $1.7 million IT fraud ring that helped North Korean workers gain access to U.S. corporate networks. Schmidt emphasized that Amazon did not directly hire any North Koreans but warned that shipping company laptops to contractor proxies can create significant risks.

This incident underscores the importance of thorough background checks and advanced endpoint security for remote workers. Latency analysis, behavioral monitoring, and traffic forensics are now essential tools for detecting nation-state threats in the remote work era. Cybersecurity professionals are urged to go beyond basic vetting, such as LinkedIn scans, and adopt robust anomaly detection to protect against sophisticated grifters.

As North Korean fraud tactics continue to evolve, companies must remain vigilant. Every lag, every odd behavior, and every unverified resume could be the first sign of a much larger threat hiding in plain sight.

High Severity Flaw In Open WebUI Can Leak User Conversations and Data


Researchers have found a high-severity security bug in Open WebUI. It may expose users to account takeover (ATO) and, in some cases, lead to full server compromise.

Commenting on the flaw, Cato researchers said, “When a platform of this size becomes vulnerable, the impact isn’t just theoretical. It affects production environments managing research data, internal codebases, and regulated information.”

The flaw, tracked as CVE-2025-64496, was discovered by Cato Networks researchers. It affects Open WebUI versions 0.6.34 and earlier when the Direct Connections feature is enabled, and carries a severity rating of 7.3 out of 10.

The vulnerability exists inside Direct Connections, which allows users to connect Open WebUI to external OpenAI-supported model servers. While built for supporting flexibility and self-hosted AI workflows, the feature can be exploited if a user is tricked into linking with a malicious server pretending to be a genuine AI endpoint. 

Fundamentally, the vulnerability stems from a misplaced trust relationship between untrusted model servers and the user's browser session. A malicious server can send a tailored server-sent events (SSE) message that triggers execution of JavaScript code in the browser, letting a threat actor steal authentication tokens stored in local storage. With these tokens, an attacker gains full access to the user's Open WebUI account, exposing chats, API keys, uploaded documents, and other sensitive data.
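The class of fix is worth illustrating: content arriving from an external model server must be treated as untrusted text, never as markup. The sketch below is hypothetical (it is not Open WebUI's patch, and the payload is a made-up example) but shows the principle of escaping a server-controlled event payload before it can reach the page.

```python
import html

def render_sse_event(raw_event_data: str) -> str:
    """Escape an untrusted server-sent event payload before display.

    Escaping turns markup into inert text, so an injected script or
    event handler can never execute in the user's browser session.
    """
    return html.escape(raw_event_data)

# A hypothetical malicious payload attempting to exfiltrate a stored token.
malicious = '<img src=x onerror="steal(localStorage.token)">'
safe = render_sse_event(malicious)
print(safe)  # rendered as plain text: &lt;img src=x onerror=...
```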

The consequences vary depending on user privileges:

  • Hackers can steal JSON web tokens and hijack sessions. 
  • Full account takeover, including access to chat logs and uploaded documents.
  • Exposure of sensitive data and credentials shared in conversations. 
  • If the user has enabled the workspace.tools permission, remote code execution (RCE) is possible. 

Open WebUI maintainers were informed of the issue in October 2025, and it was publicly disclosed in November 2025 after patch validation and CVE assignment. Open WebUI versions 0.6.35 and later block the compromised execute events, patching the user-facing threat.

Open WebUI’s security patch ships in v0.6.35 and “newer versions, which closes the user-facing Direct Connections vulnerability. However, organizations still need to strengthen authentication, sandbox extensibility and restrict access to specific resources,” according to Cato Networks researchers.





FIR in Bengaluru Targets Social Media Accounts Spreading Obscene URLs


 

The Bengaluru Central Cyber Crime unit has opened a formal investigation into allegations that explicit content was being distributed across mainstream social media platforms in a coordinated fashion, underscoring the evolving challenges of policing in the digital world.

To identify the users involved, authorities have registered a First Information Report (FIR) accusing them of publishing sexually explicit posts with captions that solicited direct messages, ostensibly as a route to obscene videos. 

The complaint alleges that these captions were structured to encourage direct messaging under the pretense of sharing exclusive content, without disclosing that they were funnelling viewers to external pornographic sites. 

The investigation was triggered by Harsha, a 27-year-old employee of a private firm living in Kumarapark West, who reported encountering an Instagram account disseminating explicit images of men and women while browsing the platform. 

Harsha lodged a formal grievance through his employer, who brought the matter to the police. Initial inquiries indicated that the posts were intentionally deceptive, using prompts like "link in bio" and "DM for video" that investigators believe were designed to mislead users, drive traffic to adult websites, and turn Instagram into a marketing funnel for pornographic sites. 

The case comes amid intensified regulatory scrutiny of user-generated content in India, where obscenity is legally defined as expression that has the potential to corrupt and morally harm individuals, or that appeals to their prurient interests. Indian legislation governing such violations covers both offline and digital conduct. 

Under the Indian Penal Code (IPC), Section 292 prohibits the sale, distribution, and public exhibition of obscene material, while Section 294 prohibits obscene acts or songs in public places. As online platforms have proliferated in recent years, law enforcement agencies have increasingly interpreted this framework to cover them as well. 

Section 67 of the Information Technology (IT) Act, 2000 extends this oversight to digital violations, prescribing up to three years' imprisonment and a fine for publishing or transmitting obscene electronic content, while Section 67A raises the punishment to five years for sexually explicit material.

Although these safeguards exist, platforms such as Instagram, Facebook, YouTube, and X continue to be confronted with content streams that evade automated moderation, raising particular concerns for younger users. Repeated Parliamentary assessments have pointed to the consequences of unchecked pornography, including heightened vulnerability among children, the risk of behavioral addiction, and increased exposure to exploitation and objectification. 

Bengaluru has seen a surge of complaints reflecting these fears, with investigators noticing a pattern of accounts sharing reels and posts carrying misleading prompts like “link in bio” or invitations to message them directly for explicit videos, tactics believed to funnel audiences to adult sites or private exchanges of obscene material. 

The FIR filed on Harsha's complaint referenced widely shared reel URLs and linked to multiple social media profiles allegedly assisting in the activity. The case comes amid a broader landscape of illicit digital markets trading in obscene imagery. 

A separate investigation conducted over the past four months has revealed that stolen CCTV footage, sourced from theatres, homes, hostels, hospitals, and student accommodation, has been covertly sold on Telegram. This demonstrates the growing scale of unregulated content economies operating outside the confines of public platforms and security controls. 

Investigators have traced the activity to 28 distinct URLs and identified 28 social media accounts they believe were involved in posting sexually explicit screenshots, short clips, and promotional captions to drive engagement. 

Law enforcement officials believe the accounts were designed to encourage users to leave comments or initiate direct message conversations to view full versions of videos, while embedding links to adult-focused sites in profile bios behind prompts such as "link in bio" or "DM for video". 

According to authorities, these links directed viewers to external pornographic websites, a concern amplified by the unrestricted demographic reach of social networks such as Instagram, Facebook, and YouTube. Because curiosity-driven clicks can lead children and young users into unmoderated web spaces containing obscene material, officials have raised particular concerns about incidental exposure among minors. 

A police assessment suggests the alleged motive was not only the dissemination of explicit material but also an attempt to artificially inflate account visibility, follower counts, and social media traction. 

An investigating officer described the trend as a "dangerous escalation of immoral digital marketing tactics," saying that the misuse of social media platforms to draw users to pornography poses a serious societal risk and should be punished strictly under the Information Technology Act, 2000. 

Based on the viral reel links and list of participants cited in the complaint, authorities have formally named 28 accounts in the case registered at the Central Cybercrime Police Station, enabling a structured investigation into the matter. 

Several sections of the Information Technology Act have been invoked in the FIR filed at the Central Cybercrime Police Station, including Section 67, which prescribes penalties for publishing or transmitting obscene material electronically.

The investigation is ongoing, and officials emphasize that social media's accessibility across age groups requires stronger vigilance, tighter content governance, and swift punitive measures when platforms are manipulated to facilitate explicit content syndication. 

The unfolding investigation makes clear that digital platforms, regulators, and users will need to work together to strengthen online safety frameworks against such abuse. 

Beyond stronger monitoring by law enforcement agencies, experts note that sustainable deterrence requires improved content governance, faster takedown protocols, and deeper integration of AI-driven moderation capable of adapting to evolving evasion techniques without compromising privacy or creative freedom. 

According to cyber safety advocates, user awareness plays a crucial role in disrupting such networks; people are encouraged to report suspicious handles, links, or captions that appear designed for engagement farming. 

To keep minors from accidental exposure, platforms may want to add friction around bio-embedded links and strengthen child safety filters. Public interest groups have also advocated structured collaborations between social media companies and cyber cells to map emerging content funnels as early as possible. 

Meanwhile, digital literacy forums warn that although clicking a link may seem harmless at first glance, it can lead users into dangerous web environments involving phishing, malware-laden websites, or illegal content markets. 

Ultimately, this case reinforces a bigger message: social media safety is not just a technological issue, but a societal responsibility that requires vigilance, accountability, and informed participation at all levels to ensure users are protected, no matter where they live.

Hackers Hijack WhatsApp Accounts Using ‘GhostPairing’ Scam Without Breaking Encryption

 

Cybersecurity experts have issued a warning after discovering a new method that allows hackers to take over WhatsApp accounts without compromising the app’s end-to-end encryption.

The attack, known as the GhostPairing scam, exploits WhatsApp’s legitimate device-linking feature. By manipulating users into unknowingly connecting their account to a device controlled by cybercriminals, attackers gain live access to private chats, images, videos, and voice messages. Once an account is compromised, hackers can impersonate the victim and message their contacts, enabling the scam to spread further.

The process begins when a target receives a message that appears to be sent by someone they trust. The message includes a link, often claiming to display a photo of the recipient. Clicking the link redirects the user to a fake Facebook login page that asks for their phone number.

Instead of displaying any image, the page triggers WhatsApp’s device-pairing process by showing a code and instructing the victim to enter it into the app. By doing so, the user unknowingly authorises an unfamiliar device to link with their account. This gives attackers full access without the need for passwords or additional verification.

The scam was identified by researchers at cybersecurity company Avast, who say it is particularly dangerous due to its ability to spread rapidly in a chain-like manner.

“This campaign highlights a growing shift in cybercrime: breaching people's trust is as important as breaching their security systems,” Luis Corrons, a Security Evangelist at Avast, told The Independent.

“Scammers are persuading people to approve access themselves by abusing familiar mechanisms like QR codes, pairing prompts, and ‘verify on your phone’ screens that feel routine.

“Scams like GhostPairing turn trust into a tool for abuse. This isn’t just a WhatsApp issue. It’s a warning sign for any platform that relies on fast, low-visibility device pairing.”

In a blog post explaining the scam, Avast cautioned that many victims may not even realise their accounts have been hijacked. WhatsApp users can review connected devices by opening Settings and tapping Linked Devices. Any unfamiliar device should be removed immediately.

“At Avast, we see this as a turning point in how we think about authentication and user intent,” Mr Corrons said.

“As attacks grow more manipulative, security must account not just for what users are doing intentionally, but also what they’re being tricked into doing. GhostPairing shows that when trust becomes automatic, it becomes exploitable."

How Gender Politics Are Reshaping Data Privacy and Personal Information




The contemporary legal and administrative actions in the United States are revamping how personal data is recorded, shared, and accessed by government systems. For transgender and gender diverse individuals, these changes carry heightened risks, as identity records and healthcare information are increasingly entangled with political and legal enforcement mechanisms.

One of the most visible shifts involves federal identity documentation. Updated rules now require U.S. passport applicants to list sex as assigned at birth, eliminating earlier flexibility in gender markers. Courts have allowed this policy to proceed despite legal challenges. Passport data does not function in isolation. It feeds into airline systems, border controls, employment verification processes, financial services, and law enforcement databases. When official identification does not reflect an individual’s lived identity, transgender and gender diverse people may face repeated scrutiny, increased risk of harassment, and complications during travel or routine identity checks. From a data governance perspective, embedding such inconsistencies also weakens the accuracy and reliability of federal record systems.

Healthcare data has become another major point of concern. The Department of Justice has expanded investigations into medical providers offering gender related care to minors by applying existing fraud and drug regulation laws. These investigations focus on insurance billing practices, particularly the use of diagnostic codes to secure coverage for treatments. As part of these efforts, subpoenas have been issued to hospitals and clinics across the country.

Importantly, these subpoenas have sought not only financial records but also deeply sensitive patient information, including names, birth dates, and medical intake forms. Although current health privacy laws permit disclosures for law enforcement purposes, privacy experts warn that this exception allows personal medical data to be accessed and retained far beyond its original purpose. Many healthcare providers report that these actions have created a chilling effect, prompting some institutions to restrict or suspend gender related care due to legal uncertainty.

Other federal agencies have taken steps that further intensify concern. The Federal Trade Commission, traditionally focused on consumer protection and data privacy, has hosted events scrutinizing gender affirming healthcare while giving limited attention to patient confidentiality. This shift has raised questions about how privacy enforcement priorities are being set.

As in person healthcare becomes harder to access, transgender and gender diverse individuals increasingly depend on digital resources. Research consistently shows that the vast majority of transgender adults rely on the internet for health information, and a large proportion use telehealth services for medical care. However, this dependence on digital systems also exposes vulnerabilities, including limited broadband access, high device costs, and gaps in digital literacy. These risks are compounded by the government’s routine purchase of personal data from commercial data brokers.

Privacy challenges extend into educational systems as well. Courts have declined to establish a national standard governing control over students’ gender related data, leaving unresolved questions about who can access, store, and disclose sensitive information held by schools.

Taken together, changes to identity documents, aggressive access to healthcare data, and unresolved data protections in education are creating an environment of increased surveillance for transgender and gender diverse individuals. While some state level actions have successfully limited overly broad data requests, experts argue that comprehensive federal privacy protections are urgently needed to safeguard sensitive personal data in an increasingly digital society.

Spotify Flags Unauthorised Access to Music Catalogue

 

Spotify reported that a third party had scraped parts of its music catalogue after a pirate activist group claimed it had released metadata and audio files linked to hundreds of millions of tracks. 

The streaming company said an investigation found that unauthorised users accessed public metadata and used illicit methods to bypass digital rights management controls to obtain some audio files. 

Spotify said it had disabled the accounts involved and introduced additional safeguards. The claims were made by a group calling itself Anna’s Archive, which runs an open source search engine known for indexing pirated books and academic texts. 

In a blog post, the group said it had backed up Spotify’s music catalogue and released metadata covering 256 million tracks and 86 million audio files. 

The group said the data spans music uploaded to Spotify between 2007 and 2025 and represents about 99.6 percent of listens on the platform. Spotify, which hosts more than 100 million tracks and has over 700 million users globally, said the material does not represent its full inventory. 

The company added that it has no indication that private user data was compromised, saying the only user related information involved was public playlists. The group said the files total just under 300 terabytes and would be distributed via peer to peer file sharing networks. 

It described the release as a preservation effort aimed at safeguarding cultural material. Spotify said it does not believe the audio files have been widely released so far and said it is actively monitoring the situation. 

The company said it is working with industry partners to protect artists and rights holders. Industry observers said the apparent scraping could raise concerns beyond piracy. 

Yoav Zimmerman, chief executive of intellectual property monitoring firm Third Chair, said the data could be attractive to artificial intelligence companies seeking to train music models. Others echoed those concerns, warning that training AI systems on copyrighted material without permission remains common despite legal risks. 

Campaigners have called on governments to require AI developers to disclose training data sources. Copyright disputes between artists and technology companies have intensified as generative AI tools expand. In the UK, artists have criticised proposals that could allow AI firms to use copyrighted material unless rights holders explicitly opt out. 

The government has said it will publish updated policy proposals on AI and copyright next year. Spotify said it remains committed to protecting creators and opposing piracy and that it has strengthened defences against similar attacks.

Eurostar’s AI Chatbot Exposed to Security Flaws, Experts Warn of Growing Cyber Risks

 

Eurostar’s newly launched AI-driven customer support chatbot has come under scrutiny after cybersecurity specialists identified several vulnerabilities that could have exposed the system to serious risks. 

Security researchers from Pen Test Partners found that the chatbot only validated the latest message in a conversation, leaving earlier messages open to manipulation. By altering these older messages, attackers could potentially insert malicious prompts designed to extract system details or, in certain scenarios, attempt to access sensitive information.
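The flaw described above amounts to validating only the last element of the message history. A safer design re-validates every message on each request, so an attacker cannot slip a malicious prompt into an earlier turn after it has passed a one-time check. The sketch below is illustrative only (the names and the simplistic marker list are assumptions, not Eurostar's code; real systems use far more robust input screening).

```python
# Hypothetical markers of malicious input, for illustration only.
FORBIDDEN_MARKERS = ("<script", "ignore previous instructions")

def message_is_safe(text: str) -> bool:
    """Screen a single chat message against known-bad markers."""
    lowered = text.lower()
    return not any(marker in lowered for marker in FORBIDDEN_MARKERS)

def validate_conversation(messages: list) -> bool:
    """Validate the entire history, not just the newest message.

    Checking every message closes the gap where an attacker edits an
    earlier turn that was only checked once, at submission time.
    """
    return all(message_is_safe(m) for m in messages)

history = ["Where is my train?", "<script>steal()</script>", "Thanks!"]
print(message_is_safe(history[-1]))    # True  (latest-only check passes)
print(validate_conversation(history))  # False (full-history check fails)
```

The contrast in the last two lines is the vulnerability in miniature: a check that only looks at the newest message approves a conversation whose earlier turns have been tampered with.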

At the time the flaws were uncovered, the risks were limited because Eurostar had not integrated its customer data systems with the chatbot. As a result, there was no immediate threat of customer data being leaked.

The researchers also highlighted additional security gaps, including weak verification of conversation and message IDs, as well as an HTML injection vulnerability that could allow JavaScript to run directly within the chat interface. 

Pen Test Partners stated they were likely the first to identify these issues, clarifying: “No attempt was made to access other users’ conversations or personal data”. They cautioned, however, that “the same design weaknesses could become far more serious as chatbot functionality expands”.

Eurostar reiterated that customer information remained secure, telling City AM: “The chatbot did not have access to other systems and more importantly no sensitive customer data was at risk. All data is protected by a customer login.”

The incident highlights a broader challenge facing organizations worldwide. As companies rapidly adopt AI-powered tools, expanding cloud-based systems can unintentionally increase attack surfaces, making robust security measures more critical than ever.


New US Proposal Allows Users to Sue AI Companies Over Unauthorised Data Use


US AI developers would face data privacy obligations enforceable in federal court under a wide-ranging legislative proposal recently disclosed by US Senator Marsha Blackburn, R-Tenn. 

About the proposal

The proposal would create a federal right for users to sue companies that misuse their personal data for AI model training without proper consent, and it allows statutory and punitive damages, attorney fees, and injunctions. 

Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release. 

Why the need for AI regulations 

The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.  

To ensure a single, minimally burdensome national standard rather than fifty inconsistent state ones, the directive required the administration to collaborate with Congress. 

The president instructed Michael Kratsios, his science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, to jointly propose federal AI legislation that would supersede any state laws conflicting with administration policy. 

Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.

Key highlights of proposal:

  • Mandate that regulations defining "minimum reasonable" AI protections be created by the Federal Trade Commission. 
  • Give the U.S. attorney general, state attorneys general, and private parties the authority to sue AI system creators for damages resulting from "unreasonably dangerous or defective product claims."
  • Mandate that sizable, state-of-the-art AI developers put procedures in place to control and reduce "catastrophic" risks associated with their systems and provide reports to the Department of Homeland Security on a regular basis. 
  • Hold platforms accountable for hosting an unauthorized digital replica of a person if they have actual knowledge that the replica was not authorized by the person portrayed.
  • Require quarterly reporting to the Department of Labor of AI-related job effects, such as job displacement and layoffs.

The proposal would preempt state laws regulating the management of catastrophic AI risks, and would largely preempt state digital-replica laws in order to create a national standard for AI. 

The proposal would not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI. The bill would take effect 180 days after enactment. 

Romanian Water Authority Hit by BitLocker Ransomware, 1,000 Systems Disrupted

 

Romanian Waters, the country's national water management authority, was targeted by a significant ransomware attack over the weekend, affecting approximately 1,000 computer systems across its headquarters and 10 of its 11 regional offices. The breach disrupted servers running geographic information systems, databases, email, web services, Windows workstations, and domain name servers, but crucially, the operational technology (OT) systems controlling the actual water infrastructure were not impacted.

According to the National Cyber Security Directorate (DNSC), the attackers leveraged the built-in Windows BitLocker security feature to encrypt files on compromised systems and left a ransom note demanding contact within seven days. Despite the widespread disruption to IT infrastructure, the DNSC confirmed that the operation of hydrotechnical assets—such as dams and water treatment plants—remains unaffected, as these are managed through dispatch centers using voice communications and local personnel.

Investigators from multiple Romanian security agencies, including the Romanian Intelligence Service's National Cyberint Center, are actively working to identify the attack vector and contain the incident's fallout. Authorities have not yet attributed the attack to any specific ransomware group or state-backed actor. 

The DNSC also noted that the national cybersecurity system for critical IT infrastructure did not previously protect the water authority's systems, but efforts are underway to integrate them into broader protective measures. The incident follows recent warnings from international agencies, including the FBI, NSA, and CISA, about increased targeting of critical infrastructure by pro-Russia hacktivist groups such as Z-Pentest, Sector16, NoName, and CARR. 

This attack marks another major ransomware event in Romania, following previous breaches at Electrica Group and over 100 hospitals due to similar threats in recent years. Romanian authorities continue to stress that water supply and flood protection activities remain fully operational, and no disruption to public services has occurred as a result of the cyberattack.

University of Phoenix Data Breach Exposes Records of Nearly 3.5 Million Individuals


The University of Phoenix has confirmed a major cybersecurity incident that exposed the financial and personal information of nearly 3.5 million current and former students, employees, faculty members, and suppliers. The breach is believed to be linked to the Clop ransomware group, a cybercriminal organization known for large-scale data theft and extortion. The incident adds to a growing number of significant cyberattacks reported in 2025. 

Clop is known for exploiting weaknesses in widely used enterprise software rather than encrypting victims' systems: the group steals sensitive data and threatens to publish it unless victims pay a ransom. In this case, attackers took advantage of a previously unknown vulnerability in Oracle Corporation's E-Business Suite software, which allowed them to access internal systems.

The breach was discovered on November 21, after the University of Phoenix appeared on Clop's dark web leak site. Further investigation revealed that unauthorized access may have begun as early as August 2025. The attackers used the Oracle E-Business Suite flaw to move through university systems and reach databases containing highly sensitive financial and personal records.

The vulnerability used in the attack became publicly known in November, after reports showed that Clop-linked actors had been exploiting it since at least September. During that period, organizations began receiving extortion emails claiming that financial and operational data had been stolen from Oracle EBS environments, a pattern that closely mirrors the University of Phoenix breach.

The stolen data includes names, contact details, dates of birth, Social Security numbers, and bank account and routing numbers. While the university has not formally named Clop as the attacker, cybersecurity experts believe the group is responsible due to its public claims and known use of Oracle EBS vulnerabilities. 

Paul Bischoff, a consumer privacy advocate at Comparitech, said the incident reflects a broader trend in which Clop has aggressively targeted flaws in enterprise software throughout the year. In response, the University of Phoenix has begun notifying affected individuals and is offering 12 months of free identity protection services, including credit monitoring, dark web surveillance, and up to $1 million in fraud reimbursement. 

The breach ranks among the largest cyber incidents of 2025. Rebecca Moody, head of data research at Comparitech, said it highlights the continued risks organizations face from third-party software vulnerabilities. Security experts say the incident underscores the need for timely patching, proactive monitoring, and stronger defenses, especially in educational institutions that handle large volumes of sensitive data.