
All the recent news you need to know

Open-Source AI Models Pose Growing Security Risks, Researchers Warn

Hackers and other criminals can easily hijack computers running open-source large language models and use them for illicit activity, bypassing the safeguards built into major artificial intelligence platforms, researchers said on Thursday. The findings are based on a 293-day study conducted jointly by SentinelOne and Censys, and shared exclusively with Reuters. 

The research examined thousands of publicly accessible deployments of open-source LLMs and highlighted a broad range of potentially abusive use cases. According to the researchers, compromised systems could be directed to generate spam, phishing content, or disinformation while evading the security controls enforced by large AI providers. 

The deployments were also linked to activity involving hacking, hate speech, harassment, violent or graphic content, personal data theft, scams, fraud, and in some cases, child sexual abuse material. While thousands of open-source LLM variants are available, a significant share of internet-accessible deployments were based on Meta’s Llama models, Google DeepMind’s Gemma, and other widely used systems, the researchers said. 

They identified hundreds of instances in which safety guardrails had been deliberately removed. “AI industry conversations about security controls are ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal,” said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. He compared the problem to an iceberg that remains largely unaccounted for across the industry and the open-source community. 

The study focused on models deployed using Ollama, a tool that allows users to run their own versions of large language models. Researchers were able to observe system prompts in about a quarter of the deployments analyzed and found that 7.5 percent of those prompts could potentially enable harmful behavior. 
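Such deployments are discoverable because Ollama serves an unauthenticated HTTP API, by default on port 11434. The sketch below is not the researchers' actual methodology; the host and model names are illustrative. It only shows how trivially an exposed instance can be queried:

```python
import json
from urllib import request

def parse_ollama_tags(payload: dict) -> list:
    """Extract model names from an Ollama /api/tags response body."""
    return [m.get("name", "") for m in payload.get("models", [])]

def list_ollama_models(host: str, port: int = 11434, timeout: float = 5.0) -> list:
    """Query a (possibly internet-exposed) Ollama server for its models.

    Ollama's API requires no authentication by default, which is why
    publicly reachable instances can be enumerated at scale.
    """
    url = f"http://{host}:{port}/api/tags"
    with request.urlopen(url, timeout=timeout) as resp:
        return parse_ollama_tags(json.load(resp))
```

Internet-wide measurement of the kind described in the study amounts to issuing this one request across address space and recording which hosts answer.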

Geographically, around 30 percent of the observed hosts were located in China, with about 20 percent based in the United States, the researchers said. Rachel Adams, chief executive of the Global Centre on AI Governance, said responsibility for downstream misuse becomes shared once open models are released. “Labs are not responsible for every downstream misuse, but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance,” Adams said.

A Meta spokesperson declined to comment on developer responsibility for downstream abuse but pointed to the company’s Llama Protection tools and Responsible Use Guide. Microsoft AI Red Team Lead Ram Shankar Siva Kumar said Microsoft believes open-source models play an important role but acknowledged the risks. 

“We are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards,” he said. 

Microsoft conducts pre-release evaluations and monitors for emerging misuse patterns, Kumar added, noting that “responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams.” 

Ollama, Google, and Anthropic did not comment.

Iran-Linked Hackers Target Human Rights Groups in RedKitten Malware Campaign

A Farsi-speaking threat actor believed to be aligned with Iranian state interests is suspected of carrying out a new cyber campaign targeting non-governmental organizations and individuals documenting recent human rights abuses in Iran, according to a report by HarfangLab. 

The activity, tracked in January 2026 and codenamed RedKitten, appears to coincide with nationwide unrest that erupted in Iran in late 2025 over soaring inflation, rising food prices, and currency depreciation. The protests were followed by a severe security crackdown, mass casualties, and an internet blackout. 

“The malware relies on GitHub and Google Drive for configuration and modular payload retrieval, and uses Telegram for command-and-control,” HarfangLab said. 

Researchers said the campaign is notable for its apparent use of large language models to help develop and coordinate its tooling. The attack chain begins with a 7-Zip archive bearing a Farsi filename, which contains malicious Microsoft Excel files embedded with macros. 

The XLSM spreadsheets purport to list details of protesters who died in Tehran between Dec. 22, 2025, and Jan. 20, 2026. Instead, the files deploy a malicious VBA macro that acts as a dropper for a C# implant known as AppVStreamingUX_Multi_User.dll using a technique called AppDomainManager injection. HarfangLab said the VBA code itself shows signs of being generated by an LLM, citing its structure, variable naming patterns, and comments such as “PART 5: Report the result and schedule if successful.”  

Investigators believe the campaign exploits the emotional distress of people searching for information about missing or deceased protesters. Analysis of the spreadsheet data found inconsistencies such as mismatched ages and birthdates, suggesting the content was fabricated. The implanted backdoor, dubbed SloppyMIO, uses GitHub as a dead drop resolver to obtain Google Drive links hosting images that conceal configuration data using steganography. This data includes Telegram bot tokens, chat IDs, and links to additional modules.
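HarfangLab has not published SloppyMIO's exact steganographic scheme. The generic least-significant-bit (LSB) sketch below only illustrates the idea of configuration data such as a bot token riding inside an innocuous image's pixel bytes; every detail of the encoding here is an assumption:

```python
def embed_lsb(pixels: bytearray, secret: bytes) -> bytearray:
    """Hide secret's bits in the least-significant bit of each pixel byte.

    Illustrative only: the real malware's encoding is not public.
    """
    out = bytearray(pixels)
    bits = [(byte >> (7 - i)) & 1 for byte in secret for i in range(8)]
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # clear LSB, then set it
    return out

def extract_lsb(pixels: bytes, length: int) -> bytes:
    """Recover `length` bytes hidden in the pixels' least-significant bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i : i + 8]:
            byte = (byte << 1) | b  # reassemble MSB-first
        data.append(byte)
    return bytes(data)
```

Because only the lowest bit of each byte changes, the carrier image looks visually unchanged, which is what makes image hosts like Google Drive attractive as configuration channels.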

The malware supports multiple modules that allow attackers to run commands, collect and exfiltrate files, establish persistence through scheduled tasks, and launch processes on infected systems. “The malware can fetch and cache multiple modules from remote storage, run arbitrary commands, collect and exfiltrate files and deploy further malware with persistence via scheduled tasks,” HarfangLab said. “SloppyMIO beacons status messages, polls for commands and sends exfiltrated files over to a specified operator leveraging the Telegram Bot API for command-and-control.” 
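Telegram's Bot API makes this style of command-and-control cheap to build: every operation is a plain HTTPS request to api.telegram.org. A minimal, benign sketch of how such request URLs are formed (the token and parameters are made up):

```python
from urllib.parse import urlencode

def bot_api_url(token: str, method: str, **params) -> str:
    """Build a Telegram Bot API URL, e.g. for getUpdates (polling for
    operator commands) or sendDocument (uploading a file to a chat)."""
    base = f"https://api.telegram.org/bot{token}/{method}"
    return f"{base}?{urlencode(params)}" if params else base
```

Because this traffic blends into ordinary HTTPS requests to a popular consumer service, it is far harder to block at the network level than traffic to attacker-owned servers.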

Attribution to Iranian-linked actors is based on the use of Farsi-language artifacts, protest-themed lures, and tactical overlaps with earlier operations, including campaigns associated with Tortoiseshell, which previously used malicious Excel documents and AppDomainManager injection techniques. The use of GitHub as part of the command infrastructure mirrors earlier Iranian-linked operations. In 2022, Secureworks, now part of Sophos, documented a campaign by a sub-group of Nemesis Kitten that also leveraged GitHub to distribute malware. 

HarfangLab noted that reliance on widely used platforms such as GitHub, Google Drive, and Telegram complicates traditional infrastructure-based attribution but can also expose operational metadata that poses risks to the attackers themselves. The findings follow recent disclosures by U.K.-based Iranian activist and cyber investigator Nariman Gharib, who detailed a separate phishing campaign using a fake WhatsApp Web login page to hijack victims’ accounts. 

“The page polls the attacker’s server every second,” Gharib said. “This lets the attacker serve a live QR code from their own WhatsApp Web session directly to the victim.” That phishing infrastructure was also designed to request access to a victim’s camera, microphone, and location, effectively turning the page into a surveillance tool. The identity and motive of the operators behind that campaign remain unclear. 

Separately, TechCrunch reporter Zack Whittaker reported that related activity also targeted Gmail credentials using fake login pages, impacting around 50 victims across the Kurdish community, academia, government, and business sectors. The disclosures come amid growing scrutiny of Iranian-linked cyber groups following a major data leak affecting Charming Kitten, which exposed details about its operations and a surveillance platform known as Kashef. Gharib has also highlighted leaked records tied to Ravin Academy, a cybersecurity school linked to Iran’s Ministry of Intelligence and Security, which was sanctioned by the United States in 2022.

Open VSX Supply Chain Breach Delivers GlassWorm Malware Through Trusted Developer Extensions

Cybersecurity experts have uncovered a supply chain compromise targeting the Open VSX Registry, where unknown attackers abused a legitimate developer’s account to distribute malicious updates to unsuspecting users.

According to findings from Socket, the attackers infiltrated the publishing environment of a trusted extension author and used that access to release tainted versions of widely used tools.

"On January 30, 2026, four established Open VSX extensions published by the oorzc author had malicious versions published to Open VSX that embed the GlassWorm malware loader," Socket security researcher Kirill Boychenko said in a Saturday report.

The compromised extensions had long been considered safe and were positioned as genuine developer utilities, with some having been available for more than two years.

"These extensions had previously been presented as legitimate developer utilities (some first published more than two years ago) and collectively accumulated over 22,000 Open VSX downloads prior to the malicious releases."

Socket noted that the incident stemmed from unauthorized access to the developer’s publishing credentials. The Open VSX security team believes the breach may have involved a leaked access token or similar misuse of credentials. All affected versions have since been taken down from the registry.

Impacted extensions include:
  • FTP/SFTP/SSH Sync Tool (oorzc.ssh-tools — version 0.5.1)
  • I18n Tools (oorzc.i18n-tools-plus — version 1.6.8)
  • vscode mindmap (oorzc.mind-map — version 1.0.61)
  • scss to css (oorzc.scss-to-css-compile — version 1.3.4)
The malicious updates were engineered to deploy GlassWorm, a loader malware linked to an ongoing campaign. The loader decrypts and executes payloads at runtime and relies on EtherHiding, a technique that conceals command-and-control infrastructure in public blockchain data, to retrieve C2 endpoints. Its ultimate objective is to siphon Apple macOS credentials and cryptocurrency wallet information.

Before activating, the malware profiles the infected system and checks locale settings, avoiding execution on systems associated with Russian regions, a behavior often seen in malware tied to Russian-speaking threat groups.
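GlassWorm's exact profiling logic has not been published; locale kill switches of this kind usually reduce to a simple prefix comparison, roughly like the following sketch (the exclusion list is illustrative):

```python
# Hypothetical locale kill switch, as commonly seen in commodity malware;
# GlassWorm's actual check is not public.
AVOID_LANGS = ("ru",)  # illustrative: skip Russian-region locales

def should_run(locale_name) -> bool:
    """Return False when the host locale matches an excluded region."""
    if not locale_name:
        return True  # no locale info: proceed
    lang = locale_name.replace("-", "_").split("_")[0].lower()
    return lang not in AVOID_LANGS
```

Defenders sometimes exploit exactly this behavior, since setting an excluded locale can act as a crude vaccine against such families.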

The stolen data spans a broad range of sensitive assets, including browser credentials, cryptocurrency wallets, iCloud Keychain data, Safari cookies, Apple Notes, user documents, VPN configurations, and developer secrets such as AWS and SSH credentials.

The exposure of developer-related data is particularly dangerous, as it can lead to deeper enterprise breaches, cloud account takeovers, and lateral movement across networks.

"The payload includes routines to locate and extract authentication material used in common workflows, including inspecting npm configuration for _authToken and referencing GitHub authentication artifacts, which can provide access to private repositories, CI secrets, and release automation," Boychenko said.
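For npm specifically, the token sits in plain text in `.npmrc` files, so harvesting it reduces to a one-line pattern match. A sketch of the idea (the payload's real routine is not public, and the token value below is invented):

```python
import re

# npm stores registry credentials as lines like
#   //registry.npmjs.org/:_authToken=npm_xxxx
TOKEN_RE = re.compile(r"^\s*(?:.*:)?_authToken\s*=\s*(\S+)", re.MULTILINE)

def find_npm_tokens(npmrc_text: str) -> list:
    """Return _authToken values found in .npmrc-style text."""
    return TOKEN_RE.findall(npmrc_text)
```

A stolen token of this kind grants whatever the account can do, including publishing packages, which is how one compromise cascades into further supply chain attacks.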

What sets this incident apart is the delivery method. Instead of relying on fake or lookalike extensions, the attackers leveraged a real developer’s account to push the malware—an evolution from earlier GlassWorm campaigns that depended on typosquatting and brand impersonation.

"The threat actor blends into normal developer workflows, hides execution behind encrypted, runtime-decrypted loaders, and uses Solana memos as a dynamic dead drop to rotate staging infrastructure without republishing extensions," Socket said. "These design choices reduce the value of static indicators and shift defender advantage toward behavioral detection and rapid response."
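A memo-based dead drop is simple to implement: the loader reads recent memos posted to a hardcoded blockchain address and treats the newest one as its staging endpoint. The sketch below assumes (our assumption, not Socket's finding) that the memo is just base64-encoded text:

```python
import base64

def latest_c2(memos: list) -> str:
    """Pick the newest memo (highest slot number) and decode it as a C2 URL.

    `memos` is a list of (slot, memo_text) pairs as read off-chain.
    Because the loader re-reads the chain rather than its own code,
    operators can rotate infrastructure without republishing extensions.
    The base64 encoding here is illustrative.
    """
    _, memo = max(memos, key=lambda pair: pair[0])
    return base64.b64decode(memo).decode("utf-8")
```

This is why Socket argues static indicators lose value: the extension's bytes never change when the C2 endpoint does, so detection has to key on behavior instead.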

ShinyHunters Claims Match Group Data Breach Exposing 10 Million Records

A new data theft claim has surfaced from ShinyHunters, which says it stole more than 10 million user records from Match Group, the U.S. company behind several major swipe-based dating platforms. The group has positioned the incident as another major addition to its breach history, alleging that personal data and internal materials were taken without authorization.

According to ShinyHunters, the stolen data relates to users of Hinge, Match.com, and OkCupid, along with hundreds of internal documents. The Register reported seeing a listing on the group’s dark web leak site stating that “over 10 million lines” of data were involved. The exposure was also linked to AppsFlyer, a marketing analytics provider, which was referenced as the likely source connected to the incident. 

Match Group confirmed it is investigating what it described as a recently identified security incident, and said some user data may have been accessed. The company stated it acted quickly to terminate the unauthorized access and is continuing its investigation with external cybersecurity experts. Match Group also said there was no indication that login credentials, financial information, or private communications were accessed, and added that it believes only a limited amount of user data was affected. 

It said notifications are being issued to impacted individuals where appropriate. However, Match Group did not disclose what categories of data were accessed, how many users were impacted, or whether any ransom demand was made or paid, leaving key details about the scope and motivation unresolved. Cybernews, which reviewed samples associated with the listing, reported that the dataset appears to include customer personal data, some employee-related information, and internal corporate documents. 

The analysis also suggested the presence of Hinge subscription details, including user IDs, transaction IDs, payment amounts, and records linked to blocked installations, along with IP addresses and location-related data.

In a separate post published the same week, ShinyHunters also claimed it had stolen data from Bumble. The group uploaded what it described as 30 GB of compressed files allegedly sourced from Google Drive and Slack. The claims come shortly after researchers reported that ShinyHunters targeted around 100 organizations by abusing stolen Okta single sign-on credentials. The alleged victim list included well-known SaaS and technology firms such as Atlassian, AppLovin, Canva, Epic Games, Genesys, HubSpot, Iron Mountain, RingCentral, and ZoomInfo, among others.

Bumble has issued a statement saying that one contractor’s account had been compromised in a phishing incident. The company said the account had limited privileges but was used for brief unauthorized access to a small portion of Bumble’s network. Bumble stated its security team detected and removed the access quickly, confirmed the incident was contained, engaged external cybersecurity experts, and notified law enforcement. Bumble also emphasized that there was no access to its member database, member accounts, the Bumble app, or member direct messages or profiles.

Apple's New Feature Will Help Users Restrict Location Data

Apple has introduced a new privacy feature that lets users limit the precision of location data shared with cellular networks on select iPhone and iPad models.

About the feature

The “Limit Precise Location” feature becomes available after updating to iOS 26.3 or later. It restricts the information that mobile carriers can use to determine a device's location through cell tower connections. Once enabled, cellular networks can only determine an approximate location, such as a neighbourhood, rather than a precise street address.

According to Apple, “The precise location setting doesn't impact the precision of the location data that is shared with emergency responders during an emergency call.” “This setting affects only the location data available to cellular networks. It doesn't impact the location data that you share with apps through Location Services. For example, it has no impact on sharing your location with friends and family with Find My.”

Users can turn on the feature by opening “Settings,” selecting “Cellular,” then “Cellular Data Options,” and tapping the “Limit Precise Location” toggle. The device may require a restart to complete activation.

The feature works only on the iPhone Air and iPad Pro (M5) Wi-Fi + Cellular models running iOS 26.3 or later.

Where will it work?

The availability of this feature depends on carrier support. The compatible mobile networks are:
  • EE and BT in the UK
  • Boost Mobile in the UK
  • Telekom in Germany
  • AIS and True in Thailand

Apple hasn't shared the reason for introducing this feature yet.

Network compatibility

Apple's new privacy feature, currently supported by only a small number of networks, is a significant step toward limiting the data carriers can collect about their customers' movements and habits. Cellular networks can easily track device locations through tower connections as part of normal network operations.

“Cellular networks can determine your location based on which cell towers your device connects to. The limit precise location setting enhances your location privacy by reducing the precision of location data available to cellular networks,” Apple explained.

WhatsApp Launches High-Security Mode for Ultimate User Protection

WhatsApp has launched a new high-security mode called "Strict Account Settings," providing users with enhanced defenses against sophisticated cyber threats. This feature, introduced on January 27, 2026, allows one-click activation and builds on the platform's existing end-to-end encryption. It targets high-risk individuals like journalists and public figures facing advanced attacks, marking WhatsApp as the third major tech firm to offer such protections after Apple's Lockdown Mode and Google's Advanced Protection.

The mode activates multiple safeguards simultaneously through a simple toggle in WhatsApp settings under Privacy > Advanced. It blocks media files and attachments from unknown senders, preventing potential malware delivery via images or documents. Link previews—thumbnails that appear for shared URLs—are disabled to eliminate risks from embedded tracking or exploits, while calls from unknown numbers are automatically silenced, appearing only in missed calls.

These measures address common attack vectors identified in cyber surveillance campaigns. For instance, malicious attachments and link previews have been exploited in spyware incidents targeting activists and reporters. By muting unknown calls, the feature reduces social engineering attempts like vishing scams, where attackers impersonate contacts to extract information. WhatsApp's blog emphasizes that while everyday users benefit from standard encryption, this mode offers "extreme safeguards" for rare, high-sophistication threats.

Like competitors' offerings, Strict Account Settings trades convenience for security, limiting app functionality in exchange for greater protection. Apple's Lockdown Mode, available since 2022, restricts attachments and browser features, while Google's Android version blocks risky app downloads. Cybersecurity experts have welcomed WhatsApp's step, calling it a "very welcome development" for civil society defenders. The rollout is global on iOS and Android, with full availability expected in coming weeks.

As cyber threats evolve with AI-driven attacks and state-sponsored hacking, features like this empower users to customize defenses. High-risk professionals can now layer protections without switching apps, fostering safer digital communication. However, Meta advises reviewing settings post-activation, as it may block legitimate interactions from new contacts. This move aligns with rising demands for privacy amid global data scandals.
