Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Cyber Security. Show all posts

European Governments Turn to Matrix for Secure Sovereign Messaging Amid US Big Tech Concerns

 

A growing number of European governments are turning to Matrix, an open-source messaging architecture, as they seek greater technological sovereignty and independence from US Big Tech companies. Matrix aims to create an open communication standard that allows users to message each other regardless of the platform they use—similar to how email works across different providers. The decentralized protocol supports secure messaging, voice, and video communications while ensuring data control remains within sovereign boundaries. 

Matrix, co-founded by Matthew Hodgson in 2014 as a not-for-profit open-source initiative, has seen wide-scale adoption across Europe. The French government and the German armed forces now have hundreds of thousands of employees using Matrix-based platforms like Tchap and BwMessenger. Swiss Post has also built its own encrypted messaging system for public use, while similar deployments are underway across Sweden, the Netherlands, and the European Commission. NATO has even adopted Matrix to test secure communication alternatives under its NICE2 project. 

Hodgson, who also serves as CEO of Element—a company providing Matrix-based encrypted services to governments and organizations such as France and NATO—explained that interest in Matrix has intensified following global geopolitical developments. He said European governments now view open-source software as a strategic necessity, especially after the US imposed sanctions on the International Criminal Court (ICC) in early 2025. 

The sanctions, which impacted US tech firms supporting the ICC, prompted several European institutions to reconsider their reliance on American cloud and communication services. “We have seen first-hand that US Big Tech companies are not reliable partners,” Hodgson said. “For any country to be operationally dependent on another is a crazy risk.” He added that incidents such as the “Signalgate” scandal—where a US official accidentally shared classified information on a Signal chat—have further fueled the shift toward secure, government-controlled messaging infrastructure. 

Despite this, Europe’s stance on encryption remains complex. While advocating for sovereign encrypted messaging platforms, some governments are simultaneously supporting proposals like Chat Control, which would require platforms to scan messages before encryption. Hodgson criticized such efforts, warning they could weaken global communication security and force companies like Element to withdraw from regions that mandate surveillance. Matrix’s decentralized design offers resilience and security advantages by eliminating a single point of failure. 

Unlike centralized apps such as Signal or WhatsApp, Matrix operates as a distributed network, reducing the risk of large-scale breaches. Moreover, its interoperability means that various Matrix-based apps can communicate seamlessly—enabling, for example, secure exchanges between French and German government networks. Although early Matrix apps were considered less user-friendly, Hodgson said newer versions now rival mainstream encrypted platforms. Funding challenges have slowed development, as governments using Matrix often channel resources toward system integrators rather than the project itself. 

To address this, Matrix is now sustained by a membership model and potential grant funding. Hodgson’s long-term vision is to establish a fully peer-to-peer global communication network that operates without servers and cannot be compromised or monitored. Supported by the Dutch government, Matrix’s ongoing research into such peer-to-peer technology aims to simplify deployment further while enhancing security. 

As Europe continues to invest in secure digital infrastructure, Matrix’s open standard represents a significant step toward technological independence and privacy preservation. 

By embracing decentralized communication, European governments are asserting control over their data, reducing foreign dependence, and reshaping the future of secure messaging in an increasingly uncertain geopolitical landscape.

UK Digital ID Faces Security Crisis Ahead of Mandatory Rollout

 

The UK’s digital ID system, known as One Login, triggered major controversy in 2025 due to serious security vulnerabilities and privacy concerns, leading critics to liken it to the infamous Horizon scandal. 

One Login is a government-backed identity verification platform designed for access to public services and private sector uses such as employment verification and banking. Despite government assurances around its security and user benefits, public confidence plummeted amid allegations of cybersecurity failures and rushed implementation planned for November 18, 2025.

Critics, including MPs and cybersecurity experts, revealed that the system failed critical red-team penetration tests, with hackers gaining privileged access during simulated cyberattacks. Further concerns arose over development practices, with portions of the platform built by contractors in Romania on unsecured workstations without adequate security clearance. The government missed security deadlines, with full compliance expected only by March 2026—months after the mandatory rollout began.

This “rollout-at-all-costs” approach amidst unresolved security flaws has created a significant trust deficit, risking citizens’ personal data, which includes sensitive information like biometrics and identification documents. One Login collects comprehensive data, such as name, birth date, biometrics, and a selfie video for identity verification. This data is shared across government services and third parties, raising fears of surveillance, identity theft, and misuse.

The controversy draws a parallel to the Horizon IT scandal, where faulty software led to wrongful prosecutions of hundreds of subpostmasters. Opponents warn that flawed digital ID systems could cause similar large-scale harms, including wrongful exclusions and damaged reputations, undermining public trust in government IT projects.

Public opposition has grown, with petitions and polls showing more people opposing digital ID than supporting it. Civil liberties groups caution against intrusive government tracking and call for stronger safeguards, transparency, and privacy protections. The Prime Minister defends the program as a tool to simplify life and reduce identity fraud, but critics label it expensive, intrusive, and potentially dangerous.

In conclusion, the UK’s digital ID initiative stands at a critical crossroads, facing a crisis of confidence and comparisons to past government technology scandals. Robust security, oversight, and public trust are imperative to avoid a repeat of such failures and ensure the system serves citizens without compromising their privacy or rights.

TRAI Approves Caller Name Display Feature to Curb Spam and Fraud Calls

 

The Telecom Regulatory Authority of India (TRAI) has officially approved a long-awaited proposal from the Department of Telecommunications (DoT) to introduce a feature that will display the caller’s name by default on the receiver’s phone screen. Known as the Calling Name Presentation (CNAP) feature, this move is aimed at improving transparency in phone communications, curbing the growing menace of spam calls, and preventing fraudulent phone-based scams across the country. 

Until now, smartphone users in India have relied heavily on third-party applications such as Truecaller and Bharat Caller ID for identifying incoming calls. However, these apps often depend on user-generated databases and unverified information, which may not always be accurate. TRAI’s newly approved system will rely entirely on verified details gathered during the SIM registration process, ensuring that the name displayed is authentic and directly linked to the caller’s government-verified identity. 

According to the telecom regulator, the CNAP feature will be automatically activated for all subscribers across India, though users will retain the option to opt out by contacting their telecom service provider. TRAI explained that the feature will function as a supplementary service integrated with basic telecom offerings rather than as a standalone service. Every telecom operator will be required to maintain a Calling Name (CNAM) database, which will map subscribers’ verified names to their registered mobile numbers. 

When a call is placed, the receiving network will search this CNAM database through the Local Number Portability Database (LNPD) and retrieve the verified caller’s name in real-time. This name will then appear on the recipient’s screen, allowing users to make informed decisions about whether to answer the call. The mechanism aims to replicate the caller ID functionality offered by third-party apps, but with government-mandated accuracy and accountability. 

Before final approval, the DoT conducted pilot tests of the CNAP system across select cities using 4G and 5G networks. The trials revealed several implementation challenges, including software compatibility issues and the need for network system upgrades. As a result, the initial testing was primarily focused on packet-switched networks, which are more commonly used for mobile data transmission than circuit-switched voice networks.  

Industry analysts believe the introduction of CNAP could significantly enhance consumer trust and reshape how users interact with phone calls. By reducing reliance on unregulated third-party applications, the feature could also help improve data privacy and limit exposure to malicious data harvesting. Additionally, verified caller identification is expected to reduce incidents of spam calls, phishing attempts, and impersonation scams that have increasingly plagued Indian users in recent years.  

While TRAI has not announced an official rollout date, telecom operators have reportedly begun upgrading their systems and databases to accommodate the CNAP infrastructure. The rollout is expected to be gradual, starting with major telecom circles before expanding nationwide in the coming months. Once implemented, CNAP could become a major step forward in digital trust and consumer protection within India’s rapidly growing telecommunications ecosystem. 

By linking phone communication with verified identities, TRAI’s caller name display feature represents a significant shift toward a safer and more transparent mobile experience. It underscores the regulator’s ongoing efforts to safeguard users against fraudulent activities while promoting accountability within India’s telecom sector.

Nearly 50% of IoT Device Connections Pose Security Threats, Study Finds

 




A new security analysis has revealed that nearly half of all network communications between Internet of Things (IoT) devices and traditional IT systems come from devices that pose serious cybersecurity risks.

The report, published by cybersecurity company Palo Alto Networks, analyzed data from over 27 million connected devices across various organizations. The findings show that 48.2 percent of these IoT-to-IT connections came from devices classified as high risk, while an additional 4 percent were labeled critical risk.

These figures underline a growing concern that many organizations are struggling to secure the rapidly expanding number of IoT devices on their networks. Experts noted that a large portion of these devices operate with outdated software, weak default settings, or insecure communication protocols, making them easy targets for cybercriminals.


Why It’s a Growing Threat

IoT devices, ranging from smart security cameras and sensors to industrial control systems are often connected to the same network as computers and servers used for daily business operations. This creates a problem: once a vulnerable IoT device is compromised, attackers can move deeper into the network, access sensitive data, and disrupt normal operations.

The study emphasized that the main cause behind such widespread exposure is poor network segmentation. Many organizations still run flat networks, where IoT devices and IT systems share the same environment without proper separation. This allows a hacker who infiltrates one device to move easily between systems and cause greater harm.


How Organizations Can Reduce Risk

Security professionals recommend several key actions for both small businesses and large enterprises to strengthen their defenses:

1. Separate Networks:

Keep IoT devices isolated from core IT infrastructure through proper network segmentation. This prevents threats in one area from spreading to another.

2. Adopt Zero Trust Principles:

Follow a security model that does not automatically trust any device or user. Each access request should be verified, and only the minimum level of access should be allowed.

3. Improve Device Visibility:

Maintain an accurate inventory of all devices connected to the network, including personal or unmanaged ones. This helps identify and secure weak points before they can be exploited.

4. Keep Systems Updated:

Regularly patch and update device firmware and software. Unpatched systems often contain known vulnerabilities that attackers can easily exploit.

5. Use Strong Endpoint Protection:

Deploy Endpoint Detection and Response (EDR) or Extended Detection and Response (XDR) tools across managed IT systems, and use monitoring solutions for IoT devices that cannot run these tools directly.


As organizations rely more on connected devices to improve efficiency, the attack surface grows wider. Without proper segmentation, monitoring, and consistent updates, one weak device can become an entry point for cyberattacks that threaten entire operations.

The report reinforces an important lesson: proactive network management is the foundation of cybersecurity. Ensuring visibility, limiting trust, and continuously updating systems can significantly reduce exposure to emerging IoT-based threats.




Chinese Hackers Attack Prominent U.S Organizations


Chinese cyber-espionage groups attacked U.S organizations with links to international agencies. This has now become a problem for the U.S, as state-actors from China keep attacking.  Attackers were trying to build a steady presence inside the target network.

Series of attacks against the U.S organizations 

Earlier this year, the breach was against a famous U.S non-profit working in advocacy, that demonstrated advanced techniques and shared tools among Chinese cyber criminal gangs like APT41, Space Pirates, and Kelp.

They struck again in April with various malicious prompts checking both internal network breach and internet connectivity, particularly targeting a system at 192.0.0.88. Various tactics and protocols were used, showing both determination and technical adaptability to get particular internal resources.

Attack tactics 

Following the connectivity tests, the hackers used tools like netstat for network surveillance and made an automatic task via the Windows command-line tools.

This task ran a genuine MSBuild.exe app that processed an outbound.xml file to deploy code into csc.exe and connected it to a C2 server. 

These steps hint towards automation (through scheduled tasks) and persistence via system-level privileges increasing the complexity of the compromise and potential damage.

Espionage methods 

The techniques and toolkit show traces of various Chinese espionage groups. The hackers weaponized genuine software elements. This is called DLL sideloading by abusing vetysafe.exe (a VipreAV component signed by Sunbelt Software, Inc.) to load a malicious payload called sbamres.dll.

This tactic was earlier found in campaigns lkmkedytl Earth Longzhi and Space Pirates, the former also known as APT41 subgroup.

Coincidentally, the same tactic was found in cases connected to Kelp, showing the intrusive tool-sharing tactics within Chinese APTs.

Google Probes Weeks-Long Security Breach Linked to Contractor Access

 




Google has launched a detailed investigation into a weeks-long security breach after discovering that a contractor with legitimate system privileges had been quietly collecting internal screenshots and confidential files tied to the Play Store ecosystem. The company uncovered the activity only after it had continued for several weeks, giving the individual enough time to gather sensitive technical data before being detected.

According to verified cybersecurity reports, the contractor managed to access information that explained the internal functioning of the Play Store, Google’s global marketplace serving billions of Android users. The files reportedly included documentation describing the structure of Play Store infrastructure, the technical guardrails that screen malicious apps, and the compliance systems designed to meet international data protection laws. The exposure of such material presents serious risks, as it could help malicious actors identify weaknesses in Google’s defense systems or replicate its internal processes to deceive automated security checks.

Upon discovery of the breach, Google initiated a forensic review to determine how much information was accessed and whether it was shared externally. The company has also reported the matter to law enforcement and begun a complete reassessment of its third-party access procedures. Internal sources indicate that Google is now tightening security for all contractor accounts by expanding multi-factor authentication requirements, deploying AI-based systems to detect suspicious activities such as repeated screenshot captures, and enforcing stricter segregation of roles and privileges. Additional measures include enhanced background checks for third-party employees who handle sensitive systems, as part of a larger overhaul of Google’s contractor risk management framework.

Experts note that the incident arrives during a period of heightened regulatory attention on Google’s data protection and antitrust practices. The breach not only exposes potential security weaknesses but also raises broader concerns about insider threats, one of the most persistent and challenging issues in cybersecurity. Even companies that invest heavily in digital defenses remain vulnerable when authorized users intentionally misuse their access for personal gain or external collaboration.

The incident has also revived discussion about earlier insider threat cases at Google. In one of the most significant examples, a former software engineer was charged with stealing confidential files related to Google’s artificial intelligence systems between 2022 and 2023. Investigators revealed that he had transferred hundreds of internal documents to personal cloud accounts and even worked with external companies while still employed at Google. That case, which resulted in multiple charges of trade secret theft and economic espionage, underlined how intellectual property theft by insiders can evolve into major national security concerns.

For Google, the latest breach serves as another reminder that internal misuse, whether by employees or contractors remains a critical weak point. As the investigation continues, the company is expected to strengthen oversight across its global operations. Cybersecurity analysts emphasize that organizations managing large user platforms must combine strong technical barriers with vigilant monitoring of human behavior to prevent insider-led compromises before they escalate into large-scale risks.



EU Accuses Meta of Breaching Digital Rules, Raises Questions on Global Tech Compliance

 




The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.

In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.

According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.

The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.

Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.

The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.

TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.

Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. Former President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.

For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.


Tata Motors Fixes Security Flaws That Exposed Sensitive Customer and Dealer Data

 

Indian automotive giant Tata Motors has addressed a series of major security vulnerabilities that exposed confidential internal data, including customer details, dealer information, and company reports. The flaws were discovered in the company’s E-Dukaan portal, an online platform used for purchasing spare parts for Tata commercial vehicles. 

According to security researcher Eaton Zveare, the exposed data included private customer information, confidential documents, and access credentials to Tata Motors’ cloud systems hosted on Amazon Web Services (AWS). Headquartered in Mumbai, Tata Motors is a key global player in the automobile industry, manufacturing passenger, commercial, and defense vehicles across 125 countries. 

Zveare revealed to TechCrunch that the E-Dukaan website’s source code contained AWS private keys that granted access to internal databases and cloud storage. These vulnerabilities exposed hundreds of thousands of invoices with sensitive customer data, including names, mailing addresses, and Permanent Account Numbers (PANs). Zveare said he avoided downloading large amounts of data “to prevent triggering alarms or causing additional costs for Tata Motors.” 

The researcher also uncovered MySQL database backups, Apache Parquet files containing private communications, and administrative credentials that allowed access to over 70 terabytes of data from Tata Motors’ FleetEdge fleet-tracking software. Further investigation revealed backdoor admin access to a Tableau analytics account that stored data on more than 8,000 users, including internal financial and performance reports, dealer scorecards, and dashboard metrics. 

Zveare added that the exposed credentials provided full administrative control, allowing anyone with access to modify or download the company’s internal data. Additionally, the vulnerabilities included API keys connected to Tata Motors’ fleet management system, Azuga, which operates the company’s test drive website. Zveare responsibly reported the flaws to Tata Motors through India’s national cybersecurity agency, CERT-In, in August 2023. 

The company acknowledged the findings in October 2023 and stated that it was addressing the AWS-related security loopholes. However, Tata Motors did not specify when all issues were fully resolved. In response to TechCrunch’s inquiry, Tata Motors confirmed that all reported vulnerabilities were fixed in 2023. 

However, the company declined to say whether it notified customers whose personal data was exposed. “We can confirm that the reported flaws and vulnerabilities were thoroughly reviewed following their identification in 2023 and were promptly and fully addressed,” said Tata Motors communications head, Sudeep Bhalla. “Our infrastructure is regularly audited by leading cybersecurity firms, and we maintain comprehensive access logs to monitor unauthorized activity. We also actively collaborate with industry experts and security researchers to strengthen our security posture.” 

The incident reveals the persistent risks of misconfigured cloud systems and exposed credentials in large enterprises. While Tata Motors acted swiftly after the report, cybersecurity experts emphasize that regular audits, strict access controls, and robust encryption are essential to prevent future breaches. 

As more automotive companies integrate digital platforms and connected systems into their operations, securing sensitive customer and dealer data remains a top priority.

Study Reveals Health Apps Secretly Sharing User Data Before Consent

 

Think your favorite health apps are safe? Think again. A new study from researchers at the University of Bremen in Germany has revealed that many popular fitness and wellness apps are quietly sharing your personal data before you even click “I agree.”

The research team analyzed 20 widely used health apps—covering everything from fitness tracking and sleep monitoring to diet and menstrual cycle logging—and found that every single one transmitted user data to servers in other countries, particularly the United States.

Even more concerning, these apps were found using “dark patterns”—deceptive design tactics that nudge users into giving permissions they might otherwise refuse.

While most apps appear to comply with GDPR, the European data privacy law, the study found they often violate its intent. Some apps began sending out information like advertising IDs as soon as users opened them, long before consent was granted.

Adding to the confusion, 10 out of 16 apps designed for German users offered privacy policies only in English, making it almost impossible for users to fully understand what they were agreeing to. The policies were also vague, referring to “partners” and “service providers” without naming who actually receives the data.

As the lead researcher stated, “Trust is crucial, especially when it comes to sensitive health data.”

The findings raise serious concerns about how little control users have over their private health and fitness information, which could be shared across the globe — from the U.S. to China — without their awareness.

The Bremen researchers aren’t stopping there; they’re now developing tools to automatically detect these hidden data leaks and manipulative designs. Their goal is to empower regulators and ethical developers to hold these apps accountable.

Ultimately, this study serves as a wake-up call for users and policymakers alike: the apps meant to support your health might actually be putting your privacy at risk. It’s time for clearer, stricter regulations to ensure transparency and protect users’ data from exploitation.

AWS Apologizes for Massive Outage That Disrupted Major Platforms Worldwide

 

Amazon Web Services (AWS) has issued an apology to customers following a widespread outage on October 20 that brought down more than a thousand websites and services globally. The disruption affected major platforms including Snapchat, Reddit, Lloyds Bank, Venmo, and several gaming and payment applications, underscoring the heavy dependence of the modern internet on a few dominant cloud providers. The outage originated in AWS’s North Virginia region (US-EAST-1), which powers a significant portion of global online infrastructure. 

According to Amazon’s official statement, the outage stemmed from internal errors that prevented systems from properly linking domain names to the IP addresses required to locate them. This technical fault caused a cascade of connectivity failures across multiple services. “We apologize for the impact this event caused our customers,” AWS said. “We know how critical our services are to our customers, their applications, and their businesses. We are committed to learning from this and improving our availability.”

While some platforms like Fortnite and Roblox recovered within a few hours, others faced extended downtime. Lloyds Bank customers, for instance, reported continued access issues well into the afternoon. Similarly, services like Reddit and Venmo were affected for longer durations. The outage even extended to connected devices such as Eight Sleep’s smart mattresses, which rely on internet access to adjust temperature and elevation. 

The company stated it would work to make its systems more resilient after some users reported overheating or malfunctioning devices during the outage. AWS’s detailed incident summary attributed the issue to a “latent race condition” in the systems managing the Domain Name System (DNS) records in the affected region. Essentially, one of the automated processes responsible for maintaining synchronization between critical database systems malfunctioned, triggering a chain reaction that disrupted multiple dependent services. Because many of AWS’s internal processes are automated, the problem propagated without human intervention until it was detected and mitigated. 

Dr. Junade Ali, a software engineer and fellow at the Institute for Engineering and Technology, explained that “faulty automation” was central to the failure. He noted that the internal “address book” system in the region broke down, preventing key infrastructure components from locating each other. “This incident demonstrates how businesses relying on a single cloud provider remain vulnerable to regional failures,” Dr. Ali added, emphasizing the importance of diversifying cloud service providers to improve resilience. 

The event once again highlights the concentration of digital infrastructure within a few dominant providers, primarily AWS and Microsoft Azure. Experts warn that such dependency increases systemic risk, as disruptions in one region can have global ripple effects. Amazon has stated that it will take measures to strengthen fault detection, introduce greater redundancy, and enhance the reliability of automated processes in its network. 

As the world grows increasingly reliant on cloud computing, the AWS outage serves as a critical reminder of the fragility of internet infrastructure and the urgent need for redundancy and diversification.

New Vidar Variant Uses API Hooking to Steal Data Before Encryption

 

A recent investigation by Aryaka Threat Research Labs has revealed a new version of the Vidar infostealer that demonstrates how cybercriminals are refining existing malware to make it more discreet and effective. Vidar, which has circulated for years through malware-as-a-service platforms, is known for its modular structure that allows operators to customize attacks easily. 

The latest strain introduces a significant upgrade: the ability to intercept sensitive information directly through API hooking. 

This method lets the malware capture credentials, authentication tokens, and encryption keys from Windows systems at the precise moment they are accessed by legitimate applications, before they are encrypted or secured. 

By hooking into cryptographic functions such as CryptProtectMemory, Vidar injects its own code into running processes to momentarily divert execution and extract unprotected data before resuming normal operations. 

This process enables it to gather plaintext credentials silently from memory, avoiding noisy file activity that would typically trigger detection. Once harvested, the stolen data which includes browser passwords, cookies, payment information, cryptocurrency wallets, and two-factor tokens is compressed and sent through encrypted network channels that mimic legitimate internet traffic. 

The malware also maintains persistence by using scheduled tasks, PowerShell loaders, and randomized installation paths, while employing in-memory execution to reduce forensic traces. 

These refinements make it harder for traditional antivirus or behavioral tools to identify its presence. The evolution of Vidar highlights the need for defenders to rethink detection strategies that depend solely on file signatures or activity volume. 

Security teams are encouraged to implement Zero Trust principles, monitor API calls for evidence of hooking, and apply runtime integrity checks to detect tampering within active processes. Using endpoint detection and response tools that analyze process behavior and adopting memory-safe programming practices can further strengthen protection. 

Experts warn that Vidar’s development may continue toward more advanced capabilities, including kernel-level hooking, fileless operations, and AI-based targeting that prioritizes valuable data depending on the victim’s environment. 

The findings reflect a broader shift in cybercrime tactics, where minor technical improvements have a major impact on stealth and efficiency. Defending against such threats requires a multi-layered security approach that focuses on process integrity, vigilant monitoring, and consistent patch management.

Passkeys vs Passwords: Why Passkeys Are the Future of Secure Logins

 

Passwords have long served as the keys to our digital world—granting access to everything from social media to banking apps. Yet, like physical keys, they can easily be lost, copied, or stolen. As cyber threats evolve, new alternatives such as passkeys are stepping in to offer stronger, simpler, and safer ways to log in.

Why passwords remain risky

A password is essentially a secret code you use to prove your identity online. But weak password habits are widespread. A CyberNews report revealed that 94% of 19 billion leaked passwords were reused, and many followed predictable patterns—think “123456,” names, cities, or popular brands.

When breaches occur, these passwords spread rapidly, leading to account takeovers, phishing scams, and identity theft. In fact, hackers often attempt to exploit leaked credentials within an hour of a breach.

Phishing attacks—where users are tricked into entering their passwords on fake websites—continue to rise, with more than 3 billion phishing emails sent daily worldwide.

Experts recommend creating unique, complex passwords or even memorable passphrases like “CrocApplePurseBike.” Associating it with a story can help you recall it easily.

Enter passkeys: a new way to log in

Emerging around four years ago, passkeys use public-key cryptography, a process that creates two linked keys—one public and one private.

  • The public key is shared with the website.

  • The private key stays safely stored on your device.

When you log in, your device signs a unique challenge using the private key, confirming your identity without sending any password. To authorize this action, you’ll usually verify with your fingerprint or face ID, ensuring that only you can access your accounts.

Even if the public key is stolen, it’s useless without the private one—making passkeys inherently phishing-proof and more secure. Each passkey is also unique to the website, so it can’t be reused elsewhere.

Why passkeys are better

Passkeys eliminate the need to remember passwords or type them manually. Since they’re tied to your device and require biometric approval, they’re both more convenient and more secure.

However, the technology isn’t yet universal. Compatibility issues between platforms like Apple and Microsoft have slowed adoption, though these gaps are closing as newer devices and systems improve integration.

The road ahead

From a cybersecurity perspective, passkeys are clearly the superior option—they’re stronger, resistant to phishing, and easy to use. But widespread adoption will take time. Many websites still rely on traditional passwords, and transitioning millions of users will be a long process.

Until then, maintaining good password hygiene remains essential: use unique passwords for every account, enable multi-factor authentication, and change any reused credentials immediately.

How to Make Zoom Meetings More Secure and Protect Your Privacy

 

Zoom calls remain an essential part of remote work and digital communication, but despite their convenience, they are not entirely private. Cybercriminals can exploit vulnerabilities to steal sensitive information, intercept conversations, or access meeting data. However, several practical measures can strengthen your security and make Zoom safer to use for both personal and professional meetings. 

One of the most effective security steps is enabling meeting passwords. Password protection ensures that only authorized participants can join, preventing “Zoom-bombing” and uninvited guests from entering. Passwords are enabled by default for most users, but it’s important to confirm this setting before hosting. Similarly, adding a waiting room provides another layer of control, requiring participants to be manually admitted by the host. 

This step helps prevent intruders even if meeting details are leaked. End-to-end encryption (E2EE) is another crucial feature for privacy. While Zoom’s standard encryption protects data in transit, enabling E2EE ensures that only participants can access meeting content — not even Zoom itself. Each device stores encryption keys locally, making intercepted data unreadable. 

However, when E2EE is activated, some features like recording, AI companions, and live streaming are disabled. To use E2EE, all participants must join via the Zoom app rather than the web client. Users should also generate random meeting IDs instead of using personal ones. A personal meeting ID remains constant, allowing anyone with previous access to rejoin later. Random IDs create a unique space for each session, reducing the risk of unauthorized reentry. Two-Factor Authentication (2FA) offers further protection by requiring a verification code during login, preventing unauthorized account access even if passwords are compromised. 

Meeting links should always be shared privately via direct messages or emails, never publicly. Sharing on social platforms increases the risk of unwanted guests and phishing attempts. During meetings, hosts should manage participants closely — monitoring for suspicious activity, restricting screen and file sharing, and remaining alert for fake prompts requesting personal information. Maintaining strict host control helps minimize the risk of data theft or identity fraud. Zoom’s data collection settings can also be adjusted for privacy. 

While the platform gathers some anonymized diagnostic data, users can disable “Optional Diagnostic Data” under My Account → Data & Privacy to limit information sharing. Keeping the Zoom application up to date is equally important, as regular updates patch security vulnerabilities and improve overall system protection. Finally, operational security (OPSEC) practices outside Zoom are essential. Users should participate in meetings from private spaces, use headphones to limit audio leakage, and employ physical camera covers for additional protection. 

When connecting through public Wi-Fi, using a Virtual Private Network (VPN) adds encryption to internet traffic, shielding sensitive data from potential interception. While Zoom provides several built-in safeguards, the responsibility of maintaining secure communication lies equally with users. 

By enabling passwords, encryption, and 2FA — and combining these with good digital hygiene — individuals and organizations can significantly reduce privacy risks and create a safer virtual meeting environment.

Online Identity Is Evolving: From Data Storage to Proof-Based Verification with zkTLS

 

The next phase of online identity is shifting from data storage to proof-based verification. Today, the internet already contains much of what verification and compliance teams require — from academic credentials and payment confirmations to loyalty program details. The real challenge lies in confirming these facts securely, without exposing or hoarding personal data. This is where the Transport Layer Security (TLS) protocol can evolve with a zero-knowledge proof (ZKP) approach, ensuring verification happens without revealing sensitive information.

For founders, every onboarding form, fraud check, or compliance workflow demands a delicate balance — verifying authenticity while avoiding becoming a data honeypot. Although the internet already holds verifiable information like proof of education or transactions, what’s missing is a safe way to confirm it. Imagine if verification could happen without storing any data at all.

The need for such innovation is growing. IBM’s estimates suggest the average global cost of a data breach in 2025 will reach $4.4 million, while automated cyber threats and bots now account for nearly 37% of internet activity. Meanwhile, privacy expectations are tightening. A 2025 investigation revealed that more than 30 data brokers were hiding opt-out options, prompting federal and state investigations. In response, California introduced DROP, a unified deletion system under the Delete Act, emphasizing the move toward proof-based identity over data retention.

Whenever a user visits a secure website, a “TLS handshake” occurs between the browser and the site. Zero-Knowledge Transport Layer Security (zkTLS) builds on this by producing a cryptographic proof during the session — confirming that a specific interaction took place without exposing the underlying data or page details. This enables verification without storage, transforming security from document uploads to cryptographic attestations.

Unlike password sharing or screen scraping, zkTLS relies on session-derived evidence, ensuring that verification stems directly from real interactions. It provides yes/no proofs tied to genuine TLS sessions, perfectly aligning with the philosophy of proof over storage.

This approach dramatically reduces data exposure risks, accelerates verification, and improves user experience. By only requesting minimal proofs, businesses can eliminate data honeypots, simplify audits, and create faster onboarding experiences. It respects privacy while building trust — verifying identity without retaining personal details.

Humanity Protocol exemplifies this shift by using zkTLS to convert Web2 credentials into reusable, privacy-preserving proofs. Users visiting trusted sites can generate verified claims — such as proof of employment or travel status — linked to their Human ID, which apps can confirm without viewing private pages or unrelated data.

Companies can start applying zkTLS today. For example, instead of requesting full bank statements, they can simply verify whether a user’s balance exceeds a threshold (“balance above X”) to streamline onboarding and reduce storage risks. Similarly, loyalty programs can confirm member status without exposing data, creating smoother sign-in experiences.

The technology also supports sybil-resistant verification, leveraging human reputation over personally identifiable information (PII). Combined with anomaly detection, this mitigates the risks of automated abuse and fake accounts.

Employment verification can also be completed in minutes through zkTLS-based proofs from official portals, removing the back-and-forth of document collection and focusing attention on people rather than paperwork. Each verified claim substitutes a stored document, minimizing data exposure and speeding decision-making.

Businesses should begin by identifying a specific claim that builds trust or improves conversion rates. From there, they can define clear success metrics — such as faster approvals or reduced manual reviews.

Excess data should be treated as a liability, and consent-driven proof generation should happen within user browsers, clearly showing what’s being verified and what stays private. Alternate verification paths, like manual reviews, should remain available for inclusivity.

As companies scale, verified attributes can be reused across products and partners, creating interoperable ecosystems of trust. With less data stored, the blast radius of breaches shrinks, aligning with emerging privacy laws like California’s DROP system.

The evolution of online identity won’t be measured by the volume of databases, but by the strength of proofs. zkTLS transforms conventional trust signals into portable, privacy-first credentials controlled by users and verifiable by systems. The key is to start small — implement one proof, measure its impact, and expand.

The Growing Role of Cybersecurity in Protecting Nations

 




It is becoming increasingly complex and volatile for nations to cope with the threat landscape facing them in an age when the boundaries between the digital and physical worlds are rapidly dissolving. Cyberattacks have evolved from isolated incidents of data theft to powerful instruments capable of undermining economies, destabilising governments and endangering the lives of civilians. 

It is no secret that the accelerating development of technologies, particularly generative artificial intelligence, has added an additional dimension to the problem at hand. A technology that was once hailed as a revolution in innovation and defence, GenAI has now turned into a double-edged sword.

It has armed malicious actors with the capability of automating large-scale attacks, crafting convincing phishing scams, generating convincing deepfakes, and developing adaptive malware that is capable of sneaking past conventional defences, thereby giving them an edge over conventional adversaries. 

Defenders are facing a growing set of mounting pressures as adversaries become increasingly sophisticated. There is an estimated global cybersecurity talent gap of between 2.8 and 4.8 million unfilled positions, putting nearly 70% of organisations at risk. Meanwhile, regulatory requirements, fragile supply chains, and an ever-increasing digital attack surface have compounded vulnerabilities across a broad range of industries. 

Geopolitics has added to the tensions against this backdrop, exacerbated by the ever-increasing threat of cybercrime. There is no longer much difference between espionage, sabotage, and warfare when it comes to state-sponsored cyber operations, which have transformed cyberspace into a crucial battleground for national power. 

It has been evident in recent weeks that digital offensives can now lead to the destruction of real-world infrastructure—undermining public trust, disrupting critical systems, and redefining the very concept of national security—as they have been used to attack Ukraine's infrastructure as well as campaigns aimed at crippling essential services around the globe. 

In India, there is an ambitious goal to develop a $1 trillion digital economy by the year 2025, and cybersecurity has quietly emerged as a key component of that transformation. In order to support the nation's digital expansion—which covers financial, commerce, healthcare, and governance—a fragile yet vital foundation of trust is being built on a foundation of cybersecurity, which has now become the scaffolding for this expansion. 

It has become more important than ever for enterprises to be capable of anticipating, detecting, and neutralising threats, as artificial intelligence, cloud computing, and data-driven systems are increasingly integrated into their operations. This ability is critical not only to their resilience but also to their long-term competitiveness. In addition to the increasing use of digital technologies, the complexity of safeguarding interconnected ecosystems has increased as well. 

During October's Cybersecurity Awareness Month 2025, a renewed focus has been placed on strengthening artificial intelligence-powered defences as well as encouraging collective security measures. As a senior director at Acuity Knowledge Partners, Sameer Goyal stated that India's financial and digital sectors are increasingly operating within an always-on, API-driven environment defined by instant payments, open platforms, and expanding integrations with third-party services—factors that inevitably widen the attack surface for hackers. He argued that security was not an optional provision; it was fundamental. 

Taking note of the rise in sophisticated threats such as account takeovers, API abuse, ransomware, and deepfake fraud, he indicated that security is not optional. According to him, the primary challenge of a company is to protect its customers' trust while still providing frictionless digital experiences. According to Goyal, forward-thinking organisations are focusing on three key strategic pillars to ensure their digital experiences are frictionless: adopting zero-trust architectures, leveraging artificial intelligence for threat detection, and incorporating secure-by-design principles into development processes. 

Despite this caution, he warned that technology alone cannot guarantee security. For true cyber readiness, employees should be well-informed, well-practised and well-rehearsed in incident response playbooks, as well as participate in proactive red-team and purple-team simulations. “Trust is our currency in today’s digital age,” he said. “By combining zero-trust frameworks with artificial intelligence-driven analytics, cybersecurity has become much more than compliance — it is becoming a crucial element of competitiveness.” 

Among the things that make cybersecurity an exceptionally intricate domain of diplomacy are its deep entanglement with nearly every dimension of international relations-economics, military, and human rights, to name a few. As a result of the interconnectedness of our society, data movement across borders has become as crucial to global commerce as capital and goods moving across borders. It is no longer just tariffs and market access that are at the centre of trade disputes. 

It is also about the issues of data localisation, encryption standards, and technology transfer policies that matter the most. While the General Data Protection Regulation (GDPR) sets an international standard for data protection, it has also become a focal point in a number of ongoing debates regarding digital sovereignty and cross-border data governance that have been ongoing for some time. 

 As far as defence and security are concerned, geopolitical stakes are of equal importance to those of air, land, and sea. Since NATO officially recognised cyberspace in 2016—as a distinct operational domain comparable with the other three domains—allies have expanded their collective security frameworks to include cyber defence. To ensure a rapid collective response to cyber incidents, nations share threat intelligence, conduct simulation exercises, and harmonise their policies in coordination with one another. 

The alliance still faces a dilemma which is very sensitive and unresolved to the point where determining the threshold at which a cyberattack would qualify as an act of aggression enough to trigger Article 5, which is the cornerstone of NATO's commitment to mutual defence. Cybersecurity has become inextricable from concerns about human rights and democracy as well, in addition to commerce and defence.

In recent years, authoritarian states have increasingly abused digital tools for spying on dissidents, manipulating public discourse, and undermining democratic institutions abroad. As a consequence of these actions, the global community has been forced to examine issues of accountability and ethical technology use. The diplomatic community struggles with the establishment of international norms for responsible behaviour in cyberspace while it must navigate profound disagreements over internet governance, censorship, and the delicate balancing act between national security and individuals' privacy through the process of developing ethical norms.

There is no doubt that the tensions around cybersecurity have emerged over time from merely being a technical issue to becoming one of the most consequential arenas in modern diplomacy-shaping not only international stability, but also the very principles that underpin global cooperation. Global cybersecurity leaders are facing an age of uncertainty in the face of a raging tide of digital threats to economies and societies around the world. 

Almost six in ten executives, according to the Global Cybersecurity Outlook 2025, feel that cybersecurity risks have intensified over the past year, with almost 60 per cent of them admitting that geopolitical tensions are directly influencing their defence strategies in the near future. According to the survey, one in three CEOs is most concerned about cyber espionage, data theft, and intellectual property loss, and another 45 per cent are concerned about disruption to their business operations. 

Even though cybersecurity has increasingly become a central component of corporate and national strategy, these findings underscore a broader truth: cybersecurity is no longer just for IT departments anymore. Experts point out that the threat landscape has become increasingly complex over the past few years, but generative artificial intelligence offers both a challenge and an opportunity as well. 

Several threat actors have learned to weaponise artificial intelligence so they can craft realistic deepfakes, automate phishing campaigns, and develop adaptive malware, but defenders are also utilising the same technology to enhance their resilience. The advent of AI-enabled security systems has revolutionised the way organisations anticipate and react to threats by analysing anomalies in real time, automating response cycles, and simulating complex attack vectors. 

It is important to note, however, that progress remains uneven, with large corporations and developed economies being able to deploy cutting-edge artificial intelligence defences, but smaller businesses and public institutions continue to suffer from outdated infrastructure and a lack of talented workers, which makes global cybersecurity preparedness a growing concern. However, several nations are taking proactive steps toward closing this gap.

An example is the United Arab Emirates, which embraces cybersecurity not just as a technology imperative but also as a societal responsibility. A National Cybersecurity Strategy for the UAE was unveiled in early 2025. It is based on five pillars — governance, protection, innovation, capacity building, and partnerships. It is structured around five core pillars. It was also a result of these efforts that the UAE Cybersecurity Council, in partnership with the Tawazun Council and Lockheed Martin, established a Cybersecurity Centre of Excellence, which would develop domestic expertise and align national capabilities with global standards.

As a result of its innovative Public-Private-People model, which combines school curricula with nationwide drill and strengthens coordination between government and private sector, the country can further embed cybersecurity awareness across society. As a result of this approach, a more general realisation is taking shape globally: cybersecurity should be enshrined in the fabric of national governance, not as a secondary item but as a fundamental aspect of national governance. If cyber resilience is to be reframed as a core component of national security, sustained investment in infrastructure, talent, and innovation is needed, as well as rigorous oversight at the board and policy levels. 

The plan calls for the establishment of red-team exercises, stress testing, and cross-border intelligence sharing to prevent local incidents from spiralling into systemic crises. The collective action taken by these institutions marks an important shift in global security thinking, a shift that recognises that an economy's vitality and geopolitical stability are inseparable from the resilience of a nation's digital infrastructure. 

In the era of global diplomacy, cybersecurity has grown to be a key component, but it is much more than just an administrative adjustment or a passing policy trend. In this sense, it indicates the acknowledgement that all of the world's security, economic stability, and individual rights are inextricably intertwined within the fabric of the internet and cyberspace that we live in today. 

Considering the sophistication and borderless nature of threats in today's world, the field of cyber diplomacy is becoming more and more important as a defining arena of global engagement as a result. As much as traditional forms of military and economic statecraft play a significant role in shaping global stability, the ability to foster cooperation, set shared norms, and resolve digital conflicts holds as much weight.

In the international community, the central question facing it is no longer whether the concept of cybersecurity deserves to be included in diplomatic dialogue, but rather how effectively global institutions can implement this recognition into tangible results in the future. To maintain peace in an era where the next global conflict could start with just one line of malicious code, it is becoming imperative to establish frameworks for responsible behaviour, enhance transparency, and strengthen crisis communications mechanisms. 

Quite frankly, the stakes are simply too high, as if they were not already high enough. Considering how easily a cyberattack can disrupt power grids, paralyse transportation systems, or compromise electoral integrity, diplomacy in the digital sphere has become crucial to the protection of international order, especially in a world where cyberattacks are a daily occurrence.

The cybersecurity diplomacy sector is now a cornerstone of 21st-century governance – vital to safeguarding the interests of not only national governments, but also the broader ideals of peace, prosperity, and freedom that are at the foundation of globalisation. During these times of technological change and geopolitical uncertainty, the reality of cyber security is undeniable — it is no longer a specialized field but rather a shared global responsibility that requires all nations, corporations, and individuals to embrace a mindset in which digital trust is seen as an investment in long-term prosperity, and cyber resilience is seen as a crucial part of enhancing long-term security. 

The building of this future will not only require advanced technologies but also collaboration between governments, industries, and academia to develop skilled professionals, standardise security frameworks, and create a transparent approach to threat intelligence exchange. For the digital order to remain secure and stable, it will be imperative to raise public awareness, develop ethical technology, and create stronger cross-border partnerships. 

Those countries that are able to embrace cybersecurity in governance, innovation, and education right now will define the next generation of global leaders. There will come a point in the future when the strength of digital economies will not depend merely on their innovation, but on the depth of the protection they provide, for the interconnected world ahead will demand a currency of security that will represent progress in the long run.

Madras High Court says cryptocurrencies are property, not currency — what the ruling means for investors

 



Chennai, India — In a paradigm-shifting  judgment that reshapes how India’s legal system views digital assets, the Madras High Court has ruled that cryptocurrencies qualify as property under Indian law. The verdict, delivered by Justice N. Anand Venkatesh, establishes that while cryptocurrencies cannot be considered legal tender, they are nonetheless assets capable of ownership, transfer, and legal protection.


Investor’s Petition Leads to Legal Precedent

The case began when an investor approached the court after her 3,532.30 XRP tokens, valued at around ₹1.98 lakh, were frozen by the cryptocurrency exchange WazirX following a major cyberattack in July 2024.

The breach targeted Ethereum and ERC-20 tokens, resulting in an estimated loss of $230 million (approximately ₹1,900 crore) and prompted the platform to impose a blanket freeze on user accounts.

The petitioner argued that her XRP holdings were unrelated to the hacked tokens and should not be subject to the same restrictions. She sought relief under Section 9 of the Arbitration and Conciliation Act, 1996, requesting that Zanmai Labs Pvt. Ltd., the Indian operator of WazirX, be restrained from redistributing or reallocating her digital assets during the ongoing restructuring process.

Zanmai Labs contended that its Singapore-based parent company, Zettai Pte Ltd, was undergoing a court-supervised restructuring that required all users to share losses collectively. However, the High Court rejected this defense, observing that the petitioner’s assets were distinct from the ERC-20 tokens involved in the hack.

Justice Venkatesh ruled that the exchange could not impose collective loss-sharing on unrelated digital assets, noting that “the tokens affected by the cyberattack were ERC-20 coins, which are entirely different from the petitioner’s XRP holdings.”


Court’s Stance: Cryptocurrency as Property

In his judgment, Justice Venkatesh explained that although cryptocurrencies are intangible and do not function as physical goods or official currency, they meet the legal definition of property.

He stated that these assets “can be enjoyed, possessed, and even held in trust,” reinforcing their capability of ownership and protection under law.

To support this interpretation, the court referred to Section 2(47A) of the Income Tax Act, which classifies cryptocurrencies as Virtual Digital Assets (VDAs). This legal category recognizes digital tokens as taxable and transferable assets, strengthening the basis for treating them as property under Indian statutes.


Jurisdiction and Legal Authority

Addressing the question of jurisdiction, the High Court noted that Indian courts have the authority to protect assets located within the country, even if international proceedings are underway. Justice Venkatesh cited the Supreme Court’s 2021 ruling in PASL Wind Solutions v. GE Power Conversion India, which affirmed that Indian courts retain the right to intervene in matters involving domestic assets despite foreign arbitration.

Since the petitioner’s crypto transactions were initiated in Chennai and linked to an Indian bank account, the Madras High Court asserted complete jurisdiction to hear the dispute.

Beyond resolving the individual case, Justice Venkatesh emphasized the urgent need for robust regulatory and governance frameworks for India’s cryptocurrency ecosystem.

The judgment recommended several safeguards to protect users and maintain market integrity, including:

• Independent audits of cryptocurrency exchanges,

• Segregation of customer funds from company finances, and

• Stronger KYC (Know Your Customer) and AML (Anti-Money Laundering) compliance mechanisms.

The court underlined that as India transitions toward a Web3-driven economy, accountability, transparency, and investor protection must remain central to digital asset governance.


Impact on India’s Crypto Industry

Legal and financial experts view the judgment as a turning point in India’s treatment of digital assets.

By recognizing cryptocurrencies as property, the ruling gives investors a clearer legal foundation for ownership rights and judicial remedies in case of disputes. It also urges exchanges to improve corporate governance and adopt transparent practices when managing customer funds.

“This verdict brings long-needed clarity,” said a corporate lawyer specializing in digital finance. “It does not make crypto legal tender, but it ensures that investors’ holdings are legally recognized as assets, something the Indian market has lacked.”

The decision is expected to influence future policy discussions surrounding the Digital India Act and the government’s Virtual Digital Asset Taxation framework, both of which are likely to define how crypto businesses and investors operate in the country.


A Legally Secure Digital Future

By aligning India’s legal reasoning with international trends, the Madras High Court has placed the judiciary at the forefront of global crypto jurisprudence. Similar to rulings in the UK, Singapore, and the United States, this decision formally acknowledges that cryptocurrencies hold measurable economic value and are capable of legal protection.

While the ruling does not alter the Reserve Bank of India’s stance that cryptocurrencies are not legal currency, it does mark a decisive step toward legal maturity in digital asset regulation.

It signals a future where blockchain-based assets will coexist within a structured legal framework, allowing innovation and investor protection to advance together.



AI Poisoning: How Malicious Data Corrupts Large Language Models Like ChatGPT and Claude

 

Poisoning is a term often associated with the human body or the environment, but it is now a growing problem in the world of artificial intelligence. Large language models such as ChatGPT and Claude are particularly vulnerable to this emerging threat known as AI poisoning. A recent joint study conducted by the UK AI Security Institute, the Alan Turing Institute, and Anthropic revealed that inserting as few as 250 malicious files into a model’s training data can secretly corrupt its behavior. 

AI poisoning occurs when attackers intentionally feed false or misleading information into a model’s training process to alter its responses, bias its outputs, or insert hidden triggers. The goal is to compromise the model’s integrity without detection, leading it to generate incorrect or harmful results. This manipulation can take the form of data poisoning, which happens during the model’s training phase, or model poisoning, which occurs when the model itself is modified after training. Both forms overlap since poisoned data eventually influences the model’s overall behavior. 

A common example of a targeted poisoning attack is the backdoor method. In this scenario, attackers plant specific trigger words or phrases in the data—something that appears normal but activates malicious behavior when used later. For instance, a model could be programmed to respond insultingly to a question if it includes a hidden code word like “alimir123.” Such triggers remain invisible to regular users but can be exploited by those who planted them. 

Indirect attacks, on the other hand, aim to distort the model’s general understanding of topics by flooding its training sources with biased or false content. If attackers publish large amounts of misinformation online, such as false claims about medical treatments, the model may learn and reproduce those inaccuracies as fact. Research shows that even a tiny amount of poisoned data can cause major harm. 

In one experiment, replacing only 0.001% of the tokens in a medical dataset caused models to spread dangerous misinformation while still performing well in standard tests. Another demonstration, called PoisonGPT, showed how a compromised model could distribute false information convincingly while appearing trustworthy. These findings highlight how subtle manipulations can undermine AI reliability without immediate detection. Beyond misinformation, poisoning also poses cybersecurity threats. 

Compromised models could expose personal information, execute unauthorized actions, or be exploited for malicious purposes. Previous incidents, such as the temporary shutdown of ChatGPT in 2023 after a data exposure bug, demonstrate how fragile even the most secure systems can be when dealing with sensitive information. Interestingly, some digital artists have used data poisoning defensively to protect their work from being scraped by AI systems. 

By adding misleading signals to their content, they ensure that any model trained on it produces distorted outputs. This tactic highlights both the creative and destructive potential of data poisoning. The findings from the UK AI Security Institute, Alan Turing Institute, and Anthropic underline the vulnerability of even the most advanced AI models. 

As these systems continue to expand into everyday life, experts warn that maintaining the integrity of training data and ensuring transparency throughout the AI development process will be essential to protect users and prevent manipulation through AI poisoning.

Arctic Wolf Report Reveals IT Leaders’ Overconfidence Despite Rising Phishing and AI Data Risks

 

A new report from Arctic Wolf highlights troubling contradictions in how IT leaders perceive and respond to cybersecurity threats. Despite growing exposure to phishing and malware attacks, many remain overly confident in their organization’s ability to withstand them — even when their own actions tell a different story.  

According to the report, nearly 70% of IT leaders have been targeted in cyberattacks, with 39% encountering phishing, 35% experiencing malware, and 31% facing social engineering attempts. Even so, more than three-quarters expressed confidence that their organizations would not fall victim to a phishing attack. This overconfidence is concerning, particularly as many of these leaders admitted to clicking on phishing links themselves. 

Arctic Wolf, known for its endpoint security and managed detection and response (MDR) solutions, also analyzed global breach trends across regions. The findings revealed that Australia and New Zealand recorded the sharpest surge in data breaches, rising from 56% in 2024 to 78% in 2025. Meanwhile, the United States reported stable breach rates, Nordic countries saw a slight decline, and Canada experienced a marginal increase. 

The study, based on responses from 1,700 IT professionals including leaders and employees, also explored how organizations are handling AI adoption and data governance. Alarmingly, 60% of IT leaders admitted to sharing confidential company data with generative AI tools like ChatGPT — an even higher rate than the 41% of lower-level employees who reported doing the same.  

While 57% of lower-level staff said their companies had established policies on generative AI use, 43% either doubted or were unaware of any such rules. Researchers noted that this lack of awareness and inconsistent communication reflects a major policy gap. Arctic Wolf emphasized that organizations must not only implement clear AI usage policies but also train employees on the data and network security risks these technologies introduce. 

The report further noted that nearly 60% of organizations fear AI tools could leak sensitive or proprietary data, and about half expressed concerns over potential misuse. Arctic Wolf’s findings underscore a growing disconnect between security perception and reality. 

As cyber threats evolve — particularly through phishing and AI misuse — complacency among IT leaders could prove dangerous. The report concludes that sustained awareness training, consistent policy enforcement, and stronger data protection strategies are critical to closing this widening security gap.