Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Data protection. Show all posts

Zimbra Zero-Day Exploit Used in ICS File Attacks to Steal Sensitive Data

 

Security researchers have discovered that hackers exploited a zero-day vulnerability in Zimbra Collaboration Suite (ZCS) earlier this year using malicious calendar attachments to steal sensitive data. The attackers embedded harmful JavaScript code inside .ICS files—typically used to schedule and share calendar events—to target vulnerable Zimbra systems and execute commands within user sessions. 

The flaw, identified as CVE-2025-27915, affected ZCS versions 9.0, 10.0, and 10.1. It stemmed from inadequate sanitization of HTML content in calendar files, allowing cybercriminals to inject arbitrary JavaScript code. Once executed, the code could redirect emails, steal credentials, and access confidential user information. Zimbra patched the issue on January 27 through updates (ZCS 9.0.0 P44, 10.0.13, and 10.1.5), but at that time, the company did not confirm any active attacks. 

StrikeReady, a cybersecurity firm specializing in AI-based threat management, detected the campaign while monitoring unusually large .ICS files containing embedded JavaScript. Their investigation revealed that the attacks began in early January, predating the official patch release. In one notable instance, the attackers impersonated the Libyan Navy’s Office of Protocol and sent a malicious email targeting a Brazilian military organization. The attached .ICS file included Base64-obfuscated JavaScript designed to compromise Zimbra Webmail and extract sensitive data. 

Analysis of the payload showed that it was programmed to operate stealthily and execute in asynchronous mode. It created hidden fields to capture usernames and passwords, tracked user actions, and automatically logged out inactive users to trigger data theft. The script exploited Zimbra’s SOAP API to search through emails and retrieve messages, which were then sent to the attacker every four hours. It also added a mail filter named “Correo” to forward communications to a ProtonMail address, gathered contacts and distribution lists, and even hid user interface elements to avoid detection. The malware delayed its execution by 60 seconds and only reactivated every three days to reduce suspicion. 

StrikeReady could not conclusively link the attack to any known hacking group but noted that similar tactics have been associated with a small number of advanced threat actors, including those linked to Russia and the Belarusian state-sponsored group UNC1151. The firm shared technical indicators and a deobfuscated version of the malicious code to aid other security teams in detection efforts. 

Zimbra later confirmed that while the exploit had been used, the scope of the attacks appeared limited. The company urged all users to apply the latest patches, review existing mail filters for unauthorized changes, inspect message stores for Base64-encoded .ICS entries, and monitor network activity for irregular connections. The incident highlights the growing sophistication of targeted attacks and the importance of timely patching and vigilant monitoring to prevent zero-day exploitation.

AI Tools Make Phishing Attacks Harder to Detect, Survey Warns


 

Despite the ever-evolving landscape of cyber threats, the phishing method remains the leading avenue for data breaches in the years to come. However, in 2025, the phishing method has undergone a dangerous transformation. 

What used to be a crude attempt to deceive has now evolved into an extremely sophisticated operation backed by artificial intelligence, transforming once into an espionage. Traditionally, malicious actors are using poorly worded, grammatically incorrect, and inaccurate messages to spread their malicious messages; now, however, they are deploying systems based on generative AI, such as GPT-4 and its successors, to craft emails that are eerily authentic, contextually aware, and meticulously tailored to each target.

Cybercriminals are increasingly using artificial intelligence to orchestrate highly targeted phishing campaigns, creating communications that look like legitimate correspondence with near-perfect precision, which has been sounded alarming by the U.S. Federal Bureau of Investigation. According to FBI Special Agent Robert Tripp, these tactics can result in a devastating financial loss, a damaged reputation, or even a compromise of sensitive data. 

By the end of 2024, the rise of artificial intelligence-driven phishing had become no longer just another subtle trend, but a real reality that no one could deny. According to cybersecurity analysts, phishing activity has increased by 1,265 percent over the last three years, as a direct result of the adoption of generative AI tools. In their view, traditional email filters and security protocols, which were once effective against conventional scams, are increasingly being outmanoeuvred by AI-enhanced deceptions. 

Artificial intelligence-generated phishing has been elevated to become the most dominant email-borne threat of 2025, eclipsing even ransomware and insider risks because of its sophistication and scale. There is no doubt that organisations throughout the world are facing a fundamental change in how digital defence works, which means that complacency is not an option. 

Artificial intelligence has fundamentally altered the anatomy of phishing, transforming it from a scattershot strategy to an alarmingly precise and comprehensive threat. According to experts, adversaries now exploit artificial intelligence to amplify their scale, sophistication, and success rates by utilising AI, rather than just automating attacks.

As AI has enabled criminals to create messages that mimic human tone, context, and intent, the line between legitimate communication and deception is increasingly blurred. The cybersecurity analyst emphasises that to survive in this evolving world, security teams and decision-makers need to maintain constant vigilance, urging them to include AI-awareness in workforce training and defensive strategies. This new threat is manifested in the increased frequency of polymorphic phishing attacks. It is becoming increasingly difficult for users to detect phishing emails due to their enhanced AI automation capabilities. 

By automating the process of creating phishing emails, attackers are able to generate thousands of variants, each with slight changes to the subject line, sender details, or message structure. In the year 2024, according to recent research, 76 per cent of phishing attacks had at least one polymorphic trait, and more than half of them originated from compromised accounts, and about a quarter relied on fraudulent domains. 

Acanto alters URLs in real time and resends modified messages in real time if initial attempts fail to stimulate engagement, making such attacks even more complicated. AI-enhanced schemes can be extremely adaptable, which makes traditional security filters and static defences insufficient when they are compared to these schemes. Thus, organisations must evolve their security countermeasures to keep up with this rapidly evolving threat landscape. 

An alarming reality has been revealed in a recent global survey: the majority of individuals are still having difficulty distinguishing between phishing attempts generated by artificial intelligence and genuine messages.

According to a study by the Centre for Human Development, only 46 per cent of respondents correctly recognised a simulated phishing email crafted by artificial intelligence. The remaining 54 per cent either assumed it was real or acknowledged uncertainty about it, emphasising the effectiveness of artificial intelligence in impersonating legitimate communications now. 

Several age groups showed relatively consistent levels of awareness, with Gen Z (45%), millennials (47%), Generation X (46%) and baby boomers (46%) performing almost identically. In this era of artificial intelligence (AI) enhanced social engineering, it is crucial to note that no generation is more susceptible to being deceived than the others. 

While most of the participants acknowledged that artificial intelligence has become a tool for deceiving users online, the study demonstrated that awareness is not enough to prevent compromise, since the study found that awareness alone cannot prevent compromise. The same group was presented with a legitimate, human-written corporate email, and only 30 per cent of them correctly identified it as authentic. This is a sign that digital trust is slipping and that people are relying on instinct rather than factual evidence. 

The study was conducted by Talker Research as part of the Global State of Authentication Survey for Yubico, conducted on behalf of Yubico. During Cybersecurity Awareness Month this October, Talker Research collected insights from users throughout the U.S., the U.K., Australia, India, Japan, Singapore, France, Germany, and Sweden in order to gather insights from users across those regions. 

As a result of the findings, it is clear that users are vulnerable to increasingly artificial intelligence-driven threats. A survey conducted by the National Institute for Health found that nearly four in ten people (44%) had interacted with phishing messages within the past year by clicking links or opening attachments, and 1 per cent had done so within the past week. 

The younger generations seem to be more susceptible to phishing content, with Gen Z (62%) and millennials (51%) reporting significantly higher levels of engagement than the Gen X generation (33%) or the baby boom generation (23%). It continues to be email that is the most prevalent attack vector, accounting for 51 per cent of incidents, followed by text messages (27%) and social media messages (20%). 

There was a lot of discussion as to why people fell victim to these messages, with many citing their convincing nature and their similarities to genuine corporate correspondence, demonstrating that even the most technologically advanced individuals struggle to keep up with the sophistication of artificial intelligence-driven deception.

Although AI-driven scams are becoming increasingly sophisticated, cybersecurity experts point out that families do not have to give up on protecting themselves. It is important to take some simple, proactive actions to prevent risk from occurring. Experts advise that if any unexpected or alarming messages are received, you should pause before responding and verify the source by calling back from a trusted number, rather than the number you receive in the communication. 

Family "safe words" can also help confirm authenticity during times of emergency and help prevent emotional manipulation when needed. In addition, individuals can be more aware of red flags, such as urgent demands for action, pressure to share personal information, or inconsistencies in tone and detail, in order to identify deception better. 

Additionally, businesses must be aware of emerging threats like deepfakes, which are often indicated by subtle signs like mismatched audio, unnatural facial movements, or inconsistent visual details. Technology can play a crucial role in ensuring that digital security is well-maintained as well as fortified. 

It is a fact that Bitdefender offers a comprehensive approach to family protection by detecting and blocking fraudulent content before it gets to users by using a multi-layered security suite. Through email scam detection, malicious link filtering, and artificial intelligence-driven tools like Bitdefender Scamio and Link Checker, the platform is able to protect users across a broad range of channels, all of which are used by scammers. 

It is for mobile users, especially users of Android phones, that Bitdefender has integrated a number of call-blocking features within its application. These capabilities provide an additional layer of defence against attacks such as robocalls and impersonation schemes, which are frequently used by fraudsters targeting American homes. 

In Bitdefender's family plans, users have the chance to secure all their devices under a unified umbrella, combining privacy, identity monitoring, and scam prevention into a seamless, easily manageable solution in a seamless manner. As people move into an era where digital deception has become increasingly human-like, effective security is about much more than just blocking malware. 

It's about preserving trust across all interactions, no matter what. In the future, as artificial intelligence continues to influence phishing, it will become increasingly difficult for people to distinguish between the deception of phishing and its own authenticity of the phishing, which will require a shift from reactive defence to proactive digital resilience. 

The experts stress that not only advanced technology, but also a culture of continuous awareness, is needed to fight AI-driven social engineering. Employees need to be educated regularly about security issues that mirror real-world situations, so they can become more aware of potential phishing attacks before they click on them. As well, individuals should utilise multi-factor authentication, password managers and verified communication channels to safeguard both personal and professional information. 

On a broader level, government, cybersecurity vendors, and digital platforms must collaborate in order to create a shared framework that allows them to identify and report AI-enhanced scams as soon as they occur in order to prevent them from spreading.

Even though AI has certainly enhanced the arsenal of cybercriminals, it has also demonstrated the ability of AI to strengthen defence systems—such as adaptive threat intelligence, behavioural analytics, and automated response systems—as well. People must remain vigilant, educated, and innovative in this new digital battleground. 

There is no doubt that the challenge people face is to seize the potential of AI not to deceive people, but to protect them instead-and to leverage the power of digital trust to make our security systems of tomorrow even more powerful.

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Moving Toward a Quantum-Safe Future with Urgency and Vision


It is no secret that the technology of quantum computing is undergoing a massive transformation - one which promises to redefine the very foundations of digital security worldwide. Quantum computing, once thought to be nothing more than a theoretical construct, is now beginning to gain practical application in the world of computing. 

A quantum computer, unlike classical computers that process information as binary bits of zeros or ones, is a device that enables calculations to be performed at a scale and speed previously deemed impossible by quantum mechanics, leveraging the complex principles of quantum mechanics. 

In spite of their immense capabilities, this same power poses an unprecedented threat to the digital safeguards underpinning today's connected world, since conventional systems would have to solve problems that would otherwise require centuries to solve. 

 The science of cryptography at the heart of this looming challenge is the science of protecting sensitive data through encryption and ensuring its confidentiality and integrity. Although cryptography remains resilient to today's cyber threats, experts believe that a sufficiently advanced quantum computer could render these defences obsolete. 

Governments around the world have begun taking decisive measures in recognition of the importance of this threat. In 2024, the U.S. National Institute of Standards and Technology (NIST) released three standards on postquantum cryptography (PQC) for protecting against quantum-enabled threats in establishing a critical benchmark for global security compliance. 

Currently, additional algorithms are being evaluated to enhance post-quantum encryption capabilities even further. In response to this lead, the National Cyber Security Centre of the United Kingdom has urged high-risk systems to adopt PQC by 2030, with full adoption by 2035, based on the current timeline. 

As a result, European governments are developing complementary national strategies that are aligned closely with NIST's framework, while nations in the Asia-Pacific region are putting together quantum-safe roadmaps of their own. Despite this, experts warn that these transitions will not happen as fast as they should. In the near future, quantum computers capable of compromising existing encryption may emerge years before most organisations have implemented quantum-resistant systems.

Consequently, the race to secure the digital future has already begun. The rise of quantum computing is a significant technological development that has far-reaching consequences that extend far beyond the realm of technological advancement. 

Although it has undeniable transformative potential - enabling breakthroughs in sectors such as healthcare, finance, logistics, and materials science - it has at the same time introduced one of the most challenging cybersecurity challenges of the modern era, a threat that is not easily ignored. Researchers warn that as quantum research continues to progress, the cryptographic systems safeguarding global digital infrastructure may become susceptible to attack. 

A quantum computer that has sufficient computational power may render public key cryptography ineffective, rendering secure online transactions, confidential communications, and data protection virtually obsolete. 

By having the capability to decrypt information that was once considered impenetrable, these hackers could undermine the trust and security frameworks that have shaped the digital economy so far. The magnitude of this threat has caused business leaders and information technology leaders to take action more urgently. 

Due to the accelerated pace of quantum advancement, organisations have an urgent need to reevaluate, redesign, and future-proof their cybersecurity strategies before the technology reaches critical maturity in the future. 

It is not just a matter of adopting new standards when trying to move towards quantum-safe encryption; it is also a matter of reimagining the entire architecture of data security in the long run. In addition to the promise of quantum computing to propel humanity into a new era of computational capability, it is also necessary to develop resilience and foresight in parallel.

There will be disruptions that are brought about by the digital age, not only going to redefine innovation, but they will also test the readiness of institutions across the globe to secure the next frontier of the digital age. The use of cryptography is a vital aspect of digital trust in modern companies. It secures communication across global networks, protects financial transactions, safeguards intellectual property, and secures all communications across global networks. 

Nevertheless, moving from existing cryptographic frameworks into quantum-resistant systems is much more than just an upgrade in technology; it means that a fundamental change has been made to the design of the digital trust landscape itself. With the advent of quantum computing, adversaries have already begun using "harvest now, decrypt later" tactics, a strategy which collects encrypted data now with the expectation that once quantum computing reaches maturity, they will be able to decrypt it. 

It has been shown that sensitive data with long retention periods, such as medical records, financial archives, or classified government information, can be particularly vulnerable to retrospective exposure as soon as quantum capabilities become feasible on a commercial scale. Waiting for a definitive quantum event to occur before taking action may prove to be perilous in a shifting environment. 

Taking proactive measures is crucial to ensuring operational resilience, regulatory compliance, as well as the protection of critical data assets over the long term. An important part of this preparedness is a concept known as crypto agility—the ability to move seamlessly between cryptographic algorithms without interrupting business operations. 

Crypto agility has become increasingly important for organisations operating within complex and interconnected digital ecosystems rather than merely an option for technical convenience. Using the platform, enterprises are able to keep their systems and vendors connected, maintain robust security in the face of evolving threats, respond to algorithmic vulnerabilities quickly, comply with global standards and remain interoperable despite diverse systems and vendors.

There is no doubt that crypto agility forms the foundation of a quantum-secure future—and is an essential attribute that all organisations must possess for them to navigate the coming era of quantum disruption confidently and safely. As a result of the transition from quantum cryptography to post-quantum cryptography (PQC), it is no longer merely a theoretical exercise, but now an operational necessity. 

Today, almost every digital system relies heavily on cryptographic mechanisms to ensure the security of software, protect sensitive data, and authenticate transactions in order to ensure that security is maintained. When quantum computing capabilities become available to malicious actors, these foundational security measures could become ineffective, resulting in the vulnerability of critical data around the world to attack and hacking. 

Whether or not quantum computing will occur is not the question, but when. As with most emerging technologies, quantum computing will probably begin as a highly specialised, expensive, and limited capability available to only a few researchers and advanced enterprises at first. Over the course of time, as innovation accelerates and competition increases, accessibility will grow, and costs will fall, which will enable a broader adoption of the technology, including by threat actors. 

A parallel can be drawn to the evolution of artificial intelligence. The majority of advanced AI systems were confined mainly to academic or industrial research environments before generative AI models like ChatGPT became widely available in recent years. Within a few years, however, the democratisation of these capabilities led to increased innovation, but it also increased the likelihood of malicious actors gaining access to powerful new tools that could be used against them. 

The same trajectory is forecast for quantum computing, except with stakes that are exponentially higher than before. The ability to break existing encryption protocols will no longer be limited to nation-states or elite research groups as a result of the commoditization process, but will likely become the property of cybercriminals and rogue actors around the globe as soon as it becomes commoditised. 

In today's fast-paced digital era, adapting to a secure quantum framework is not simply a question of technological evolution, but of long-term survival-especially in the face of catastrophic cyber threats that are convergent at an astonishing rate. A transition to post-quantum cryptography (PQC), or post-quantum encryption, is expected to be seamless through regular software updates for users whose digital infrastructure includes common browsers, applications, and operating systems. 

As a result, there should be no disruption or awareness on the part of users as far as they are concerned. The gradual process of integrating PQC algorithms has already started, as emerging algorithms are being integrated alongside traditional public key cryptography in order to ensure compatibility during this transition period. 

As a precautionary measure, system owners are advised to follow the National Cyber Security Centre's (NCSC's) guidelines to keep their devices and software updated, ensuring readiness once the full implementation of the PQC standards has taken place. While enterprise system operators ought to engage proactively with technology vendors to determine what their PQC adoption timelines are and how they intend to integrate it into their systems, it is important that they engage proactively. 

In organisations with tailored IT or operational technology systems, risk and system owners will need to decide which PQC algorithms best align with the unique architecture and security requirements of these systems. PQC upgrades must be planned now, ideally as part of a broader lifecycle management and infrastructure refresh effort. This shift has been marked by global initiatives, including the publication of ML-KEM, ML-DSA, and SLH-DSA algorithms by NIST in 2024. 

It marks the beginning of a critical shift in the development of quantum-resistant cryptographic systems that will define the next generation of cybersecurity. In the recent surge of scanning activity, it is yet another reminder that cyber threats are continually evolving, and that maintaining vigilance, visibility, and speed in the fight against them is essential. 

Eventually, as reconnaissance efforts become more sophisticated and automated, organisations will not only have to depend on vendor patches but also be proactive in integrating threat intelligence, continuously monitoring, and managing attack surfaces as a result of the technological advancements. 

The key to improving network resilience today is to take a layered approach, which includes hardening endpoints, setting up strict access controls, deploying timely updates, and utilising behaviour analytics-based intelligent anomaly detection to monitor the network infrastructure for anomalies from time to time. 

Further, security teams should take an active role in safeguarding the entire network against attacks that can interfere with any of the exposed interfaces by creating zero-trust architectures that verify every connection that is made to the network. Besides conducting regular penetration tests, active participation in information-sharing communities can help further detect early warning signs before adversaries gain traction.

Attackers are playing the long game, as shown by the numerous attacks on Palo Alto Networks and Cisco infrastructure that they are scanning, waiting, and striking when they become complacent. Consistency is the key to a defender's edge, so they need to make sure they know what is happening and keep themselves updated.

Where Your Data Goes After a Breach and How to Protect Yourself

 

Data breaches happen every day—and they’re almost never random. Most result from deliberate, targeted cyberattacks or the exploitation of weak security systems that allow cybercriminals to infiltrate networks and steal valuable data. These breaches can expose email addresses, passwords, credit card details, Social Security numbers, medical records, and even confidential business documents. While it’s alarming to think about, understanding what happens after your data is compromised is key to knowing how to protect yourself.  

Once your information is stolen, it essentially becomes a commodity traded for profit. Hackers rarely use the data themselves. Instead, they sell it—often bundled with millions of other records—to other cybercriminals who use it for identity theft, fraud, or extortion. In underground networks, stolen information has its own economy, with prices fluctuating depending on how recent or valuable the data is. 

The dark web is the primary marketplace for stolen information. Hidden from regular search engines, it provides anonymity for sellers and buyers of credit cards, logins, and personal identifiers. Beyond that, secure messaging platforms such as Telegram and Signal are also used to trade stolen data discreetly, thanks to their encryption and privacy features. Some invite-only forums on the surface web also serve as data exchange hubs, while certain hacktivists or whistleblowers may release stolen data publicly to expose unethical practices. Meanwhile, more sophisticated cybercriminal groups operate privately, sharing or selling data directly to trusted clients or other hacker collectives. 

According to reports from cybersecurity firm PrivacyAffairs, dark web markets offer everything from bank login credentials to passports and crypto wallets. Payment card data—often used in “carding” scams—remains one of the most traded items. Similarly, stolen social media and email accounts are in high demand, as they allow attackers to launch phishing campaigns or impersonate victims. Even personal documents such as birth certificates or national IDs are valuable for identity theft schemes. 

Although erasing your personal data from the internet entirely is nearly impossible, there are ways to limit your exposure. Start by using strong, unique passwords managed through a reputable password manager, and enable multi-factor authentication wherever possible. A virtual private network (VPN) adds another layer of protection by encrypting your internet traffic and preventing data collection by third parties. 

It’s also wise to tighten your social media privacy settings and avoid sharing identifiable details such as your workplace, home address, or relationship status. Be cautious about what information you provide to websites and services—especially when signing up or making purchases. Temporary emails, one-time payment cards, and P.O. boxes can help preserve your anonymity online.  

If you discover that your data was part of a breach, act quickly. Monitor all connected accounts for suspicious activity, reset compromised passwords, and alert your bank or credit card provider if financial details were involved. For highly sensitive leaks, such as stolen ID numbers, consider freezing your credit report to prevent identity fraud. Data monitoring services can also help by tracking the dark web for mentions of your personal information.

In today’s digital world, data is currency—and your information is one of the most valuable assets you own. Staying vigilant, maintaining good cyber hygiene, and using privacy tools are your best defenses against becoming another statistic in the global data breach economy.

WestJet Confirms Cyberattack Exposed Passenger Data but No Financial Details

 

WestJet has confirmed that a cyberattack in June compromised certain passenger information, though the airline maintains that the breach did not involve sensitive financial or password data. The incident, which took place on June 13, was attributed to a “sophisticated, criminal third party,” according to a notice issued by the airline to U.S. residents earlier this week. 

WestJet stated that its internal precautionary measures successfully prevented the attackers from gaining access to credit and debit card details, including card numbers, expiry dates, and CVV codes. The airline further confirmed that no user passwords were stolen. However, the company acknowledged that some passengers’ personal information had been exposed. The compromised data included names, contact details, information and documents related to reservations and travel, and details regarding the passengers’ relationship with WestJet. 

“Containment is complete, and additional system and data security measures have been implemented,” WestJet said in an official release. The airline emphasized that analysis of the incident is still ongoing and that it continues to strengthen its cybersecurity framework to safeguard customer data. 

As part of its response plan, WestJet is contacting affected customers to offer support and guidance. The airline has partnered with Cyberscout, a company specializing in identity theft protection and fraud assistance, to help impacted individuals with remediation services. WestJet has also published advisory information on its website to assist passengers who may be concerned about their data.  

In its statement, the airline reassured customers that swift containment measures limited the breach’s impact. “Our cybersecurity teams acted immediately to contain the situation and secure our systems. We take our responsibility to protect customer information very seriously,” the company said. 

WestJet confirmed that it is working closely with law enforcement agencies, including the U.S. Federal Bureau of Investigation (FBI) and the Canadian Centre for Cyber Security. The airline also notified U.S. credit reporting agencies—TransUnion, Experian, and Equifax—along with the attorneys general of several U.S. states, Transport Canada, the Office of the Privacy Commissioner of Canada, and relevant provincial and international data protection authorities. 

While WestJet maintains that the exposed information does not appear to include sensitive financial or authentication details, cybersecurity experts note that personal identifiers such as names and contact data can still pose privacy and fraud risks if misused. The airline’s transparency and engagement with regulatory agencies reflect an effort to mitigate potential harm and restore public trust. 

The company reiterated that it remains committed to improving its security posture through enhanced monitoring, employee training, and the implementation of additional cybersecurity controls. The investigation into the breach continues, and WestJet has promised to provide further updates as new information becomes available. 

The incident highlights the ongoing threat of cyberattacks against the aviation industry, where companies hold large volumes of personal and travel-related data. Despite the rise in security investments, even well-established airlines remain attractive targets for sophisticated cybercriminals. WestJet’s quick response and cooperation with authorities underscore the importance of rapid containment and transparency in handling such data breaches.

Protecting Sensitive Data When Employees Use AI Chatbots


 

In today's digitised world, where artificial intelligence tools are rapidly reshaping the way people work, communicate, and work together, it's important to be aware that a quiet but pressing risk has emerged-that what individuals choose to share with chatbots may not remain entirely private for everyone involved.

A patient can use ChatGPT to receive health advice about an embarrassing health condition, or an employee can upload sensitive corporate documents into Google's Gemini system to generate a summary of them, but the information they disclose will ultimately play a part in the algorithms that power these systems. 

It has come to the attention of a lot of experts that AI models, built on large datasets collected from all across the internet, such as blogs and news articles, as well as from social media posts, are often trained without user consent, resulting in not only copyright problems but also significant privacy concerns. 

In light of the opaque nature of machine learning processes, experts warn that once data has been ingested into a model's training pool, it will be almost impossible to remove it. In this world we live in, individuals and businesses alike are forced to ask themselves what level of trust we can place in tools that, while extremely powerful, may also expose us to unseen risks. 

Considering that we are living in a hybrid age, where artificial intelligence tools such as ChatGPT are rapidly becoming a new frontier for data breaches, this is particularly true in the age of hybrid work. While these platforms offer businesses a number of valuable features, including the ability to draft content and troubleshoot software, they also carry inherent risks. 

Experts warn that poor management of them can result in leakage of training data, violations of privacy, and accidental disclosure of sensitive company data. The latest Fortinet Work From Anywhere study highlights the magnitude of the problem: nearly 62% of organisations have reported experiencing data breaches as a result of switching to remote working. 

Analysts believe some of these incidents could have been prevented if employees had stayed on-premises with company-managed devices and applications and had not experienced the same issues. Nevertheless, security experts claim that the solution is not to return to the office again, but rather to create a robust framework for data loss prevention (DLP) in a decentralised work environment to safeguard the information.

To prevent sensitive information from being lost, stolen, or leaked across networks, storage systems, endpoints, and cloud environments, a robust DLP strategy combines tools, technologies, and best practices. A successful framework focuses on data at rest, in motion, and in use and ensures that they are continuously monitored and protected. 

Experts outline four essential components that a framework must have to succeed: Make sure the company data is classified and assigned security levels across the network, and that the network is secure. Maintain strict adherence to compliance when storing, deleting, and retaining user information. Make sure staff are educated regarding clear policies that prevent accidental sharing of information or unauthorised access to information. 

Embrace protection tools that can detect phishing, ransomware, insider threats, and unintentional exposures in order to protect the organisation's data. It is not enough to use technology alone to protect organisations; it is also essential to have clear policies in place. With DLP implemented correctly, organisations are not only less likely to suffer from leaks, but they are also more likely to comply with industry standards, government regulations, and the like. 

The balance between innovation and responsibility in the digital age, particularly in the era of digital transformation, is crucial for businesses that adopt hybrid work and AI-based tools. According to the UK General Data Protection Regulation (UK GDPR), businesses that utilise AI platforms, such as ChatGPT, must adhere to a set of strict obligations designed to protect personal information from unauthorised access.

In terms of data protection, any data that could identify the individual - such as an employee file, customer contact details, or client database - falls within the regulations' scope, and ultimately, business owners are responsible for protecting that data, even when it is handled by third parties. In order to cope with this scenario, companies will need to carefully evaluate how external platforms process, store, and protect their data. 

They often do so through legally binding Data Processing Agreements that specify confidentiality standards, privacy controls, and data deletion requirements for the platforms. It is equally important to ensure that organisations communicate with individuals when their information is incorporated into artificial intelligence tools and, if necessary, obtain explicit consent from them.

As part of the law, firms are also required to implement “appropriate technical and organisational measures.” These measures include checking whether AI vendors are storing their data overseas, ensuring that it is kept in order to prevent misuse, and determining what safeguards are in place. Besides the risks of financial penalties or fines that are imposed for failing to comply, there is also the risk of eroding employee and customer trust, which can be more difficult to repair than the financial penalties themselves. 

When it comes to ensuring safe data practices in the age of artificial intelligence, businesses are increasingly turning to Data Loss Prevention (DLP) solutions as a way of automating the otherwise unmanageable task of monitoring vast networks of users, devices, and applications, which can be a daunting task. The state and flow of information have determined the four primary categories of DLP software that have emerged. 

Often, DLP tools utilise artificial intelligence and machine learning to identify suspicious traffic within and outside a company's system — whether by downloading, transferring, or through mobile connections — by tracking data movement within and outside a company's systems. In addition to preventing unauthorised activities at the source, endpoint DLP is also installed directly on users' computers, which monitors memory, cached data, and files being accessed or transferred as they occur. 

In general, cloud DLP solutions are intended to safeguard information stored in online environments such as backups, archives, and databases. They are characterised by encryption, scanning, and access controls that are used for securing corporate assets. While Email DLP is largely responsible for keeping sensitive details from being leaked through internal and external correspondence, it is also designed to prevent these sensitive details from getting shared accidentally, maliciously or through a compromised mailbox as well. 

Despite some businesses' concerns about whether Extended Detection and Response (XDR) platforms are adequate, experts think that DLP serves a different purpose than XDR: XDR provides broad threat detection and incident response, while DLP focuses on protecting sensitive data, categorising information, reducing breach risks, and ultimately maintaining company reputations.

A number of major technology companies have adopted varying approaches to dealing with the data their AI chatbots have collected from their users, often raising concerns about transparency and control. Google, for example, maintains conversations with its Gemini chatbot by default for 18 months, but the setting can be modified if users desire. Although activity tracking can be disabled, these chats remain in storage for at least 72 hours even if they are not reviewed by human moderators in order to refine the system. 

However, Google warns users that sharing confidential information is not advisable and that any conversations that have already been flagged for human review cannot be erased. As part of Meta's artificial intelligence assistant, which can be found on Facebook, WhatsApp, and Instagram, it is trained to understand public posts, photos, captions, and data scraped from around the web. However, the application cannot handle private messages. 

There is no doubt that citizens of the European Union and the United Kingdom have the right to object to the use of their information for training under stricter privacy laws. However, those living in countries without such protections, such as the United States, have fewer options than their citizens in other countries. The opt-out process for Meta is quite complicated, and it is available only where it is available; users must submit evidence of their interactions with the chatbot as evidence of the opt-out. 

It is worth noting that Microsoft's Copilot does not provide an opt-out mechanism for personal accounts; users are only limited in their ability to delete their interaction history through their account settings, and there is no option to prevent future data retention. These practices demonstrate how AI privacy controls can be patchy, with users' choices often being more influenced by the laws and regulations of their jurisdiction, rather than corporate policy. 

The responsibility organisations as they navigate this evolving landscape relates not only to complying with regulations or implementing technical safeguards, but also to cultivating a culture of digital responsibility in their organisations. Employees need to be taught how important it is to understand and respect the value of their information, and how important it is to exercise caution when using AI-powered applications. 

By taking proactive measures such as implementing clear guidelines on chatbot usage, conducting regular risk assessments, and ensuring that vendors are compliant with stringent data protection standards, an organisation can significantly reduce the threat exposure they are exposed to. 

The businesses that implement a strong governance framework, at the same time, are not only protected but are also able to take advantage of AI's advantages with confidence, enhancing productivity, streamlining workflows, and maintaining competitiveness in an era of data-driven economies. Thus, it is essential to embrace AI responsibly, balancing innovation and vigilance, so that it isn't avoided, but rather embraced responsibly. 

A company's use of AI can be transformed from a potential liability to a strategic asset by combining regulatory compliance, advanced DLP solutions, and transparent communication with staff and stakeholders. It is important to remember that trust is currency in a marketplace where security is king, and companies that protect sensitive data will not only prevent costly breaches from occurring but also strengthen their reputation in the long run.

Why CEOs Must Go Beyond Backups and Build Strong Data Recovery Plans

 

We are living in an era where fast and effective solutions for data challenges are crucial. Relying solely on backups is no longer enough to guarantee business continuity in the face of cyberattacks, hardware failures, human error, or natural disasters. Every CEO must take responsibility for ensuring that their organization has a comprehensive data recovery plan that extends far beyond simple backups. 

Backups are not foolproof. They can fail, be misconfigured, or become corrupted, leaving organizations exposed at critical moments. Modern attackers are also increasingly targeting backup systems directly, making it impossible to restore data when needed. Even when functioning correctly, traditional backups are usually scheduled once a day and do not run in real time, putting businesses at risk of losing hours of valuable work. Recovery time is equally critical, as lengthy downtime caused by delays in data restoration can severely damage both reputation and revenue.  

Businesses often overestimate the security that traditional backups provide, only to discover their shortcomings when disaster strikes. A strong recovery plan should include proactive measures such as regular testing, simulated ransomware scenarios, and disaster recovery drills to ensure preparedness. Without this, the organization risks significant disruption and financial losses. 

The consequences of poor planning extend beyond operational setbacks. For companies handling sensitive personal or financial data, legal and compliance requirements demand advanced protection and recovery systems. Failure to comply can lead to legal penalties and fines in addition to reputational harm. To counter modern threats, organizations should adopt solutions like immutable backups, air-gapped storage, and secure cloud-based systems. While migrating to cloud storage may seem time-consuming, it offers resilience against physical damage and ensures that data cannot be lost through hardware failures alone. 

An effective recovery plan must be multi-layered. Following the 3-2-1 backup rule—keeping three copies of data, on two different media, with one offline—is widely recognized as best practice. Cloud-based disaster recovery platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) should also be considered to provide automated failover and minimize downtime. Beyond technology, employee awareness is essential. IT and support staff should be well-trained and recovery protocols tested quarterly to confirm readiness. 

Communication plays a vital role in data recovery planning. How an organization communicates a disruption to clients can directly influence how much trust is retained. While some customers may inevitably be lost, a clear and transparent communication strategy can help preserve the majority. CEOs should also evaluate cyber insurance options to mitigate financial risks tied to recovery costs. 

Ultimately, backups are just snapshots of data, while a recovery plan acts as a comprehensive playbook for survival when disaster strikes. CEOs who neglect this responsibility risk severe financial losses, regulatory penalties, and even business closure. A well-designed, thoroughly tested recovery plan not only minimizes downtime but also protects revenue, client trust, and the long-term future of the organization.

OpenAI Patches ChatGPT Gmail Flaw Exploited by Hackers in Deep Research Attacks

 

OpenAI has fixed a security vulnerability that could have allowed hackers to manipulate ChatGPT into leaking sensitive data from a victim’s Gmail inbox. The flaw, uncovered by cybersecurity company Radware and reported by Bloomberg, involved ChatGPT’s “deep research” feature. This function enables the AI to carry out advanced tasks such as web browsing and analyzing files or emails stored in services like Gmail, Google Drive, and Microsoft OneDrive. While useful, the tool also created a potential entry point for attackers to exploit.  

Radware discovered that if a user requested ChatGPT to perform a deep research task on their Gmail inbox, hackers could trigger the AI into executing malicious instructions hidden inside a carefully designed email. These hidden commands could manipulate the chatbot into scanning private messages, extracting information such as names or email addresses, and sending it to a hacker-controlled server. The vulnerability worked by embedding secret instructions within an email disguised as a legitimate message, such as one about human resources processes. 

The proof-of-concept attack was challenging to develop, requiring a detailed phishing email crafted specifically to bypass safeguards. However, if triggered under the right conditions, the vulnerability acted like a digital landmine. Once ChatGPT began analyzing the inbox, it would unknowingly carry out the malicious code and exfiltrate data “without user confirmation and without rendering anything in the user interface,” Radware explained. 

This type of exploit is particularly difficult for conventional security tools to catch. Since the data transfer originates from OpenAI’s own infrastructure rather than the victim’s device or browser, standard defenses like secure web gateways, endpoint monitoring, or browser policies are unable to detect or block it. This highlights the growing challenge of AI-driven attacks that bypass traditional cybersecurity protections. 

In response to the discovery, OpenAI stated that developing safe AI systems remains a top priority. A spokesperson told PCMag that the company continues to implement safeguards against malicious use and values external research that helps strengthen its defenses. According to Radware, the flaw was patched in August, with OpenAI acknowledging the fix in September.

The findings emphasize the broader risk of prompt injection attacks, where hackers insert hidden commands into web content or messages to manipulate AI systems. Both Anthropic and Brave Software recently warned that similar vulnerabilities could affect AI-enabled browsers and extensions. Radware recommends protective measures such as sanitizing emails to remove potential hidden instructions and enhancing monitoring of chatbot activities to reduce exploitation risks.

Insight Partners Ransomware Attack Exposes Data of Thousands of Individuals

 

Insight Partners, a New York-based venture capital and private equity firm, is notifying thousands of individuals that their personal information was compromised in a ransomware attack. The firm initially disclosed the incident in February, confirming that the intrusion stemmed from a sophisticated social engineering scheme that gave attackers access to its systems. Subsequent investigations revealed that sensitive data had also been stolen, including banking details, tax records, personal information of current and former employees, as well as information connected to limited partners, funds, management companies, and portfolio firms. 

The company stated that formal notification letters are being sent to all affected parties, with complimentary credit monitoring and identity protection services offered as part of its response. It clarified that individuals who do not receive a notification letter by the end of September 2025 can assume their data was not impacted. According to filings with California’s attorney general, which were first reported by TechCrunch, the intrusion occurred in October 2024. Attackers exfiltrated data before encrypting servers on January 16, 2025, in what appears to be the culmination of a carefully planned ransomware campaign. Insight Partners explained that the attacker gained access to its environment on or around October 25, 2024, using advanced social engineering tactics. 

Once inside, the threat actor began stealing data from affected servers. Months later, at around 10:00 a.m. EST on January 16, the same servers were encrypted, effectively disrupting operations. While the firm has confirmed the theft and encryption, no ransomware group has claimed responsibility for the incident so far. A separate filing with the Maine attorney general disclosed that the breach impacted 12,657 individuals. The compromised information poses risks ranging from financial fraud to identity theft, underscoring the seriousness of the incident. 

Despite the scale of the attack, Insight Partners has not yet responded to requests for further comment on how it intends to manage recovery efforts or bolster its cybersecurity posture going forward. Insight Partners is one of the largest venture capital firms in the United States, with over $90 billion in regulatory assets under management. Over the past three decades, it has invested in more than 800 software and technology startups globally, making it a key player in the tech investment ecosystem. 

The breach marks a significant cybersecurity challenge for the firm as it balances damage control, regulatory compliance, and the trust of its investors and partners.

Chat Control Faces Resistance from VPN Industry Over Privacy Concerns


 

The European Union is poised at a decisive crossroads when it comes to shaping the future of digital privacy and is rapidly approaching a landmark ruling which will profoundly alter the way citizens communicate online. 

A final vote on October 14 is expected to take place on September 12, 2025, as Member States will be required to state their position on the proposed Child Sexual Abuse Regulation — commonly referred to as "Chat Control" — in advance of its final vote. Designed to combat the spread of child abuse content, the regulation would place an onus on the providers of messaging services such as WhatsApp, Signal, and iMessage to scan every private message sent between users, even those messages protected from being read by third parties. 

The supporters of the legislation argue that it is a necessary step for ensuring the safety of children, but critics argue that it would effectively legalise mass surveillance, thereby denying citizens access to secure communication and exposing their personal data to the possibility of being misused by government agents or exploited by malicious actors. 

Many observers warn that this vote will set a precedent that could have profound implications for the privacy and democratic freedoms of the continent as a whole if its outcome were to turn out favorably. 

The proposal is called “Chat Control” by its critics, since it requires all messaging platforms operating in Europe to actively scan user conversations, including those that are protected by end-to-end encryption, in search of child sexual abuse material that is well-known and previously unknown. 

In their opinion, such obligations threaten to undermine the very foundations of secure digital communication, creating the possibility of unprecedented levels of monitoring and abuse, which advocates argue could undermine the very foundations of secure digital communication.

The VPN Trust Initiative (VTI), an organisation which represents a group of major VPN providers, has been pushing back strongly against the draft regulation, stating that any attempt to weaken encryption would erode the very basis of the Internet's security. VTI co-chair, Emilija Beranskait, emphasised that "encryption either protects everybody or it doesn't," imploring governments to preserve strong encryption as a cornerstone of privacy, trust, and democratic values, urging them to adopt stronger encryption. 

According to NordVPN's privacy advocate, Laura Tyrylyte, while client-side scanning is indeed a safety and security concern, it is not an acceptable compromise between an organisation's safety and security, contending that solutions must not be compromised in the interest of addressing a single issue alone. 

Moreover, NymVPN's CEO, Harry Halpin, condemned the proposal as “a major step backwards for privacy” and warned that, once normalised, such surveillance tools could be used against journalists, activists, or political opponents. In addition, experts have raised significant technical concerns with the introduction of mandatory scanning mechanisms, stating that such mechanisms will fundamentally undermine the technology underlying online security. 

Moreover, they are concerned that client-side scanning infrastructure could be repurposed so that surveillance is widened far beyond what it was originally intended to do, which runs counter to the European Union's own commitments under initiatives such as the Cyber Resilience Act and efforts to prepare for quantum cryptography in the future. 

However, a deeply divided political debate is ongoing in the EU. Eight member states have formally opposed the proposal, including Germany and Luxembourg, while fifteen others, including France, Italy, and Spain, are still in favour of the proposal. 

There is still some uncertainty regarding the outcome of the October vote because only Estonia, Greece, and Romania have not decided. In addition to the pressure being put on the EU Council, more than 500 cryptography experts and researchers have signed an open letter urging it to reconsider the risks associated with introducing what they consider a dangerous precedent for the future of the digital world in Europe. 

It has been suggested that under the Danish-led proposal, messaging platforms such as WhatsApp, Signal, and ProtonMail would have to scan private communications without discrimination. In their current form, the proposal would violate end-to-end encryption in an irreparable way, according to experts. 

A direct analysis of links, photos, and videos is part of the system that will run directly on the users' devices before messages are encrypted. 

Only government and military accounts are exempt from this analysis, with the draft regulation last circulated to EU delegations on July 24, 2025, claiming to safeguard encryption. Still, privacy specialists are of the opinion that true security cannot be maintained using client-side scanning. 

Laura Tyrylyte, NordVPN's privacy advocate, observed that "Chat Control's client-side scanning provisions create a false choice between security and safety." The solution to one problem, even a serious one like child safety, cannot be at the expense of creating systemic vulnerabilities that are more dangerous to everyone." 

Several other industry leaders expressed similar concerns as well, including Harry Halpin, CEO of NymVPN, who condemned the measure as “a significant step backwards for privacy.” He explained that the indiscriminate scans of private communications are disproportionate in nature, creating a backdoor that could be exploited if it is normalised. 

There is a risk that such infrastructure could easily be redirected towards attacking journalists, political opponents, or activists while also exposing ordinary citizens to hostile cyberattacks. In Halpin's view and the opinion of others, it is more effective to carry out targeted, warrant-based investigations, to take down illegal material swiftly, and to use properly resourced specialist teams rather than universal surveillance as a means of detecting illegal activity. 

However, despite the simple concessions made in the latest draft, such as restricting the detection to visual contents and excluding audio and text, the scientific community has remained steadfast in its criticism regardless of the concessions made. 

The researchers point out that there are four critical flaws to the system: the inability to scan billions of messages accurately; the inevitable weakening of encryption through the monitoring of devices on-device; the high risk that surveillance can expand beyond its stated purpose due to "function creep"; and the danger that mass monitoring in the name of child protection will erode democratic norms. 

While the EU has promised oversight and consent mechanisms, cryptography experts claim that secure and reliable client-side scanning cannot be performed at scale, despite promises of EU oversight and consent mechanisms. This proposal, therefore, is technically flawed as well as politically perilous. 

VPN providers are also signalling that they will not stand on the sidelines if the regulation is passed. Several leading companies, including Mullvad, a popular privacy-focused service, have expressed concern about the possibility of withdrawing from the European market altogether if the proposed legislation is passed. 

If this happens, millions of users will be impacted, and innovation in this field may be curtailed. Similar advocacy groups, including Privacy Guides, have sounded the alarm in the past weeks, warning that the new regulations threaten to undermine the privacy of all citizens, not only those suspected of wrongdoing, and they urge all citizens to take notice before the September 12 deadline. 

A growing number of social media platforms are also being criticised, and voices like Telegram founder Pavel Durov have pointed out that comparable laws have failed in the past, as determined offenders have simply moved to smaller applications or VPNs to avoid these weaker protections, which leaves ordinary users to bear the brunt. 

The debate carries significant economic weight. The Security.org website indicates that more than 75 million Americans already use VPN services to keep their privacy online. As Chat Control advances, this demand is expected to grow rapidly in Europe. As per Future Market Insights, by 2035, the VPN industry is expected to grow to a value of $481.5 billion; however, experts caution that heavy regulation may fragment the market and stifle technological development.

Denmark has continued to lobby for the proposal despite mounting opposition from civil society groups, technology companies, and several member states as the EU Council prepares to vote on October 14, as tensions are increasing. In recent weeks, citizens have taken to online platforms such as X to voice their concerns about the proposed legislation, warning that Europeans would not have fundamentally secure digital privacy. 

Analysts point out that in order to adapt to this changing environment, VPN providers may need to use quantum-resistant technologies faster or explore decentralised models, as highlighted in recent forward-looking studies, which point to the existential stakes of the industry. 

However, one central fear remains across all debates: once surveillance infrastructure is embedded in the environment, its scope is unlikely to be limited to combating child abuse. In their view, it could create a framework for broad and permanent monitoring, reshaping the global norms of digital privacy in a way that undermines both the rights of users and technological innovation in the process. 

A key question to be answered before the EU's vote on October 14 is whether it can successfully balance child protection with its longstanding commitments to privacy and digital rights while maintaining a sense of security. 

It is noted that decisions made in Brussels will have a global impact, potentially setting global standards for how governments deal with encryption, surveillance, and online safety, as experts warn. For legislators, the challenge is to devise effective solutions that protect vulnerable groups without dismantling the secure infrastructures that rely on modern communication, commerce and civic participation. 

One possible path forward, according to observers, could be bolstering cross-border investigative collaboration, strengthening rapid takedown protocols for harmful material, and building specialised law enforcement units which are equipped with advanced tools that are able to target perpetrators rather than citizens collectively, to achieve a better outcome. 

In addition to the fact that private measures would prove better at combating criminal networks, privacy advocates argue that they would also preserve the trust and innovation that Europe has championed for decades, as well as the sense of security that Europe has promoted for decades. 

There will be a clear indication of the EU's global leadership position in safeguarding both child safety and civil liberties through this decision, or whether it will serve as a model for other nations to emulate in terms of surveillance frameworks to maintain secure neighbourhoods.

EU Data Act Compliance Deadline Nears With Three Critical Takeaways


 

A decisive step forward in shaping the future of Europe's digital economy has been taken by the regulation of harmonised rules for fair access to and use of data, commonly known as the EU Data Act, which has moved from a legislative text to a binding document. 

The regulation was first adopted into force on the 11th of January 2024 and came into full effect on the 12th of September 2025, and is regarded as the foundation for the EU’s broader data strategy. Its policymakers believe that this is crucial to the Digital Decade's goal of accelerating digital transformation across industries by ensuring that the data generated within the EU can be shared, accessed, and used more equitably, as a cornerstone of the Digital Decade's ambition. 

The Data Act is not only a technical framework for creating a more equitable digital landscape, but it is also meant to rebalance the balance of power in the digital world, giving rise to new opportunities for innovation while maintaining the integrity of the information. With the implementation of the Data Act in place from 12 September 2025, the regulatory landscape will be dramatically transformed for companies that deal with connected products, digital services, or cloud or other data processing solutions within the European Union, regardless of whether the providers are located within its borders or beyond. 

It seems that businesses were underestimating the scope of the regime before it was enforced, but as a result, the law sets forth a profound set of obligations that go well beyond what was previously known. In essence, this regulation grants digital device and service users unprecedented access rights to the data they generate, regardless of whether that data is personal or otherwise. Until recently, the rights were mostly unregulated, which meant users had unmatched access to data. 

The manufacturer, service provider, and data owner will have to revise existing contractual arrangements in order to comply with this regulation. This will be done by creating a framework for data sharing on fair and transparent terms, as well as ensuring that extensive user entitlements are in place. 

It also imposes new obligations on cloud and processing service providers, requiring them to provide standardised contractual provisions that allow for switching between services. A violation of these requirements will result in a regulatory investigation, civil action, or significant financial penalties, which is the same as a stringent enforcement model used by the General Data Protection Regulation (GDPR), which has already changed the way data practices are handled around the world today. 

According to the EU Data Act, the intention is to revolutionise the way information generated by connected devices and cloud-based services is accessed, managed and exchanged within and across the European Union. In addition to establishing clear rules for access to data, the regulations incorporate obligations to guarantee organisations' service portability, and they embed principles of contractual fairness into business agreements as a result. 

The legislation may have profound long-term consequences, according to industry observers. It is not possible to ignore the impact that the law could have on the digital economy, as Soniya Bopache, vice president and general manager for data compliance at Arctera, pointed out, and she expected that the law would change the dynamics of the use and governance of data for a long time to come. 

It is important to note that the EU Data Act has a broader scope than the technology sector, with implications for industries that include manufacturing, transportation, consumer goods, and cloud computing in addition to the technology sector. Additionally, the regulation is expected to benefit both public and private institutions, emphasising how the regulation has a broad impact. 

Cohesity's vice president and head of technology, Peter Grimmond, commented on the law's potential by suggesting that, by democratising and allowing greater access to data, the law could act as a catalyst for innovation. It was suggested that organisations that already maintain strong compliance and classification procedures will benefit from the Act because it will provide an environment where collaboration can thrive without compromising individual rights or resilience. 

Towards the end of the EU regulation, the concept of data access and transparency was framed as a way to strengthen Europe's data economy and increase competitiveness in the market, according to EU policymakers. It is becoming increasingly evident that connected devices generate unprecedented amounts of information. 

As a result of this legislation, businesses and individuals alike are able to use this data more effectively by granting greater control over the information they produce, which is of great importance to businesses and individuals alike. Additionally, Grimmond said that the new frameworks for data sharing between enterprises are an important driver of long-term benefits for the development of new products, services, and business models, and they will contribute to the long-term development of the economy. 

There is also an important point to be made, which is that the law aims to achieve a balance between the openness of the law and the protected standards that Europe has established, aligned with GDPR's global privacy benchmark, and complementing the Digital Operational Resilience Act (DORA), so that the levels of trust and security are maintained. 

In some ways, the EU Data Act will prove to be even more disruptive than the EU Artificial Intelligence Act, as it will be the most significant overhaul of European data laws since the GDPR and will have a fundamental effect on how businesses handle information collected by connected devices and digital services in the future. 

Essentially, the Regulation is a broad-reaching law that covers both personal data about individuals as well as non-personal data, such as technical and usage information that pertains to virtually every business model associated with digital products and services within the European Union. This law creates new sweeping rights for users, who are entitled to access to the data generated by their connected devices at any time, including real-time, where it is technically feasible, as per Articles 4 and 5. 

Additionally, these rights allow users to determine who else may access such data, whether it be repairers, aftermarket service providers, or even direct competitors, while allowing users to limit how such data is distributed by companies. During the years 2026 and 2030, manufacturers will be required to make sure that products have built-in data accessibility at no extra charge, which will force companies to reconsider their product development cycles, IT infrastructure, and customer contracts in light of this requirement. 

Moreover, the legislation provides guidelines for fair data sharing and stipulates that businesses are required to provide access on reasonable, non-discriminatory terms, and prohibits businesses from stating terms in their contracts that impede or overcharge for access in a way that obstructs it. As a result of this, providers of cloud computing and data processing services face the same transformative obligations as other companies, such as mandatory provisions that allow customers to switch services within 30 days, prohibit excessive exit fees, and insist that contracts be transparent so vendors won't get locked into contracts. 

There are several ways in which these measures could transform fixed-term service contracts into rolling, short-term contracts, which could dramatically alter the business model and competitive dynamics in the cloud industry. The regulation also gives local authorities the right to request data access in cases of emergency or when the public interest requires it, extending its scope beyond purely commercial applications. 

In all Member States, enforcement will be entrusted to national authorities who will be able to impose large fines for non-compliance, as well as provide a new path for collective civil litigation, opening doors to the possibility of mass legal actions similar to class actions in the US. Likely, businesses from a broad range of industries, from repair shops to insurers to logistics providers to AI developers, will all be able to benefit from greater access to operational data. 

In the meantime, sectors such as the energy industry, healthcare, agriculture, and transportation need to be prepared to respond to potential government requests. In total, the Data Act constitutes an important landmark law that rebalances power between companies and users, while redrawing the competitive landscape for Europe's digital economy in the process. In the wake of the EU Data Act's compliance deadline, it will not simply be viewed as a regulatory milestone, but also as a strategic turning point for the digital economy as a whole. 

Business owners must now shift from seeing compliance as an obligation to a means of increasing competitiveness, improving customer trust, and unlocking new value through data-driven innovation to strengthen their competitiveness and deepen customer relationships. In the future, businesses that take proactive steps towards redesigning their products, modernising their IT infrastructure, and cultivating transparent data practices are better positioned to stay ahead of the curve and develop stronger relationships with their users, for whom information is now more in their control. 

Aside from that, the regulation has the potential to accelerate the pace of digital innovation across a wide range of sectors by lowering barriers to switching providers and enforcing fairer contractual standards, stimulating a more dynamic and collaborative marketplace. This Act provides the foundation for a robust public-interest data use system in times of need for governments and regulators. 

In the end, the success of this ambitious framework will rest on how quickly the business world adapts and how effective its methods are at developing a fairer, more transparent, and more competitive European data economy, which can be used as a global benchmark in the future.