
Legal Battle Over Meta’s AI Training Likely to Reach Europe’s Top Court

 


The ongoing debate around Meta’s use of European data to train its artificial intelligence (AI) systems is far from over. While Meta has started training its large language models (LLMs) using public content from Facebook and Instagram, privacy regulators in Europe are still questioning whether this is lawful, and the issue may soon reach the European Court of Justice (ECJ).

Meta began training its AI using public posts made by users in the EU shortly after getting the go-ahead from several privacy watchdogs. This approval came just before Meta launched AI-integrated products, including its smart glasses, which rely heavily on understanding cultural and regional context from online data.

However, some regulators and consumer groups are not convinced the approval was justified. A German consumer organization had attempted to block the training through an emergency court appeal. Although the request was denied, that was only a temporary decision. The core legal challenges, including one led by Hamburg’s data protection office, are still expected to proceed in court.

Hamburg’s commissioner, who initially supported blocking the training, later withdrew a separate emergency measure under Europe’s data protection law. He stated that while the training has been allowed to continue for now, it’s highly likely that the final ruling will come from the EU’s highest court.

The controversy centers on whether Meta has a strong enough legal reason, known as "legitimate interest," to use personal data for AI training. Meta’s argument was accepted by Irish regulators, who oversee Meta’s EU operations, on the condition that strict privacy safeguards are in place.


What Does ‘Legitimate Interest’ Mean Under GDPR?

Under the General Data Protection Regulation (GDPR), companies must have a valid reason to collect and use personal data. One of the six legal bases allowed is called “legitimate interest.” 

This means a company can process someone’s data when doing so is necessary for a genuine business purpose, provided that interest is not overridden by the individual’s privacy rights.

In the case of AI model training, companies like Meta claim that building better products and improving AI performance qualifies as a legitimate interest. However, this is debated, especially when public data includes posts with personal opinions, cultural expressions, or identity-related content.

Data protection regulators must carefully balance the following (a simple illustrative sketch follows the list):

1. The company’s business goals

2. The individual’s right to privacy

3. The potential long-term risks of using personal data for AI systems
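
To make that balancing concrete, here is a minimal Python sketch of the three-part "legitimate interest" test as it is commonly framed in GDPR practice. The class and field names are invented for illustration; this is a conceptual aid, not legal guidance.

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestAssessment:
    """Toy model of the three-part test used in GDPR practice (names invented)."""
    purpose_is_genuine: bool        # purpose test: a real business interest?
    processing_is_necessary: bool   # necessity test: no less intrusive way?
    rights_not_overridden: bool     # balancing test: do individual rights yield?

    def lawful_under_article_6_1_f(self) -> bool:
        # All three parts must hold for "legitimate interest" to apply.
        return (self.purpose_is_genuine
                and self.processing_is_necessary
                and self.rights_not_overridden)

# One framing of the Meta dispute: the first factor is broadly accepted,
# while the other two are exactly what regulators still disagree about.
assessment = LegitimateInterestAssessment(
    purpose_is_genuine=True,
    processing_is_necessary=True,
    rights_not_overridden=False,
)
print(assessment.lawful_under_article_6_1_f())  # False -> the basis fails
```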


Some experts argue that this sets a broader precedent. If Meta can train its AI using public data under the concept of legitimate interest, other companies may follow. This has raised hopes among many European AI firms that have felt held back by unclear or strict regulations.

Industry leaders say that regulatory uncertainty, specifically surrounding the GDPR and the upcoming AI Act, has been one of the biggest barriers to innovation in the region. Others believe the current developments signal a shift toward supporting responsible AI development while protecting users’ rights.

Despite approval from regulators and support from industry voices, legal clarity is still missing. Many legal experts and companies agree that only a definitive ruling from the European Court of Justice can settle whether using personal data for AI training in this way is truly lawful.


Can AI Be Trusted With Sensitive Business Data?

 



As artificial intelligence becomes more common in businesses, from retail to finance to technology, it is helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees only see what they’re allowed to see, especially when AI mixes information from many different places?

Take this example: A retail company’s AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful, but what if that forecast is shown to someone who isn’t supposed to access sensitive customer details? That’s where access control becomes tricky.


Why Traditional Access Rules Don’t Work for AI

In older systems, access control was straightforward: each person had certain permissions, so developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from multiple sources (internal files, external APIs, sensor feeds) and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.
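
As a toy illustration of that failure mode (all names and figures invented), consider a forecast function that silently blends a restricted input with public data; a permission check on the user alone has no way to see what the output was derived from:

```python
# Toy example: all names and figures are invented for illustration.
public_market_data = {"sector_growth": 0.04}          # anyone may view
private_customer_records = {"avg_basket_eur": 57.20}  # restricted input

def sales_forecast() -> float:
    # The output silently blends both sources; nothing records that fact.
    return (public_market_data["sector_growth"]
            * private_customer_records["avg_basket_eur"] * 1000)

# A role check on the user alone cannot tell that the *result*
# was derived, in part, from restricted data:
user_roles = {"analyst_intern": {"public"}}
if "public" in user_roles["analyst_intern"]:
    # Leaks a derived view of private customer records to a public-only user.
    print(f"Forecast: {sales_forecast():.2f}")
```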


Why It Matters

Security Concerns: If sensitive data ends up in the wrong hands, even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.

Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.

Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system, and in the business.


What’s Making This So Difficult?

1. AI systems often blend data so deeply that it’s hard to tell what came from where.

2. Access rules are usually fixed, but AI relies on fast-changing data.

3. Companies have many users with different roles and permissions, making enforcement complicated.

4. Permissions are often too broad; for example, someone allowed to "view reports" might accidentally access sensitive content.


How Can Businesses Fix This?

• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up (the sketch after this list combines this idea with output filtering).

• Flexible Access Rules: Adjust permissions based on user roles and context.

• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources.

• Separate Models: Train different AI models for different user groups, each with its own safe data.

• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.
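
Here is a minimal Python sketch, with invented labels and roles, of how the first and third ideas can work together: every value carries a provenance label, derived results inherit the labels of all their inputs, and the output layer masks anything whose provenance exceeds the viewer’s clearance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Labeled:
    value: float
    origin: frozenset  # e.g. {"public"} or {"public", "restricted"}

def combine(a: Labeled, b: Labeled, result: float) -> Labeled:
    # Provenance propagates: a derived value inherits every input's labels.
    return Labeled(result, a.origin | b.origin)

market = Labeled(0.04, frozenset({"public"}))
baskets = Labeled(57.20, frozenset({"restricted"}))
forecast = combine(market, baskets, market.value * baskets.value * 1000)

def render_for(clearances: frozenset, item: Labeled) -> str:
    # Output filtering: mask anything whose provenance exceeds the clearance.
    if item.origin <= clearances:
        return f"{item.value:.2f}"
    return "[redacted: derived from restricted sources]"

print(render_for(frozenset({"public"}), forecast))                # masked
print(render_for(frozenset({"public", "restricted"}), forecast))  # 2288.00
```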


As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.

EU Fines TikTok $600 Million for Data Transfers to China


The EU has fined TikTok 530 million euros (around $600 million). Regulators found that the platform, owned by Chinese tech giant ByteDance, illegally transferred the private data of EU users to China and failed to ensure the data was protected from potential access by Chinese authorities. According to an AFP news report, the penalty, one of the largest ever issued by the EU’s data protection agencies, follows a detailed inquiry into the legitimacy of TikTok’s data transfer practices.

TikTok Fine and EU

TikTok’s lead regulator in Europe, Ireland’s Data Protection Commission (DPC), said TikTok admitted during the probe that it hosted European user data in China. DPC deputy commissioner Graham Doyle said that “TikTok failed to verify, guarantee, and demonstrate that the personal data of (European) users, remotely accessed by staff in China, was afforded a level of protection essentially equivalent to that guaranteed within the EU.”

Doyle added that TikTok failed to address the risk of Chinese authorities accessing Europeans’ private data under China’s anti-terrorism, counter-espionage, and other laws, regimes that TikTok itself identified as materially different from the EU’s data protection standards.

TikTok will contest the decision

Despite the findings, TikTok has said it will contest the fine. TikTok Europe’s Christine Grahn stressed that the company has “never received a request” from Chinese authorities for European users’ data and has never given EU users’ data to them. “We disagree with this decision and intend to appeal it in full,” Grahn said.

TikTok boasts some 1.5 billion users worldwide. In recent years, the platform has come under intense pressure from Western governments over worries that Chinese actors could misuse its data for surveillance and propaganda purposes.

TikTok to comply with EU Rules

In 2023, Ireland’s DPC fined TikTok 345 million euros for violating EU rules on the processing of children’s data. The DPC’s latest judgment also found that TikTok violated requirements under the EU’s General Data Protection Regulation (GDPR) by sending user data to China. The decision comprises a 530 million euro administrative penalty plus an order that TikTok bring its data processing into line with EU rules within six months.

Brave Browser’s New ‘Cookiecrumbler’ Tool Aims to Eliminate Annoying Cookie Consent Pop-Ups

 

While the General Data Protection Regulation (GDPR) was introduced with noble intentions—to protect user privacy and control over personal data—its practical side effects have caused widespread frustration. For many internet users, GDPR has become synonymous with endless cookie consent pop-ups and hours of compliance training. Now, Brave Browser is stepping up with a new solution: Cookiecrumbler, a tool designed to eliminate the disruptive cookie notices without compromising web functionality. 

Cookiecrumbler is not Brave’s first attempt at combating these irritating banners. The browser has long offered pop-up blocking capabilities. However, the challenge hasn’t been the blocking itself—it’s doing so while preserving website functionality. Many websites break or behave unexpectedly when these notices are blocked improperly. Brave’s new approach promises to fix that by taking cookie blocking to a new level of sophistication.  

According to a recent announcement, Cookiecrumbler combines large language models (LLMs) with human oversight to automate and refine the detection of cookie banners across the web. This hybrid model allows the tool to scale effectively while maintaining precision. By running on Brave’s backend servers, Cookiecrumbler crawls websites, identifies cookie notices, and generates custom rules tailored to each site’s layout and language. One standout feature is its multilingual capability. Cookie notices often vary not just in structure but in language and legal formatting based on the user’s location. 

Cookiecrumbler accounts for this by using geo-targeted vantage points, enabling it to view websites as a local user would, making detection far more effective. The developers highlight several reasons for using LLMs in this context: cookie banners typically follow predictable language patterns, the work is repetitive, and it’s relatively low-risk. The cost of each crawl is minimal, allowing the team to test different models before settling on smaller, efficient ones that provide excellent results with fine-tuning. Importantly, human reviewers remain part of the process. While AI handles the bulk detection, humans ensure that the blocking rules don’t accidentally interfere with important site functions. 

These reviewers refine and validate Cookiecrumbler’s suggestions before they’re deployed. Even better, Brave is releasing Cookiecrumbler as an open-source tool, inviting integration by other browsers and developers. This opens the door for tools like Vivaldi or Firefox to adopt similar capabilities. 
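
Cookiecrumbler’s real implementation lives in Brave’s open-source release and will differ in its details; the following Python sketch (every name is hypothetical, with a trivial stand-in for the LLM call) is meant only to show the announced hybrid shape: a model proposes a site-specific rule, and a human review gate approves it before deployment.

```python
# Conceptual sketch only -- not Cookiecrumbler's real API. classify_banner()
# is a trivial stand-in for whatever LLM backend Brave actually uses.
from dataclasses import dataclass

@dataclass
class ProposedRule:
    site: str
    css_selector: str  # element believed to be the consent banner
    confidence: float

def classify_banner(html: str, site: str) -> ProposedRule:
    """Stand-in for an LLM call; banners follow predictable language patterns."""
    if "accept all" in html.lower():
        return ProposedRule(site, "div.cookie-consent", confidence=0.93)
    return ProposedRule(site, "", confidence=0.0)

def human_review(rule: ProposedRule) -> bool:
    """Reviewers confirm a rule won't break important site functions."""
    return rule.confidence > 0.9 and rule.css_selector != ""

page = '<div class="cookie-consent">We value your privacy. Accept all?</div>'
rule = classify_banner(page, "example.com")
if human_review(rule):
    print(f"deploy rule: hide '{rule.css_selector}' on {rule.site}")
```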

Looking ahead, Brave plans to integrate Cookiecrumbler directly into its browser, but only after completing thorough privacy reviews to ensure it aligns with the browser’s core principle of user-centric privacy. Cookiecrumbler marks a significant step forward in balancing user experience and privacy compliance—offering a smarter, less intrusive web.

Yoojo Exposes Millions of Sensitive Files Due to Misconfigured Database

 

Yoojo, a European service marketplace, accidentally left a cloud storage bucket unprotected online, exposing around 14.5 million files, including highly sensitive user data. The data breach was uncovered by Cybernews researchers, who immediately informed the company. Following the alert, Yoojo promptly secured the exposed archive.

The database contained a range of personally identifiable information (PII), including full names, passport details, government-issued IDs, user messages, and phone numbers. This level of detail, according to experts, could be exploited for phishing, identity theft, or even financial fraud.

Yoojo offers an online platform connecting users with service providers for tasks like cleaning, gardening, childcare, IT support, moving, and homecare. With over 500,000 downloads on Google Play, the app has gained significant traction in France, Spain, the Netherlands, and the UK.

Cybernews stated that the exposed database was publicly accessible for at least 10 days, though there's no current evidence of malicious exploitation. Still, researchers cautioned that unauthorized parties might have already accessed the data. Yoojo has yet to issue a formal comment on the incident.

“Leaked personal details enables attackers to create highly targeted phishing, vishing, and smishing campaigns. Fraudulent emails and SMS scams could involve impersonating Yoojo service providers asking for sensitive information like payment details or verification documents,” Cybernews researchers said.

The incident underscores how frequently misconfigured databases lead to data exposures. While many organizations rely on cloud services for storing confidential information, they often overlook the shared responsibility model that cloud infrastructure follows.
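
As one concrete illustration, assuming the exposed store were an AWS S3 bucket managed with the boto3 SDK (the bucket name below is invented), this class of misconfiguration can usually be closed with a single Block Public Access call:

```python
# Sketch: close off a publicly exposed S3 bucket (bucket name invented).
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="example-user-documents",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # neutralise existing public ACLs
        "BlockPublicPolicy": True,      # reject public bucket policies
        "RestrictPublicBuckets": True,  # cut existing public-policy access
    },
)
```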

On a positive note, most companies act swiftly once made aware of such vulnerabilities—just as Yoojo did—by promptly restricting access to the exposed data.

The Need for Unified Data Security, Compliance, and AI Governance

 

Businesses are increasingly dependent on data, yet many continue to rely on outdated security infrastructures and fragmented management approaches. These inefficiencies leave organizations vulnerable to cyber threats, compliance violations, and operational disruptions. Protecting data is no longer just about preventing breaches; it requires a fundamental shift in how security, compliance, and AI governance are integrated into enterprise strategies. A proactive and unified approach is now essential to mitigate evolving risks effectively. 

The rapid advancement of artificial intelligence has introduced new security challenges. AI-powered tools are transforming industries, but they also create vulnerabilities if not properly managed. Many organizations implement AI-driven applications without fully understanding their security implications. AI models require vast amounts of data, including sensitive information, making governance a critical priority. Without robust oversight, these models can inadvertently expose private data, operate without transparency, and pose compliance challenges as new regulations emerge. 

Businesses must ensure that AI security measures evolve in tandem with technological advancements to minimize risks. Regulatory requirements are also becoming increasingly complex. Governments worldwide are enforcing stricter data privacy laws, such as GDPR and CCPA, while also introducing new regulations specific to AI governance. Non-compliance can result in heavy financial penalties, reputational damage, and operational setbacks. Businesses can no longer treat compliance as an afterthought; instead, it must be an integral part of their data security strategy. Organizations must shift from reactive compliance measures to proactive frameworks that align with evolving regulatory expectations. 

Another significant challenge is the growing issue of data sprawl. As businesses store and manage data across multiple cloud environments, SaaS applications, and third-party platforms, maintaining control becomes increasingly difficult. Security teams often lack visibility into where sensitive information resides, making it harder to enforce access controls and protect against cyber threats. Traditional security models that rely on layering additional tools onto existing infrastructures are no longer effective. A centralized, AI-driven approach to security and governance is necessary to address these risks holistically. 

Forward-thinking businesses recognize that managing security, compliance, and AI governance in isolation is inefficient. A unified approach consolidates risk management efforts into a cohesive, scalable framework. By breaking down operational silos, organizations can streamline workflows, improve efficiency through AI-driven automation, and proactively mitigate security threats. Integrating compliance and security within a single system ensures better regulatory adherence while reducing the complexity of data management. 

To stay ahead of emerging threats, organizations must modernize their approach to data security and governance. Investing in AI-driven security solutions enables businesses to automate data classification, detect vulnerabilities, and safeguard sensitive information at scale. Shifting from reactive compliance measures to proactive strategies ensures that regulatory requirements are met without last-minute adjustments. Moving away from fragmented security solutions and adopting a modular, scalable platform allows businesses to reduce risk and maintain resilience in an ever-evolving digital landscape. Those that embrace a forward-thinking, unified strategy will be best positioned for long-term success.
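
As a small, hypothetical illustration of the "automate data classification" step (the patterns and labels are invented, and production classifiers are far more sophisticated), even a simple tagger that labels records before they scatter across cloud stores restores some visibility:

```python
import re

# Illustrative patterns only; real classifiers use far richer detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def classify(record: str) -> set:
    """Tag a record with the kinds of personal data it appears to contain."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(record)}

print(classify("Contact anna@example.com, IBAN DE89370400440532013000"))
# -> {'email', 'iban'} (set order may vary)
```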

Privacy Concerns Rise Over Antivirus Data Collection

 


To protect their devices from cyberattacks, users rely heavily on their operating systems and trusted antivirus programs, which are among the most widely used internet security solutions. Well-established operating systems and reputable cybersecurity software provide regular updates.

These updates fix security flaws and upgrade protections, preventing cybercriminals from exploiting vulnerabilities to install malware or spyware. Third-party applications, by contrast, carry a larger security risk, as they may lack rigorous protection measures. In most cases, modern antivirus programs, firewalls, and other security measures will detect and block potentially harmful programs.

The security system will usually generate an alert when an unauthorized or suspicious application attempts to install itself, so users can take precautions to keep their devices safe. Privacy, in this context, is an individual’s right to remain free from unwarranted monitoring, surveillance, or interception. Gathering data is not new; traditionally it was collected through paper-based methods.

Technological advances now allow data to be gathered through automated, computer-driven processes that pull vast amounts of information from millions of individuals every minute, for a wide variety of purposes. Privacy is a fundamental right, recognized as essential to personal autonomy and to an individual’s ability to protect their own data.

Safeguarding this right has become increasingly important in the digital age, given the widespread collection and use of personal information. Against that backdrop, one evaluation covered all of PCMag’s Editors’ Choices for antivirus and security suites except AVG AntiVirus Free. Since Avast acquired AVG in 2016, both have used the same antivirus engine, so there was little need to evaluate them separately.

Each piece of security software was evaluated on five key factors: Data Collection, Data Sharing, Accessibility, Software & Process Control, and Transparency, with the greatest weight on Data Collection and Data Sharing. The assessment involved installing each antivirus program on a test system fitted with network monitoring tools and examining what data it transmitted back to its vendor. In addition, the End User License Agreement (EULA) for each product was reviewed to determine whether it disclosed what kind of data was collected, and how much.
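
A greatly simplified sketch of the monitoring side of such a test rig, using Python’s psutil library (the process name is a stand-in, and real evaluations capture full packet traces; listing connections may require elevated privileges on some platforms):

```python
# Sketch: attribute outbound connections to a process with psutil.
# "example_av.exe" is a stand-in name for the antivirus under test.
import psutil

TARGET = "example_av.exe"

for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.pid:  # keep only connections with a remote endpoint
        try:
            name = psutil.Process(conn.pid).name()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        if name == TARGET:
            print(f"{name} -> {conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")
```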

A comprehensive questionnaire was also sent to the security companies to gather insights beyond the technical analysis and contractual review. Discrepancies between a company’s stated policies and its actual network activity counted against its overall score. Some vendors declined to answer specific questions, citing security concerns.

The study notes that while some data, such as payment information for licensing purposes, must be collected, collecting less data generally earns a higher Data Collection score. Data gathered from individuals can reveal a great deal about their preferences and interests; information from food delivery apps, for example, can show a user’s favourite dishes and how often they order.

In the same vein, targeted advertisements are commonly built from search queries, shopping histories, location tracking, and other digital interactions. Such data helps businesses boost sales, develop products, conduct market analysis, optimize user experiences, and improve other functions. Data-driven analytics is what brings us personalized advertisements, biometric authentication of employees, and content recommendations on streaming platforms such as Netflix and Amazon Prime.

Moreover, athletes' performance metrics in the field of sports are monitored and compared to previous records to determine progress and areas for improvement. It is a fact that systematic data collection and analysis are key to the development and advancement of the digital ecosystem. By doing so, businesses and industries can operate more efficiently, while providing their customers with better experiences. 

The evaluation also assessed how well companies manage the data they collect and how accessible that information is to the people it concerns, which matters for consumer safety and freedom of choice. Companies that use clear, concise language in their End User License Agreements (EULAs) and privacy policies received higher Accessibility scores.

Providing a comprehensive FAQ explaining what data is collected and why further raised a company’s marks. About three-quarters of the companies surveyed responded, and those that did were credited for the transparency they demonstrated: the more detailed the answers, the higher the score. The availability of third-party audits also significantly influenced the rating.

Even though a company may handle personal data with transparency and diligence, security weaknesses introduced by its partners can undermine those efforts. The study therefore also examined the security protocols of the companies’ third-party cloud storage services. Companies running bug bounty programs, which reward users for identifying and reporting security flaws, scored higher in this category than those without. There is also the possibility that a government authority could demand the data a security company has gathered on specific users.

Different jurisdictions have their own legal frameworks for such demands, so it is essential to know where data is located. The General Data Protection Regulation (GDPR) in particular enforces strict privacy protections that apply not only to data stored within the European Union (EU) but to any data concerning EU residents, regardless of where it is stored.

Nine of the companies surveyed declined to disclose where their server farms are located. Of those that answered, three keep their data only within the EU, five store it in both the EU and the US, and two maintain it in the US and India. Kaspersky, for its part, has stated that it stores data in several parts of the world, including Europe, Canada, the United States, and Russia. In some cases, government agencies may even instruct security companies to push a "special" update to a specific user ID to monitor the activities of terrorism suspects.

Asked about such practices, the Indian company eScan confirmed its involvement, as did McAfee and Microsoft. Eleven of the responding companies affirmed that they do not distribute targeted updates of this nature; others chose not to respond, raising concerns about the transparency of the process.

GDPR Violation by EU: A Case of Self-Accountability

 


In a groundbreaking decision on Wednesday, the European Union’s General Court held the EU Commission liable for damages to a German citizen after the Commission failed to adhere to the bloc’s own data protection rules.

The court found that the Commission had transferred the citizen’s personal data to the United States without adequate safeguards and awarded the citizen 400 euros (about $412) in compensation, ruling that the EU had violated its own privacy rules, which mirror the General Data Protection Regulation (GDPR).

It is the first time in its history that the EU has been ordered to pay damages for such a breach. The case began when a German citizen, registering for a conference through a European Commission webpage, used the "Sign in with Facebook" option.

Clicking the button sent information about the user’s browser, device, and IP address through Amazon Web Services’ content delivery network and ultimately on to servers in the United States run by Facebook’s parent company, Meta Platforms. According to the court, this transfer was carried out without proper safeguards, in breach of the rules.

The magnitude and frequency of fines imposed by national data protection authorities (DPAs) have varied greatly since the GDPR was introduced, reflecting differences in both the severity of violations and the rigour of enforcement. The International Network of Privacy Law Professionals has catalogued a total of 311 fines, and analysing them reveals several key trends.

The Netherlands, Turkey, and Slovakia have been major focal points for GDPR enforcement, with the Netherlands leading in high-value fines, while Romania and Slovakia frequently appear among the lower fines, showing that even less severe violations are pursued. Overall, the GDPR’s record has been mixed. The EU has captured public attention with major fines against Silicon Valley giants, but enforcement is slow: even this first self-imposed award, for violating one person’s privacy, took over two years to complete.

Approximately three out of every four data protection authorities say they lack the budget and personnel needed to investigate violations, and numerous examples show that the byzantine collection of laws has not curbed the invasive practices of surveillance capitalism. Perhaps the EU could begin by following its own rules and see if that helps.

The GDPR established a comprehensive data protection framework, enacting rigorous standards for the collection, processing, and storage of personal data. Yet, in an unexpected development, the European Union itself was found to have violated these very laws, causing an unprecedented uproar.

A recent internal audit had revealed a serious weakness in data management practices within European institutions, exposing EU citizens’ personal information to the risk of misuse or unauthorized access. Ultimately, the EU’s General Court handed down a landmark decision finding that the EU had failed to comply with its own data protection laws.

Since its implementation in 2018, the GDPR has required organisations to obtain user consent before collecting or using personal data, hence the now-ubiquitous cookie acceptance notifications. It has become a defining framework for data privacy: by limiting the information companies can collect and making its use more transparent, the GDPR aims to empower individuals while posing a significant compliance challenge for technology companies.

Meta has faced substantial penalties for non-compliance and is among the companies most affected. In a notable case, Meta was fined $1.3 billion for failing to adequately protect European users’ data when transferring it to U.S. servers, where it could be exposed to American intelligence agencies.

The company has also received a $417 million fine for violations involving Instagram’s privacy practices and a $232 million fine for insufficient transparency about WhatsApp’s data processing. Nor is Meta alone: Amazon was fined $887 million by the European Union in 2021 for similar violations.

A Facebook login integration from Meta’s ecosystem sat at the centre of the EU’s own recent breach. The incident shows that even the GDPR’s enforcers can struggle to live up to its strict requirements.