
Legal Battle Over Meta’s AI Training Likely to Reach Europe’s Top Court

The ongoing debate around Meta’s use of European data to train its artificial intelligence (AI) systems is far from over. While Meta has started training its large language models (LLMs) on public content from Facebook and Instagram, privacy regulators in Europe are still questioning whether this is lawful, and the issue may soon reach the European Court of Justice (ECJ).

Meta began training its AI using public posts made by users in the EU shortly after getting the go-ahead from several privacy watchdogs. This approval came just before Meta launched AI-integrated products, including its smart glasses, which rely heavily on understanding cultural and regional context from online data.

However, some regulators and consumer groups are not convinced the approval was justified. A German consumer organization had attempted to block the training through an emergency court appeal. Although the request was denied, that was only a temporary decision. The core legal challenges, including one led by Hamburg’s data protection office, are still expected to proceed in court.

Hamburg’s commissioner, who initially supported blocking the training, later withdrew a separate emergency measure under Europe’s data protection law. He stated that while the training has been allowed to continue for now, it’s highly likely that the final ruling will come from the EU’s highest court.

The controversy centers on whether Meta has a strong enough legal basis, known as “legitimate interest,” to use personal data for AI training. Meta’s argument was accepted by Irish regulators, who oversee Meta’s EU operations, on the condition that strict privacy safeguards are in place.


What Does ‘Legitimate Interest’ Mean Under GDPR?

Under the General Data Protection Regulation (GDPR), companies must have a valid legal basis to collect and use personal data. One of the six bases the regulation allows is called “legitimate interest.”

This means a company can process someone’s data when it is necessary for a genuine business purpose, provided the individual’s privacy rights and freedoms do not override that interest.

In the case of AI model training, companies like Meta claim that building better products and improving AI performance qualifies as a legitimate interest. However, this is debated, especially when public data includes posts with personal opinions, cultural expressions, or identity-related content.

Data protection regulators must carefully balance:

1. The company’s business goals

2. The individual’s right to privacy

3. The potential long-term risks of using personal data for AI systems
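To make the balancing exercise above concrete, the sketch below caricatures it in code. It is purely illustrative: GDPR balancing is a legal judgment, not a formula, and every name in it (the `BalancingFactors` fields, the 0–5 scale, the comparison rule) is a hypothetical construction for this post, not anything drawn from the regulation itself.

```python
from dataclasses import dataclass

# Hypothetical, illustrative scoring of the three considerations
# listed above. Real legitimate-interest assessments are qualitative.

@dataclass
class BalancingFactors:
    business_purpose_necessity: int  # 0-5: how necessary is the processing to the stated goal?
    privacy_intrusion: int           # 0-5: how heavily does it weigh on the individual's rights?
    long_term_ai_risk: int          # 0-5: downstream risks of reusing the data in AI systems

def legitimate_interest_plausible(f: BalancingFactors) -> bool:
    """Toy heuristic: the interest fails whenever the combined weight
    on the individual exceeds the necessity of the business purpose."""
    return f.business_purpose_necessity > f.privacy_intrusion + f.long_term_ai_risk

# Example: a strong business purpose, but high intrusion plus AI risk
case = BalancingFactors(business_purpose_necessity=4,
                        privacy_intrusion=3,
                        long_term_ai_risk=2)
print(legitimate_interest_plausible(case))  # False: 4 does not outweigh 3 + 2
```

The point of the toy rule is that the company’s goal does not win by default; under GDPR the interest must survive a weighing against the individual’s rights, which is exactly what regulators and, ultimately, the ECJ are now disputing in Meta’s case.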


Some experts argue that this sets a broader precedent. If Meta can train its AI using public data under the concept of legitimate interest, other companies may follow. This has raised hopes among many European AI firms that have felt held back by unclear or strict regulations.

Industry leaders say that regulatory uncertainty, particularly around the GDPR and the upcoming AI Act, has been one of the biggest barriers to innovation in the region. Others believe the current developments signal a shift toward supporting responsible AI development while protecting users’ rights.

Despite approval from regulators and support from industry voices, legal clarity is still missing. Many legal experts and companies agree that only a definitive ruling from the European Court of Justice can settle whether using personal data for AI training in this way is truly lawful.


WhatsApp Ads Delayed in EU as Meta Faces Privacy Concerns


Meta recently introduced in-app advertisements within WhatsApp for users across the globe, marking the first time ads have appeared on the messaging platform. However, this change won’t affect users in the European Union just yet. According to the Irish Data Protection Commission (DPC), WhatsApp has informed them that ads will not be launched in the EU until sometime in 2026. 

Previously, Meta had stated that the feature would gradually roll out over several months but did not provide a specific timeline for European users. The newly introduced ads appear within the “Updates” tab on WhatsApp, specifically inside Status posts and the Channels section. Meta has stated that the ad system is designed with privacy in mind, using minimal personal data such as location, language settings, and engagement with content. If a user has linked their WhatsApp with the Meta Accounts Center, their ad preferences across Instagram and Facebook will also inform what ads they see. 

Despite these assurances, the integration of data across platforms has raised red flags among privacy advocates and European regulators. As a result, the DPC plans to review the advertising model thoroughly, working in coordination with other EU privacy authorities before approving a regional release. Des Hogan, Ireland’s Data Protection Commissioner, confirmed that Meta has officially postponed the EU launch and that discussions with the company will continue to assess the new ad approach. 

Dale Sunderland, another commissioner at the DPC, emphasized that the process remains in its early stages and it’s too soon to identify any potential regulatory violations. The commission intends to follow its usual review protocol, which applies to all new features introduced by Meta. This strategic move by Meta comes while the company is involved in a high-profile antitrust case in the United States. The lawsuit seeks to challenge Meta’s ownership of WhatsApp and Instagram and could potentially lead to a forced breakup of the company’s assets. 

Meta’s decision to push forward with deeper cross-platform ad integration may indicate confidence in its legal position. The tech giant continues to argue that its advertising tools are essential for small business growth and that any restrictions on its ad operations could negatively impact entrepreneurs who rely on Meta’s platforms for customer outreach. However, critics claim this level of integration is precisely why Meta should face stricter regulatory oversight—or even be broken up. 

As the U.S. court prepares to issue a ruling, the EU delay illustrates how Meta is navigating regulatory pressures differently across markets. After initial reporting, WhatsApp clarified that the 2025 rollout in the EU was never confirmed, and the current plan reflects ongoing conversations with European regulators.

EU Sanctions Actors Involved in Russian Hybrid Warfare


EU takes action against Russian propaganda

The European Union (EU) announced sweeping new sanctions against 21 individuals and 6 entities involved in Russia’s destabilizing activities abroad, marking a significant escalation in the bloc’s response to hybrid warfare threats.


The Council’s decision widens the scope of the sanctions regime to cover tangible assets and introduces new powers to block Russian media broadcasting licenses, underscoring the EU’s commitment to countering Moscow’s hybrid campaigns. The framework now allows action against actors targeting vessels, real estate, aircraft, and the physical components of digital networks and communications.

Financial institutions and crypto-asset service providers that enable Russian disruption operations also fall under the new framework.

To address systematic Russian media control and manipulation, the EU is taking the authority to revoke the broadcasting licenses of Kremlin-run Russian media outlets and to block their content distribution within EU countries.

Experts describe this Russian tactic as an international campaign of media manipulation and fake news aimed at disrupting neighboring nations and the EU. 

Notably, the ban is designed to align with the Charter of Fundamental Rights: the sanctioned media outlets may still carry out non-broadcasting activities, such as interviews and research, within the EU.

Propaganda and Tech Companies

The EU has also taken action against StarkIndustries, a web hosting network. The company is alleged to have assisted various Russian state-sponsored actors in activities such as information manipulation, interference operations, and cyberattacks against the Union and third countries.

The sanctions also target Viktor Medvedchuk, a former Ukrainian politician and businessman, who is alleged to control Ukrainian media outlets that distribute pro-Russian propaganda.

Hybrid Threats Framework

The sanctions build upon a 2024 framework for addressing Russian interference actions that compromise the EU’s fundamental values, security, stability, independence, and integrity.

Designated entities and individuals face asset freezes, while listed natural persons additionally face travel bans blocking entry into and transit through EU nations. This displays the EU’s commitment to combating hybrid warfare through sustained, proportionate action.

Polish Space Agency "POLSA" Suffers Breach; System Offline


Systems offline to control breach

The Polish Space Agency (POLSA) confirmed on X that it suffered a cyberattack last week. The agency disclosed no further details, except that it “immediately disconnected” its network after finding that the systems had been hacked. The social media post indicates the step was taken to protect data.

US News reported that “Warsaw has repeatedly accused Moscow of attempting to destabilise Poland because of its role in supplying military aid to its neighbour Ukraine, allegations Russia has dismissed.” POLSA’s systems have remained offline since the incident to contain the breach of its IT infrastructure.

Incident reported to authorities

After discovering the attack, POLSA reported the breach to the relevant authorities and began an investigation to assess its impact. Regarding the incident, POLSA said “relevant services and institutions have been informed.”

POLSA has not revealed the nature of the attack and has not attributed the breach to any actor. The agency stated: “In order to secure data after the hack, the POLSA network was immediately disconnected from the Internet. We will keep you updated.”

How did the attack happen?

While no further information has been released since Sunday, internal sources told The Register that the “attack appears to be related to an internal email compromise” and that staff “are being told to use phones for communication instead.”

POLSA is currently working with the Polish Military Computer Security Incident Response Team (CSIRT MON) and the Polish Computer Security Incident Response Team (CSIRT NASK) to patch affected services. 

Who is responsible?

Commenting on the incident, Poland's Minister of Digital Affairs, Krzysztof Gawkowski, said the “systems under attack were secured. CSIRT NASK, together with CSIRT MON, supports POLSA in activities aimed at restoring the operational functioning of the Agency.” On finding the source, he said, “Intensive operational activities are also underway to identify who is behind the cyberattack. We will publish further information on this matter on an ongoing basis.”

About POLSA

A European Space Agency (ESA) member, POLSA was established in September 2014. It aims to support the Polish space industry and strengthen Polish defense capabilities via satellite systems. The agency also helps Polish entrepreneurs obtain funding from ESA and works with the EU, other ESA members, and other countries on various space exploration projects.

Third-Party Data Breaches Expose Cybersecurity Risks in EU's Largest Firms

A recent report by SecurityScorecard has shed light on the widespread issue of third-party data breaches among the European Union’s top companies. The study, which evaluated the cybersecurity health of the region’s 100 largest firms, revealed that 98% experienced breaches through external vendors over the past year. This alarming figure underscores the vulnerabilities posed by interconnected digital ecosystems.

Industry Disparities in Cybersecurity

While only 18% of the companies reported direct breaches, the prevalence of third-party incidents highlights hidden risks that could disrupt operations across multiple sectors. Security performance varied significantly by industry, with the transport sector standing out for its robust defenses. All companies in this sector received high cybersecurity ratings, reflecting strong proactive measures.

In contrast, the energy sector lagged behind, with 75% of firms scoring poorly, receiving cybersecurity grades of C or lower. Alarmingly, one in four energy companies reported direct breaches, further exposing their susceptibility to cyber threats.

Regional differences also emerged, with Scandinavian, British, and German firms demonstrating stronger cybersecurity postures. Meanwhile, French companies recorded the highest rates of third- and fourth-party breaches, reaching 98% and 100%, respectively.

Ryan Sherstobitoff, Senior Vice President of Threat Research and Intelligence at SecurityScorecard, stressed the importance of prioritizing third-party risk management. His remarks come as the EU prepares to implement the Digital Operational Resilience Act (DORA), a regulation designed to enhance the cybersecurity infrastructure of financial institutions.

“With regulations like DORA set to reshape cybersecurity standards, European companies must prioritise third-party risk management and leverage rating systems to safeguard their ecosystems,” Sherstobitoff stated in a media briefing.

Strengthening Cybersecurity Resilience

DORA introduces stricter requirements for banks, insurance companies, and investment firms to bolster their resilience against cyberattacks and operational disruptions. As organizations gear up for the rollout of this framework, addressing third-party risks will be crucial for maintaining operational integrity and adhering to evolving cybersecurity standards.

The findings from SecurityScorecard highlight the urgent need for EU businesses to fortify their digital ecosystems and prepare for regulatory demands. By addressing third-party vulnerabilities, organizations can better safeguard their operations and protect against emerging threats.

The Intersection of Travel and Data Privacy: A Growing Concern


The evolving relationship between travel and data privacy is sparking significant debate among travelers and experts. A recent Spanish regulation requiring hotels and Airbnb hosts to collect personal guest data has drawn particular criticism, with some privacy-conscious tourists likening it to invasive surveillance. This backlash highlights broader concerns about the expanding use of personal data in travel.

Privacy Concerns Across Europe

This trend is not confined to Spain. Across the European Union, regulations now mandate biometric data collection, such as fingerprints, for non-citizens entering the Schengen zone. Airports and border control points increasingly rely on these measures to streamline security and enhance surveillance. Advocates argue that such systems improve safety and efficiency, with Chris Jones of Statewatch noting their roots in international efforts to combat terrorism, driven by UN resolutions and supported by major global powers like the US, China, and Russia.

Challenges with Biometric and Algorithmic Systems

Despite their intended benefits, systems leveraging Passenger Name Record (PNR) data and biometrics often fall short of expectations. Algorithmic misidentifications can lead to unjust travel delays or outright denials. Biometric systems also face significant logistical and security challenges. While they are designed to reduce processing times at borders, system failures frequently result in delays. Additionally, storing such sensitive data introduces serious risks. For instance, the 2019 Marriott data breach exposed unencrypted passport details of millions of guests, underscoring the vulnerabilities in large-scale data storage.

The EU’s Ambitious Biometric Database

The European Union’s effort to create the world’s largest biometric database has sparked concern among privacy advocates. Such a trove of data is an attractive target for both hackers and intelligence agencies. The increasing use of facial recognition technology at airports—from Abu Dhabi’s Zayed International to London Heathrow—further complicates the privacy landscape. While some travelers appreciate the convenience, others fear the long-term implications of this data being stored and potentially misused.

Global Perspectives on Facial Recognition

Prominent figures like Elon Musk openly support these technologies, envisioning their adoption in American airports. However, critics argue that such measures often prioritize efficiency over individual privacy. In the UK, stricter regulations have limited the use of facial recognition systems at airports. Yet, alternative tracking technologies are gaining momentum, with trials at train stations exploring non-facial data to monitor passengers. This reflects ongoing innovation by technology firms seeking to navigate legal restrictions.

Privacy vs. Security: A Complex Trade-Off

According to Gus Hosein of Privacy International, borders serve as fertile ground for experiments in data-driven travel technologies, often at the expense of individual rights. These developments point to the inevitability of data-centric travel but also emphasize the need for transparent policies and safeguards. Balancing security demands with privacy concerns remains a critical challenge as these technologies evolve.

The Choice for Travelers

For travelers, the trade-off between convenience and the protection of personal information grows increasingly complex with every technological advance. As governments and companies push forward with data-driven solutions, the debate over privacy and transparency will only intensify, shaping the future of travel for years to come.

ENISA’s Biennial Cybersecurity Report Highlights EU Threats and Policy Needs


The EU Agency for Cybersecurity (ENISA) has released its inaugural biennial report under the NIS 2 Directive, offering an analysis of cybersecurity maturity and capabilities across the EU. Developed in collaboration with all 27 EU Member States and the European Commission, the report provides evidence-based insights into existing vulnerabilities, strengths, and areas requiring improvement. Juhan Lepassaar, ENISA’s Executive Director, emphasized the importance of readiness in addressing increasing cybersecurity threats, technological advancements, and complex geopolitical dynamics. Lepassaar described the report as a collective effort to bolster security and resilience across the EU.

The findings draw on multiple sources, including the EU Cybersecurity Index, the NIS Investment reports, the Foresight 2030 report, and the ENISA Threat Landscape report. A Union-wide risk assessment identified significant cyber threats, with vulnerabilities actively exploited by threat actors. While Member States share common cybersecurity objectives, variations in critical sector sizes and complexities pose challenges to implementing uniform cybersecurity measures. At the individual level, younger generations have shown improvements in cybersecurity awareness, though disparities persist in the availability and maturity of education programs across Member States.

ENISA has outlined four priority areas for policy enhancement: policy implementation, cyber crisis management, supply chain security, and skills development. The report recommends providing increased financial and technical support to EU bodies and national authorities to ensure consistent implementation of the NIS 2 Directive. Revising the EU Blueprint for managing large-scale cyber incidents is also suggested, aiming to align with evolving policies and improve resilience. Tackling the cybersecurity skills gap is a key focus, with plans to establish a unified EU training framework, evaluate future skills needs, and introduce a European attestation scheme for cybersecurity qualifications.

Additionally, the report highlights the need for a coordinated EU-wide risk assessment framework to address supply chain vulnerabilities and improve preparedness in specific sectors. Proposed mechanisms, such as the Cybersecurity Emergency Mechanism under the Cyber Solidarity Act, aim to strengthen collective resilience.

Looking to the future, ENISA anticipates increased policy attention on emerging technologies, including Artificial Intelligence (AI) and Post-Quantum Cryptography. While the EU’s cybersecurity framework provides a solid foundation, evolving threats and expanding roles for authorities present ongoing challenges. To address these, ENISA underscores the importance of enhancing situational awareness and operational cooperation, ensuring the EU remains resilient and competitive in addressing cybersecurity challenges.

Irish Data Protection Commission Halts AI Data Practices at X


The Irish Data Protection Commission (DPC) recently took a decisive step against the tech giant X, resulting in the immediate suspension of its use of personal data from European Union (EU) and European Economic Area (EEA) users to train its AI model, “Grok.” This marks a significant victory for data privacy, as it is the first time the DPC has taken such substantial action under its powers granted by the Data Protection Act of 2018. 

The DPC initially raised concerns that X’s data practices posed a considerable risk to individuals’ fundamental rights and freedoms. The use of publicly available posts to train the AI model was viewed as an unauthorized collection of sensitive personal data without explicit consent. This intervention highlights the tension between technological innovation and the necessity of safeguarding individual privacy. 

Following the DPC’s intervention, X agreed to cease its current data processing activities and commit to adhering to stricter privacy guidelines. Although the company did not acknowledge any wrongdoing, this outcome sends a strong message to other tech firms about the importance of prioritizing data privacy when developing AI technologies. The immediate halt of Grok’s training on data from 60 million European users came in response to mounting regulatory pressure across Europe, with at least nine GDPR complaints filed during the brief window, from May 7 to August 1, in which the training ran.

After the suspension, Dr. Des Hogan, Chairperson of the Irish DPC, emphasized that the regulator would continue working with its EU/EEA peers to ensure compliance with GDPR standards, affirming the DPC’s commitment to safeguarding citizens’ rights. The DPC’s decision has broader implications beyond its immediate impact on X. As AI technology rapidly evolves, questions about data ethics and transparency are increasingly urgent. This decision serves as a prompt for a necessary dialogue on the responsible use of personal data in AI development.  

To further address these issues, the DPC has requested an opinion from the European Data Protection Board (EDPB) regarding the legal basis for processing personal data in AI models, the extent of data collection permitted, and the safeguards needed to protect individual rights. This guidance is anticipated to set clearer standards for the responsible use of data in AI technologies. The DPC’s actions represent a significant step in regulating AI development, aiming to ensure that these powerful technologies are deployed ethically and responsibly. By setting a precedent for data privacy in AI, the DPC is helping shape a future where innovation and individual rights coexist harmoniously.