The ongoing debate around Meta’s use of European data to train its artificial intelligence (AI) systems is far from over. While Meta has started training its large language models (LLMs) using public content from Facebook and Instagram, privacy regulators in Europe are still questioning whether this is lawful, and the issue may soon reach the European Court of Justice (ECJ).
Meta began training its AI using public posts made by users in the EU shortly after getting the go-ahead from several privacy watchdogs. This approval came just before Meta launched AI-integrated products, including its smart glasses, which rely heavily on understanding cultural and regional context from online data.
However, some regulators and consumer groups are not convinced the approval was justified. A German consumer organization had attempted to block the training through an emergency court appeal. Although the request was denied, that was only a temporary decision. The core legal challenges, including one led by Hamburg’s data protection office, are still expected to proceed in court.
Hamburg’s commissioner, who initially supported blocking the training, later withdrew a separate emergency measure under Europe’s data protection law. He stated that while the training has been allowed to continue for now, it’s highly likely that the final ruling will come from the EU’s highest court.
The controversy centers on whether Meta has a strong enough legal reason, known as “legitimate interest,” to use personal data for AI training. Meta’s argument was accepted by Irish regulators, who oversee Meta’s EU operations, on the condition that strict privacy safeguards are in place.
What Does ‘Legitimate Interest’ Mean Under GDPR?
Under the General Data Protection Regulation (GDPR), companies must have a valid reason to collect and use personal data. One of the six legal bases allowed is called “legitimate interest.”
This means a company can process someone’s data when it is necessary for a genuine business purpose, provided that interest is not outweighed by the individual’s privacy rights.
In the case of AI model training, companies like Meta claim that building better products and improving AI performance qualifies as a legitimate interest. However, this is debated, especially when public data includes posts with personal opinions, cultural expressions, or identity-related content.
Data protection regulators must carefully balance:
1. The company’s business goals
2. The individual’s right to privacy
3. The potential long-term risks of using personal data for AI systems
Some experts argue that this sets a broader precedent. If Meta can train its AI using public data under the concept of legitimate interest, other companies may follow. This has raised hopes among many European AI firms that have felt held back by unclear or strict regulations.
Industry leaders say that regulatory uncertainty, particularly around the GDPR and the upcoming AI Act, has been one of the biggest barriers to innovation in the region. Others believe the current developments signal a shift toward supporting responsible AI development while protecting users’ rights.
Despite approval from regulators and support from industry voices, legal clarity is still missing. Many legal experts and companies agree that only a definitive ruling from the European Court of Justice can settle whether using personal data for AI training in this way is truly lawful.
Healthcare professionals in the UK are under scrutiny for recording and transcribing patient conversations with artificial intelligence tools that have not been officially approved. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.
This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.
Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.
The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs, a phenomenon known as “hallucination”, which can lead to serious errors in medical records or decision-making.
The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.
Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.
Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.
Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.
On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, was developed in close consultation with NHS leaders and is being used successfully.
As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.
Christian Dior, the well-known luxury fashion brand, recently experienced a cyberattack that may have exposed customer information. The brand, owned by the French company LVMH, announced that an outsider had managed to break into part of its customer database. This has raised concerns about the safety of personal information, especially among shoppers in the UK.
Although no bank or card information was stolen, Dior said the hackers were able to access customers’ names, email addresses, phone numbers, mailing addresses, purchase records, and marketing preferences. Even so, experts warn that this kind of personal data could still be used in scams that trick people into giving away more information.
How and When the Breach Happened
The issue was first noticed on May 7, 2025, when Dior’s online system in South Korea detected unusual activity involving customer records. Their technical team quickly responded by shutting down the affected servers to prevent more damage.
A week later, on May 14, French news sources reported the incident, and the following day, Dior publicly confirmed the breach on its websites. The company explained that while no payment data was involved, some customer details were accessed.
What Dior Is Doing Now
In line with European data protection rules, Dior acted quickly by resetting passwords, isolating the affected systems, and hiring cybersecurity experts to investigate the attack. They also began informing customers where necessary and reassured the public that they are working on making their systems more secure.
Dior says it plans to improve security by increasing the use of two-factor login processes and monitoring accounts more closely for unusual behavior. The company says it takes customer privacy very seriously and is sorry for any trouble this may cause.
Why Luxury Brands Are Often Targeted
High-end brands like Dior are popular targets for cybercriminals because they cater to wealthy customers and run large digital operations. Earlier this month, UK retailers such as Marks & Spencer and Co-op also reported customer data incidents, showing that cyberattacks on the retail sector are becoming more common.
What Customers Can Do to Stay Safe
If you’re a Dior customer, there are simple steps you can take to protect yourself:
1. Be careful with any messages that claim to be from Dior. Don’t click on links unless you are sure the message is real. Always visit Dior’s website directly.
2. Change your Dior account password to something new and strong. Avoid using the same password on other websites.
3. Turn on two-factor login for extra protection if available.
4. Watch your bank and credit card activity regularly for any unusual charges.
5. Be wary of fake ads or offers claiming big discounts from Dior, especially on social media.
Taking a few minutes now to secure your account could save you from a lot of problems later.
In an era where cybercriminals are increasingly targeting passwords through phishing attacks, data breaches, and other malicious tactics, securing online accounts has never been more important. Relying solely on single-factor authentication, such as a password, is no longer sufficient to protect sensitive information. Multi-factor authentication (MFA) has emerged as a vital tool for enhancing security by requiring verification from multiple sources. Among the most effective MFA methods are hardware security keys, which provide robust protection against unauthorized access.
A hardware security key is a small physical device designed to strengthen account security using public key cryptography. During setup, the key generates a pair of keys: a private key that never leaves the device and is used to sign login challenges, and a public key that the online service stores to verify those signatures. Because the private key is held only on the hardware itself, it is nearly impossible for hackers to access or replicate. Unlike SMS-based authentication, which is vulnerable to interception, hardware security keys perform this check directly on the device, significantly reducing the risk of compromise.
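To make that challenge-response idea concrete, the minimal TypeScript sketch below uses the browser’s built-in WebCrypto API. It generates the key pair in software purely for illustration; on a real security key, the private key is created and stored inside the device and never exposed to the computer.

```typescript
// Conceptual sketch of the challenge-response flow behind a security key.
// For illustration only: the private key here lives in software, whereas a
// real hardware key keeps it on the device and never reveals it.

async function demoChallengeResponse(): Promise<void> {
  // 1. Generate a key pair (done once, when the key is registered).
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // private key is not extractable
    ["sign", "verify"],
  );

  // 2. At login time, the service sends a random challenge.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  // 3. The key signs the challenge with its private key.
  const signature = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keyPair.privateKey,
    challenge,
  );

  // 4. The service verifies the signature with the stored public key.
  const valid = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    keyPair.publicKey,
    signature,
    challenge,
  );
  console.log("Signature valid:", valid);
}

demoChallengeResponse();
```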
Hardware security keys are compatible with major online platforms, including Google, Microsoft, Facebook, GitHub, and many financial institutions. They connect to devices via USB, NFC, or Bluetooth, ensuring compatibility with a wide range of hardware. Popular options include Yubico’s YubiKey, Google’s Titan Security Key, and Thetis. Setting up a hardware security key is straightforward. Users simply register the key with an online account that supports security keys. For example, in Google’s security settings, users can enable 2-Step Verification and add a security key.
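For services that support FIDO2/WebAuthn, this registration step is exposed to web applications through the browser’s navigator.credentials API. The sketch below shows the general shape of such a request; the relying-party name and domain, user details, and challenge are placeholder values, and in a real deployment the challenge would be generated by the server rather than in the browser.

```typescript
// Hedged sketch of a WebAuthn registration request for a hardware security key.
// All names, IDs, and the challenge below are placeholders for illustration.

async function registerSecurityKey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // In practice the challenge is random bytes issued by the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Service", id: "example.com" }, // hypothetical relying party
    user: {
      id: new TextEncoder().encode("user-1234"), // placeholder user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    // -7 is ES256 (ECDSA with P-256 and SHA-256), the most widely supported algorithm.
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],
    // "cross-platform" asks for an external key (USB, NFC, or Bluetooth).
    authenticatorSelection: { authenticatorAttachment: "cross-platform" },
    timeout: 60_000,
  };

  // The browser prompts the user to insert or tap their security key.
  return navigator.credentials.create({ publicKey });
}
```

Signing in later uses the matching navigator.credentials.get() call.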
Once linked, logging in requires inserting or tapping the key, making the process both highly secure and faster than receiving verification codes via email or SMS. When selecting a security key, compatibility is a key consideration. Newer devices often require USB-C keys, while older ones may need USB-A or NFC options. Security certifications also matter—FIDO U2F provides basic security, while FIDO2/WebAuthn offers advanced protection against phishing and unauthorized access. Some security keys even include biometric authentication, such as fingerprint recognition, for added security.
Prices for hardware security keys typically range from $30 to $100. It’s recommended to purchase a backup key in case the primary key is lost. Losing a security key does not mean being locked out of accounts, as most platforms allow backup authentication methods, such as SMS or authentication apps. However, having a secondary security key ensures uninterrupted access without relying on less secure recovery methods.
While hardware security keys provide excellent protection, maintaining strong online security habits is equally important. This includes creating complex passwords, being cautious with email links and attachments, and avoiding oversharing personal information on social media. For those seeking additional protection, identity theft monitoring services can offer alerts and assistance in case of a security breach.
By using a hardware security key alongside other cybersecurity measures, individuals can significantly reduce their risk of falling victim to online attacks. These keys not only enhance security but also ensure convenient and secure access to their most important accounts. As cyber threats continue to evolve, adopting advanced tools like hardware security keys is a proactive step toward safeguarding your digital life.