Cookies Explained: Accept or Reject for Online Privacy

 

Online cookies sit at the centre of a trade-off between convenience and privacy, and those “accept all” or “reject all” pop-ups are how websites ask for your permission to track and personalise your experience. Understanding what each option means helps you decide how much data you are comfortable sharing.

Role of cookies 

Cookies are small files that websites store on your device to remember information about you and your activity. They can keep you logged in, remember your preferred settings, or help online shops track items in your cart. 
  • Session cookies are temporary and disappear when you close the browser or after inactivity, supporting things like active shopping carts. 
  • Persistent cookies remain for days to years, recognising you when you return and saving details like login credentials. 
  • Advertisers use cookies to track browsing behaviour and deliver targeted ads based on your profile.
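To make the session/persistent distinction concrete, here is a minimal sketch using Python’s standard http.cookies module; the cookie names and values are illustrative only, not taken from any real site. A session cookie carries no expiry, while a persistent one sets Max-Age so the browser keeps it between visits.

```python
# Illustrative only: how a server-side response could set a session
# cookie (discarded when the browser closes) versus a persistent one.
# Cookie names and values are made up for this example.
from http.cookies import SimpleCookie

cookies = SimpleCookie()

# Session cookie: no Max-Age/Expires, so it lasts only for the visit.
cookies["session_id"] = "abc123"
cookies["session_id"]["httponly"] = True

# Persistent cookie: Max-Age of ~180 days keeps it between visits,
# which is how a site recognises you when you return.
cookies["site_prefs"] = "lang-en"
cookies["site_prefs"]["max-age"] = 60 * 60 * 24 * 180

# Prints the Set-Cookie headers the server would send.
print(cookies.output())
```
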
Essential vs non-essential cookies

Most banners state that a site uses essential cookies that are required for core functions such as logging in or processing payments. These cannot usually be disabled because the site would break without them. 

Non-essential cookies generally fall into three groups:
  • Functional cookies personalise your experience, for example by remembering language or region.
  • Analytics cookies collect statistics on how visitors use the site, helping owners improve performance and content.
  • Advertising cookies, often from third parties, build cross-site, cross-device profiles to serve personalised ads.

Accept all or reject all?

Choosing accept all gives consent for the site and third parties to use every category of cookie and tracker. This enables full functionality and personalised features, including tailored advertising driven by your behaviour profile. 

Selecting reject all (or ignoring the banner) typically blocks every cookie except those essential for the site to work. You still access core services, but may lose personalisation and see fewer or less relevant embedded third-party elements. Your decision is stored in a consent cookie, and many sites will ask you again after six to twelve months.
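As a rough illustration of that last point, the sketch below shows how a banner choice could be recorded in a consent cookie that expires after roughly six months. It uses only Python’s standard library; the cookie name, category names, and lifetime are assumptions for illustration, not any particular site’s implementation.

```python
# Illustrative sketch: recording a visitor's banner choice in a
# "consent cookie" and letting it expire after ~6 months.
import json
from http.cookies import SimpleCookie

def build_consent_cookie(accept_all: bool) -> SimpleCookie:
    consent = {
        "essential": True,          # always on; the site needs these
        "functional": accept_all,   # language/region personalisation
        "analytics": accept_all,    # usage statistics
        "advertising": accept_all,  # cross-site ad profiles
    }
    cookie = SimpleCookie()
    cookie["cookie_consent"] = json.dumps(consent)
    cookie["cookie_consent"]["max-age"] = 60 * 60 * 24 * 182  # ~6 months
    cookie["cookie_consent"]["samesite"] = "Lax"
    return cookie

# A visitor who clicked "reject all": only essential cookies stay enabled.
print(build_consent_cookie(accept_all=False).output())
```
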

Privacy, GDPR and control

Under the EU’s GDPR, cookies that identify users count as personal data, so sites must request consent, explain what is being tracked, document that consent and make it easy to refuse or withdraw it. Many websites outside the EU follow similar rules because they handle European traffic.

To reduce consent fatigue, a specification called Global Privacy Control lets browsers send a built-in privacy signal instead of forcing users to click through banners on every site, though adoption remains limited and voluntary. If you regret earlier choices, you can clear cookies in your browser settings, which resets consent but also signs you out of most services.
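For sites that choose to honour it, reading the signal is straightforward: GPC-enabled browsers send a Sec-GPC: 1 request header (and expose navigator.globalPrivacyControl to scripts). Below is a minimal server-side sketch, assuming a WSGI-style environ dictionary; the variable names are illustrative.

```python
# Minimal sketch, assuming a WSGI-style request where HTTP headers are
# exposed through an "environ" dict. A browser with Global Privacy
# Control enabled sends the "Sec-GPC: 1" request header.
def gpc_opt_out(environ: dict) -> bool:
    # WSGI prefixes header names with HTTP_ and upper-cases them,
    # so Sec-GPC arrives as HTTP_SEC_GPC.
    return environ.get("HTTP_SEC_GPC") == "1"

# Example request from a GPC-enabled browser.
request_environ = {"HTTP_SEC_GPC": "1"}

if gpc_opt_out(request_environ):
    print("GPC signal received: treat as opt-out of non-essential cookies")
else:
    print("No GPC signal: fall back to the consent banner")
```
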

EU Accuses Meta of Breaching Digital Rules, Raises Questions on Global Tech Compliance

 




The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.

In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.

According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.

The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.

Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.

The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.

TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.

Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.

For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.


Hackers Steal Medical Data of Nearly Half a Million Women in the Netherlands

Almost 500,000 women in the Netherlands have had their medical information stolen after hackers breached a clinical laboratory responsible for analyzing cervical cancer screening tests. The stolen records, dating from 2022 until now, include names, addresses, dates of birth, social security numbers, test results, and even doctors’ follow-up advice.

The data was taken from Clinical Diagnostics, a lab located in Rijswijk, near The Hague. The breach occurred early last month, but the women involved and the national screening bureau were only informed last week. The delay sparked outrage, since the EU’s GDPR requires data protection authorities to be notified within 72 hours of a confirmed breach and affected individuals to be informed without undue delay.

Bevolkingsonderzoek Nederland (BVO NL), the agency overseeing national cancer screening programs, strongly criticized the lab for failing to alert women sooner. Its chair, Elza den Hertog, described the incident as a “nightmare scenario.” She explained that while the bureau had worked hard to encourage women to take the cervical screening test, those efforts were undermined when participants learned their sensitive medical details had fallen into the hands of cybercriminals.

As a result of the breach, BVO NL has suspended its cooperation with Clinical Diagnostics until the lab can guarantee stronger protections for patient data. Dutch Health Minister Danielle Jansen has also ordered an independent investigation.

Further reports suggest the situation may be even more serious than initially thought. In addition to cervical cancer screenings, other laboratory data, including tests from hospitals such as Leiden University Medical Centre and Amphia, may also have been compromised.

The healthcare cybersecurity center, Z-Cert, confirmed that stolen data has already appeared on the dark web, with around 100 megabytes published so far. That portion alone represents more than 50,000 patients’ information. Investigators believe the total stolen data could reach 300 gigabytes.

According to local media, a cybercriminal group known as "Nova" has claimed responsibility for the attack. Reports also suggest that the lab’s parent company, Eurofins Scientific, may have paid a ransom worth millions of euros in an attempt to prevent the release of the stolen files, though this has not been officially confirmed.

Authorities are urging affected women to remain alert to possible fraud. Stolen personal details can be misused for scams, phishing attempts, or identity theft. Officials advise patients not to share information with unknown callers, avoid clicking suspicious links, and treat unusual messages with caution.

“This incident shows just how damaging cyberattacks can be when they target critical healthcare services,” den Hertog said. “Our focus now must be on restoring trust, supporting patients, and preventing this from ever happening again.”


Legal Battle Over Meta’s AI Training Likely to Reach Europe’s Top Court

 


The ongoing debate around Meta’s use of European data to train its artificial intelligence (AI) systems is far from over. While Meta has started training its large language models (LLMs) using public content from Facebook and Instagram, privacy regulators in Europe are still questioning whether this is lawful, and the issue may soon reach the European Court of Justice (ECJ).

Meta began training its AI using public posts made by users in the EU shortly after getting the go-ahead from several privacy watchdogs. This approval came just before Meta launched AI-integrated products, including its smart glasses, which rely heavily on understanding cultural and regional context from online data.

However, some regulators and consumer groups are not convinced the approval was justified. A German consumer organization had attempted to block the training through an emergency court appeal. Although the request was denied, that was only a temporary decision. The core legal challenges, including one led by Hamburg’s data protection office, are still expected to proceed in court.

Hamburg’s commissioner, who initially supported blocking the training, later withdrew a separate emergency measure under Europe’s data protection law. He stated that while the training has been allowed to continue for now, it’s highly likely that the final ruling will come from the EU’s highest court.

The controversy centers on whether Meta has a strong enough legal reason, known as "legitimate interest", to use personal data for AI training. Meta’s argument was accepted by Irish regulators, who oversee Meta’s EU operations, on the condition that strict privacy safeguards are in place.


What Does ‘Legitimate Interest’ Mean Under GDPR?

Under the General Data Protection Regulation (GDPR), companies must have a valid reason to collect and use personal data. One of the six legal bases allowed is called “legitimate interest.” 

This means a company can process someone’s data if it’s necessary for a real business purpose, as long as it does not override the privacy rights of the individual.

In the case of AI model training, companies like Meta claim that building better products and improving AI performance qualifies as a legitimate interest. However, this is debated, especially when public data includes posts with personal opinions, cultural expressions, or identity-related content.

Data protection regulators must carefully balance:

1. The company’s business goals

2. The individual’s right to privacy

3. The potential long-term risks of using personal data for AI systems


Some experts argue that this sets a broader precedent. If Meta can train its AI using public data under the concept of legitimate interest, other companies may follow. This has raised hopes among many European AI firms that have felt held back by unclear or strict regulations.

Industry leaders say that regulatory uncertainty, particularly around the GDPR and the upcoming AI Act, has been one of the biggest barriers to innovation in the region. Others believe the current developments signal a shift toward supporting responsible AI development while protecting users’ rights.

Despite approval from regulators and support from industry voices, legal clarity is still missing. Many legal experts and companies agree that only a definitive ruling from the European Court of Justice can settle whether using personal data for AI training in this way is truly lawful.


Can AI Be Trusted With Sensitive Business Data?

 



As artificial intelligence becomes more common in businesses, from retail to finance to technology, it is helping teams make faster decisions. But behind these smart predictions is a growing problem: how do you make sure employees only see what they are allowed to, especially when AI mixes information from many different places?

Take this example: A retail company’s AI tool predicts upcoming sales trends. To do this, it uses both public market data and private customer records. The output looks clean and useful, but what if that forecast is shown to someone who isn’t supposed to access sensitive customer details? That’s where access control becomes tricky.


Why Traditional Access Rules Don’t Work for AI

In older systems, access control was straightforward. Each person had certain permissions: developers accessed code, managers viewed reports, and so on. But AI changes the game. These systems pull data from many sources (internal files, external APIs, sensor feeds) and combine everything to create insights. That means even if a person only has permission for public data, they might end up seeing results that are based, in part, on private or restricted information.
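A toy example makes the problem visible; every name and number below is invented. The forecast quietly blends a public figure with a restricted one, and nothing about the returned number tells the viewer, or an access-control check, that private customer records went into it.

```python
# Toy illustration of the problem: the output blends public and
# restricted inputs, but carries no record of that fact.
public_market_growth = 0.04         # public data: anyone may see this
private_customer_spend = 1_250_000  # restricted data: finance team only

def sales_forecast() -> float:
    # The result is derived in part from restricted data...
    return private_customer_spend * (1 + public_market_growth)

# ...yet a user cleared only for public data just sees a plain number.
print(f"Next-quarter forecast: {sales_forecast():,.0f}")
```
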


Why It Matters

Security Concerns: If sensitive data ends up in the wrong hands, even indirectly, it can lead to data leaks. A 2025 study showed that over two-thirds of companies had AI-related security issues due to weak access controls.

Legal Risks: Privacy laws like the GDPR require clear separation of data. If a prediction includes restricted inputs and is shown to the wrong person, companies can face heavy fines.

Trust Issues: When employees or clients feel their data isn’t safe, they lose trust in the system and in the business.


What’s Making This So Difficult?

1. AI systems often blend data so deeply that it’s hard to tell what came from where.

2. Access rules are usually fixed, but AI relies on fast-changing data.

3. Companies have many users with different roles and permissions, making enforcement complicated.

4. Permissions are often too broad; for example, someone allowed to "view reports" might accidentally access sensitive content.


How Can Businesses Fix This?

• Track Data Origins: Label data as "public" or "restricted" and monitor where it ends up.

• Flexible Access Rules: Adjust permissions based on user roles and context.

• Filter Outputs: Build AI to hide or mask parts of its response that come from private sources.

• Separate Models: Train different AI models for different user groups, each with its own safe data.

• Monitor Usage: Keep logs of who accessed what, and use alerts to catch suspicious activity.
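The sketch below ties the first three ideas together. It is a simplified illustration with invented labels and roles, not a production access-control system: each value carries an origin label, labels propagate into anything derived from it, and the output is masked when the viewer’s role does not cover every label involved.

```python
# Simplified illustration: origin labels travel with the data, and the
# output is masked for viewers whose role does not cover every label.
from dataclasses import dataclass

@dataclass
class Labeled:
    value: float
    labels: frozenset  # e.g. {"public"} or {"public", "restricted"}

def combine(a: Labeled, b: Labeled) -> Labeled:
    # Anything derived from restricted data is itself restricted.
    return Labeled(a.value * (1 + b.value), a.labels | b.labels)

# Invented role -> allowed-label mapping.
ROLE_ACCESS = {
    "analyst": {"public"},
    "finance_manager": {"public", "restricted"},
}

def render(result: Labeled, role: str) -> str:
    allowed = ROLE_ACCESS.get(role, set())
    if result.labels <= allowed:
        return f"{result.value:,.0f}"
    return "[masked: forecast uses data outside your clearance]"

customer_spend = Labeled(1_250_000, frozenset({"restricted"}))
market_growth = Labeled(0.04, frozenset({"public"}))
forecast = combine(customer_spend, market_growth)

print("analyst sees:        ", render(forecast, "analyst"))
print("finance manager sees:", render(forecast, "finance_manager"))
```
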


As AI tools grow more advanced and rely on live data from many sources, managing access will only get harder. Businesses must modernize their security strategies to protect sensitive information without slowing down innovation.

EU Fines TikTok $600 Million for Data Transfers to China


The EU has fined TikTok 530 million euros (around $600 million). The platform, owned by Chinese tech giant ByteDance, was found to have unlawfully transferred the private data of EU users to China and to have failed to put safeguards in place protecting that data from potential access by Chinese authorities. According to an AFP news report, the penalty, one of the largest ever issued by the EU’s data protection agencies, follows a detailed inquiry into the legitimacy of TikTok’s data transfer practices.

TikTok Fine and EU

TikTok’s lead regulator in Europe, Ireland’s Data Protection Commission (DPC), said that TikTok admitted during the probe to hosting European user data in China. DPC deputy commissioner Graham Doyle said that “TikTok failed to verify, guarantee, and demonstrate that the personal data of (European) users, remotely accessed by staff in China, was afforded a level of protection essentially equivalent to that guaranteed within the EU.”

Doyle added that TikTok failed to address the risk of Chinese authorities accessing Europeans’ private data under China’s anti-terrorism, counter-espionage, and other laws, standards that TikTok itself acknowledged differ from the EU’s data protection standards.

TikTok will contest the decision

Despite the findings, TikTok has said it will contest the fine. TikTok Europe’s Christine Grahn stressed that the company has “never received a request” from Chinese authorities for European users’ data and has never provided such data to them. “We disagree with this decision and intend to appeal it in full,” Grahn said.

TikTok has a massive 1.5 billion users worldwide. In recent years, the platform has come under heavy pressure from Western governments over concerns that Chinese actors could misuse its data for surveillance and propaganda.

TikTok to comply with EU Rules

In 2023, the Irish DPC fined TikTok 345 million euros for violating EU rules on the processing of children’s data. The DPC’s latest decision also found that TikTok breached the EU’s General Data Protection Regulation (GDPR) by transferring user data to China. It comprises a 530 million euro administrative penalty and an order that TikTok bring its data processing into compliance with EU requirements within six months.

Brave Browser’s New ‘Cookiecrumbler’ Tool Aims to Eliminate Annoying Cookie Consent Pop-Ups

 

While the General Data Protection Regulation (GDPR) was introduced with noble intentions—to protect user privacy and control over personal data—its practical side effects have caused widespread frustration. For many internet users, GDPR has become synonymous with endless cookie consent pop-ups and hours of compliance training. Now, Brave Browser is stepping up with a new solution: Cookiecrumbler, a tool designed to eliminate the disruptive cookie notices without compromising web functionality. 

Cookiecrumbler is not Brave’s first attempt at combating these irritating banners. The browser has long offered pop-up blocking capabilities. However, the challenge hasn’t been the blocking itself—it’s doing so while preserving website functionality. Many websites break or behave unexpectedly when these notices are blocked improperly. Brave’s new approach promises to fix that by taking cookie blocking to a new level of sophistication.  

According to a recent announcement, Cookiecrumbler combines large language models (LLMs) with human oversight to automate and refine the detection of cookie banners across the web. This hybrid model allows the tool to scale effectively while maintaining precision. By running on Brave’s backend servers, Cookiecrumbler crawls websites, identifies cookie notices, and generates custom rules tailored to each site’s layout and language. One standout feature is its multilingual capability. Cookie notices often vary not just in structure but in language and legal formatting based on the user’s location. 

Cookiecrumbler accounts for this by using geo-targeted vantage points, enabling it to view websites as a local user would, making detection far more effective. The developers highlight several reasons for using LLMs in this context: cookie banners typically follow predictable language patterns, the work is repetitive, and it’s relatively low-risk. The cost of each crawl is minimal, allowing the team to test different models before settling on smaller, efficient ones that provide excellent results with fine-tuning. Importantly, human reviewers remain part of the process. While AI handles the bulk detection, humans ensure that the blocking rules don’t accidentally interfere with important site functions. 

These reviewers refine and validate Cookiecrumbler’s suggestions before they’re deployed. Even better, Brave is releasing Cookiecrumbler as an open-source tool, inviting integration by other browsers and developers. This opens the door for tools like Vivaldi or Firefox to adopt similar capabilities. 
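As a purely hypothetical sketch of the general pipeline described above (not Cookiecrumbler’s actual code, rule format, or API), the flow might look like this: an automated pass proposes a candidate banner selector for a page, and a human reviewer approves it before any blocking rule is deployed.

```python
# Hypothetical sketch of the general "automated proposal + human review"
# pipeline. The LLM step is stubbed out; all names are invented.
from dataclasses import dataclass

@dataclass
class Proposal:
    site: str
    selector: str      # CSS selector the automated pass flags as the banner
    confidence: float
    approved: bool = False

def propose_banner_selector(site: str, html: str) -> Proposal:
    # Stand-in for the LLM step: here we just look for a telltale id.
    if 'id="cookie-banner"' in html:
        return Proposal(site, "#cookie-banner", confidence=0.9)
    return Proposal(site, "", confidence=0.0)

def human_review(p: Proposal) -> Proposal:
    # Stand-in for the reviewer: only confident, non-empty proposals pass.
    p.approved = bool(p.selector) and p.confidence >= 0.8
    return p

page = '<div id="cookie-banner">We value your privacy...</div>'
proposal = human_review(propose_banner_selector("example.com", page))

if proposal.approved:
    print(f"Deploy hiding rule for {proposal.site}: {proposal.selector}")
```
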

Looking ahead, Brave plans to integrate Cookiecrumbler directly into its browser, but only after completing thorough privacy reviews to ensure it aligns with the browser’s core principle of user-centric privacy. Cookiecrumbler marks a significant step forward in balancing user experience and privacy compliance—offering a smarter, less intrusive web.

Yoojo Exposes Millions of Sensitive Files Due to Misconfigured Database

 

Yoojo, a European service marketplace, accidentally left a cloud storage bucket unprotected online, exposing around 14.5 million files, including highly sensitive user data. The data breach was uncovered by Cybernews researchers, who immediately informed the company. Following the alert, Yoojo promptly secured the exposed archive.

The database contained a range of personally identifiable information (PII), including full names, passport details, government-issued IDs, user messages, and phone numbers. This level of detail, according to experts, could be exploited for phishing, identity theft, or even financial fraud.

Yoojo offers an online platform connecting users with service providers for tasks like cleaning, gardening, childcare, IT support, moving, and homecare. With over 500,000 downloads on Google Play, the app has gained significant traction in France, Spain, the Netherlands, and the UK.

Cybernews stated that the exposed database was publicly accessible for at least 10 days, though there's no current evidence of malicious exploitation. Still, researchers cautioned that unauthorized parties might have already accessed the data. Yoojo has yet to issue a formal comment on the incident.

“Leaked personal details enables attackers to create highly targeted phishing, vishing, and smishing campaigns. Fraudulent emails and SMS scams could involve impersonating Yoojo service providers asking for sensitive information like payment details or verification documents,” Cybernews researchers said.

The incident underscores how frequently misconfigured databases lead to data exposures. While many organizations rely on cloud services for storing confidential information, they often overlook the shared responsibility model that cloud infrastructure follows.

On a positive note, most companies act swiftly once made aware of such vulnerabilities—just as Yoojo did—by promptly restricting access to the exposed data.