
EU Accuses Meta of Breaching Digital Rules, Raises Questions on Global Tech Compliance

The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.

In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.

According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.

The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.

Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.

The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.

TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.

Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.

For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.


EU’s Child Sexual Abuse Regulation Risks Undermining Encryption and Global Digital Privacy

The European Union’s proposed Child Sexual Abuse Regulation (CSAR)—often referred to as Chat Control—is being criticized for creating an illusion of safety while threatening the very foundation of digital privacy. Experts warn that by weakening end-to-end encryption, the proposal risks exposing users worldwide to surveillance, exploitation, and cyberattacks. 

Encryption, which scrambles data to prevent unauthorized access, is fundamental to digital trust. It secures personal communications, financial data, and medical records, forming a critical safeguard for individuals and institutions alike. Yet, several democratic governments, including those within the EU, have begun questioning its use, framing strong encryption as an obstacle to law enforcement. This false dichotomy—between privacy and public safety—has led to proposals that inadvertently endanger both. 
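
To make the idea concrete, here is a minimal Python sketch of symmetric encryption using the widely used cryptography package (an illustrative assumption on our part; the article does not reference any particular library, and messaging apps use far more elaborate end-to-end protocols):

    # Minimal sketch of encryption "scrambling" data (pip install cryptography).
    # A simplified illustration, not how WhatsApp or Signal actually work.
    from cryptography.fernet import Fernet, InvalidToken

    key = Fernet.generate_key()        # secret key held by the communicating parties
    cipher = Fernet(key)

    message = b"Medical results attached - please keep these confidential."
    token = cipher.encrypt(message)    # ciphertext is unreadable without the key
    print(token[:40])

    print(cipher.decrypt(token))       # the key holder recovers the original message

    # Anyone without the key (an eavesdropper, or a server in the middle) gets nothing.
    try:
        Fernet(Fernet.generate_key()).decrypt(token)
    except InvalidToken:
        print("Decryption fails without the correct key.")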

At the center of the EU’s approach is client-side scanning, a technology that scans messages on users’ devices before encryption. Critics compare it to having someone read over your shoulder as you type a private letter. While intended to detect child sexual abuse material (CSAM), the system effectively eliminates confidentiality. Moreover, it can be easily circumvented—offenders can hide files by zipping, renaming, or converting them to other formats, undermining the entire purpose of the regulation. 
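
The evasion problem is easy to demonstrate. Detection systems of this kind compare files against a database of fingerprints of known material; the short Python sketch below uses an exact cryptographic hash as a stand-in (real proposals rely on perceptual hashing, which is more tolerant but can likewise be defeated by sufficient re-encoding) to show why trivially transformed copies slip through:

    # Why fingerprint matching is easy to evade: any change to the bytes changes the hash.
    # Uses only the Python standard library; the "flagged content" here is a placeholder.
    import hashlib
    import zlib

    def fingerprint(data: bytes) -> str:
        """Hex digest standing in for a known-content fingerprint."""
        return hashlib.sha256(data).hexdigest()

    original = b"placeholder bytes standing in for a flagged image"
    known_fingerprints = {fingerprint(original)}        # database of known material

    print(fingerprint(original) in known_fingerprints)  # True: unmodified copy is detected

    zipped = zlib.compress(original)                    # e.g. putting the file in an archive
    converted = original[:-1] + b"!"                    # e.g. re-encoding alters the bytes
    print(fingerprint(zipped) in known_fingerprints)    # False
    print(fingerprint(converted) in known_fingerprints) # False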

Beyond its inefficiency, client-side scanning opens the door to mass surveillance. Once such systems exist, experts fear they could be repurposed to monitor political dissent, activism, or journalism. By introducing backdoors—intentional weaknesses that allow access to encrypted data—governments risk repeating mistakes like those seen in the Salt Typhoon case, where a Chinese state-sponsored group exploited backdoors originally built for U.S. agencies. 

The consequences of weakened encryption are vast. Journalists would struggle to protect sources, lawyers could no longer guarantee client confidentiality, and businesses risk exposure of trade secrets. Even governments rely on encryption to protect national security. For individuals—especially victims of domestic abuse or marginalized groups—encrypted communication can literally be a matter of life and death. 

Ironically, encryption also protects children. Research from the UK’s Information Commissioner’s Office found that encrypted environments make it harder for predators to access private data for grooming. Weakening encryption, therefore, could expose children to greater harm rather than prevent it. 

Public opposition to similar policies has already shifted outcomes elsewhere. In Australia, controversial encryption laws passed in 2018 have yet to be enforced due to political backlash. In the UK, public resistance to the Online Safety Act led major tech companies to threaten withdrawal rather than compromise encryption.  

Within the EU, member states remain divided. Poland, Finland, the Netherlands, and the Czech Republic have opposed the CSAR for privacy and security reasons, while France, Denmark, and Hungary support it as a necessary tool against abuse. Whatever the outcome, the effects will extend globally—forcing tech companies to either weaken encryption standards or risk losing access to the European market. 

As the world marks Global Encryption Day, the debate surrounding CSAR highlights a broader truth: safeguarding the internet means preserving both safety and privacy. Rather than imposing blanket surveillance, policymakers should focus on targeted investigations, rapid CSAM takedown measures, and support for victims.

Encryption remains the cornerstone of a secure, trustworthy, and free internet. If the EU truly aims to protect children and its citizens, it must ensure that this foundation remains unbroken—because once privacy is compromised, safety soon follows.

EU Border Security Database Found to Have Serious Cyber Flaws

A recent investigative report has revealed critical cybersecurity concerns in one of the European Union’s key border control systems. The system in question, known as the Second Generation Schengen Information System (SIS II), is a large-scale database used across Europe to track criminal suspects, unauthorized migrants, and missing property. While this system plays a major role in maintaining regional safety, new findings suggest its digital backbone may be weaker than expected.

According to a joint investigation by Bloomberg and Lighthouse Reports, SIS II contains a significant number of unresolved security issues. Though there is no confirmed case of data being stolen, experts warn that poor account management and delayed software fixes could leave the system open to misuse. One of the main issues flagged was the unusually high number of user accounts with access to the database, many of which reportedly had no clear purpose.

SIS II has been in use since 2013 and stores over 90 million records, most of which involve things like stolen vehicles and documents. However, about 1.7 million entries involve individuals. These personal records often remain unknown to those listed until they are stopped by police or immigration officers, raising concerns about privacy and oversight in the event of a breach.

One legal researcher familiar with European digital systems warned that a successful cyberattack could lead to wide-ranging consequences, potentially affecting millions of people across the EU.

Another growing concern is that SIS II is currently hosted on a closed, internal network—but that is about to change. The system is expected to be integrated with a new border management tool called the Entry/Exit System (EES), which will require travelers to provide fingerprints and facial images when entering or leaving countries in the Schengen zone. Since the EES will be accessible online, experts worry it could create a new path for hackers to reach SIS II, making the whole network more vulnerable.

The technical work behind SIS II is managed by a French company, but investigations show that fixing critical security problems has taken far longer than expected. Some fixes reportedly took several months or even years to implement, despite contractual rules that require urgent patches to be handled within two months.

The EU agency responsible for overseeing SIS II, known as eu-LISA, contracts much of the technical work to private firms. Internal audits raised concerns that management wasn’t always informed about known security risks. In response, the agency claimed that it regularly tests and monitors all systems under its supervision.

As Europe prepares to roll out more connected security tools, experts stress the need for stronger safeguards to protect sensitive data and prevent future breaches.

WhatsApp Ads Delayed in EU as Meta Faces Privacy Concerns

Meta recently introduced in-app advertisements within WhatsApp for users across the globe, marking the first time ads have appeared on the messaging platform. However, this change won’t affect users in the European Union just yet. According to the Irish Data Protection Commission (DPC), WhatsApp has informed them that ads will not be launched in the EU until sometime in 2026. 

Previously, Meta had stated that the feature would gradually roll out over several months but did not provide a specific timeline for European users. The newly introduced ads appear within the “Updates” tab on WhatsApp, specifically inside Status posts and the Channels section. Meta has stated that the ad system is designed with privacy in mind, using minimal personal data such as location, language settings, and engagement with content. If a user has linked their WhatsApp with the Meta Accounts Center, their ad preferences across Instagram and Facebook will also inform what ads they see. 

Despite these assurances, the integration of data across platforms has raised red flags among privacy advocates and European regulators. As a result, the DPC plans to review the advertising model thoroughly, working in coordination with other EU privacy authorities before approving a regional release. Des Hogan, Ireland’s Data Protection Commissioner, confirmed that Meta has officially postponed the EU launch and that discussions with the company will continue to assess the new ad approach. 

Dale Sunderland, another commissioner at the DPC, emphasized that the process remains in its early stages and it’s too soon to identify any potential regulatory violations. The commission intends to follow its usual review protocol, which applies to all new features introduced by Meta.

This strategic move by Meta comes while the company is involved in a high-profile antitrust case in the United States. The lawsuit seeks to challenge Meta’s ownership of WhatsApp and Instagram and could potentially lead to a forced breakup of the company’s assets.

Meta’s decision to push forward with deeper cross-platform ad integration may indicate confidence in its legal position. The tech giant continues to argue that its advertising tools are essential for small business growth and that any restrictions on its ad operations could negatively impact entrepreneurs who rely on Meta’s platforms for customer outreach. However, critics claim this level of integration is precisely why Meta should face stricter regulatory oversight—or even be broken up. 

As the U.S. court prepares to issue a ruling, the EU delay illustrates how Meta is navigating regulatory pressures differently across markets. After initial reporting, WhatsApp clarified that the 2025 rollout in the EU was never confirmed, and the current plan reflects ongoing conversations with European regulators.

ProtectEU and VPN Privacy: What the EU Encryption Plan Means for Online Security

Texting through SMS is pretty much a thing of the past. Most people today rely on apps like WhatsApp and Signal to share messages, make encrypted calls, or send photos—all under the assumption that our conversations are private. But that privacy could soon be at risk in the EU.

On April 1, 2025, the European Commission introduced a new plan called ProtectEU. Its goal is to create a roadmap for “lawful and effective access to data for law enforcement,” particularly targeting encrypted platforms. While messaging apps are the immediate focus, VPN services might be next. VPNs rely on end-to-end encryption and strict no-log policies to keep users anonymous. However, if ProtectEU leads to mandatory encryption backdoors or expanded data retention rules, that could force VPN providers to change how they operate—or leave the EU altogether. 

Proton VPN’s Head of Public Policy, Jurgita Miseviciute, warns that weakening encryption won’t solve security issues. Instead, she believes it would put users at greater risk, allowing bad actors to exploit the same access points created for law enforcement. Proton is monitoring the plan closely, hoping the EU will consider solutions that protect encryption.

Surfshark takes a more optimistic view. Legal Head Gytis Malinauskas says the strategy still lacks concrete policy direction and sees the emphasis on cybersecurity as a potential boost for privacy tools like VPNs. Mullvad VPN isn’t convinced: having fought against earlier EU proposals to scan private chats, it criticized ProtectEU as a rebranded version of old policies and doubts the plan will gain wide support.

One key concern is data retention. If the EU decides to require VPNs to log user activity, it could fundamentally conflict with their privacy-first design. Denis Vyazovoy of AdGuard VPN notes that such laws could make no-log VPNs unfeasible, prompting providers to exit the EU market—much like what happened in India in 2022. NordVPN adds that the more data retained, the more risk users face from breaches or misuse.

Even though VPNs aren’t explicitly targeted yet, an EU report has listed them as a challenge to investigations—raising concerns about future regulations. Still, Surfshark sees the current debate as a chance to highlight the legitimate role VPNs play in protecting everyday users. While the future remains uncertain, one thing is clear: the tension between privacy and security is only heating up.

AI Technology is Helping Criminal Groups Grow Stronger in Europe, Europol Warns

The European Union’s main police agency, Europol, has raised an alarm about how artificial intelligence (AI) is now being misused by criminal groups. According to their latest report, criminals are using AI to carry out serious crimes like drug dealing, human trafficking, online scams, money laundering, and cyberattacks.

This report is based on information gathered from police forces across all 27 European Union countries. Released every four years, it helps guide how the EU tackles organized crime. Europol’s chief, Catherine De Bolle, said cybercrime is growing more dangerous as criminals use advanced digital tools. She explained that AI is giving criminals more power, allowing them to launch precise and damaging attacks on people, companies, and even governments.

Some crimes, she noted, are not just about making money. In certain cases, these actions are also designed to cause unrest and weaken countries. The report explains that criminal groups are now working closely with some governments to secretly carry out harmful activities.

One growing concern is the rise in harmful online content, especially material involving children. AI is making it harder to track and identify those responsible because fake images and videos look very real. This is making the job of investigators much more challenging.

The report also highlights how criminals are now able to trick people using technology like voice imitation and deepfake videos. These tools allow scammers to pretend to be someone else, steal identities, and threaten people. Such methods make fraud, blackmail, and online theft harder to spot.

Another serious issue is that countries are now using criminal networks to launch cyberattacks against their rivals. Europol noted that many of these attacks are aimed at important services like hospitals or government departments. For example, a hospital in Poland was recently hit by a cyberattack that forced it to shut down for several hours. Officials said the use of AI made this attack more severe.

The report warns that new technology is speeding up illegal activities. Criminals can now carry out their plans faster, reach more people, and operate in more complex ways. Europol urged countries to act quickly to tackle this growing threat.

The European Commission is planning to introduce a new security policy soon. Magnus Brunner, the EU official in charge of internal affairs, said Europe needs to stay alert and improve safety measures. He also promised that Europol will get more staff and better resources in the coming years to fight these threats.

In the end, the report makes it clear that AI is making crime more dangerous and harder to stop. Stronger cooperation between countries and better cyber defenses will be necessary to protect people and maintain safety across Europe.

Why European Regulators Are Investigating Chinese AI Firm DeepSeek

European authorities are raising concerns about DeepSeek, a fast-growing Chinese artificial intelligence (AI) company, over its data practices. Regulators in Italy, Ireland, Belgium, the Netherlands, and France are examining how the firm collects data, whether its methods comply with the EU’s General Data Protection Regulation (GDPR), and whether personal data is being transferred unlawfully to China.

In response, the Italian authority has temporarily blocked access to DeepSeek’s R1 chatbot while it investigates what data is collected, how it is used, and to what extent it has been used to train the AI model.


What Type of Data Does DeepSeek Actually Collect? 

DeepSeek collects three main types of information from users:

1. Personal data such as names and emails.  

2. Device-related data, including IP addresses.  

3. Data from third parties, such as Apple or Google logins.  

The app may also check whether a user is active elsewhere on the device, citing “Community Security.” Unlike many companies that set clear timelines or limits on data retention, DeepSeek states that it can retain data indefinitely, and that data may be shared with others, including advertisers, analytics firms, governments, and copyright holders.

While other AI companies, such as OpenAI (ChatGPT) and Anthropic (Claude), have faced similar privacy questions, experts note that DeepSeek does not expressly give users the rights to deletion or to restrict the use of their data that the GDPR requires.


Where the Collected Data Goes

One of the major concerns about DeepSeek is that it stores user data in China. The company says it has security measures in place and follows local rules for data transfers, but from a legal perspective it has not presented a valid basis for storing the data of its European users outside the EU.

According to the European Data Protection Board (EDPB), privacy law in China places more weight on community stability than on individual privacy, permitting broad access to personal data for purposes such as national security or criminal investigations. It is also unclear whether data belonging to foreign users is treated any differently from that of Chinese citizens.


Cybersecurity and Privacy Threats 

Cybercrime indices from 2024 rank China among the countries most exposed to cyberattacks. Cisco’s latest report found that DeepSeek’s AI model offers weak protection against hacking attempts: while other AI models block at least some “jailbreak” attacks, DeepSeek proved completely vulnerable to them, making it far easier to manipulate.


Should Users Worry? 

Experts advise users to exercise caution when using DeepSeek and to avoid sharing highly sensitive personal details. The company’s unclear data protection policies, its storage of data in China, and its relatively weak security defenses pose significant privacy risks that justify this caution.

As investigations continue, European regulators will decide whether DeepSeek can keep doing business in the EU. Until then, users should weigh the risks of exposure before interacting with the platform.



EU Officially Announces USB-C as Common Charging Standard

For tech enthusiasts and environmentalists in the European Union (EU), December 28, 2024, marked a major turning point as USB-C officially became the required standard for electronic gadgets.

The new policy mandates that phones, tablets, cameras, and other electronic devices marketed in the EU must have USB-C connectors. This move aims to minimise e-waste and make charging more convenient for customers. Even industry giants like Apple are required to adapt, signaling the end of proprietary charging standards in the region.

Apple’s Transition to USB-C

Apple has been slower than most Android manufacturers in adopting USB-C. The company introduced USB-C connectors with the iPhone 15 series in 2023, while older models, such as the iPhone 14 and the iPhone SE (3rd generation), continued to use the now-outdated Lightning connector.

To comply with the new EU regulations, Apple has discontinued the iPhone 14 and iPhone SE in the region, as these models include Lightning ports. While they remain available through third-party retailers until supplies run out, the regulation prohibits brands from directly selling non-USB-C devices in the EU. However, outside the EU, including in major markets like the United States, India, and China, these models are still available for purchase.

Looking Ahead: USB-C as the Future

Apple’s decision aligns with its broader strategy to phase out the Lightning connection entirely. The transition is expected to culminate in early 2025 with the release of a USB-C-equipped iPhone SE. This shift not only ensures compliance with EU regulations but also addresses consumer demands for a more streamlined charging experience.

The European Commission (EC) celebrated the implementation of this law with a playful yet impactful tweet, highlighting the benefits of a universal charging standard. “Today’s the day! USB-C is officially the common standard for electronic devices in the EU! It means: The same charger for all new phones, tablets & cameras; Harmonised fast-charging; Reduced e-waste; No more ‘Sorry, I don’t have that cable,’” the EC shared on X (formerly Twitter).

Environmental and Consumer Benefits

This law aims to alleviate the frustration of managing multiple chargers while addressing the growing environmental issues posed by e-waste. By standardising charging technology, the EU hopes to:

  • Simplify consumer choices
  • Extend the lifespan of accessories like cables and adapters
  • Reduce the volume of electronic waste

With the EU leading this shift, other regions may follow suit, further promoting sustainability and convenience in the tech industry.