Danish Developer’s Website Sparks EU Debate on Online Privacy and Child Protection

In August, a 30-year-old developer from Aalborg, identified only as Joachim, built a platform called Fight Chat Control to oppose a proposed European Union regulation aimed at tackling the spread of child sexual abuse material (CSAM) online. The EU bill seeks to give law enforcement agencies new tools to identify and remove illegal content, but critics argue it would compromise encrypted communication and pave the way for mass surveillance.

Joachim’s website allows visitors to automatically generate and send emails to European officials expressing concerns about the proposal. What began as a weekend project has now evolved into a continent-wide campaign, with members of the European Parliament and national representatives receiving hundreds of emails daily. Some offices in Brussels have even reported difficulties managing the flood of messages, which has disrupted regular communication with advocacy groups and policymakers.
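Mechanically, sites like this typically prefill a message via a `mailto:` link, which opens the visitor's own email client with recipients, subject, and body already filled in. A minimal sketch in Python of how such a link can be built (the address and wording below are placeholders for illustration, not the campaign's actual recipients or text):

```python
from urllib.parse import quote

def build_mailto(recipients, subject, body):
    """Build a mailto: URL that opens the visitor's mail client
    with recipients, subject, and body prefilled."""
    return (
        "mailto:" + ",".join(recipients)
        + "?subject=" + quote(subject)
        + "&body=" + quote(body)
    )

# Hypothetical example only - not the campaign's real addresses or wording.
link = build_mailto(
    ["mep.example@europarl.europa.eu"],
    "Concerns about the proposed CSAM regulation",
    "Dear Member of the European Parliament,\n\nI am writing to express my concerns...",
)
```

Because the visitor's own mail client sends the message, each email arrives from a distinct personal address, which is part of why such campaigns are hard for parliamentary offices to filter.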

The campaign’s influence has extended beyond Brussels. In Denmark, a petition supported by Fight Chat Control gained more than 50,000 signatures, qualifying it for parliamentary discussion. Similar debates have surfaced across Europe, with lawmakers in countries such as Ireland and Poland referencing the controversy in national assemblies. Joachim said his website has drawn over 2.5 million visitors, though he declined to disclose his full name or employer to avoid associating his workplace with the initiative.

While privacy advocates applaud the campaign for sparking public awareness, others believe the mass email tactic undermines productive dialogue. Some lawmakers described the influx of identical messages as “one-sided communication,” limiting space for constructive debate. Child rights organisations, including Eurochild, have also voiced frustration, saying their outreach to officials has been drowned out by the surge of citizen emails.

Meanwhile, the European Union continues to deliberate the CSAM regulation. The European Commission first proposed the law in 2022, arguing that stronger detection measures are vital as online privacy technologies expand and artificial intelligence generates increasingly realistic harmful content. Denmark, which currently holds the rotating presidency of the EU Council, has introduced a revised version of the bill and hopes to secure support at an upcoming ministerial meeting in Luxembourg.

Danish Justice Minister Peter Hummelgaard maintains that the new draft is more balanced than the initial proposal, stating that content scanning would only be used as a last resort. However, several EU member states remain cautious, citing privacy concerns and the potential misuse of surveillance powers.

As European nations prepare to vote, the controversy continues to reflect a broader struggle: finding a balance between protecting children from online exploitation and safeguarding citizens’ right to digital privacy.



Public Wary of AI-Powered Data Use by National Security Agencies, Study Finds

 

A new report launched at a 2025 Centre for Emerging Technology and Security (CETaS) event sheds light on growing public unease around automated data processing in national security. Titled UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion, the research reveals limited public awareness and rising concern over how surveillance technologies, especially AI, are shaping intelligence operations.

The study, conducted by CETaS in partnership with Savanta and Hopkins Van Mil, surveyed 3,554 adults and included insights from a 33-member citizens’ panel. While findings suggest that more people support than oppose data use by national security agencies, especially when it comes to sensitive datasets like medical records, significant concerns persist.

During a panel discussion, Investigatory Powers Commissioner Sir Brian Leveson, who chaired the session, addressed the implications of fast-paced technological change. “We are facing new and growing challenges,” he said. “Rapid technological developments, especially in AI [artificial intelligence], are transforming our public authorities.”

Leveson warned that AI is shifting how intelligence gathering and analysis are performed. “AI could soon underpin the investigatory cycle,” he noted. But the benefits also come with risks. “AI could enable investigations to cover far more individuals than was ever previously possible, which raises concerns about privacy, proportionality and collateral intrusion.”

The report shows a divide in public opinion based on how and by whom data is used. While people largely support the police and national agencies accessing personal data for security operations, that support drops when it comes to regional law enforcement. The public is particularly uncomfortable with personal data being shared with political parties or private companies.

Marion Oswald, co-author and senior visiting fellow at CETaS, emphasized the intrusive nature of data collection—automated or not. “Data collection without consent will always be intrusive, even if the subsequent analysis is automated and no one sees the data,” she said.

She pointed out that predictive data tools, in particular, face strong opposition. “Panel members, in particular, had concerns around accuracy and fairness, and wanted to see safeguards,” Oswald said, highlighting the demand for stronger oversight and regulation of technology in this space.

Despite efforts by national security bodies to enhance public engagement, the study found that a majority of respondents (61%) still feel they understand “slightly” or “not at all” what these agencies actually do. Only 7% claimed a strong understanding.

Rosamund Powell, research associate at CETaS and co-author of the report, said: “Previous studies have suggested that the public’s conceptions of national security are really influenced by some James Bond-style fictions.”

She added that transparency significantly affects public trust. “There’s more support for agencies analysing data in the public sphere like posts on social media compared to private data like messages or medical data.”

Australian Government Plans Privacy Overhaul after Attacks on Multiple Organizations

 

Two weeks after the Medibank hack, the Australian government has decided to introduce legislative reforms on cybersecurity regulation that would increase penalties for companies that fail to guard customers’ personal data. 

Australia’s largest health insurer said on Wednesday that a hacker had accessed the data of all 4 million of its customers, which included personal information such as names, dates of birth, addresses, and gender identities, as well as Medicare numbers and health claims.

The malicious actor claimed to have extracted nearly 200GB of files and provided 1,000 records to the insurer as proof of the theft. The hacker also threatened to leak the diagnoses and treatments of high-profile customers if the insurer failed to pay the ransom.

According to the health insurer, its priority was to identify the specific data stolen from each customer and to share that information with those customers.

The company had previously said the breach was thought to be limited to its subsidiary AHM and foreign students. 

“Our investigation has now established that this criminal has accessed all our private health insurance customers' personal data and significant amounts of their health claims data,” Medibank chief executive David Koczkar stated. “This is a terrible crime – this is a crime designed to cause maximum harm to the most vulnerable members of our community.”

Legislative reform 

Cyberattacks on Optus, Medibank, and MyDeal have forced the Australian government to introduce legislative reforms on cybersecurity regulation. On September 21, hackers stole the personal data of almost 10 million current and former customers of Optus, the country’s second-biggest telecom.

Two weeks later, hackers targeted MyDeal, an online retail intermediary, which lost the data of 2.2 million customers.

“As the Optus, Medibank, and MyDeal cyberattacks have recently highlighted, data breaches have the potential to cause serious financial and emotional harm to Australians, and this is unacceptable. Governments, businesses, and other organizations have an obligation to protect Australians’ personal data, not to treat it as a commercial asset,” Attorney-General Mark Dreyfus stated during the introduction of amendments to the Privacy Act to Parliament. 

The government is keeping a close eye on firms that collect more customer data than necessary and monetise it in ways unrelated to the services for which the information was provided. Under the proposed amendments, penalties for serious breaches of the Privacy Act would rise from AU$2.2 million ($1.4 million) to AU$50 million ($32 million), Dreyfus added.