
Public Wary of AI-Powered Data Use by National Security Agencies, Study Finds


A new report released at the Centre for Emerging Technology and Security (CETaS) 2025 event sheds light on growing public unease around automated data processing in national security. Titled UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion, the research reveals limited public awareness of, and rising concern about, how surveillance technologies, especially AI, are shaping intelligence operations.

The study, conducted by CETaS in partnership with Savanta and Hopkins Van Mil, surveyed 3,554 adults and drew on insights from a 33-member citizens’ panel. While the findings suggest that more people support than oppose data use by national security agencies, even for sensitive datasets such as medical records, significant concerns persist.

During a panel discussion, Investigatory Powers Commissioner Sir Brian Leveson, who chaired the session, addressed the implications of fast-paced technological change. “We are facing new and growing challenges,” he said. “Rapid technological developments, especially in AI [artificial intelligence], are transforming our public authorities.”

Leveson warned that AI is shifting how intelligence gathering and analysis are performed. “AI could soon underpin the investigatory cycle,” he noted. But the benefits come with risks. “AI could enable investigations to cover far more individuals than was ever previously possible, which raises concerns about privacy, proportionality and collateral intrusion.”

The report shows a divide in public opinion based on how and by whom data is used. While people largely support the police and national agencies accessing personal data for security operations, that support drops when it comes to regional law enforcement. The public is particularly uncomfortable with personal data being shared with political parties or private companies.

Marion Oswald, co-author and senior visiting fellow at CETaS, emphasized the intrusive nature of data collection—automated or not. “Data collection without consent will always be intrusive, even if the subsequent analysis is automated and no one sees the data,” she said.

She pointed out that predictive data tools face especially strong opposition. “Panel members, in particular, had concerns around accuracy and fairness, and wanted to see safeguards,” Oswald said, highlighting the demand for stronger oversight and regulation of technology in this space.

Despite efforts by national security bodies to enhance public engagement, the study found that a majority of respondents (61%) still feel they understand “slightly” or “not at all” what these agencies actually do. Only 7% claimed a strong understanding.

Rosamund Powell, research associate at CETaS and co-author of the report, said: “Previous studies have suggested that the public’s conceptions of national security are really influenced by some James Bond-style fictions.”

She added that transparency significantly affects public trust. “There’s more support for agencies analysing data in the public sphere like posts on social media compared to private data like messages or medical data.”

OpenAI, the Maker of ChatGPT, Sued for Allegedly Exploiting "Stolen Private Information"


Lawsuit filed in the Northern District of California

OpenAI, the artificial intelligence company behind ChatGPT, is being sued for allegedly harvesting millions of people's data to train its algorithms. The lawsuit, filed in the US District Court for the Northern District of California, claims that OpenAI used "stolen private information, including personally identifiable information," from hundreds of millions of internet users to build its AI products, including the chatbot ChatGPT and the image generator DALL-E.

The lawsuit contends that OpenAI grew from a non-profit research lab into a company that unlawfully harvests millions of users' personal information to train its tools. It accuses OpenAI of posing a "potentially catastrophic risk to humanity."

Lawsuit claims OpenAI violated ethics

It claims that OpenAI chose "to pursue profit at the expense of privacy, security, and ethics" and "doubled down on a strategy to secretly harvest massive amounts of personal data from the internet, including private information and private conversations, medical data, information about children — every piece of data exchanged on the internet it could take — without notice to the public."

According to the lawsuit, "[OpenAI's] Products would not be the multibillion-dollar business they are today without this unprecedented theft of private and copyrighted information belonging to real people, communicated to unique communities, for specific purposes, targeting specific audiences."

OpenAI is accused of stealing all inputs into its AI tools, including prompts people feed ChatGPT; users' account information, including their names, contact details, and login credentials; payment information; data pulled from users' browsers, including their physical locations; chat and search data; keystroke data; and more.

Microsoft listed in the lawsuit

Microsoft, an OpenAI partner also named in the action, did not respond to a request for comment, and OpenAI did not immediately reply to a request for comment from CBS MoneyWatch. The lawsuit raises serious concerns about the ethics of data collection and use in artificial intelligence research.

As AI advances and becomes more integrated into daily life, firms must be transparent about their data collection practices and ensure that individuals' privacy and rights are respected. The case against OpenAI underscores the need for greater accountability and openness in artificial intelligence: companies building AI tools must answer for how they gather data and must prioritize individual privacy and security.