AI Can Create Deepfake Videos of Children Using Just 20 Images, Expert Warns

Parents are being urged to rethink how much they share about their children online, as experts warn that criminals can now generate realistic deepfake videos using as few as 20 images. This alarming development highlights the growing risks of digital identity theft and fraud facing children due to oversharing on social media platforms.  

According to Professor Carsten Maple of the University of Warwick and the Alan Turing Institute, modern AI tools can construct highly realistic digital profiles, including 30-second deepfake videos, from a small number of publicly available photos. Such images can be exploited by criminal networks to commit identity theft, open fraudulent accounts, or claim government benefits in a child's name, and they can also be used by large tech companies to train their algorithms, often without the user's full awareness or consent.

New research conducted by Perspectus Global and commissioned by Proton surveyed 2,000 UK parents of children under 16. The findings show that, on average, parents upload 63 images to social media every month, 59% of them family-related. A significant proportion of parents (21%) share these photos multiple times a week, while 38% post several times a month. Beyond the images themselves, these posts often include sensitive data such as location tags and key life events, making it easier for bad actors to build a detailed online profile of a child. Professor Maple warned that such oversharing can lead to long-term consequences.

Aside from potential identity theft, children could face mental distress or reputational harm later in life from having a permanent digital footprint that they never consented to create. The problem is exacerbated by the fact that many parents are unaware of how their data is being used. For instance, 48% of survey respondents did not realize that cloud storage providers can access the data stored on their platforms. In fact, more than half of the surveyed parents (56%) store family images on cloud services such as Google Drive or Apple iCloud. On average, each parent had 185 photos of their children stored digitally—images that may be accessed or analyzed under vaguely worded terms and conditions.  

Recent changes to Instagram’s user agreement, which now allows the platform to use uploaded images to train its AI systems, have further heightened privacy concerns. Additionally, experts have warned about the use of personal images by other Big Tech firms to enhance facial recognition algorithms and advertising models. To protect their children, parents are advised to implement a range of safety measures. These include using secure and private cloud storage, adjusting privacy settings on social platforms, avoiding public Wi-Fi when sharing or uploading data, and staying vigilant against phishing scams. 

Furthermore, experts recommend setting boundaries with children regarding online activity, using parental controls, antivirus tools, and search filters, and modeling responsible digital behavior. The growing accessibility of AI-based image manipulation tools underscores the urgent need for greater awareness and proactive digital hygiene. What may seem like harmless sharing today could expose children to significant risks in the future.
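
One concrete piece of that hygiene is worth spelling out: smartphone photos usually carry EXIF metadata, including GPS coordinates, which is exactly the kind of location data described above. The snippet below is a minimal sketch, assuming the Python Pillow library is available; the function name and file names are placeholders rather than part of any tool mentioned in this article.

    # Minimal sketch: remove EXIF metadata (GPS location, device details) before sharing a photo.
    # Requires the Pillow library: pip install Pillow
    from PIL import Image

    def strip_metadata(src_path: str, dst_path: str) -> None:
        """Re-save an image with pixel data only, leaving EXIF metadata behind."""
        with Image.open(src_path) as img:
            clean = Image.new(img.mode, img.size)   # new image, same size and colour mode
            clean.putdata(list(img.getdata()))      # copy the pixels only; metadata is not carried over
            clean.save(dst_path)

    # Example usage with placeholder file names:
    strip_metadata("family_photo.jpg", "family_photo_clean.jpg")

Re-saving through Pillow drops EXIF because the metadata is not copied into the new image; dedicated tools such as exiftool offer finer control over which fields are removed.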

The Privacy Risks of ChatGPT and AI Chatbots

AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.

The Data Behind AI's Conversational Skills

Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.

For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt out through the privacy settings, all shared information, from casual remarks to sensitive details like financial data, can be logged and analyzed. Although OpenAI claims to anonymize and aggregate user data for further study, the risk of unintended exposure remains.

Real-World Privacy Breaches

Despite assurances of data security, breaches have occurred. In May 2023, hackers exploited a vulnerability in ChatGPT’s Redis library, compromising the personal data of around 101,000 users. This breach underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.

Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.

Protecting Yourself in the Absence of Clear Regulations

Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider:

  1. Avoid Sharing Sensitive Information
    Treat chatbots as advanced algorithms, not confidants. Avoid disclosing personal, financial, or proprietary information, no matter how personable the AI seems; a simple redaction sketch follows this list.
  2. Review Privacy Settings
    Many platforms offer options to opt out of data collection. Regularly review and adjust these settings to limit the data shared with AI.
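
To make the first practice less reliant on memory alone, a small pre-filter can catch the most obvious identifiers before a prompt is ever sent to a chatbot. The sketch below is an illustrative assumption rather than any provider's feature: it uses rough regular expressions, so it will miss plenty of real personal data, but it shows the idea.

    import re

    # Rough, illustrative patterns for common identifiers; real PII detection needs far more care.
    # Order matters: card-like digit runs are checked before phone numbers so they are not mislabelled.
    PATTERNS = [
        ("[CARD]", re.compile(r"\b(?:\d[ -]?){12,15}\d\b")),      # 13-16 digit runs, e.g. payment cards
        ("[PHONE]", re.compile(r"\+?\d[\d\s-]{7,}\d")),           # long digit sequences with separators
        ("[EMAIL]", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")),
    ]

    def redact_prompt(text: str) -> str:
        """Replace likely personal identifiers with placeholder tags before sending text to a chatbot."""
        for placeholder, pattern in PATTERNS:
            text = pattern.sub(placeholder, text)
        return text

    print(redact_prompt("My card is 4111 1111 1111 1111, email jane@example.com"))
    # Prints: My card is [CARD], email [EMAIL]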

DNA Testing Firm Atlas Biomed Vanishes: Concerns Over Sensitive Data

DNA-testing company Atlas Biomed appears to have halted operations without notifying customers about the fate of their sensitive genetic data. Based in London, the firm provided insights into users' genetic profiles and potential health risks. Customers report being unable to access their reports online, and the company has not responded to inquiries from the BBC.

Disgruntled clients describe the situation as "very alarming," expressing fears about the handling of their "most personal information." The Information Commissioner’s Office (ICO) confirmed receiving a complaint about the company. A spokesperson stated: "People have the right to expect that organisations will handle their personal information securely and responsibly." Experts warn that users of DNA-testing services are often "completely at the mercy" of companies when it comes to safeguarding sensitive data.

Lisa Topping from Essex, who paid £100 for a genetic report, described her frustration after the company’s website vanished. "I don’t know what someone else could do with [the data], but it’s the most personal information… I don’t know how comfortable I feel that they have just disappeared," she said.

Another customer, Kate Lake from Kent, paid £139 in 2023 but never received her report. Despite promises of a refund, the company went silent. "It’s like no-one was at home," she explained, demanding answers about the fate of her data.

Attempts by the BBC to contact the firm have been unsuccessful. Phone numbers are inactive, the London office appears abandoned, and social media accounts have been dormant since mid-2023. Online comments reveal widespread customer complaints.

Atlas Biomed remains registered with Companies House but has not filed accounts since December 2022. Notably, two active officers are listed at a Moscow address linked to a Russian billionaire, who has since resigned from the company.

Cybersecurity expert Prof. Alan Woodward remarked on the "odd" connections: "If people knew the provenance of this company and how it operates, they might not be quite so ready to trust them with their DNA."

While no misuse of customer data has been confirmed, the lack of transparency raises concerns. Prof. Carissa Veliz, author of Privacy is Power, emphasized the unique sensitivity of DNA: "It is uniquely yours, you can’t change it, and it reveals your – and your family’s – biological strengths and weaknesses."

She added, "When you give your data to a company, you are completely at their mercy. We shouldn’t have to wait until something happens."

Atlas Biomed’s silence leaves its customers uncertain and alarmed about the safety of their most personal information.