
PocketPal AI Brings Offline AI Chatbot Experience to Smartphones With Full Data Privacy

 

In a digital world where most AI chatbots rely on cloud computing and constant internet connectivity, PocketPal AI takes a different approach by offering an entirely offline, on-device chatbot experience. This free app brings AI processing power directly onto your smartphone, eliminating the need to send data back and forth across the internet. Conventional AI chatbots typically transmit your interactions to distant servers, where the data is processed before a response is returned. That means even sensitive or routine conversations can be stored remotely, raising concerns about privacy, data usage, and the potential for misuse.

PocketPal AI flips this model by handling all computation on your device, ensuring your data never leaves your phone unless you explicitly choose to save or share it. This local processing model is especially useful in areas with unreliable internet or no access at all. Whether you’re traveling in rural regions, riding the metro, or flying, PocketPal AI works seamlessly without needing a connection. 

Additionally, using an AI offline helps reduce mobile data consumption and improves speed, since there’s no delay waiting for server responses. The app is available on both iOS and Android and offers users the ability to interact with compact but capable language models. While you do need an internet connection during the initial setup to download a language model, once that’s done, PocketPal AI functions completely offline. To begin, users select a model from the app’s library or upload one from their device or from the Hugging Face community. 

Although the app lists models without detailed descriptions, users can consult external resources to understand which model is best for their needs—whether it’s from Meta, Microsoft, or another developer. After downloading a model—most of which are several gigabytes in size—users simply tap “Load” to activate the model, enabling conversations with their new offline assistant. 
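For readers curious about what "loading" a local model actually involves, the sketch below shows the same idea on a desktop using llama-cpp-python and a downloaded GGUF model file. This is not PocketPal's own code; the model path and settings are placeholder assumptions, and it simply illustrates fully offline, on-device inference.

```python
# A minimal sketch of offline, on-device inference with a local GGUF model.
# Not PocketPal's implementation; the model path and parameters are hypothetical.
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a small quantized model from local storage -- no network calls needed.
llm = Llama(
    model_path="models/phi-3-mini-q4.gguf",  # hypothetical downloaded model file
    n_ctx=2048,    # context window
    n_threads=4,   # tune for the device's CPU
)

# Chat entirely on-device; nothing is sent to a remote server.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why does on-device AI help privacy?"}],
    max_tokens=128,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

Once the model file is on disk, every step above runs without connectivity, which is the same property that lets PocketPal AI work on a plane or in areas with no signal.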

For those more technically inclined, PocketPal AI includes advanced settings for switching between models, adjusting inference behavior, and testing performance. While these features offer great flexibility, they’re likely best suited for power users. On high-end devices like the Pixel 9 Pro Fold, PocketPal AI runs smoothly and delivers fast responses. 

However, older or budget devices may face slower load times or stuttering performance due to limited memory and processing power. Because offline models must be optimized for device constraints, they tend to be smaller in size and capabilities compared to cloud-based systems. As a result, while PocketPal AI handles common queries, light content generation, and basic conversations well, it may not match the contextual depth and complexity of large-scale models hosted in the cloud. 

Even with these trade-offs, PocketPal AI offers a powerful solution for users seeking AI assistance without sacrificing privacy or depending on an internet connection. It delivers a rare combination of utility, portability, and data control in today’s cloud-dominated AI ecosystem. 

As privacy awareness and concerns about centralized data storage continue to grow, PocketPal AI represents a compelling alternative—one that puts users back in control of their digital interactions, no matter where they are.

Elon Musk Introduces XChat: Could This Be the Future of Private Messaging?

 


Elon Musk has recently introduced a new messaging tool for X, the platform formerly known as Twitter. This new feature, called XChat, is designed to focus on privacy and secure communication.

In a post on X, Musk shared that XChat will allow users to send disappearing messages, make voice and video calls, and exchange all types of files safely. He also mentioned that this system is built using new technology and referred to its security as having "Bitcoin-style encryption." However, he did not provide further details about how this encryption works.

Although the phrase sounds promising, Musk has not yet explained what makes the encryption similar to Bitcoin’s technology. In simple terms, Bitcoin uses very strong methods to protect data and keep user identities hidden. If XChat is using a similar security system, it could offer serious privacy protections. Still, without exact information, it is difficult to know how strong or reliable this protection will actually be.
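For context, Bitcoin's security rests on public-key cryptography, specifically ECDSA signatures over the secp256k1 curve, rather than on any special message-encryption scheme. The sketch below uses the Python ecdsa library to show that building block; it is purely illustrative and says nothing about how XChat actually works, since no technical details have been published.

```python
# Illustrative only: the ECDSA/secp256k1 signing scheme Bitcoin relies on.
# This does NOT describe XChat's design, which has not been disclosed.
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

# Generate a private/public key pair on the secp256k1 curve.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

message = b"example message"
signature = private_key.sign(message)  # sign with the private key

# Anyone holding the public key can verify authenticity without ever
# learning the private key -- the property Bitcoin's security depends on.
assert public_key.verify(signature, message)
print("signature verified")
```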

Many online communities, especially those interested in cryptocurrency and secure communication, quickly reacted to the announcement. Some users believe that if XChat really provides such a high level of security, it could become a competitor to other private messaging apps like Signal and Telegram. People in various online groups also discussed the possibility that this feature could change how users share sensitive information safely.

This update is part of Musk’s ongoing plan to turn X into more than just a social media platform. He has often expressed interest in creating an "all-in-one" application where users can chat, share files, and even manage payments in a secure space.

Just last week, Musk introduced another feature called X Money. This payment system is expected to be tested with a small number of users later this year. Musk highlighted that when it comes to managing people’s money, safety and careful testing are essential.

By combining private messaging and payment services, X seems to be following the model of platforms like China’s WeChat, which offers many services in one place.

At this time, there are still many unanswered questions. It is not clear when XChat will be fully available to all users or exactly how its security will work. Until more official information is released, people will need to wait and see whether XChat can truly deliver the level of privacy it promises.

Unimed AI Chatbot Exposes Millions of Patient Messages in Major Data Leak

 

A significant data exposure involving Unimed, one of the world’s largest healthcare cooperatives, has come to light after cybersecurity researchers discovered an unsecured database containing millions of sensitive patient-doctor communications.

The discovery was made by cybersecurity experts at Cybernews, who traced the breach to an unprotected Kafka instance. According to their findings, the exposed logs were generated from patient interactions with “Sara,” Unimed’s AI-driven chatbot, as well as conversations with actual healthcare professionals.

Researchers revealed that they intercepted more than 140,000 messages, although logs suggest that over 14 million communications may have been exchanged through the chat system.
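To illustrate why an unprotected Kafka instance is so dangerous: a broker exposed without authentication can be read by anyone who discovers its address, whereas a properly secured listener requires credentials and TLS. The snippet below, using the kafka-python client with hypothetical host and topic names, contrasts the two configurations; it is a conceptual sketch, not a depiction of Unimed's actual setup.

```python
# Conceptual sketch using the kafka-python client; hostnames and topics are hypothetical.
from kafka import KafkaConsumer  # pip install kafka-python

# An unauthenticated broker: anyone who can reach the port can consume messages.
open_consumer = KafkaConsumer(
    "chat-logs",
    bootstrap_servers="exposed-broker.example.com:9092",
)

# A hardened broker: TLS plus SASL credentials are required to connect.
secured_consumer = KafkaConsumer(
    "chat-logs",
    bootstrap_servers="broker.internal.example.com:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="SCRAM-SHA-256",
    sasl_plain_username="chat-service",
    sasl_plain_password="use-a-secret-manager",
)
```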

“The leak is very sensitive as it exposed confidential medical information. Attackers could exploit the leaked details for discrimination and targeted hate crimes, as well as more standard cybercrime such as identity theft, medical and financial fraud, phishing, and scams,” said Cybernews researchers.

The compromised data included uploaded images and documents, full names, contact details such as phone numbers and email addresses, message content, and Unimed card numbers.

Experts warn that this trove of personal data, when processed using advanced tools like Large Language Models (LLMs), could be weaponized to build in-depth patient profiles. These could then be used to orchestrate highly convincing phishing attacks and fraud schemes.

Fortunately, the exposed system was secured after Cybernews alerted Unimed. The organization issued a statement confirming it had resolved the issue:

“Unimed do Brasil informs that it has investigated an isolated incident, identified in March 2025, and promptly resolved, with no evidence, so far, of any leakage of sensitive data from clients, cooperative physicians, or healthcare professionals,” the notification email stated. “An in-depth investigation remains ongoing.”

Healthcare cooperatives like Unimed are nonprofit entities owned by their members, aimed at delivering accessible healthcare services. This incident raises fresh concerns over data security in an increasingly AI-integrated medical landscape.

How Biometric Data Collection Affects Workers

 


Modern workplaces are beginning to track more than just employee hours or tasks. Today, many employers are collecting very personal information about workers' bodies and behaviors. This includes data like fingerprints, eye scans, heart rates, sleeping patterns, and even the way someone walks or types. All of this is made possible by tools like wearable devices, security cameras, and AI-powered monitoring systems.

The reason companies use these methods varies. Some want to increase workplace safety. Others hope to improve employee health or get discounts from insurance providers. Many believe that collecting this kind of data helps boost productivity and efficiency. At first glance, these goals might sound useful. But there are real risks to both workers and companies that many overlook.

New research shows that being watched in such personal ways can lead to fear and discomfort. Employees may feel anxious or unsure about their future at the company. They worry their job might be at risk if the data is misunderstood or misused. This sense of insecurity can impact mental health, lower job satisfaction, and make people less motivated to perform well.

There have already been legal consequences. In one major case, a railway company had to pay millions to settle a lawsuit after workers claimed their fingerprints were collected without consent. Other large companies have also faced similar claims. The common issue in these cases is the lack of clear communication and proper approval from employees.

Even when health programs are framed as helpful, they can backfire. For example, some workers are offered lower health insurance costs if they participate in screenings or share fitness data. But not everyone feels comfortable handing over private health details. Some feel pressured to agree just to avoid being judged or left out. In certain cases, those who chose not to participate were penalized. One university faced a lawsuit for this and later agreed to stop the program after public backlash.

Monitoring employees’ behavior can also affect how they work. For instance, in one warehouse, cameras were installed to track walking routes and improve safety. However, workers felt watched and lost the freedom to help each other or move around in ways that felt natural. Instead of making the workplace better, the system made workers feel less trusted.

Laws are slowly catching up, but in many places, current rules don’t fully protect workers from this level of personal tracking. Just because something is technically legal does not mean it is ethical or wise.

Before collecting sensitive data, companies must ask a simple but powerful question: is this really necessary? If the benefits only go to the employer, while workers feel stressed or powerless, the program might do more harm than good. In many cases, choosing not to collect such data is the better and more respectful option.


Balancing Consumer Autonomy and Accessibility in the Age of Universal Opt-Outs

 


The Universal Opt-Out Mechanism (UOOM) has emerged as a crucial tool for streamlining the exercise of consumers' data rights at a time when digital privacy concerns continue to rise. Through this mechanism, individuals can automatically express their preferences regarding the collection, sharing, and use of their personal information, especially in the context of targeted advertising. 

By using a UOOM to communicate their privacy preferences to businesses through a clear, consistent signal, users no longer have to deal with complex and often opaque opt-out procedures on a site-by-site basis. As comprehensive privacy legislation is implemented in more states across the country, UOOM is becoming an increasingly important tool for consumer protection and regulatory compliance. 

Privacy laws can be enforced by shifting the burden of action away from consumers and onto companies, so that individuals are not required to repeatedly opt out across a variety of digital platforms. The UOOM framework is a crucial step toward a more equitable, user-centric digital environment, since it not only enhances user transparency and control but also encourages businesses to adopt more responsible data practices. 

Throughout the evolution of privacy frameworks, UOOM represents a critical contribution to achieving this goal. With it, consumers no longer have to unsubscribe from endless email lists or decipher deliberately complex cookie consent banners on almost every website they visit. In a single action, the Universal Opt-Out Mechanism promises to stop data brokers—entities that harvest and trade personal information for profit—from collecting and selling personal data. 

There has been a shift in data autonomy over the past decade, with tools like California's upcoming Delete Request and Opt-out Platform (DROP) and the widely supported Global Privacy Control (GPC) signalling a new era in which privacy can be asserted with minimal effort. The goal of UOOMs is to streamline and centralize the opt-out process so that users no longer have to navigate convoluted privacy settings across multiple digital platforms. 

By automating the transmission of a user's privacy preferences, these tools provide a more accessible and practical means of exercising data rights. The aim is to reduce the friction often associated with protecting one's digital footprint, allowing individuals to regain control over who can access, use, and share their personal information. In this way, UOOMs represent a significant step towards rebalancing the power dynamic between consumers and data-driven businesses. 

Despite their promise, the real-world implementation of UOOMs raises serious concerns, particularly around the evolving ambiguity of consent in the digital age. Under the “Notice and Opt-In” framework embedded in European Union regulations such as the General Data Protection Regulation, individuals must expressly grant consent before any personal information is collected. This model assumes that personal data is off-limits unless the user decides otherwise.

By contrast, widespread reliance on opt-out mechanisms might inadvertently normalise a more permissive environment in which data collection is assumed to be acceptable unless it is proactively blocked. Such a shift could undermine the foundational principle that users, not corporations, hold default authority over their personal information. As the name implies, a Universal Opt-Out Mechanism is a technological framework for automatically reflecting consumer privacy preferences across a wide range of websites and digital services. 

UOOMs automate this process, offering a standardised and efficient way to protect personal information by removing the need to opt out of data collection manually on each platform a person visits. These mechanisms can be implemented as a privacy-focused browser extension or an integrated tool that transmits standard "Do Not Sell" or "Do Not Share" signals to websites and data processors. 

The defining characteristic of UOOMs is that they communicate user preferences universally, eliminating the repetitive, time-consuming chore of setting preferences individually on a plethora of websites. Once the system is configured, the user's data rights are respected consistently across all participating platforms, making privacy protection both more efficient and more accessible.

UOOMs are also an important compliance tool in jurisdictions with robust data protection laws, since they simplify how individuals manage their personal data. Several state-level privacy laws in the United States already require businesses to recognise and respect opt-out signals, reinforcing the legal significance of adopting UOOMs.

Beyond legal compliance, these tools are intended to empower users by making the way privacy preferences are communicated and respected more transparent and uniform. The Global Privacy Control (GPC) is a major example, and one of the most widely supported opt-out signals, backed by a number of web browsers and privacy advocacy organisations. 

It illustrates how technologists, regulators, and civil society can work together to operationalise consumer rights in a way that is both scalable and impactful. As awareness and regulatory momentum continue to grow, UOOMs such as GPC may well become foundational elements of the digital privacy landscape. 

The emergence of Universal Opt-Out Mechanisms gives consumers an unprecedented opportunity to assert control over their personal data, marking a paradigm shift in digital privacy. A UOOM is essentially a system that lets individuals express their privacy preferences universally, across numerous websites and online services, with a single automated action. 

By streamlining the opt-out process for data collection and sharing, UOOMs significantly reduce the burden on users, who no longer need to manually adjust privacy settings across every digital platform they interact with. This shift reflects a broader movement toward user-centred data governance, driven by the public's growing desire for transparency and autonomy online. The Global Privacy Control (GPC) is one of the most prominent implementations of this concept. 

GPC is a technical specification for communicating a user's privacy preferences via their web browser or a browser extension. When enabled, it signals to websites, through an HTTP header, that the user wishes to opt out of having their personal information sold or shared. By automating this communication, GPC simplifies the enforcement of privacy rights and offers a seamless, scalable solution to what was formerly a fragmented and burdensome process. 
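In practice, a browser with GPC enabled attaches a `Sec-GPC: 1` header to its requests (and exposes `navigator.globalPrivacyControl` to page scripts). A minimal server-side sketch in Python/Flask is shown below: it checks for that header and flags the request so downstream code can suppress selling or sharing of data. The route and helper names are illustrative; only the Sec-GPC header comes from the specification.

```python
# Minimal sketch: honouring the Global Privacy Control signal in a Flask app.
# Route and helper names are illustrative; only the Sec-GPC header is from the spec.
from flask import Flask, request, jsonify

app = Flask(__name__)

def gpc_opt_out() -> bool:
    # GPC-enabled browsers send "Sec-GPC: 1" on each request.
    return request.headers.get("Sec-GPC") == "1"

@app.route("/ads")
def ads():
    if gpc_opt_out():
        # Treat the signal as a do-not-sell / do-not-share request.
        return jsonify(personalized=False, reason="GPC opt-out honoured")
    return jsonify(personalized=True)

if __name__ == "__main__":
    app.run()
```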

As legislation continues to evolve, GPC is gaining legal acceptance in several U.S. states. Businesses are now required to acknowledge and honour such signals under state privacy laws in California, Colorado, and Connecticut. The implication for businesses operating in these jurisdictions is clear: complying with universal opt-out signals is no longer optional; it is a legal necessity. 

By 2025, more states are expected to have adopted, or to be in the process of enacting, privacy laws that require the recognition of UOOMs, setting new standards for corporate data practices. Companies that fail to comply with these regulations risk regulatory penalties, reputational damage, and the loss of consumer trust. 

Conversely, organisations that embrace UOOM compliance early and integrate tools such as GPC into their privacy infrastructure will not only meet legal obligations but also demonstrate a commitment to ethical data stewardship. In an era in which consumer trust is paramount, this approach enhances transparency and strengthens confidence. As universal opt-out mechanisms become an integral part of modern data governance frameworks, they will play a significant role in redefining the relationship between businesses and consumers by placing user rights and consent at the core of digital experiences. 

As the digital ecosystem becomes more complex and data-driven, regulators, technologists, and businesses alike must treat the implementation and refinement of universal opt-out mechanisms as a strategic priority. These are more than tools for satisfying legal requirements: they offer a chance to rebuild consumer trust, set new standards for data stewardship, and make privacy protection more accessible to all. 

Their success, however, depends on thoughtful implementation, one that does not simply favour the technologically savvy or financially secure, but ensures equitable access and usability for everyone, regardless of socioeconomic status. Several critical challenges must be addressed head-on for UOOMs to achieve their full potential: user education, standardising technical protocols, and ensuring cross-platform interoperability. 

Regulatory bodies must provide clearer guidance on the enforcement of privacy rights and digital consent, and invest in public awareness campaigns that demystify them. Meanwhile, platform providers and developers have a responsibility to ensure, through inclusive design, that privacy tools are not only functional but also intuitive and accessible to as wide a range of users as possible. 

Businesses, for their part, must make a cultural shift, moving from viewing privacy as a compliance burden to seeing it as an ethical imperative and competitive advantage. In the long run, the value of universal opt-out tools will be determined not only by their legal significance but also by their ability to empower individuals to navigate the digital world with confidence, dignity, and control. 

In a world where the lines between digital convenience and data exploitation are increasingly blurred, UOOMs provide a clear path forward, one grounded in a commitment to transparency, fairness, and respect for individual liberty. Staying ahead of today's digital threats requires collective action: moving beyond reactive compliance toward a proactive, privacy-first paradigm that places users at the heart of digital innovation.

Google’s AI Virtual Try-On Tool Redefines Online Shopping Experience

 

At the latest Google I/O developers conference, the tech giant introduced an unexpected innovation in online shopping: an AI-powered virtual try-on tool. This new feature lets users upload a photo of themselves and see how clothing items would appear on their body. By merging the image of the user with that of the garment, Google’s custom-built image generation model creates a realistic simulation of the outfit on the individual. 

While the concept seems simple, the underlying AI technology is advanced. In a live demonstration, the tool appeared to function seamlessly. The feature is now available in the United States and is part of Google’s broader efforts to enhance the online shopping experience through AI integration. It’s particularly useful for people who often struggle to visualize how clothing will look on their body compared to how it appears on models.  

However, the rollout of this tool raised valid questions about user privacy. AI systems that involve personal images often come with concerns over data usage. Addressing these worries, a Google representative clarified that uploaded photos are used exclusively for the try-on experience. The images are not stored for AI training, are not shared with other services or third parties, and users can delete or update their photos at any time. This level of privacy protection is notable in an industry where user data is typically leveraged to improve algorithms. 

Given Google’s ongoing development of AI-driven tools, some expected the company to utilize this photo data for model training. Instead, the commitment to user privacy in this case suggests a more responsible approach. Virtual fitting technology isn’t entirely new. Retail and tech companies have been exploring similar ideas for years. Amazon, for instance, has experimented with AI tools in its fashion division. Google, however, claims its new tool offers a more in-depth understanding of diverse body types. 

During the presentation, Vidhya Srinivasan, Google’s VP of ads and commerce, emphasized the system’s goal of accommodating different shapes and sizes more effectively. Past AI image tools have faced criticism for lacking diversity and realism. It’s unclear whether Google’s new tool will be more reliable across the board. Nevertheless, their assurance that user images won’t be used to train models helps build trust. 

Although the virtual preview may not always perfectly reflect real-life appearances, this development points to a promising direction for AI in retail. If successful, it could improve customer satisfaction, reduce returns, and make online shopping a more personalized experience.

AI is Accelerating India's Healthtech Revolution, but Data Privacy Concerns Loom Large

 

India’s healthcare infrastructure is undergoing a remarkable digital transformation, driven by emerging technologies like artificial intelligence (AI), machine learning, and big data. These advancements are not only enhancing accessibility and efficiency but also setting the foundation for a more equitable health system. According to the World Economic Forum (WEF), AI is poised to account for 30% of new drug discoveries by 2025 — a major leap for the pharmaceutical industry.

As outlined in the Global Outlook and Forecast 2025–2030, the market for AI in drug discovery is projected to grow from $1.72 billion in 2024 to $8.53 billion by 2030, clocking a CAGR of 30.59%. Major tech players like IBM Watson, NVIDIA, and Google DeepMind are partnering with pharmaceutical firms to fast-track AI-led breakthroughs.

Beyond R&D, AI is transforming clinical workflows by digitising patient records and decentralising models to improve diagnostic precision while protecting privacy.

During an interview with Analytics India Magazine (AIM), Rajan Kashyap, Assistant Professor at the National Institute of Mental Health and Neuro Sciences (NIMHANS), shared insights into the government’s push toward innovation: “Increasing the number of seats in medical and paramedical courses, implementing mandatory rural health services, and developing indigenous low-cost MRI machines are contributing significantly to hardware development in the AI innovation cycle.”

Tech-Driven Healthcare Innovation

Kashyap pointed to major initiatives like the Genome India project, cVEDA, and the Ayushman Bharat Digital Mission as critical steps toward advancing India’s clinical research capabilities. He added that initiatives in genomics, AI, and ML are already improving clinical outcomes and streamlining operations.

He also spotlighted BrainSightAI, a Bengaluru-based startup that raised $5 million in a Pre-Series A round to scale its diagnostic tools for neurological conditions. The company aims to expand across Tier 1 and 2 cities and pursue FDA certification to access global healthcare markets.

Another innovator, Niramai Health Analytics, offers an AI-based breast cancer screening solution. Their product, Thermalytix, is a portable, radiation-free, and cost-effective screening device that is compatible with all age groups and breast densities.

Meanwhile, biopharma giant Biocon is leveraging AI in biosimilar development. Their work in predictive modelling is reducing formulation failures and expediting regulatory approvals. One of their standout contributions is Semglee, the world’s first interchangeable biosimilar insulin, now made accessible through their tie-up with Eris Lifesciences.

Rising R&D costs have pushed pharma companies to adopt AI solutions for innovation and cost efficiency.

Data Security Still a Grey Zone

While innovation is flourishing, there are pressing concerns around data privacy. A report by Netskope Threat Labs highlighted that doctors are increasingly uploading sensitive patient information to unregulated platforms like ChatGPT and Gemini.

Kashyap expressed serious concerns about lax data practices:

“During my professional experience at AI labs abroad, I observed that organisations enforced strict data protection regulations and mandatory training programs…The use of public AI tools like ChatGPT or Gemini was strictly prohibited, with no exceptions or shortcuts allowed.”

He added that anonymised data is still vulnerable to hacking or re-identification. Studies show that even brain scans like MRIs could potentially reveal personal or financial information.

“I strongly advocate for strict adherence to protected data-sharing protocols when handling clinical information. In today’s landscape of data warfare, where numerous companies face legal action for breaching data privacy norms, protecting health data is no less critical than protecting national security,” he warned.

Policy Direction and Regulatory Needs

The Netskope report recommends implementing approved GenAI tools in healthcare to reduce “shadow AI” usage and enhance security. It also urges deploying data loss prevention (DLP) policies to regulate what kind of data can be shared on generative AI platforms.

Although the usage of personal GenAI tools has declined — from 87% to 71% in one year — risks remain.

Kashyap commented on the pace of India’s regulatory approach:

“India is still in the process of formulating a comprehensive data protection framework. While the pace may seem slow, India’s approach has traditionally been organic, carefully evolving with consideration for its unique context.”

He also pushed for developing interdisciplinary medtech programs that integrate AI education into medical training.

“Misinformation and fake news pose a significant threat to progress. In a recent R&D project I was involved in, public participation was disrupted due to the spread of misleading information. It’s crucial that legal mechanisms are in place to counteract such disruptions, ensuring that innovation is not undermined by false narratives,” he concluded.

Florida Scraps Controversial Law That Threatened Online Privacy

 



A proposed law in Florida that raised concerns about online privacy has now been officially dropped. The bill, called “Social Media Use by Minors,” aimed to place tighter controls on how children use social media. While it was introduced to protect young users, many experts argued it would have done more harm than good — not just for kids, but for all internet users.

One major issue with the bill was its demand for social media platforms to change how they protect users’ messages. Apps like WhatsApp, Signal, iMessage, and Instagram use something called end-to-end encryption. This feature makes messages unreadable to anyone except the person you're talking to. Not even the app itself can access the content.
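To make the end-to-end idea concrete, the sketch below uses PyNaCl's public-key `Box`: each party holds a private key, and a message encrypted for the recipient can only be decrypted with the recipient's private key, so the service relaying the ciphertext cannot read it. This is a generic illustration of the principle, not the actual protocol used by WhatsApp, Signal, or iMessage, which layer additional machinery such as key ratcheting on top.

```python
# Generic end-to-end encryption sketch with PyNaCl (pip install pynacl).
# Not the actual Signal/WhatsApp protocol -- just the core public-key idea.
from nacl.public import PrivateKey, Box

# Each user generates a key pair; private keys never leave their devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at 6pm")

# The server relaying the message only ever sees `ciphertext`, which it cannot decrypt.
# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
print(receiving_box.decrypt(ciphertext))  # b"meet at 6pm"
```

A mandated "backdoor" would amount to a third key that can open every such box, which is why security researchers argue it cannot be restricted to law enforcement alone.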

The bill, however, would have required these platforms to create a special way for authorities to unlock private messages if they had a legal order. But cybersecurity professionals have long said that once such a "backdoor" exists, it can't be safely limited to just the police. Criminals, hackers, or even foreign spies could find and misuse it. Creating a backdoor for some means weakening protection for all.

The bill also included other rules, like banning temporary or disappearing messages for children and letting parents view everything their child does on social media. Critics worried this would put young users at greater risk, especially those needing privacy in situations like abuse or bullying.

Even though the Florida Senate passed the bill, the House of Representatives refused to approve it. On May 3, 2025, the bill was officially removed from further discussion. Digital privacy advocates, such as the Electronic Frontier Foundation, welcomed this move, calling it a step in the right direction for protecting online privacy.

This isn’t the first time governments have tried and failed to weaken encryption. Similar efforts have been blocked in other parts of the world, like France and the European Union, for the same reason: once secure messaging is weakened, it puts everyone at risk.

For now, users in Florida can breathe a sigh of relief. The bill’s failure shows growing recognition of how vital strong encryption is in keeping our personal information safe online.

Safeguarding Personal Privacy in the Age of AI Image Generators

 


The growing wave of AI-powered image creation tools has transformed the way users interact with digital creativity, producing visually captivating transformations in just a few clicks. Platforms such as ChatGPT and Grok 3 let users convert their own photos into stunning illustrations reminiscent of the iconic Ghibli animation style, and they offer this feature completely free of charge. 

This technological advance has sparked excitement among users eager to see themselves reimagined in artistic form, yet it raises pressing concerns that need to be addressed carefully. Behind the user-friendly interfaces of these AI image generators sits deep learning technology that processes and analyses every picture it receives. 

In doing so, these systems not only produce aesthetically pleasing outputs but also collect visual data that can be used to continuously improve their models. When individuals upload personal images, they may unknowingly contribute to the training of these systems and compromise their privacy, while the ethical implications of data ownership, consent, and long-term usage remain ambiguous. 

As AI-generated imagery becomes more widespread, it is increasingly important to examine the risks and responsibilities of sharing personal photos with these tools, many of which go unnoticed by the average person. Despite the creative appeal of AI-generated images, experts are increasingly warning users about the deeper risks around data privacy and misuse. 

AI image generators do more than process and store the photographs users submit. They may also collect other, potentially sensitive information, such as IP addresses, email addresses, or metadata describing the user's activities. Mimikama, an organisation that works to expose online fraud and misinformation, warns that users often reveal far more than they intend and, as a result, relinquish control over their digital identities. 

Katharina Grasl, a digital security expert at the Consumer Centre of Bavaria, shares these concerns. She points out that, depending on the input provided, users may inadvertently reveal their full name, location, interests, and lifestyle habits, among other details. The AI systems behind these platforms can analyse far more than facial features, interpreting variables ranging from age and emotional state to body language and subtle behavioural cues. 

Organisations like Mimikama warn that such content could be misused for unethical or criminal purposes that go well beyond artistic transformation. An uploaded image may be manipulated into a deepfake, inserted into a misleading narrative, or, more concerningly, used for explicit or pornographic purposes. The potential for harm increases dramatically when the subjects of these images are minors. 

As AI technology continues to expand, so does the need to raise public awareness of data rights, responsible usage, and the dangers of unintended exposure. Transforming personal photographs into whimsical 'Ghibli-style' illustrations may seem harmless and entertaining on the surface, but digital privacy experts caution that the implications of this trend go well beyond creative fun. Once a user uploads an image to an AI generator, their control over that content is often greatly diminished. 
According to Proton, a platform that specialises in data privacy and security, personal photos shared with AI tools have been absorbed into the large datasets used to train machine learning models without the user's explicit consent. The implication is that images can be reused in unintended and sometimes harmful ways. In a public advisory on X (formerly Twitter), Proton warned that uploaded images may be exploited to create misleading, defamatory, and even harassing content. The main concern is that once users have submitted an image, it is no longer in their possession. 

The image becomes part of a larger digital ecosystem in which it may be altered, repurposed, or redistributed with little accountability or transparency. British futurist Elle Farrell-Kingsley contributed to the discussion by pointing out the danger of exposing sensitive data through these platforms, noting that images uploaded to AI tools can unintentionally reveal information such as the user's location, device data, or even a child's identity. 

If something is free, it is important to be vigilant, Farrell-Kingsley wrote, reinforcing the need for caution. In light of these warnings, participation in AI-generated content can come at a much higher cost than it first appears, and responsible digital engagement requires being aware of these trade-offs. Once users upload an image to an AI image generator, regaining full control of that image is difficult, if not impossible. 

Even if a user requests deletion, images may already have been processed, copied, or stored across multiple systems, especially if the provider is located outside jurisdictions with strong data protection laws, such as the EU. This makes AI platforms whose terms of service grant extended, sometimes irrevocable access to user-submitted content increasingly problematic.

The State Data Protection Officer for Rhineland-Palatinate has pointed out that, despite the protections of the EU's General Data Protection Regulation (GDPR), it is practically impossible to ensure such images are completely removed from the digital landscape. The legal and ethical stakes are even higher when a user uploads a photo featuring a family member, friend, or acquaintance without their explicit consent: doing so may directly violate that individual's right to control their own image, a right recognised under privacy and media laws in many countries. 

There is also a grey area around how copyrighted or trademarked elements may be used in AI-generated images. Editing an image to portray oneself as a character from a popular franchise, such as Star Wars, and posting it to social media can constitute an infringement of intellectual property rights. Digital safety advocacy group Mimikama warns that claiming such content is "just for fun" offers no protection against a cease-and-desist order or legal action from rights holders. 

At a time when advances in artificial intelligence are blurring the line between creativity and consent, users should approach such tools with greater seriousness and awareness. Before uploading any image, it is important to understand the potential legal, ethical, and personal consequences and to take precautions. Ghibli-style AI-generated images can be an enjoyable and artistic way to interact with technology, but only if one's privacy is safeguarded. 

A few essential best practices can reduce the risk of misuse and unwanted data exposure. For starters, carefully review a platform's privacy policy and terms of service before uploading any images; understanding how data is collected, stored, shared, or used to train an AI model gives a clearer picture of the platform's intentions and safeguards. 

Users should also avoid uploading photos that contain identifiable features, private settings, or sensitive data such as financial documents or images of children. Where possible, anonymised alternatives such as stock images or AI avatars allow users to enjoy the creative features without compromising personal information. Exploring offline AI tools that run locally on a device can be a more secure option, since they do not require internet access and do not typically transmit data to external servers. 
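One concrete precaution that follows from the metadata concerns above is to strip EXIF data, including GPS coordinates and device details, from a photo before uploading it anywhere. A minimal sketch with the Pillow library is shown below; the file names are hypothetical, and this removes embedded metadata only, not faces or other visible identifying content.

```python
# Minimal sketch: strip EXIF metadata (GPS, device info) before sharing a photo.
# Requires Pillow (pip install Pillow); file names are hypothetical.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        pixels = list(img.getdata())           # copy only the pixel data
        clean = Image.new(img.mode, img.size)  # a new image carries no EXIF block
        clean.putdata(pixels)
        clean.save(dst_path)

strip_metadata("holiday_photo.jpg", "holiday_photo_clean.jpg")
```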

When using online platforms, users should look for opt-out options that let them decline having their data stored or used to train AI models. These options are easy to overlook but provide a layer of control that is otherwise lacking. In today's fast-paced digital world, creativity and caution are both imperative: by remaining vigilant and making privacy-conscious choices, individuals can enjoy the wonders of AI-generated art without compromising the security of their personal information. 

As these tools become increasingly mainstream, users need to remain cautious. However striking the results, the risks around privacy, data ownership, and misuse are real and often underestimated. Individuals should know what they are agreeing to, avoid sharing identifiable or sensitive information, and favour platforms with transparent data policies and genuine user control. 

In today's digital age, when personal data is becoming increasingly public, awareness is the first line of defence. Knowing what is shared, and where, when using AI creative tools is essential to keeping digital safety and privacy intact.

Google’s New Android Security Update Might Auto-Reboot Your Phone After 3 Days

 

In a recent update to Google Play Services, the tech giant revealed a new security feature that could soon reboot your Android smartphone automatically — and this move could actually boost your device’s safety.

According to the update, Android phones left unused for three consecutive days will automatically restart. While this might sound intrusive at first, the reboot comes with key security benefits.

There are two primary reasons why this feature is important:

First, after a reboot, the only way to unlock a phone is by entering the PIN — biometric options like fingerprint or facial recognition won’t work until the PIN is input manually. This ensures added protection, especially for users who haven’t set up any screen lock. A forced PIN entry makes it much harder for unauthorized individuals to access your device or the data on it.

Second, the update enhances encryption security. Android devices operate in two states: Before First Unlock (BFU) and After First Unlock (AFU). In the BFU state, your phone’s contents are completely encrypted, meaning that even advanced tools can’t extract the data.

This security measure also affects how law enforcement and investigative agencies handle seized phones. Since the BFU state kicks in automatically after a reboot, authorities have a limited window to access a device before it locks down data access completely.

“A BFU phone remains connected to Wi-Fi or mobile data, meaning that if you lose your phone and it reboots, you'll still be able to use location-finding services.”

The feature is listed in Google’s April 2025 System release notes, and while it appears to extend to Android tablets, it won’t apply to wearables like the Pixel Watch, Android Auto, or Android TVs.

As of now, Google hasn’t clarified whether users will have the option to turn off this feature or customize the three-day timer.

Because it’s tied to Google Play Services, users will receive the feature passively — there’s no need for a full system update to access it.

Google Now Scans Screenshots to Identify Geographic Locations

 


Google Maps has introduced a new feature that is already getting mixed reviews from users and making headlines around the world. Currently available on iPhones, the update allows the app to scan screenshots and photographs to identify and suggest places in real time. 

Even though the technology has the potential to make life easier, such as helping users remember places they have visited or discover new destinations, it raises valid privacy concerns as well. The feature is powered by artificial intelligence, but there is little clarity about its exact mechanism. What is known is that the system analyses the content of images, a process that involves transmitting user data, including personal photos, across multiple servers. 

Currently, the feature is exclusive to iOS devices, with Android support coming shortly afterwards -- an interesting delay, given that Android is Google's own operating system. iPhone users have already been able to test it out by uploading older images to Google Maps and comparing the results with locations they know. 

As the update gains traction, it is likely to generate plenty of discussion about its usefulness and ethical implications. The feature is designed to streamline the user experience by making it easier to search for locations: the app automatically detects and extracts location information, such as a place name or address, from screenshots stored on the device. 

Once a location is identified, it is added to a personalised list, allowing users to return to it without typing in the details manually. The enhancement is particularly useful for people who frequently take screenshots of places they intend to visit, whether from messages sent by friends and family or from social media. With this update, users no longer have to switch back and forth between their phone’s gallery and the Maps app to search for these places, making the process more seamless and intuitive. 

Google's approach reflects a larger push toward automated convenience, but it also raises concerns about data usage and user consent. According to Google, the motivation behind the feature stems from a problem many travellers face when researching destinations: keeping track of screenshots taken from travel blogs, social media posts, or news articles.

In most cases, these valuable bits of information get lost in the camera roll, making them difficult to recall or retrieve when needed. Google aims to address this by making trip planning more efficient and less stressful: with a seamless way to surface saved spots in Google Maps, users need never miss a must-visit location simply because it was buried among other images. 

When users authorize the app to access their photo library, it scans screenshots for recognizable place names and addresses. If a location is recognized, the app prompts the user and offers to let them review and save the place. With this functionality, what once existed only as a passive image archive becomes an interactive tool for travel planning, providing both convenience and personalised navigation. 
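Google has not published how its screenshot scanning works, but the general pipeline it describes (read text out of an image, then resolve that text to a place) can be sketched with open tools. The example below uses pytesseract for OCR and geopy's Nominatim geocoder; it is a rough conceptual stand-in, not Google's implementation, and the file name is hypothetical.

```python
# Conceptual stand-in for "screenshot -> place": OCR the image, then geocode the text.
# Not Google's implementation. Requires pytesseract, Pillow, and geopy.
from PIL import Image
import pytesseract
from geopy.geocoders import Nominatim

def places_from_screenshot(path: str):
    # 1. Pull raw text out of the screenshot.
    text = pytesseract.image_to_string(Image.open(path))

    # 2. Try to geocode each non-trivial line of recognised text.
    geocoder = Nominatim(user_agent="screenshot-demo")
    matches = []
    for line in text.splitlines():
        line = line.strip()
        if len(line) < 5:
            continue
        location = geocoder.geocode(line)
        if location is not None:
            matches.append((line, location.latitude, location.longitude))
    return matches

print(places_from_screenshot("screenshot.png"))  # hypothetical file name
```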

To take full advantage of the feature, users must first make sure Google Maps is updated to its latest version on their iPhone or iPad. Once the app is up to date, the transition from screenshot to saved destination becomes almost effortless: by capturing screenshots of websites mentioning travel destinations, whether blogs, articles, or curated recommendation lists, users give the app the material it needs to recognize and retrieve location information. 

At the bottom of the screen, a new "Screenshots" tab appears prominently under the "You" section of the updated interface. The first time users access this section, they are given a short overview of the functionality and prompted to grant the app access to the device's photo gallery. Granting access allows the app to intelligently scan images for place names and landmarks. 

Tapping the “Scan screenshots” button prompts Google Maps to begin analysing stored screenshots for relevant geographical information. Recognised locations are neatly organised into a list, allowing for selective curation, and once confirmed, these places are added to a custom list within the app. 

Each added location becomes a central hub, offering visuals, descriptions, navigation options, and the ability to organise places into favourites or themed lists. The static images that once sat in the camera roll are transformed into a dynamic planning tool that can be accessed from anywhere, illustrating how the combination of AI and user behaviour can elevate everyday habits into smarter, more intuitive experiences.

As a further step toward a hands-off experience, Google Maps can also be set to automatically scan any future screenshots. With access to the entire photo library, the app continuously detects new screenshots containing location information and places them directly into the “Screenshots” section without any manual work. 

Users can switch this feature on or off at any time, giving them control over how much of their photo content is analysed. Those who prefer a more selective approach can instead choose specific images to scan manually.

In this way, users can take advantage of the tool's convenience while maintaining their privacy and control. As artificial intelligence continues to shape everyday digital experiences, the new Google Maps feature stands out as an example of how automation can be combined with user control. 

By turning passive screenshots into actionable insights, the feature creates a smarter way of planning and exploring, letting users discover, save, and revisit locations with ease. The update marks a significant milestone in Google Maps' evolution toward more intuitive navigation tools, bridging visual content with location intelligence to meet the rising demand for efficiency and automation. 

The rise of artificial intelligence continues to shape the way digital platforms function, and features like screenshot scanning show that convenience can be enhanced while user control is maintained. For users and industry professionals alike, the upgrade offers a glimpse of seamless, context-aware travel planning to come.

Audio and Video Chat Recording Could Be Part of Nintendo Switch 2


 

In an official announcement, Nintendo confirmed that a new in-game communication system known as GameChat will be included in the Nintendo Switch 2 console, due for release in June. GameChat lets players share their screens and engage in live audio and video conversations with one another during gameplay. 

The company has stated that GameChat sessions may be recorded and monitored, including both voice and video content, to promote a safe and respectful gaming environment, reflecting Nintendo's commitment to user safety and responsible interaction across its online ecosystem. The announcement was made during a special Direct livestream in early April, during which the company also revealed important details about the console's launch. 

The Nintendo Switch 2 is scheduled for release on June 5 and will cost $449.99 for the standard edition, while the premium Mario Kart World bundle will cost $499.99. The console itself introduces significant improvements over its predecessor, including 4K resolution with HDR, 256GB of internal storage, and a larger 7.9-inch display. 

There are also notable performance improvements, with 1080p visuals and frame rates of up to 120 frames per second, offering a smoother, more immersive experience than the original Switch. With excitement building around the next-generation hybrid system, Nintendo is working to balance that enthusiasm with responsible use of its online features.

By adding recording capabilities to GameChat, Nintendo aligns itself with practices already common across other major gaming platforms. Both Microsoft and Sony have built similar safety measures into their respective ecosystems; Microsoft, for instance, collects data from Xbox voice chats to enforce its community standards. 

Sony's PS4, likewise, can record audio from party sessions for security and compliance monitoring. As voice and video communication have become essential components of modern online gaming, platform holders have developed sophisticated systems for capturing, analysing, and acting upon user-generated content. These systems serve several purposes, chief among them ensuring compliance with privacy, security, and safety regulations while maintaining a safe and respectful environment for players.

The complexity and sensitivity of this kind of data collection have given rise to an entire segment of businesses offering real-time voice and video moderation services. Nintendo has not yet released specific details on how it plans to manage, store, or use GameChat recordings for enforcement purposes, so the scope and nature of its monitoring systems remain unclear. 

GameChat has been widely advertised in recent weeks, giving players and parents time to become familiar with it and to consult Nintendo's official resources before the Switch 2's June 5 release. Industry observers had anticipated that the pre-order phase would bring overwhelming demand and logistical challenges, and several retailers struggled to open orders in late April, resulting in instant sellouts and delayed availability.

Target customers appear to have had the most consistent success with orders placed through the retailer's website, while experiences with other major retailers, such as Best Buy and Walmart, varied widely. Online ordering confusion drove many fans to physical retail locations to secure a console. A recent report by the French outlet Frandroid indicated that early sales of Nintendo's next-generation system were reaching "historical levels," underscoring the demand.

The launch lineup includes several high-profile titles, among them Mario Kart World, Bravely Default: Flying Fairy HD Remaster, and Survival Kids. Donkey Kong Bananza is expected on July 17, and Kirby Air Riders, another highly anticipated game, is expected later in 2025.

In keeping with its commitment to safety and responsible use, Nintendo has already taken steps to ensure that the new GameChat system on the Switch 2 prioritises safe online interaction. GameChat is only available to people on a player's friend list, preventing communication with anyone outside that list.

Additionally, a text-message verification step is required during initial setup, and parents must approve use of the feature for children under 16. Microsoft and Sony have both stated that they do not continuously record voice chats, instead allowing users to submit clips to report misconduct. Nintendo's approach with GameChat, by contrast, appears to emphasise transparency and preventative safety measures.

Amid ongoing discussions about online privacy on platforms such as Discord, Nintendo's clearly defined rules and controls may give users a better understanding of how their online interactions are monitored. The company's approach to GameChat on the Switch 2 appears both thoughtful and proactive when it comes to user safety and protection. In contrast to the open-ended chat systems found elsewhere, GameChat is purposefully limited to people who have been added to a player's friend list and approved for voice and video communication.

This keeps GameChat sessions within trusted contact circles and reduces the likelihood of unsolicited interactions with strangers, a persistent problem in today's online gaming landscape. To further reinforce these safeguards, Nintendo has announced that all players aged 15 or younger must obtain explicit parental approval to use GameChat.

Approval is managed through the Nintendo Switch Parental Controls mobile app, which lets guardians closely monitor and manage their children's online interactions. By pairing user verification with parental oversight, Nintendo is clearly prioritising a safe, controlled digital environment, especially for younger audiences, while still embracing the desire for immersive, social experiences in video games.
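
To make the policy described above concrete, the sketch below models it as a simple permission check. It is purely illustrative Python: the Player class, its fields, and the can_start_gamechat function are hypothetical and are not drawn from Nintendo's actual implementation, which has not been made public.

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    # Hypothetical model of the safeguards described above; not Nintendo's code.
    player_id: str
    age: int
    phone_verified: bool                  # text-message verification at setup
    parental_approval: bool = False       # required for players aged 15 or younger
    friends: set = field(default_factory=set)

def can_start_gamechat(host: Player, guest: Player) -> bool:
    """Return True only if the described safeguards are satisfied for both players."""
    for p in (host, guest):
        if not p.phone_verified:
            return False                  # setup verification not completed
        if p.age <= 15 and not p.parental_approval:
            return False                  # minors need explicit parental approval
    # Chat is limited to mutual friend-list members, never strangers.
    return guest.player_id in host.friends and host.player_id in guest.friends

# Example: a verified adult and an approved minor who are mutual friends may chat.
alice = Player("alice", 34, phone_verified=True, friends={"bob"})
bob = Player("bob", 14, phone_verified=True, parental_approval=True, friends={"alice"})
print(can_start_gamechat(alice, bob))  # True
```

The point of the sketch is the ordering of the checks: setup verification and parental approval are evaluated before the friend-list test, mirroring the layered safeguards Nintendo describes.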

Massive Data Leak Exposes 520,000+ Ticket Records from Resale Platform 'Ticket to Cash'


A critical security lapse at online ticket resale platform Ticket to Cash has led to a major data breach, exposing over 520,000 records, according to a report by vpnMentor. The leak was first uncovered by cybersecurity researcher Jeremiah Fowler, who found the unsecured and unencrypted database without any password protection.

The database, weighing in at a massive 200 GB, contained a mix of PDFs, images, and JSON files. Among the leaked files were thousands of concert and live event tickets, proof of transfers, and receipt screenshots. Alarmingly, many documents included personally identifiable information (PII) such as full names, email addresses, physical addresses, and partial credit card details.
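
To illustrate why a database left online "without any password protection" is so dangerous, the short sketch below shows the kind of unauthenticated read anyone on the internet could attempt against such a store. This is a minimal Python example under stated assumptions: the endpoint URL and the Elasticsearch-style _search path are hypothetical, since the report does not disclose the technology or address of the real database.

```python
import json
import urllib.request

# Purely illustrative: the address below is hypothetical, and the real exposed
# database's technology and location were not disclosed in the report.
HYPOTHETICAL_ENDPOINT = "http://db.example.com:9200/tickets/_search?size=3"

def probe(url: str) -> None:
    """Attempt an unauthenticated read and report whether it succeeds."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            # A 200 response containing records, with no credentials supplied,
            # means the data store is readable by anyone on the internet.
            body = json.loads(resp.read())
            print("Unauthenticated read succeeded:", json.dumps(body)[:200])
    except Exception as exc:
        print("No anonymous access (or host unreachable):", exc)

probe(HYPOTHETICAL_ENDPOINT)
```

If a request like this returns data with no credentials supplied, the store is effectively public, which is essentially the condition Fowler reported finding.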

Using the internal structure and naming conventions within the files, Fowler traced the data back to Ticket to Cash, a company that facilitates ticket resale through over 1,000 partner websites. “Despite contacting TicketToCash.com through a responsible disclosure notice,” Fowler reported, “I initially received no response, and the database remained publicly accessible.” It wasn’t until four days later, following a second notice, that the data was finally secured. By then, an additional 2,000+ files had been exposed.

The responsible party behind maintaining the database—whether Ticket to Cash or a third-party contractor—remains uncertain. It’s also unknown how long the database was left open or whether it had been accessed by malicious actors. “Only a thorough internal forensic investigation could provide further clarity,” Fowler emphasized.

Ticket to Cash enables users to list tickets without upfront fees, taking a cut only when sales occur. However, the company has faced criticism over customer service, particularly regarding payment delays via PayPal and difficulty reaching support. Fowler also noted the lack of prompt communication during the disclosure process.

This breach raises serious concerns over data privacy and cybersecurity practices in the digital ticketing world. Leaked PII and partial financial information are prime targets for identity theft and fraud, posing risks well beyond the original ticketed events. As online ticketing becomes more widespread, this incident serves as a stark reminder of the need for strong security protocols and rapid response mechanisms to safeguard user data.

EU Fines TikTok $600 Million for Data Transfers to China


EU regulators have fined TikTok 530 million euros (around $600 million). The platform, owned by Chinese tech giant ByteDance, was found to have illegally transferred the private data of EU users to China and to have failed to ensure that the data was protected from potential access by Chinese authorities. According to an AFP news report, the penalty, one of the largest ever issued by the EU's data protection agencies, follows a detailed inquiry into the legitimacy of TikTok's data transfer practices.

TikTok Fine and EU

TikTok's lead regulator in Europe, Ireland's Data Protection Commission (DPC), said that TikTok admitted during the probe to having hosted European user data in China. DPC deputy commissioner Graham Doyle said that "TikTok failed to verify, guarantee, and demonstrate that the personal data of (European) users, remotely accessed by staff in China, was afforded a level of protection essentially equivalent to that guaranteed within the EU."

Doyle added that TikTok had failed to address the risk of Chinese authorities accessing Europeans' private data under China's anti-terrorism, counter-espionage, and other laws, which TikTok itself acknowledged diverge from EU data protection standards.

TikTok will contest the decision

TikTok has said it will contest the heavy EU fine despite the findings. TikTok Europe's Christine Grahn stressed that the company has "never received a request" from Chinese authorities for European users' data and has never provided EU users' data to them. "We disagree with this decision and intend to appeal it in full," Grahn said.

TikTok has a massive 1.5 billion users worldwide. In recent years, the platform has faced intense pressure from Western governments over concerns that its data could be misused by Chinese actors for surveillance and propaganda.

TikTok to comply with EU Rules

In 2023, the Irish DPC fined TikTok 345 million euros for violating EU rules on the processing of children's data. The DPC's latest judgment found that TikTok also violated the EU's General Data Protection Regulation (GDPR) by transferring user data to China. The decision comprises a 530 million euro administrative penalty plus an order that TikTok bring its data processing into compliance with EU requirements within six months.

Public Wary of AI-Powered Data Use by National Security Agencies, Study Finds


A new report released alongside the Centre for Emerging Technology and Security (CETaS) 2025 event sheds light on growing public unease around automated data processing in national security. Titled UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion, the research reveals limited public awareness and rising concern over how surveillance technologies—especially AI—are shaping intelligence operations.

The study, conducted by CETaS in partnership with Savanta and Hopkins Van Mil, surveyed 3,554 adults and included insights from a 33-member citizens’ panel. While findings suggest that more people support than oppose data use by national security agencies, especially when it comes to sensitive datasets like medical records, significant concerns persist.

During a panel discussion, investigatory powers commissioner Brian Leveson, who chaired the session, addressed the implications of fast-paced technological change. “We are facing new and growing challenges,” he said. “Rapid technological developments, especially in AI [artificial intelligence], are transforming our public authorities.”

Leveson warned that AI is shifting how intelligence gathering and analysis is performed. “AI could soon underpin the investigatory cycle,” he noted. But the benefits also come with risks. “AI could enable investigations to cover far more individuals than was ever previously possible, which raises concerns about privacy, proportionality and collateral intrusion.”

The report shows a divide in public opinion based on how and by whom data is used. While people largely support the police and national agencies accessing personal data for security operations, that support drops when it comes to regional law enforcement. The public is particularly uncomfortable with personal data being shared with political parties or private companies.

Marion Oswald, co-author and senior visiting fellow at CETaS, emphasized the intrusive nature of data collection—automated or not. “Data collection without consent will always be intrusive, even if the subsequent analysis is automated and no one sees the data,” she said.

She pointed out that predictive data tools, in particular, face strong opposition. “Panel members, in particular, had concerns around accuracy and fairness, and wanted to see safeguards,” Oswald said, highlighting the demand for stronger oversight and regulation of technology in this space.

Despite efforts by national security bodies to enhance public engagement, the study found that a majority of respondents (61%) still feel they understand “slightly” or “not at all” what these agencies actually do. Only 7% claimed a strong understanding.

Rosamund Powell, research associate at CETaS and co-author of the report, said: “Previous studies have suggested that the public’s conceptions of national security are really influenced by some James Bond-style fictions.”

She added that transparency significantly affects public trust. “There’s more support for agencies analysing data in the public sphere like posts on social media compared to private data like messages or medical data.”

Ascension Faces New Security Incident Involving External Vendor



Ascension Healthcare, one of the largest non-profit healthcare systems in the United States, has officially disclosed a data breach involving patient information, stemming from a cybersecurity incident linked to a former business partner. For Ascension, which has already faced mounting scrutiny over its data protection practices, the latest breach represents another significant cybersecurity challenge.

According to the health system, the recently disclosed incident compromised personally identifiable information (PII), including patients' protected health information (PHI). The data was stolen from a former business partner in a cyberattack in December 2024, a breach that had not been reported publicly until now. It is the second major security incident Ascension has faced since May 2024, when a major ransomware attack took critical systems offline.

That earlier attack affected approximately six million patients and caused widespread operational disruption, including ambulance diversions in a number of regions, postponed elective procedures, and temporary loss of access to essential healthcare services. With such incidents recurring across the healthcare sector, concerns have been raised about the security posture of third-party vendors and the resulting risks to patient privacy and continuity of care.

According to Ascension's statement, the organisation is taking additional steps to evaluate and strengthen its cybersecurity infrastructure, including its relationships with external software and partner providers. The hospital chain, which operates 105 hospitals across 16 states and Washington, D.C., said the compromised data was "likely stolen" after being inadvertently disclosed to a third-party vendor that subsequently suffered a breach through an external software vulnerability.

Ascension said it first became aware of a potential security incident on December 5, 2024, and launched a thorough internal investigation to assess its extent. The investigation revealed that patient data had been unintentionally shared with a former business partner, which subsequently became the victim of a cyberattack.

The breach ultimately appeared to stem from a vulnerability in third-party software used by the vendor. Analysis concluded in January 2025 determined that some of the disclosed information had likely been exfiltrated during the attack.

Although Ascension did not initially disclose the specific types of data affected, it acknowledged that multiple care sites in Alabama, Michigan, Indiana, Tennessee, and Texas were impacted. The company stressed that it continues to work with cybersecurity experts and legal counsel to understand the impact of the breach and to notify affected individuals as necessary.

The company has also indicated that it will take further steps to improve its data sharing practices and third-party risk management protocols. Additional information released by Ascension indicates that the threat actors behind the December 2024 incident likely accessed and exfiltrated sensitive medical and personal information.

The compromised information includes demographic details, Social Security numbers, clinical records, and visit details such as physician names, diagnoses, medical record numbers, and insurance provider information. Although Ascension has not provided a comprehensive nationwide estimate, the organization informed Texas state officials that 114,692 people were affected in that state alone.

The healthcare system has not confirmed whether this incident is related to the May 2024 ransomware attack, which affected multiple facilities across several states. That attack severely disrupted Ascension's operations, with ambulances diverted, staff reverting to manual documentation instead of electronic records, and non-urgent care postponed.

Recovery took several weeks and exposed cybersecurity vulnerabilities in the organization's digital infrastructure. Ascension later confirmed that the personal and health-related data of 5,599,699 individuals was stolen in that attack.

Although the ransomware group accessed only seven of the system's roughly 25,000 servers, millions of records were still compromised. The healthcare and insurance industries continue to be plagued by data breaches: in a separate incident reported this week, VeriSource Services, a company that manages employee administration, disclosed that a cyberattack in February 2024 affected 4,052,972 individuals.

These incidents highlight the growing threat facing organisations that handle sensitive personal and medical data. The December 2024 breach reportedly did not stem from a compromise of Ascension's own systems or electronic health records but from an external attack on the partner. The former business partner to whom the patient information was disclosed has not been publicly identified, nor has Ascension named the specific third-party software vulnerability exploited by the attackers.

Ascension has also recently disclosed two other third-party security incidents separate from this one. On April 14, 2025, the organisation posted a notice about a breach involving Scharnhorst Ast Kennard Gryphon (SAKG), a law firm based in Missouri. SAKG detected suspicious activity on August 1, 2024, and an investigation later revealed unauthorised access between July 17 and August 6, 2024.

SAKG notified affected individuals affiliated with the Ascension health system on February 14, 2025. The compromised records in that incident included names, phone numbers, dates of birth and death, Social Security numbers, driver's license numbers, racial data, and information related to medical treatment.

A number of media inquiries have been made regarding the broader scope of that incident, including whether other clients were affected and how many individuals were involved in total. Separately, on March 3, 2025, Ascension announced another data security incident involving Access Telecare, a third-party provider of telehealth services for Ascension Seton in Texas.

As with previous breaches, Ascension clarified that the incident did not compromise its internal systems or electronic health records. A report filed with the U.S. Department of Health and Human Services Office for Civil Rights (HHS OCR) on March 8, 2025, confirmed that Access Telecare had experienced a breach of its email system, estimated to have affected approximately 62,700 individuals.

These successive disclosures make increasingly clear how much risk third-party relationships pose to the healthcare ecosystem, as cybercriminals continue to target sensitive medical and personal information. In response to the recent breach involving the former business partner, Ascension is offering affected individuals two years of complimentary identity protection services, including credit monitoring, fraud consultation, and identity theft restoration, aimed at mitigating potential harm from unauthorized access to personal and health information.

Although Ascension has not provided further technical details about the breach, the timeline and nature of the incident suggest it may be related to the Clop ransomware group's widespread data-theft campaign. In late 2024, that campaign exploited a zero-day vulnerability in the Cleo secure file transfer software and targeted multiple organisations. Ascension has not officially confirmed any connection to the Clop group, and a spokesperson has not responded to BleepingComputer's request for comment.

This is not the first major cybersecurity incident Ascension has faced. According to the organisation's own reporting from May 2024, approximately 5.6 million patients and employees were affected by a separate ransomware attack attributed to the Black Basta group. That breach, which disrupted several hospitals, began when an employee inadvertently downloaded a malicious file onto a company device.

That incident exposed a range of data sets containing both personal and health-related information, illustrating the ongoing risks the healthcare industry faces from internal vulnerabilities as well as external cyber threats. The string of data breaches involving Ascension underscores the need for greater vigilance and accountability in managing third-party relationships.

Even when internal systems remain uncompromised, vulnerabilities in external networks can expose sensitive patient information to significant risk. Healthcare organisations therefore need to prioritise robust vendor risk management, strong data governance protocols, and proactive threat detection and response strategies.

A growing number of regulatory bodies and industry leaders recognise that standards governing data sharing, third-party oversight, and breach disclosure may need to be revisited to protect patient privacy in the increasingly interconnected world of digital health.