
Blackpool Credit Union Cyberattack Exposes Customer Data in Cork

 

A Cork-based credit union has issued a warning to its customers after a recent cyberattack exposed sensitive personal information. Blackpool Credit Union confirmed that the breach occurred late last month and subsequently notified members through a formal letter. Investigators determined that hackers may have gained access to personal records, including names, contact information, residential addresses, dates of birth, and account details. While there is no evidence that any funds were stolen or PINs compromised, concerns remain that the stolen data could be misused.

The investigation raised the possibility that cybercriminals may publish the stolen records on underground marketplaces such as the dark web. This type of exposure increases the risk of identity theft or secondary scams, particularly phishing attacks in which fraudsters impersonate trusted organizations to steal additional details from unsuspecting victims. Customers were urged to remain vigilant and to treat any unsolicited communication requesting personal or financial information with caution. 

The Central Bank of Ireland has been briefed on the situation and is monitoring developments. It has advised any members with concerns to reach out directly to Blackpool Credit Union through its official phone line. Meanwhile, a spokesperson for the credit union assured the public that services remain operational and that members can continue to access assistance in person, by phone, or through email. The organization emphasized that safeguarding customer data remains a priority and expressed regret over the incident. Impacted individuals will be contacted directly for follow-up support. 

The Irish League of Credit Unions reinforced the importance of caution, noting that legitimate credit unions will never ask members to verify accounts through text messages or unsolicited communications. Fraudsters often exploit publicly available details to appear convincing, setting up sophisticated websites and emails to lure individuals into disclosing confidential information. Customers were reminded to independently verify the authenticity of any suspicious outreach and to rely on official registers when dealing with financial services.  

Experts warn that people who have already fallen victim to scams are more likely to be targeted again. Attackers often pressure individuals into making hasty decisions, using the sense of urgency to trick them into disclosing sensitive information or transferring money. Customers were encouraged to take their time before responding to unexpected requests and to trust their instincts if something feels unusual or out of place.

The Central Bank reiterated its awareness of the breach and confirmed that it is in direct communication with Blackpool Credit Union regarding the response measures. Members seeking clarification were again directed to the credit union’s official helpline for assistance.

GitHub Supply Chain Attack ‘GhostAction’ Exposes Over 3,000 Secrets Across Ecosystems

 

A newly uncovered supply chain attack on GitHub, named GhostAction, has compromised more than 3,300 secrets across multiple ecosystems, including PyPI, npm, DockerHub, GitHub, Cloudflare, and AWS. The campaign was first identified by GitGuardian researchers, who traced initial signs of suspicious activity in the FastUUID project on September 2, 2025. The attack relied on compromised maintainer accounts, which were used to commit malicious workflow files into repositories. These GitHub Actions workflows were configured to trigger automatically on push events or manual dispatch, enabling the attackers to extract sensitive information. 

Once executed, the malicious workflow harvested secrets from GitHub Actions environments and transmitted them to an attacker-controlled server through a curl POST request. In FastUUID’s case, the attackers accessed the project’s PyPI token, although no malicious package versions were published before the compromise was detected and contained. Further investigation revealed that the attack extended well beyond a single project. Researchers found similar workflow injections across at least 817 repositories, all exfiltrating data to the same domain. To maximize impact, the attackers enumerated secret variables from existing legitimate workflows and embedded them into their own files, ensuring multiple types of secrets could be stolen. 
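For maintainers who want a quick check of their own repositories, the pattern described above, a workflow step that both references the secrets context and posts data off-host with curl, is easy to flag heuristically. Below is a minimal Python sketch; the file locations follow GitHub's standard .github/workflows layout, and the regexes are illustrative rather than a complete detector.

```python
import re
import sys
from pathlib import Path

# Heuristic markers: a workflow step that references the secrets context
# and also ships data off-host with curl is worth a manual review.
SECRETS_REF = re.compile(r"\$\{\{\s*secrets\.[A-Za-z0-9_]+\s*\}\}")
CURL_POST = re.compile(r"curl\b[^\n]*(?:-d|--data|-X\s*POST)", re.IGNORECASE)

def audit(repo_root: str) -> int:
    findings = 0
    wf_dir = Path(repo_root, ".github", "workflows")
    if not wf_dir.is_dir():
        return 0
    for wf in wf_dir.glob("*.y*ml"):  # matches .yml and .yaml
        text = wf.read_text(encoding="utf-8", errors="replace")
        if SECRETS_REF.search(text) and CURL_POST.search(text):
            findings += 1
            print(f"[!] {wf}: references secrets and makes a curl POST -- review manually")
    return findings

if __name__ == "__main__":
    sys.exit(1 if audit(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```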

GitGuardian publicly disclosed the findings on September 5, raising issues in 573 affected repositories and notifying security teams at GitHub, npm, and PyPI. By that time, about 100 repositories had already identified the unauthorized commits and reverted them. Soon after the disclosures, the exfiltration endpoint used by the attackers went offline, halting further data transfers. 

The scope of the incident is significant, with researchers estimating that roughly 3,325 secrets were exposed. These included API tokens, access keys, and database credentials spanning several major platforms. At least nine npm packages and 15 PyPI projects remain directly affected, with the risk that compromised tokens could allow the release of malicious or trojanized versions if not revoked. GitGuardian noted that some companies had their entire SDK portfolios compromised, with repositories in Python, Rust, JavaScript, and Go impacted simultaneously. 

While the attack bears some resemblance to the s1ngularity campaign reported in late August, GitGuardian stated that it does not see a direct connection between the two. Instead, GhostAction appears to represent a distinct, large-scale attempt to exploit open-source ecosystems through stolen maintainer credentials and poisoned automation workflows. The findings underscore the growing challenges in securing supply chains that depend heavily on public code repositories and automated build systems.

Czechia Warns of Chinese Data Transfers and Espionage Risks to Critical Infrastructure

 

Czechia’s National Cyber and Information Security Agency (NÚKIB) has issued a stark warning about rising cyber espionage campaigns linked to China and Russia, urging both government institutions and private companies to strengthen their security measures. The agency classified the threat as highly likely, citing particular concerns over data transfers to China and remote administration of assets from Chinese territories, including Hong Kong and Macau. According to the watchdog, these operations are part of long-term efforts by foreign states to compromise critical infrastructure, steal sensitive data, and undermine public trust. 

The agency’s concerns are rooted in China’s legal and regulatory framework, which it argues makes private data inherently insecure. Laws such as the National Intelligence Law of 2017 require all citizens and organizations to assist intelligence services, while the 2015 National Security Law and the 2013 Company Law provide broad avenues for state interference in corporate operations. Additionally, regulations introduced in 2021 obligate technology firms to report software vulnerabilities to government authorities within two days while prohibiting disclosure to foreign organizations. NÚKIB noted that these measures give Chinese state actors sweeping access to sensitive information, making foreign businesses and governments vulnerable if their data passes through Chinese systems. 

Hong Kong and Macau also fall under scrutiny in the agency’s assessment. In Hong Kong, the 2024 Safeguarding National Security Ordinance integrates Chinese security laws into its own legal system, broadening the definition of state secrets. Macau’s 2019 Cybersecurity Law grants authorities powers to monitor data transmissions from critical infrastructure in real time, with little oversight to prevent misuse. NÚKIB argues that these developments extend the Chinese government’s reach well beyond its mainland jurisdiction. 

The Czech warning gains credibility from recent attribution efforts. Earlier this year, Prague linked cyberattacks on its Ministry of Foreign Affairs to APT31, a group tied to China’s Ministry of State Security, in a campaign active since 2022. The government condemned the attacks as deliberate attempts to disrupt its institutions and confirmed a high degree of certainty about Chinese involvement, based on cooperation among domestic and international intelligence agencies. 

These warnings align with broader global moves to limit reliance on Chinese technologies. Countries such as Germany, Italy, and the Netherlands have already imposed restrictions, while the Five Eyes alliance has issued similar advisories. For Czechia, the implications are serious: NÚKIB highlighted risks across devices and systems such as smartphones, cloud services, photovoltaic inverters, and health technology, stressing that disruptions could have wide-reaching consequences. The agency’s message reflects an ongoing effort to secure its digital ecosystem against foreign influence, particularly as geopolitical tensions deepen in Europe.

Massive Database of 250 Million Identity Records Left Open to Public Access


Around a quarter of a billion identity records were left publicly accessible, exposing people in seven countries: Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey.

According to experts from Cybernews, three misconfigured servers, hosted on IP addresses registered in the UAE and Brazil, contained personal information amounting to “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses.

The Cybernews experts who found the leak said the databases shared similar naming conventions and structure, hinting at a common source, but they could not identify the actor responsible for running the servers.

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens of South Africa, Egypt, and Turkey, as the databases covering those countries contained full-spectrum identity data.

The exposure opened the door to multiple threats, including phishing campaigns, scams, financial fraud, and other abuse.

Currently, the databases are no longer publicly accessible, a good sign.

This is not the first time a database of this scale, roughly 250 million records, has been exposed online. Earlier Cybernews research revealed that nearly the entire Brazilian population might have been impacted by a similar breach.

In that earlier incident, a misconfigured Elasticsearch instance exposed details such as names, sex, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers, which are used to identify taxpayers in Brazil.
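Misconfigurations like this are usually trivial to confirm from the outside: an unauthenticated Elasticsearch instance answers plain HTTP on its default port with cluster metadata. The sketch below, intended for checking infrastructure you own, uses only the Python standard library; the hostname is a placeholder.

```python
import json
import urllib.request

# Point this at your own host; an unauthenticated 200 response here means
# anyone on the internet can read cluster metadata (and likely the data).
HOST = "your-elasticsearch-host.example.com"  # placeholder

def is_exposed(host: str, port: int = 9200) -> bool:
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=5) as resp:
            info = json.load(resp)
            # A reachable, unauthenticated cluster happily reports its identity.
            return "cluster_name" in info or "version" in info
    except Exception:
        return False  # unreachable or auth-protected

if __name__ == "__main__":
    print("EXPOSED" if is_exposed(HOST) else "not publicly readable")
```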

Browser-Based Attacks in 2025: Key Threats Security Teams Must Address

 

In 2025, the browser has become one of the primary battlefields for cybercriminals. Once considered a simple access point to the internet, it now serves as the main gateway for employees into critical business applications and sensitive data. This shift has drawn attackers to target browsers directly, exploiting them as the weakest link in a highly connected and decentralized work environment. With enterprises relying heavily on SaaS platforms, online collaboration tools, and cloud applications, the browser has transformed into the focal point of modern cyberattacks, and security teams must rethink their defenses to stay ahead. 

The reason attackers focus on browsers is not because of the technology itself, but because of what lies beyond them. When a user logs into a SaaS tool, an ERP system, or a customer database, the browser acts as the entryway. Incidents such as the Snowflake customer data breach and ongoing attacks against Salesforce users demonstrate that attackers no longer need to compromise entire networks; they simply exploit the session and gain direct access to enterprise assets. 

Phishing remains one of the most common browser-driven threats, but it has grown increasingly sophisticated. Attackers now rely on advanced Attacker-in-the-Middle kits that steal not only passwords but also active sessions, rendering multi-factor authentication useless. These phishing campaigns are often cloaked with obfuscation and hosted on legitimate SaaS infrastructure, making them difficult to detect. In other cases, attackers deliver malicious code through deceptive mechanisms such as ClickFix, which disguises harmful commands as verification prompts. Variants like FileFix are spreading across both Windows and macOS, frequently planting infostealer malware designed to harvest credentials and session cookies. 

Another growing risk comes from malicious OAuth integrations, where attackers trick users into approving third-party applications that secretly provide them with access to corporate systems. This method proved devastating in recent Salesforce-related breaches, where hackers bypassed strong authentication and gained long-term access to enterprise environments. Similarly, compromised or fraudulent browser extensions represent a silent but dangerous threat. These can capture login details, hijack sessions, or inject malicious scripts, as highlighted in the Cyberhaven incident in late 2024. 

File downloads remain another effective attack vector. Malware-laced documents, often hidden behind phishing portals, continue to slip past traditional defenses. Meanwhile, stolen credentials still fuel account takeovers in cases where multi-factor authentication is weak, absent, or improperly enforced. Attackers exploit these gaps using ghost logins and bypass techniques, highlighting the need for real-time browser-level monitoring. 

As attackers increasingly exploit the browser as a central point of entry, organizations must prioritize visibility and control at this layer. By strengthening browser security, enterprises can reduce identity exposure, close MFA gaps, and limit the risks of phishing, malware delivery, and unauthorized access. The browser has become the new endpoint of enterprise defense, and protecting it is no longer optional.

Disney to Pay $10 Million Fine in FTC Settlement Over Child Data Collection on YouTube

 

Disney has agreed to pay millions of dollars in penalties to resolve allegations brought by the Federal Trade Commission (FTC) that it unlawfully collected personal data from young viewers on YouTube without securing parental consent. Federal law under the Children’s Online Privacy Protection Act (COPPA) requires parental approval before companies can gather data from children under the age of 13. 

The case, filed by the U.S. Department of Justice on behalf of the FTC, accused Disney Worldwide Services Inc. and Disney Entertainment Operations LLC of failing to comply with COPPA by not properly labeling Disney videos on YouTube as “Made for Kids.” This mislabeling allegedly allowed the company to collect children’s data for targeted advertising purposes. 

“This case highlights the FTC’s commitment to upholding COPPA, which ensures that parents, not corporations, control how their children’s personal information is used online,” said FTC Chair Andrew N. Ferguson in a statement. 

As part of the settlement, Disney will pay a $10 million civil penalty and implement stricter mechanisms to notify parents and obtain consent before collecting data from underage users. The company will also be required to establish a panel to review how its YouTube content is designated. According to the FTC, these measures are intended to reshape how Disney manages child-directed content on the platform and to encourage the adoption of age verification technologies. 

The complaint explained that Disney opted to designate its content at the channel level rather than individually marking each video as “Made for Kids” or “Not Made for Kids.” This approach allegedly enabled the collection of data from child-directed videos, which YouTube then used for targeted advertising. Disney reportedly received a share of the ad revenue and, in the process, exposed children to age-inappropriate features such as autoplay.  

The FTC noted that YouTube first introduced mandatory labeling requirements for creators, including Disney, in 2019 following an earlier settlement over COPPA violations. Despite these requirements, Disney allegedly continued mislabeling its content, undermining parental safeguards. 

“The order penalizes Disney’s abuse of parental trust and sets a framework for protecting children online through mandated video review and age assurance technology,” Ferguson added. 

The settlement arrives alongside an unrelated investigation launched earlier this year by the Federal Communications Commission (FCC) into alleged hiring practices at Disney and its subsidiary ABC. While separate, the two cases add to the regulatory pressure the entertainment giant is facing. 

The Disney case underscores growing scrutiny of how major media and technology companies handle children’s privacy online, particularly as regulators push for stronger safeguards in digital environments where young audiences are most active.

Jaguar Land Rover Cyberattack Breaches Data and Halts Global Production

Jaguar Land Rover (JLR), the UK’s largest automaker and a subsidiary of Tata Motors, has confirmed that the recent cyberattack on its systems has not only disrupted global operations but also resulted in a data breach. The company revealed during its ongoing investigation that sensitive information had been compromised, although it has not yet specified whether the data belonged to customers, suppliers, or employees. JLR stated that it will directly contact anyone impacted once the scope of the breach is confirmed. 

The incident has forced JLR to shut down its IT systems across the globe in an effort to contain the ransomware attack. Production has been halted at its Midlands and Merseyside factories in the UK, with workers told they cannot return until at least next week. Other plants outside the UK have also been affected, with some industry insiders warning that it could take weeks before operations return to normal. The disruption has spilled over to suppliers and retailers, some of whom are unable to access databases used for registering vehicles or sourcing spare parts. 

The automaker has reported the breach to all relevant authorities, including the UK’s Information Commissioner’s Office. A JLR spokesperson emphasized that third-party cybersecurity experts are assisting in forensic investigations and recovery efforts, while the company works “around the clock” to restore services safely. The spokesperson also apologized for the ongoing disruption and reiterated JLR’s commitment to transparency as the inquiry continues. 

Financial pressure is mounting as the costs of the prolonged shutdown escalate. Shares of Tata Motors dropped 0.9% in Mumbai following the disclosure, reflecting investor concerns about the impact on the company’s bottom line. The disruption comes at a challenging time for JLR, which is already dealing with falling profits and delays in the launch of new electric vehicle models. 

The attack appears to be part of a growing trend of aggressive cyber campaigns targeting global corporations. A group of English-speaking hackers, linked to previously documented attacks on retailers such as Marks & Spencer, has claimed responsibility for the JLR breach. Screenshots allegedly showing the company’s internal IT systems were posted on a Telegram channel associated with hacker groups including Scattered Spider, Lapsus$, and ShinyHunters. 

Cybersecurity analysts warn that the automotive industry is becoming a prime target due to its reliance on connected systems and critical supply chains. Attacks of this scale not only threaten operations but also risk exposing valuable intellectual property and sensitive personal data. As JLR races to restore its systems, the incident underscores the urgent need for stronger resilience measures in the sector.

AI Image Attacks: How Hidden Commands Threaten Chatbots and Data Security

As artificial intelligence becomes part of daily workflows, attackers are exploring new ways to exploit its weaknesses. Recent research has revealed a method where seemingly harmless images uploaded to AI systems can conceal hidden instructions, tricking chatbots into performing actions without the user’s awareness.


How hidden commands emerge

The risk lies in how AI platforms process images. To reduce computing costs, most systems shrink images before analysis, a step known as downscaling. During this resizing, certain pixel patterns, deliberately placed by an attacker, can form shapes or text that the model interprets as user input. While the original image looks ordinary to the human eye, the downscaled version quietly delivers instructions to the system.

This technique is not entirely new. Academic studies as early as 2020 suggested that scaling algorithms such as bicubic or bilinear resampling could be manipulated to reveal invisible content. What is new is the demonstration of this tactic against modern AI interfaces, proving that such attacks are practical rather than theoretical.


Why this matters

Multimodal systems, which handle both text and images, are increasingly connected to calendars, messaging apps, and workplace tools. A hidden prompt inside an uploaded image could, in theory, request access to private information or trigger actions without explicit permission. One test case showed that calendar data could be forwarded externally, illustrating the potential for identity theft or information leaks.

The real concern is scale. As organizations integrate AI assistants into daily operations, even one overlooked vulnerability could compromise sensitive communications or financial data. Because the manipulation happens inside the preprocessing stage, traditional defenses such as firewalls or antivirus tools are unlikely to detect it.


Building safer AI systems

Defending against this form of “prompt injection” requires layered strategies. For users, simple precautions include checking how an image looks after resizing and confirming any unusual system requests. For developers, stronger measures are necessary: restricting image dimensions, sanitizing inputs before models interpret them, requiring explicit confirmation for sensitive actions, and testing models against adversarial image samples.
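The first of those user-level precautions, checking what the model actually sees after resizing, can be approximated in a few lines with the Pillow imaging library. This is a rough sketch: the 448x448 bicubic target is an assumed preprocessing size, since real pipelines vary by model.

```python
from PIL import Image

# Downscale an image the way a vision model's preprocessor might (bicubic
# resampling to a small square), then save the result for visual inspection.
# 448x448 is an assumed target size; real pipelines differ by model.
def preview_downscaled(path: str, size: int = 448) -> None:
    img = Image.open(path).convert("RGB")
    small = img.resize((size, size), resample=Image.Resampling.BICUBIC)
    small.save("downscaled_preview.png")
    print(f"Saved what a {size}x{size} model input would look like; "
          "inspect it for text or patterns invisible at full resolution.")

if __name__ == "__main__":
    preview_downscaled("upload.png")  # placeholder filename
```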

Researchers stress that piecemeal fixes will not be enough. Only systematic design changes such as enforcing secure defaults and monitoring for hidden instructions can meaningfully reduce the risks.

Images are no longer guaranteed to be safe when processed by AI systems. As attackers learn to hide commands where only machines can read them, users and developers alike must treat every upload with caution. By prioritizing proactive defenses, the industry can limit these threats before they escalate into real-world breaches.



Smart Glasses Face Opposition as Gen Z Voices Privacy Concerns

The debate over technology and privacy is intensifying as Meta prepares to announce a third generation of its Ray-Ban smart glasses, a launch that holds both excitement and unease for the tech community. The new model, expected to be marketed as Meta Ray-Ban Glasses Gen 3, will refine the features that have already attracted more than two million buyers since the line was introduced in 2023.

Although Meta's success is a testament to the growing popularity of wearable technology, the company is facing scrutiny over discussions of potential facial recognition capabilities, which raise significant privacy and data security concerns.

Smart glasses adoption has risen steadily over the past couple of years, and observers believe that the addition, or even the prospect, of such a feature could alter not only the trajectory of smart glasses but also the public's willingness to embrace them. The industry-wide surge in wearable innovation has produced some controversial developments, including AI-powered glasses from two Harvard dropouts who recently raised $1 million in funding to advance their product line.

The pair first became known for experimenting with covert face recognition, but today they are focusing on eyewear that records audio, processes conversations in real time, and provides instant insights.

The technology shows striking potential to transform human interaction, but it has also prompted a wave of criticism over the risks of unchecked surveillance. Social media has become an outlet for widespread unease, with many users warning of a future in which privacy is eroded by constant monitoring.

Comparisons with the ill-fated Google Glass project are becoming increasingly common, and critics argue that such innovations could veer into dystopian territory without adequate safeguards and explicit consent mechanisms. Regulators and digital rights advocacy groups are attempting to establish clearer ethical frameworks, emphasising the delicate balance between fostering technological development and protecting individual freedoms.

It is no secret that many members of Generation Z are sceptical about smart glasses, owing to concerns about privacy, trust, and social acceptance. Although most models come equipped with small LED indicators that show when the camera is active, online tutorials have already demonstrated how easily these safeguards can be bypassed to conceal that recording is under way.

Numerous examples of such “hacks” circulate on platforms like TikTok, fuelling fears of being unknowingly filmed in classrooms, public spaces, or private gatherings. These anxieties are compounded by a broader mistrust of Big Tech, with companies like Meta, maker of Ray-Ban Stories, still struggling with reputational damage from past data abuse scandals.

Having grown up far more aware than older generations of how personal information is gathered and monetised, Gen Z has developed heightened suspicion of devices that could function as portable surveillance tools. The cultural challenges, however, extend beyond regulation.

Wearing recording technology on the face places a camera directly at eye level, a situation many find invasive. Some establishments, such as restaurants, gyms, and universities, have moved to restrict their use, signalling resistance at a social level. Critics also note a generational clash over values: Gen Z prizes authenticity and spontaneity in digital expression, while the discreet recording capabilities of smart glasses risk breeding distrust and eroding genuine human connection.

According to analysts, manufacturers should prioritise transparency, enforce tamper-proof privacy indicators, and shift towards applications that emphasise accessibility or productivity. Otherwise, the technology is likely to remain a niche novelty rather than a mainstream necessity, particularly among the very demographic it aims to reach.

Meta, for its part, emphasises that safeguards are built into its devices. A spokesperson for the company, Maren Thomas, stated that Ray-Ban smart glasses are equipped with an external light that indicates when recording is active, as well as a sensor that detects whether the light is blocked. According to her, the company's user agreement prohibits disabling the light.

Despite these assurances, younger consumers remain sceptical of their effectiveness. Critics point out that online tutorials already circulate showing how to bypass recording alerts, raising concerns that the devices could be misused in workplaces, classrooms, and other public settings. People in customer-facing positions feel especially vulnerable to being covertly filmed.

Researchers contend that these concerns stem from a generational gap in attitudes towards digital privacy: millennials tend to share personal content freely, whereas Generation Z weighs the consequences of exposure, especially as social media footprints increasingly influence job opportunities and college admissions.

A growing movement within this generation sets informal boundaries with peers and family about what should and should not be shared, and wearable technology threatens to upend these unspoken rules in an instant.

Despite the controversy, demand for Meta Ray-Ban glasses in the United States is forecast to reach almost four million units by the end of this year, a sharp increase from 1.2 million units in 2024. Social media monitoring by Sprout Social shows that while most online mentions remain positive or neutral, younger users are disproportionately concerned about privacy.

Industry experts believe the future of smart glasses may hinge not purely on technological innovation but on companies' ability to navigate the ethical and social dimensions of their products. Although privacy concerns dominate the current conversation, advocates maintain that the technology can be genuinely beneficial if deployed responsibly.

Beyond assisting people with visual impairments in navigating the world, smart glasses could provide real-time language translation and hands-free communication in healthcare and industrial settings, delivering meaningful improvements to accessibility and productivity. To reach that point, manufacturers will need to demonstrate transparency, build trust through non-negotiable safeguards, and work closely with regulators to develop clear consent and data usage standards.

Social acceptance will also require a cultural shift, one that reassures people that innovation and respect for individual rights can coexist. Gen Z in particular, a generation that values authenticity and accountability, will demand products that empower rather than monitor, and connect rather than alienate. Achieving that balance could let smart glasses evolve from a polarising novelty into a widely adopted tool that changes how people see the world, interact with it, and process information.

Salesforce Launches AI Research Initiatives with CRMArena-Pro to Address Enterprise AI Failures

 

Salesforce is doubling down on artificial intelligence research to address one of the toughest challenges for enterprises: AI agents that perform well in demonstrations but falter in complex business environments. The company announced three new initiatives this week, including CRMArena-Pro, a simulation platform described as a “digital twin” of business operations. The goal is to test AI agents under realistic conditions before deployment, helping enterprises avoid costly failures.  

Silvio Savarese, Salesforce’s chief scientist, likened the approach to flight simulators that prepare pilots for difficult situations before real flights. By simulating challenges such as customer escalations, sales forecasting issues, and supply chain disruptions, CRMArena-Pro aims to prepare agents for unpredictable scenarios. The effort comes as enterprises face widespread frustration with AI. A report from MIT found that 95% of generative AI pilots do not reach production, while Salesforce’s research indicates that large language models succeed only about a third of the time in handling complex cases.

CRMArena-Pro differs from traditional benchmarks by focusing on enterprise-specific tasks with synthetic but realistic data validated by business experts. Salesforce has also been testing the system internally before making it available to clients. Alongside this, the company introduced the Agentic Benchmark for CRM, a framework for evaluating AI agents across five metrics: accuracy, cost, speed, trust and safety, and sustainability. The sustainability measure stands out by helping companies match model size to task complexity, balancing performance with reduced environmental impact. 

A third initiative highlights the importance of clean data for AI success. Salesforce’s new Account Matching feature uses fine-tuned language models to identify and merge duplicate records across systems. This improves data accuracy and saves time by reducing the need for manual cross-checking. One major customer achieved a 95% match rate, significantly improving efficiency. 
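Salesforce has not published the internals of Account Matching, but the underlying task, flagging likely duplicate records for human review, can be illustrated with a deliberately simple string-similarity sketch using Python's standard library. The threshold and sample records are arbitrary; the fine-tuned language models Salesforce describes would replace this crude ratio.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Toy stand-in for LLM-based account matching: normalize names and flag
# pairs whose similarity ratio clears a threshold for human review.
accounts = [
    {"id": 1, "name": "Acme Corporation"},
    {"id": 2, "name": "ACME Corp."},
    {"id": 3, "name": "Globex Industries"},
]

def normalize(name: str) -> str:
    # Lowercase and drop punctuation so cosmetic differences don't matter.
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch == " ").strip()

def likely_duplicates(records, threshold: float = 0.7):
    for a, b in combinations(records, 2):
        score = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
        if score >= threshold:
            yield a["id"], b["id"], round(score, 2)

for pair in likely_duplicates(accounts):
    print("possible duplicate:", pair)  # e.g. (1, 2, 0.72)
```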

The announcements come during a period of heightened security concerns. Earlier this month, more than 700 Salesforce customer instances were affected in a campaign that exploited OAuth tokens from a third-party chat integration. Attackers were able to steal credentials for platforms like AWS and Snowflake, underscoring the risks tied to external tools. Salesforce has since removed the compromised integration from its marketplace. 

By focusing on simulation, benchmarking, and data quality, Salesforce hopes to close the gap between AI’s promise and its real-world performance. The company is positioning its approach as “Enterprise General Intelligence,” emphasizing the need for consistency across diverse business scenarios. These initiatives will be showcased at Salesforce’s Dreamforce conference in October, where more AI developments are expected.

Nearly Two Billion Discord Messages Scraped and Sold on Dark Web Forums

 

Security experts have raised alarms after discovering that a massive collection of Discord data is being offered for sale on underground forums. According to researchers at Cybernews, who reviewed the advertisement, the archive reportedly contains close to two billion messages scraped from the platform, alongside additional sensitive information. The dataset allegedly includes 1.8 billion chat messages, records of 35 million users, 207 million voice sessions, and data from 6,000 servers, all available to anyone willing to pay. 

Discord, a platform widely used for gaming, social communities, and professional groups, enables users to connect via text, voice, and video across servers organized around different interests. Many of these servers are open to the public, meaning their content—including usernames, conversations, and community activity—can be accessed by anyone who joins. While much of this information is publicly visible, the large-scale automated scraping of data still violates Discord’s Terms of Service and could potentially breach data protection regulations such as the EU’s General Data Protection Regulation (GDPR) or California’s Consumer Privacy Act (CCPA).

The true sensitivity of the dataset remains unclear, as no full forensic analysis has been conducted. It is possible that a significant portion of the messages and voice records were collected from publicly accessible servers, which would reduce—but not eliminate—the privacy concerns. However, the act of compiling, distributing, and selling this information at scale introduces new risks, such as the misuse of user data for surveillance, targeted phishing, or identity exploitation. 

Discord has faced similar challenges before. In April 2024, a service known as Spy.Pet attempted to sell billions of archived chat logs from the platform. That operation was swiftly shut down by Discord, which banned the associated accounts and confirmed that the activity violated its rules. At the time, the company emphasized that automated scraping and self-botting were not permitted under its Terms of Service and stated it was exploring possible legal action against offenders. 

The recurrence of large-scale scraping attempts highlights the ongoing tension between the open nature of platforms like Discord and the privacy expectations of their users. While public servers are designed for accessibility and community growth, they can also be exploited by malicious actors seeking to harvest data en masse. Even if the information being sold in the latest case is largely public, the potential to cross-reference user activity across communities raises broader concerns about surveillance and abuse. 

As of now, Discord has not issued an official statement on this latest incident, but based on previous responses, it is likely the company will take steps to disrupt the sale and enforce its policies against scraping. The incident serves as another reminder that users on open platforms should remain mindful of the visibility of their activity and that service providers must continue to balance openness with strong protections against data misuse.

SquareX Warns Browser Extensions Can Steal Passkeys Despite Phishing-Resistant Security

 

The technology industry has long promoted passkeys as a safer, phishing-resistant alternative to passwords. Major firms such as Microsoft, Google, Amazon, and Meta are encouraging users to abandon traditional login methods in favor of this approach, which ties account security directly to a device. In theory, passkeys make it almost impossible for attackers to gain access without physically having an unlocked device. However, new research suggests that this system may not be as unbreakable as promised. 

Cybersecurity firm SquareX has demonstrated that browser-based attacks can undermine the integrity of passkeys. According to the research team, malicious extensions or injected scripts are capable of manipulating the passkey setup and login process. By hijacking this step, attackers can trick users into registering credentials controlled by the attacker, undermining the entire security model. SquareX argues that this development challenges the belief that passkeys cannot be stolen, calling the finding an important “wake-up call” for the security community. 

The proof-of-concept exploit works by taking advantage of the fact that browsers act as the intermediary during passkey creation and authentication. Both the user’s device and the online service must rely on the browser to transmit authentication requests accurately. If the browser environment is compromised, attackers can intercept WebAuthn calls and replace them with their own code. SquareX researchers demonstrated how a seemingly harmless extension could activate during a passkey registration process, generate a new attacker-controlled key pair, and secretly send a copy of the private key to an external server. Although the private key remains on the victim’s device, the duplicate allows the attacker to authenticate into the victim’s accounts elsewhere. 

This type of attack could also be refined to sabotage existing passkeys and force users into creating new ones, which are then stolen during setup. SquareX co-founder Vivek Ramachandran explained that although enterprises are adopting passkeys at scale, many organizations lack a full understanding of how the underlying mechanisms work. He emphasized that even the FIDO Alliance, which develops authentication standards, acknowledges that passkeys require a trusted environment to remain secure. Without ensuring that browsers are part of that trusted environment, enterprise users may remain vulnerable to identity-based attacks. 

The finding highlights a larger issue with browser extensions, which remain one of the least regulated parts of the internet ecosystem. Security professionals have long warned that extensions can be malicious from the outset or hijacked after installation, providing attackers with direct access to sensitive browser activity. Because an overwhelming majority of users rely on add-ons in Chrome, Edge, and other browsers, the potential for exploitation is significant. 

SquareX’s warning comes at a time when passkey adoption is accelerating rapidly, with estimates suggesting more than 15 billion passkeys are already in use worldwide. The company stresses that despite their benefits, passkeys are not immune to the same types of threats that have plagued passwords and authentication codes for decades. As the technology matures, both enterprises and individual users are urged to remain cautious, limit browser extensions to trusted sources, and review installed add-ons regularly to minimize exposure.

Malicious Go Package Disguised as SSH Tool Steals Credentials via Telegram

 

Researchers have uncovered a malicious Go package disguised as an SSH brute-force tool that secretly collects and transmits stolen credentials to an attacker-controlled Telegram bot. The package, named golang-random-ip-ssh-bruteforce, first appeared on June 24, 2022, and was linked to a developer under the alias IllDieAnyway. Although the GitHub profile tied to this account has since been removed, the package is still accessible through Go’s official registry, raising concerns about supply chain security risks for developers who might unknowingly use it. 

The module is designed to scan random IPv4 addresses in search of SSH services operating on TCP port 22. Once it detects a running service, it attempts brute-force login using only two usernames, “root” and “admin,” combined with a list of weak and commonly used passwords. These include phrases such as “root,” “test,” “password,” “admin,” “12345678,” “1234,” “qwerty,” “webadmin,” “webmaster,” “techsupport,” “letmein,” and “Passw@rd.” If login succeeds, the malware immediately exfiltrates the target server’s IP address, username, and password through Telegram’s API to a bot called @sshZXC_bot, which forwards the stolen information to a user identified as @io_ping. Since Telegram communications are encrypted via HTTPS, the credential theft blends into ordinary web traffic, making detection much more difficult. 
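A campaign like this only succeeds against servers that still accept password logins for accounts such as root or admin. As a first line of defence, administrators can audit their OpenSSH configuration; the sketch below checks for the two settings most relevant to this attack, assuming a stock sshd_config layout.

```python
from pathlib import Path

# Settings that blunt this kind of brute force: no root password logins,
# and ideally no password authentication at all (keys only).
RISKY_SETTINGS = {
    "permitrootlogin": ("yes",),         # prefer "no" or "prohibit-password"
    "passwordauthentication": ("yes",),  # prefer "no"
}

def audit_sshd(path: str = "/etc/ssh/sshd_config") -> list[str]:
    warnings = []
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        key, value = parts[0].lower(), parts[1].strip().lower()
        if key in RISKY_SETTINGS and value in RISKY_SETTINGS[key]:
            warnings.append(f"{parts[0]} {value} -- consider hardening")
    return warnings

if __name__ == "__main__":
    for w in audit_sshd():
        print("[!]", w)
```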

The design of the tool helps it remain stealthy while maximizing efficiency. To bypass host identity checks, the module disables SSH host key verification by setting ssh.InsecureIgnoreHostKey as its callback. It continuously generates IPv4 addresses while attempting concurrent logins in an endless loop, increasing the chances of finding vulnerable servers. Interestingly, once it captures valid credentials for the first time, the malware terminates itself. This tactic minimizes its exposure, helping it avoid detection by defenders monitoring for sustained brute-force activity. 

Archival evidence suggests that the creator of this package has been active in the underground hacking community for years. Records link the developer to the release of multiple offensive tools, including an IP port scanner, an Instagram parser, and Selica-C2, a PHP-based botnet for command-and-control operations. Associated videos show tutorials on exploiting Telegram bots and launching SMS bomber attacks on Russian platforms. Analysts believe the attacker is likely of Russian origin, based on the language, platforms, and content of their activity. 

Security researchers warn that this Trojanized Go module represents a clear supply chain risk. Developers who unknowingly integrate it into their projects could unintentionally expose sensitive credentials to attackers, since the exfiltration traffic is hidden within legitimate encrypted HTTPS connections. This case underscores the growing threat of malicious open-source packages being planted in widely used ecosystems, where unsuspecting developers become conduits for large-scale credential theft.

New Forensic System Tracks Ghost Guns Made With 3D Printing Using SIDE

 

The rapid rise of 3D printing has transformed manufacturing, offering efficient ways to produce tools, spare parts, and even art. But the same technology has also enabled the creation of “ghost guns” — firearms built outside regulated systems and nearly impossible to trace. These weapons have already been linked to crimes, including the 2024 murder of UnitedHealthcare CEO Brian Thompson, sparking concern among policymakers and law enforcement. 

Now, new research suggests that even if such weapons are broken into pieces, investigators may still be able to extract critical identifying details. Researchers from Washington University in St. Louis, led by Netanel Raviv, have developed a system called Secure Information Embedding and Extraction (SIDE). Unlike earlier fingerprinting methods that stored printer IDs, timestamps, or location data directly into printed objects, SIDE is designed to withstand tampering. 

Even if an object is deliberately smashed, the embedded information remains recoverable, giving investigators a powerful forensic tool. The SIDE framework is built on earlier research presented at the 2024 IEEE International Symposium on Information Theory, which introduced techniques for encoding data that could survive partial destruction. This new version adds enhanced security mechanisms, creating a more resilient system that could be integrated into 3D printers. 

The approach does not rely on obvious markings but instead uses loss-tolerant mathematical embedding to hide identifying information within the material itself. As a result, even fragments of plastic or resin may contain enough data to help reconstruct an object's origin. Such technology could help curb the spread of ghost guns and make it more difficult for criminals to use 3D printing for illicit purposes.
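The article does not detail SIDE's actual coding scheme, but the principle of loss-tolerant embedding can be shown with a toy example: replicate a short identifier across many blocks of the printed object so that a majority vote over any surviving fragments recovers it. This is only an illustration of the redundancy idea, not the SIDE algorithm.

```python
import random

# Toy loss-tolerant embedding: stamp the same 32-bit ID into many blocks of
# an object. If the object shatters, a majority vote over whatever blocks
# survive still recovers the ID. (SIDE uses far more sophisticated coding;
# this only illustrates the redundancy principle.)
PRINTER_ID = 0xDEADBEEF
NUM_BLOCKS = 200

blocks = [PRINTER_ID] * NUM_BLOCKS

# Simulate destruction: lose 70% of the blocks and corrupt a few survivors.
survivors = random.sample(blocks, int(NUM_BLOCKS * 0.3))
survivors = [b ^ 0xFF if random.random() < 0.05 else b for b in survivors]

# Majority vote over candidate values recovers the original identifier.
recovered = max(set(survivors), key=survivors.count)
print(f"recovered 0x{recovered:08X}, match: {recovered == PRINTER_ID}")
```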

However, the system also raises questions about regulation and personal freedom. If fingerprinting becomes mandatory, even hobbyist printers used for harmless projects may be subject to oversight. This balance between improving security and protecting privacy is likely to spark debate as governments consider regulation. The potential uses of SIDE go far beyond weapons tracing. Any object created with a 3D printer could carry an invisible signature, allowing investigators to track timelines, production sources, and usage. 

Combined with artificial intelligence tools for pattern recognition, this could give law enforcement powerful new forensic capabilities. “This work opens up new ways to protect the public from the harmful aspects of 3D printing through a combination of mathematical contributions and new security mechanisms,” said Raviv, assistant professor of computer science and engineering at Washington University. He noted that while SIDE cannot guarantee protection against highly skilled attackers, it significantly raises the technical barriers for criminals seeking to avoid detection.

Worker Sentenced to Four Years for Compromising Company IT Infrastructure

In a stark warning about the dangers of insider threats facing corporations across the globe, a Chinese-born software developer has been sentenced to four years in federal prison for hacking into the internal systems of his former employer. Davis (David) Lu, 55, of Houston, Texas, carried out one of the most devastating forms of digital retaliation, embedding hidden malicious code in Eaton Corporation's computer network that crippled its operations.

The attack, launched in 2019 after Lu had been demoted and suspended, disrupted global operations, locked out thousands of employees, and caused severe financial losses. As reported by the Department of Justice, Lu's actions illustrate how even the most resilient enterprises can face crippling risks when insider access goes unchecked.

According to investigators, Lu's dissatisfaction began in 2018, when a corporate reorganisation cut him off from many of his responsibilities. Prosecutors argued that this professional setback inspired a carefully orchestrated sabotage campaign: Lu planted malicious Java code within Eaton's production environment, designed to wreak maximum havoc once activated.

The most destructive element of the scheme was a logic bomb labeled IsDLEnabledinAD, a check on whether his account was still enabled in Active Directory. The code was designed to lie dormant as long as his account remained active and to execute the moment it was disabled, which happened on September 9, 2019, the day Eaton terminated his employment.

The instant it triggered, thousands of employees across global systems were locked out, widespread disruptions followed, and a cascading series of failures spread across corporate networks, showing the devastating impact a single insider can have on a company. According to court filings, Lu's actions went far beyond a single sabotage attack: by mid-2019 he had injected routines into the code designed to overload the infrastructure.

These routines included infinite loops that forced Java virtual machines to create threads indefinitely, ultimately crashing production servers through resource exhaustion, as well as code that deleted employee profiles from Active Directory, further destabilizing the company's workforce. The kill switch that activated when his access was revoked in September showed how carefully the plan had been engineered.

The result was swift and severe: thousands of employees were locked out of their systems, key infrastructure came to a complete halt, and losses quickly soared into the hundreds of thousands of dollars. The subsequent investigation made clear that Lu was not only intent on disrupting production but was waging a sustained sabotage campaign.

Logs of the malicious executions pointed to his unique user ID and a Kentucky-based machine, undoing his attempts to conceal the attack. While examining Lu's code, officials found that portions were named Hakai, the Japanese word for destruction, and HunShui, the Chinese word for sleep or lethargy, clear signals of destructive intent.

On the very day he was instructed to return his company-issued laptop, Lu escalated his retaliation, trying to delete encrypted volumes, wipe Linux directories, and erase two separate projects. His search history documented a meticulous effort to obstruct recovery, showing research into escalating privileges, concealing processes, and erasing digital evidence.

Federal authorities believe the losses ran into the millions of dollars, and the FBI said the case serves as a reminder of how much damage insiders can cause in systems without appropriate safeguards. The Justice Department condemned Lu's actions as a grave betrayal of professional trust, noting that technical expertise that once served the organization was ultimately weaponized against the very infrastructure he was supposed to protect.

Prosecutors told the court that the sabotage was a clear example of an insider threat exploiting legitimate privileges to bypass traditional cybersecurity defenses and deliver maximum disruption. In their view, the sentence reflects how seriously the United States treats corporate sabotage, a threat that destabilizes operations and undermines trust within critical industries.

In an era of deepening digital dependence, Davis Lu's conviction reinforces a broader lesson for businesses. Firewalls, encryption standards, and intrusion detection systems remain essential, but the case underscores that the most dangerous risks often come not from faceless external hackers but from individuals with privileged access inside an organization.

Insider threat detection must therefore be treated as a central pillar of an organization's cybersecurity strategy. To minimize exposure, companies should implement continuous monitoring, conduct regular user activity audits, tighten access controls, and adopt role-based privileges.

Beyond technical measures, experts emphasize the importance of building work cultures rooted in accountability, transparency, and communication, which reduce the likelihood that professional grievances escalate into retaliation. Cybersecurity analysts add that companies should prioritize behavioral analytics and employee training programs to detect subtle warning signs before they spiral into damaging actions.

Proactive security means identifying vulnerabilities within the organization and addressing them before they can be exploited, whether by insiders or external adversaries. As technology becomes integrated into every aspect of global operations, resilience depends on a strong security infrastructure backed by sound governance and a culture of vigilance.

The Lu case is a sobering example of the damage one insider can cause, and a reminder that building digital resilience takes foresight, diligence, and a relentless commitment to safeguarding trust.

FreeVPN.One Chrome Extension Caught Secretly Spying on Users With Unauthorized Screenshots

 

Security researchers are warning users against relying on free VPN services after uncovering alarming surveillance practices linked to a popular Chrome extension. The extension in question, FreeVPN.One, has been downloaded over 100,000 times from the Chrome Web Store and even carried a “featured” badge, which typically indicates compliance with recommended standards. Despite this appearance of legitimacy, the tool was found to be secretly spying on its users.  

Researchers at security firm Koi found that FreeVPN.One was taking screenshots just over a second after a webpage loaded and sending them to a remote server. These screenshots were accompanied by the page URL, tab ID, and a unique identifier for each user, effectively allowing the developers to monitor browsing activity in detail. While the extension’s privacy policy referenced an AI threat detection feature that could upload specific data, Koi’s analysis revealed that the extension was capturing screenshots indiscriminately, regardless of user activity or security scanning.

The situation became even more concerning when the researchers found that FreeVPN.One was also collecting geolocation and device information along with the screenshots. Recent updates to the extension introduced AES-256-GCM encryption with RSA key wrapping, making the transmission of this data significantly more difficult to detect. Koi’s findings suggest that this surveillance behavior began in April following an update that allowed the extension to access every website a user visited. By July 17, the silent screenshot feature and location tracking had become fully operational. 
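Cautious users can screen an extension for this class of over-broad capability before trusting it. The sketch below inspects an unpacked extension's manifest.json for risky permission grants; the permission names follow Chrome's Manifest V3 schema, and the list of what counts as risky is a judgment call, not an official taxonomy.

```python
import json
import sys

# Permissions that, in combination with broad host access, let an extension
# watch and capture everything you browse -- the capabilities abused here.
RISKY = {"tabs", "scripting", "webRequest", "desktopCapture", "geolocation"}

def review(manifest_path: str) -> None:
    with open(manifest_path, encoding="utf-8") as f:
        manifest = json.load(f)
    perms = set(manifest.get("permissions", []))
    hosts = set(manifest.get("host_permissions", []))
    flagged = perms & RISKY
    if "<all_urls>" in hosts or any(h.startswith("*://*/") for h in hosts):
        flagged.add("<all_urls> host access")
    for item in sorted(flagged):
        print(f"[!] {item}")
    if not flagged:
        print("no obviously risky permissions found")

if __name__ == "__main__":
    review(sys.argv[1] if len(sys.argv) > 1 else "manifest.json")
```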

When contacted, the developer initially denied the allegations, claiming the screenshots were part of a background feature intended to scan suspicious domains. However, Koi researchers reported that screenshots were taken even on trusted sites such as Google Sheets and Google Photos. Requests for additional proof of legitimacy, such as company credentials or developer profiles, went unanswered. The only trace left behind was a basic Wix website, raising further questions about the extension’s credibility. 

Despite the evidence, FreeVPN.One remains available on the Chrome Web Store with an average rating of 3.7 stars, though its reviews are now filled with complaints from users who learned of the findings. The fact that the extension continues to carry a “featured” label is troubling, as it may mislead more users into installing it.  

The case serves as a stark reminder that free VPN tools often come with hidden risks, particularly when offered through browser extensions. While some may be tempted by the promise of free online protection, the reality is that such tools can expose sensitive data and compromise user privacy. As the FreeVPN.One controversy shows, paying for a reputable VPN service remains the safer choice.

Data Portability and Sovereign Clouds: Building Resilience in a Globalized Landscape

 

Sovereign clouds look increasingly inevitable as organizations face mounting regulatory demands and geopolitical pressures that dictate where their data must be stored. Localized cloud environments are gaining importance because they let enterprises keep sensitive information within specific jurisdictions, satisfying legal frameworks and reducing risk. The success of sovereign clouds, however, hinges on data portability: the ability to transfer information smoothly across systems and locations, which is essential for compliance and long-term resilience.  

Many businesses cannot afford to wait for regulators to impose requirements; they need to proactively adapt. Yet, the reality is that migrating data across hybrid environments remains complex. Beyond shifting primary data, organizations must also secure related datasets such as backups and information used in AI-driven applications. While some companies focus on safeguarding large language model training datasets, others are turning to methods like retrieval-augmented generation (RAG) or AI agents, which allow them to leverage proprietary data intelligence without creating models from scratch. 

Regardless of the approach, data sovereignty is crucial, but it must rest on a foundation of strong data resilience. Global regulators are shaping the way enterprises view data. The European Union, for example, has taken a strict stance through the General Data Protection Regulation (GDPR), which applies EU data protection law to residents’ personal data wherever it is stored or processed. Additional frameworks such as NIS2 and DORA further emphasize risk management and oversight, particularly when third-party providers handle sensitive information.

Governments and enterprises alike are concerned about data moving across borders, which has made sovereign cloud adoption a priority for safeguarding critical assets. Some governments are going a step further by reducing reliance on foreign-owned data center infrastructure and reinvesting in domestic cloud capabilities. This shift ensures that highly sensitive data remains protected under national laws. Still, sovereignty alone is not a complete solution. 

Even when organizations can specify where their data is stored, there is no absolute guarantee it will stay put, and related datasets such as backups or AI training files must be considered just as carefully. Data portability becomes essential to maintaining sovereignty without creating operational bottlenecks. Hybrid cloud adoption offers flexibility, but it also introduces complexity: larger enterprises may need multiple sovereign clouds across regions, each governed by its own data protection regulations. 

While this improves resilience, it also raises the risk of data fragmentation. To succeed, organizations must embed data portability within their strategies, ensuring seamless transfer across platforms and providers. Without this, the move toward sovereign or hybrid clouds could stall. SaaS and DRaaS providers can support the process, but businesses cannot entirely outsource responsibility. Active planning, oversight, and resilience-building measures such as compliance audits and multi-supplier strategies are essential. 

By clearly mapping where data resides and how it flows, organizations can strengthen sovereignty while enabling agility. As data globalization accelerates, sovereignty and portability are becoming inseparable priorities. Enterprises that proactively address these challenges will be better positioned to adapt to future regulations while maintaining flexibility, security, and long-term operational strength in an increasingly uncertain global landscape.

Texas Attorney General Probes Meta AI Studio and Character.AI Over Child Data and Health Claims

 

Texas Attorney General Ken Paxton has opened an investigation into Meta AI Studio and Character.AI over concerns that their AI chatbots may present themselves as health or therapeutic tools while potentially misusing data collected from underage users. Paxton argued that some chatbots on these platforms misrepresent their expertise by suggesting they are licensed professionals, which could leave minors vulnerable to misleading or harmful information. 

The issue extends beyond false claims of qualifications. AI models often learn from user prompts, raising concerns that children’s data may be stored and used for training purposes without adequate safeguards. Texas law places particular restrictions on the collection and use of minors’ data under the SCOPE Act, which requires companies to limit how information from children is processed and to provide parents with greater control over privacy settings. 

As part of the inquiry, Paxton issued Civil Investigative Demands (CIDs) to Meta and Character.AI to determine whether either company is in violation of consumer protection laws in the state. While neither company explicitly promotes its AI tools as substitutes for licensed mental health services, there are multiple examples of “Therapist” or “Psychologist” chatbots available on Character.AI. Reports have also shown that some of these bots claim to hold professional licenses, despite being fictional. 

In response to the investigation, Character.AI emphasized that its products are intended solely for entertainment and are not designed to provide medical or therapeutic advice. The company said it places disclaimers throughout its platform to remind users that AI characters are fictional and should not be treated as real individuals. Similarly, Meta stated that its AI assistants are clearly labeled and include disclaimers highlighting that responses are generated by machines, not people. 

The company also said its AI tools are designed to encourage users to seek qualified medical or safety professionals when appropriate. Despite these disclaimers, critics argue that such warnings are easy to overlook and may not effectively prevent misuse. Questions also remain about how the companies collect, store, and use user data. 

According to their privacy policies, Meta gathers prompts and feedback to improve AI performance, while Character.AI collects identifiers and demographic details that may be used for advertising and other purposes. Whether these practices comply with Texas’ SCOPE Act will likely depend on how easily children can create accounts and how much parental oversight is built into the platforms. 

The investigation highlights broader concerns about the role of AI in sensitive areas such as mental health and child privacy. The outcome could shape how companies must handle data from younger users while limiting the risks of AI systems making misleading claims that could harm vulnerable individuals.

Connex Credit Union Confirms Data Breach Impacting 172,000 Customers

 

Connex Credit Union, headquartered in North Haven, Connecticut, recently revealed that a data breach may have affected around 172,000 of its members. The compromised data includes names, account numbers, debit card information, Social Security numbers, and government-issued identification provided when accounts were opened. The credit union emphasized that there is no indication that customer accounts or funds were accessed during the incident. 

The breach was identified after Connex noticed unusual activity in its digital systems on June 3, prompting an internal investigation. The review indicated that certain files could have been accessed or copied without permission on June 2 and 3. By late July, the credit union had determined which members were potentially affected. To inform customers and prevent fraud, Connex posted a notice on its website warning that scammers might attempt to impersonate the credit union through calls or messages. 

The advisory stressed that Connex would never request PINs, account numbers, or passwords over the phone. To support affected individuals, the credit union set up a toll-free call center and is offering a year of free credit monitoring and identity theft protection through TransUnion’s CyberScout service. Connex also reported the breach to federal authorities, including the National Credit Union Administration, and committed to cooperating fully with law enforcement to hold the attackers accountable. 

This breach is part of a broader trend of cyberattacks on financial institutions. Earlier in 2025, Western Alliance Bank in Phoenix reported a cyber incident that potentially exposed 22,000 customers’ information due to vulnerabilities in third-party file transfer software, which remained undetected for over three months. Regulatory agencies have also been targeted; in April, attackers accessed emails from the Office of the Comptroller of the Currency containing sensitive financial information, prompting banks such as JPMorgan Chase and Bank of America to temporarily halt electronic data sharing. Other credit unions have faced similar incidents. 

In 2024, TDECU in Lake Jackson, Texas, learned it had been affected by the MOVEit file-transfer breach more than a year after it occurred. One of the largest bank breaches in recent memory took place in July 2019, when Capital One was hacked by a former Amazon Web Services employee, compromising the data of 106 million individuals. The company faced an $80 million penalty from the OCC and a $190 million class-action settlement, while the hacker was convicted in 2022 of wire fraud and unauthorized access. 

As cyberattacks become more sophisticated, this incident underscores the importance of vigilance, strong cybersecurity practices, and proactive protection measures for customers and financial institutions alike.

Think Twice Before Uploading Personal Photos to AI Chatbots

 

Artificial intelligence chatbots are increasingly being used for fun, from generating quirky captions to transforming personal photos into cartoon characters. While the appeal of uploading images to see creative outputs is undeniable, the risks tied to sharing private photos with AI platforms are often overlooked. A recent incident at a family gathering highlighted just how easy it is for these photos to be exposed without much thought. What might seem like harmless fun could actually open the door to serious privacy concerns. 

The central issue is a lack of awareness. Most users do not stop to consider where their photos go once uploaded to a chatbot, whether those images could be stored for AI training, or whether they contain identifying details such as house numbers or street signs. Even more concerning is the absence of consent, especially where children are involved. Uploading photos of kids, who cannot approve or refuse, creates ethical and security problems that should not be ignored.  

Photos contain far more than just the visible image. Hidden metadata, including timestamps, location details, and device information, can be embedded within every upload. This information, if mishandled, could become a goldmine for malicious actors. Worse still, once a photo is uploaded, users lose control over its journey. It may be stored on servers, used for moderation, or even retained for training AI models without the user’s explicit knowledge. Just because an image disappears from the chat interface does not mean it is gone from the system.  
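
To see concretely what travels with an upload, a few lines using the open-source exifr parser can dump a photo’s hidden tags in the browser; any EXIF reader exposes similar fields, and the ones printed here are typical examples rather than an exhaustive list.

```typescript
import exifr from "exifr"; // open-source EXIF/metadata parser (npm: exifr)

// Print the hidden metadata embedded in a user-selected photo.
async function inspectPhoto(file: File): Promise<void> {
  // Parses EXIF/GPS blocks; resolves to undefined if none are present.
  const tags = await exifr.parse(file);
  if (!tags) {
    console.log("No metadata found.");
    return;
  }

  // Typical fields: when, where, and on what device the photo was taken.
  console.log("Taken at:", tags.DateTimeOriginal);
  console.log("Device:", tags.Make, tags.Model);
  console.log("Location:", tags.latitude, tags.longitude); // if geotagged
}
```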

One of the most troubling risks is the possibility of misuse, including deepfakes. A simple selfie, once in the wrong hands, can be manipulated to create highly convincing fake content, which could lead to reputational damage or exploitation. 

There are steps individuals can take to minimize exposure. Reviewing a platform’s privacy policy is a strong starting point, as it provides clarity on how data is collected, stored, and used. Some platforms, including OpenAI, allow users to disable chat history to limit training data collection. Additionally, photos can be stripped of metadata using tools like ExifTool or by taking a screenshot before uploading. 
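
On the command line, ExifTool’s `exiftool -all= photo.jpg` rewrites the file with every metadata block removed. In the browser, re-encoding the image through a canvas achieves the same effect, since only pixels are copied; a minimal sketch is below, with the caveat that JPEG re-encoding is slightly lossy.

```typescript
// Re-encoding an image through a canvas copies only the pixels, so EXIF
// blocks (GPS coordinates, timestamps, device info) are left behind.
async function stripMetadata(file: File): Promise<Blob> {
  const bitmap = await createImageBitmap(file); // decodes pixel data only

  const canvas = document.createElement("canvas");
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext("2d")!.drawImage(bitmap, 0, 0);

  // Export a fresh JPEG that carries none of the original metadata.
  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("encoding failed"))),
      "image/jpeg",
      0.92, // quality: re-encoding always trades some fidelity
    ),
  );
}
```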

Consent should also remain central to responsible AI use. Children cannot give informed permission, making it inappropriate to share their images. Beyond privacy, AI-altered photos can distort self-image, particularly among younger users, leading to long-term effects on confidence and mental health. 

Safer alternatives include experimenting with stock images or synthetic faces generated by tools like This Person Does Not Exist. These provide the creative fun of AI tools without compromising personal data. 

Ultimately, while AI chatbots can be entertaining and useful, users must remain cautious. They are not friends, and their cheerful tone should not distract from the risks. Practicing restraint, verifying privacy settings, and thinking critically before uploading personal photos are essential for protecting both privacy and security in the digital age.