Google Launches Gemini AI Across Home and Nest Devices

Google has unveiled its new Gemini-powered smart home lineup and AI strategy, positioning its AI assistant Gemini at the core of refreshed Google Home and Nest devices. This reimagined approach follows Amazon's recent Echo launch, highlighting an intensifying competition in the AI-driven home sector. 

Google’s aim is to extend Gemini’s capabilities beyond just its own hardware, making it available to other device manufacturers, reminiscent of how Android’s open platform fostered an expansive device ecosystem. The company plans to keep innovating with flagship hardware, particularly where Gemini’s potential can shine, while encouraging third-party OEMs and partners to integrate Gemini regardless of price point or form factor.

The new Nest lineup features products like the Nest Cam Outdoor, Nest Cam Indoor, and Nest Doorbell—all updated to leverage Gemini’s intelligence. Additionally, Google teased its next-generation Google Home speaker for spring 2026 and revealed a partnership with Walmart to launch affordable indoor cameras and doorbells under the “onn” brand. 

Notably, Google is prioritizing current device owners by rolling out Gemini features to devices with adequate processing power, using cloud APIs and the Matter smart home standard for broad compatibility. This extends Gemini’s reach to over 800 million devices—including both Google and third-party products—while the company refines experiences before releasing new hardware.

Gemini enhances user interaction by enabling more conversational, contextually aware commands. Users can reference vague details—like a movie or song—and Gemini will intuit the correct response, such as playing music, explaining lyrics, or suggesting content. It can handle more complex household management tasks like creating shopping lists based on recipes, setting nuanced timers, and chaining multiple requests. 

Device naming is now simplified, and Gemini can manage routines, automate energy usage monitoring, and suggest home security setups via the new “Ask Home” feature. These improvements are facilitated by Google’s upgraded Home app, now more stable and powered by Gemini.

The app uses AI to summarize camera activity, describe detected events, and guide users with direct answers and recommendations, streamlining daily home routines. Gemini Live introduces continuous conversational interaction without the need to repeat “Hey Google,” promising a more natural AI experience. 

Google’s toolkit, reference hardware, and SDK support further empower partners and developers, reinforcing its market-wide AI ambitions. The Nest and Walmart devices are available now, while the new Home speaker is due spring 2026.

Meta Overhauls AI Chatbot Safeguards for Teenagers

Meta has announced new artificial intelligence safeguards to protect teenagers following a damaging Reuters investigation that exposed internal company policies allowing inappropriate chatbot interactions with minors. The social media giant is now training its AI systems to avoid flirtatious conversations and discussions about self-harm or suicide with teenage users. 

Background investigation 

The controversy began when Reuters uncovered an internal 200-page Meta document titled "GenAI: Content Risk Standards" that permitted chatbots to engage in "romantic or sensual" conversations with children as young as 13. 

The document contained disturbing examples of acceptable AI responses, including "Your youthful form is a work of art" and "Every inch of you is a masterpiece – a treasure I cherish deeply". These guidelines had been approved by Meta's legal, public policy, and engineering teams, including the company's chief ethicist. 

Immediate safety measures 

Meta spokesperson Andy Stone announced that the company is implementing immediate interim measures while developing more comprehensive long-term solutions for teen AI safety. The new safeguards include training chatbots to avoid discussing self-harm, suicide, disordered eating, and potentially inappropriate romantic topics with teenage users. Meta is also temporarily limiting teen access to certain AI characters that could hold inappropriate conversations.

Some of Meta's user-created AI characters include sexualized chatbots such as "Step Mom" and "Russian Girl," which will now be restricted for teen users. Instead, teenagers will only have access to AI characters that promote education and creativity. The company acknowledged that these policy changes represent a reversal from previous positions where it deemed such conversations appropriate. 

Government response and investigation

The revelations sparked swift political backlash. Senator Josh Hawley launched an official investigation into Meta's AI policies, demanding documentation about the guidelines that enabled inappropriate chatbot interactions with minors. A coalition of 44 state attorneys general wrote to AI companies including Meta, saying they were "uniformly revolted by this apparent disregard for children's emotional well-being". 

Senator Edward Markey has urged Meta to completely prevent minors from accessing AI chatbots on its platforms, citing concerns that Meta incorporates teenagers' conversations into its AI training process. The Federal Trade Commission is now preparing to scrutinize the mental health risks of AI chatbots to children and will demand internal documents from major tech firms including Meta. 

Implementation timeline 

Meta confirmed that the document was "inconsistent with its broader policies" and has since removed the sections allowing chatbots to flirt or engage in romantic roleplay with minors. Company spokesperson Stephanie Otway acknowledged these were mistakes, stating the updates are "already in progress" and the company will "continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI". 

The controversy highlights broader concerns about AI chatbot safety for vulnerable users, particularly as large companies integrate these tools directly into widely-used platforms where the vast majority of young people will encounter them.

India's Biggest Cyber Fraud: Businessman Duped of ₹25 Crore Through Fake Trading App

A Kochi-based pharmaceutical company owner has suffered a loss of ₹25 crore in what is being described as the largest single-person cyber fraud case in India. 

The incident involved a sophisticated online trading scam, executed through a fake trading application that lured the victim with promises of lucrative returns. Despite being an experienced trader, the businessman fell prey to deception after engaging with the fraudulent app for nearly two years.

The scam unfolded over four months, during which the victim was lured by substantial profits displayed on his initial investments. These early gains convinced him of the app’s legitimacy, prompting more substantial investments.

Investigators from the Cyber Cell revealed that the app consistently displayed doubled profits, creating an illusion of credibility and financial success. This psychological manipulation is a common tactic used by cyber fraudsters to build trust and encourage deeper engagement from unsuspecting victims. 

Trouble began when the businessman attempted to withdraw his funds, only to be met with repeated delays and a variety of excuses from the operators of the fake platform. As withdrawal requests were consistently stonewalled, suspicion grew. It was only after persistent failed attempts to access his money that the reality of the fraud became clear to the victim. 

Once the crime was reported, law enforcement acted swiftly. The Indian Cyber Crime Coordination Centre was immediately alerted and forwarded the information to the Thiruvananthapuram Cyber Operations Headquarters. A formal case was registered, and efforts were initiated to freeze the remaining funds before they could be routed to additional accounts.

Investigation revealed that the fraudulent app was under the control of a foreign national, indicating possible international links and making the operation broader and more complex. The case has prompted a larger crackdown on similar cyber threats, with the Cyber Cell widening its probe to trace the perpetrators and prevent further occurrences. 

This incident highlights the growing sophistication of online financial scams in India, emphasizing the need for increased vigilance even among experienced investors. Awareness and prompt reporting remain essential defenses against such evolving cyber threats.

Major Password Managers Leak User Credentials in Unpatched Clickjacking Attacks

Six popular password managers serving tens of millions of users remain vulnerable to unpatched clickjacking flaws that could allow cybercriminals to steal login credentials, two-factor authentication codes, and credit card information. 

Modus operandi

Security researcher Marek Tóth, who presented these findings at DEF CON 33, demonstrated how attackers exploit these vulnerabilities by running malicious scripts on compromised websites. 

The attack works by using opacity settings and overlays to hide password manager autofill dropdown menus while displaying fake elements like cookie banners or CAPTCHA prompts. When users click on these decoy elements, they unknowingly trigger autofill actions that expose sensitive data. 

Tóth developed multiple exploitation variants, including DOM element manipulation techniques and a method where the user interface follows the mouse cursor, making any click trigger data autofill. The researcher created a universal attack script that can identify which password manager a target is using and adapt the attack in real-time. 
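
A minimal TypeScript sketch may make the overlay mechanics clearer. It is illustrative only, not a working exploit: the ".pm-autofill-dropdown" selector and the decoy text are hypothetical stand-ins, since each extension injects its own differently named autofill UI.

```typescript
// Illustrative sketch of the overlay technique (not a working exploit).
// A malicious script on a compromised page hides the password manager's
// injected autofill dropdown and draws a decoy at the same coordinates,
// so the victim's click lands on the invisible dropdown and triggers autofill.

// Hypothetical selector -- real extensions inject differently named elements.
const dropdown = document.querySelector<HTMLElement>(".pm-autofill-dropdown");

if (dropdown) {
  dropdown.style.opacity = "0";                  // invisible, but still on top and clickable
  const rect = dropdown.getBoundingClientRect(); // where the victim will actually click

  const decoy = document.createElement("div");   // fake cookie banner shown to the victim
  decoy.textContent = "Accept all cookies";
  decoy.style.cssText =
    `position:fixed; top:${rect.top}px; left:${rect.left}px; ` +
    `width:${rect.width}px; height:${rect.height}px; z-index:0;`; // rendered behind the dropdown
  document.body.append(decoy);
}
```

Because the click reaches the real (but invisible) dropdown, the browser fills credentials into fields that the attacker's page script can then read.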

Impacted password managers

The vulnerable password managers include: 
  • 1Password 8.11.4.27 
  • Bitwarden 2025.7.0 
  • Enpass 6.11.6 
  • iCloud Passwords 3.1.25
  • LastPass 4.146.3 
  • LogMeOnce 7.12.4 
These services collectively have approximately 40 million users. 

Vendor responses 

Vendor responses have been mixed. 1Password dismissed the report as "out-of-scope/informative," arguing that clickjacking is a general web risk users should mitigate themselves. Similarly, LastPass initially marked the report as "informative" before later acknowledging they're working on fixes. 

Bitwarden downplayed the severity but says it has addressed the issues in version 2025.8.0. LogMeOnce, meanwhile, initially failed to respond to any communication attempts, though it later released an update. Several vendors have successfully implemented fixes, including Dashlane, NordPass, ProtonPass, RoboForm, and Keeper.

Safety measures 

Until patches are available, Tóth recommends that users disable autofill functionality in their password managers and rely on manual copy-paste operations instead. This significantly reduces the attack surface while maintaining password manager security benefits. 

The research highlights ongoing challenges in balancing user convenience with security in password management tools, particularly regarding browser extension vulnerabilities.

New Phishing Scam Uses Japanese Character to Perfectly Mimic Legitimate URLs

Cybersecurity researchers have recently flagged a highly sophisticated phishing campaign that leverages a unique tactic: the use of the Japanese hiragana character “ん” to mimic the appearance of a forward slash (“/”) in website URLs. The technique is especially effective in certain fonts and browsers, making phony URLs appear nearly identical to legitimate ones and tricking even vigilant internet users. 

The campaign’s primary target is customers of the travel platform Booking.com. Instead of the real URL containing forward slashes, attackers craft addresses using the “ん” character, such as “https://account.booking[.]comんdetailんrestric-access.www-account-booking[.]comんen/”. At first glance, these URLs look authentic, but they redirect users to fraudulent domains controlled by cybercriminals.

The malicious strategy starts with phishing emails containing these deceptive links. When clicked, users are sent to sites that deliver MSI installer files, which may secretly install malware like information stealers or remote access trojans on victim devices. 

This approach is part of a broader trend known as homograph attacks. Cybercriminals exploit visual similarities between characters from different Unicode sets, using them to spoof trusted domains. Previously, attackers have used Cyrillic letters to impersonate Latin ones; the use of the Japanese “ん” adds a clever new layer to these deceptions. 
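
Because standard URL parsers normalise Unicode hostnames to punycode ("xn--" labels), a codepoint-level check can catch what the eye misses. Below is a minimal sketch in TypeScript (Node.js), using the campaign's URL with the defanging brackets removed; it flags any hostname that is non-ASCII or already punycode-encoded.

```typescript
// Flag URLs whose hostname contains non-ASCII characters or an IDNA
// (punycode) label. The WHATWG URL parser converts Unicode hostnames to
// "xn--" form, so a spoofed hostname betrays itself after parsing.
function looksLikeHomograph(link: string): boolean {
  const host = new URL(link).hostname;
  return host.includes("xn--") || /[^\x00-\x7F]/.test(host);
}

console.log(looksLikeHomograph("https://account.booking.com/detail")); // false

// "ん" is not a real path separator, so everything before the first true
// slash is one long, attacker-controlled hostname:
console.log(
  looksLikeHomograph(
    "https://account.booking.comんdetailんrestric-access.www-account-booking.comんen/",
  ),
); // true
```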

According to the 2025 Phishing Trends Report, homograph attacks are evolving and becoming harder to filter out, as criminals strive to defeat security systems and bypass standard defenses. 

Safety tips 

Security experts recommend multiple protective strategies. Users should hover over links to reveal actual destination URLs, though this has limitations with sophisticated character spoofing. Modern browsers like Chrome have implemented protections against many homograph attacks, but visual URL inspection alone is insufficient. 

The most effective defense combines updated security software, email filtering, and comprehensive user education about evolving attack vectors. This campaign demonstrates how cybercriminals continuously adapt their techniques to exploit even subtle visual ambiguities in digital communication systems. 

Ultimately, this new phishing campaign highlights cybercriminals’ constant creativity in exploiting even the smallest ambiguities in digital communication. As attackers continue to adapt their methods, organizations and individuals need to stay aware of these rapidly advancing attack vectors and double down on multi-layered security measures.

Facial Recognition's False Promise: More Sham Than Security

Despite the rapid integration of facial recognition technology (FRT) into daily life, its effectiveness is often overstated, creating a misleading picture of its true capabilities. While developers frequently tout accuracy rates as high as 99.95%, these figures are typically achieved in controlled laboratory settings and fail to reflect the system's performance in the real world.

The discrepancy between lab testing and practical application has led to significant failures with severe consequences. A prominent example is the wrongful arrest of Robert Williams, a Black man from Detroit who was misidentified by police facial recognition software based on a low-quality image.

This is not an isolated incident; there have been at least seven confirmed cases of misidentification from FRT, six of which involved Black individuals. Similarly, an independent review of the London Metropolitan Police's use of live facial recognition found that out of 42 matches, only eight were definitively accurate.

These real-world failures stem from flawed evaluation methods. The benchmarks used to legitimize the technology, such as the US National Institute of Standards and Technology's (NIST) Face Recognition Technology Evaluation (FRTE), do not adequately account for real-world conditions like blurred images, poor lighting, or varied camera angles. Furthermore, the datasets used for training these systems are often not representative of diverse demographics, which leads to significant biases.

The inaccuracies of FRT are not evenly distributed across the population. Research consistently shows that the technology has higher error rates for people of color, women, and individuals with disabilities. For example, one of Microsoft’s early models had a 20.8% error rate for dark-skinned women but a 0% error rate for light-skinned men. This systemic bias means the technology is most likely to fail the very communities that are already vulnerable to over-policing and surveillance.

Despite these well-documented issues, FRT is being widely deployed in sensitive areas such as law enforcement, airports, and retail stores. This raises profound ethical concerns about privacy, civil rights, and due process, prompting companies like IBM, Amazon, and Microsoft to restrict or halt the sale of their facial recognition systems to police departments. The continued rollout of this flawed technology suggests that its use is more of a "sham" than a reliable security solution, creating a false sense of safety while perpetuating harmful biases.

Indian Government Flags Security Concerns with WhatsApp Web on Work PCs

The Indian government has issued a significant cybersecurity advisory urging citizens to avoid using WhatsApp Web on office computers and laptops, highlighting serious privacy and security risks that could expose personal information to employers and cybercriminals. 

The Ministry of Electronics and Information Technology (MeitY) released this public advisory through its Information Security Education and Awareness (ISEA) team, warning that while accessing WhatsApp Web on office devices may seem convenient, it creates substantial cybersecurity vulnerabilities. The government describes the practice as a "major cybersecurity mistake" that could lead to unauthorized access to personal conversations, files, and login credentials. 

According to the advisory, IT administrators and company systems can gain access to private WhatsApp conversations through multiple pathways, including screen-monitoring software, malware infections, and browser hijacking tools. The government warns that many organizations now view WhatsApp Web as a potential security risk that could serve as a gateway for malware and phishing attacks, potentially compromising entire corporate networks. 

Specific privacy risks identified 

The advisory outlines several "horrors" of using WhatsApp on work-issued devices. Data breaches represent a primary concern, as compromised office laptops could expose confidential WhatsApp conversations containing sensitive personal information. Additionally, using WhatsApp Web on unsecured office Wi-Fi networks creates opportunities for malicious actors to intercept private data.

Perhaps most concerning, the government notes that even using office Wi-Fi to access WhatsApp on personal phones could grant companies some level of access to employees' private devices, further expanding the potential privacy violations. The advisory emphasizes that workplace surveillance capabilities mean employers may monitor browser activity, creating situations where sensitive personal information could be accessed, intercepted, or stored without employees' knowledge. 

Network security implications

Organizations increasingly implement comprehensive monitoring systems on corporate devices, making WhatsApp Web usage particularly risky. The government highlights that corporate networks face elevated vulnerability to phishing attacks and malware distribution through messaging applications like WhatsApp Web. When employees click malicious links or download suspicious attachments through WhatsApp Web on office systems, they could inadvertently provide hackers with backdoor access to organizational IT infrastructure. 

Recommended safety measures

For employees who must use WhatsApp Web on office devices, the government provides specific precautionary guidelines. Users should immediately log out of WhatsApp Web when stepping away from their desks or finishing work sessions. The advisory strongly recommends exercising caution when clicking links or opening attachments from unknown contacts, as these could contain malware designed to exploit corporate networks. 

Additionally, employees should familiarize themselves with their company's IT policies regarding personal application usage and data privacy on work devices. The government emphasizes that understanding organizational policies helps employees make informed decisions about personal technology use in professional environments. 

This advisory represents part of broader cybersecurity awareness efforts as workplace digital threats continue evolving, with the government positioning employee education as crucial for maintaining both personal privacy and corporate network security.

Security Flaws Found in Police and Military Radio Encryption

Cybersecurity experts have uncovered significant flaws in encryption systems used by police and military radios globally, potentially allowing malicious actors to intercept secure communications. 

Background and context 

In 2023, Dutch security researchers from Midnight Blue unearthed an intentional backdoor in TETRA (Terrestrial Trunked Radio) encryption algorithms used in radios deployed by law enforcement, intelligence agencies, and military organizations worldwide. This discovery led the European Telecommunications Standards Institute (ETSI) to recommend users implement additional end-to-end encryption (E2EE) for sensitive communications. 

The same research team has now identified that at least one version of the TCCA-endorsed E2EE solution contains similar flaws. The encryption algorithm analyzed starts with a 128-bit key but reduces it to just 56 bits before encrypting data, making it vulnerable to unauthorized access. Additionally, researchers discovered a second vulnerability that could allow attackers to send deceptive messages or replay legitimate communications.

The TETRA standard includes four encryption algorithms (TEA1, TEA2, TEA3, TEA4) designed for different security levels based on the target customer. All use 80-bit keys, but TEA1 was found to reduce to just 32 bits, enabling researchers to crack it in under a minute. The key reduction appears to be implemented to comply with export control regulations for encryption sold to customers outside Europe. 
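
The algorithms themselves are not public, so the TypeScript sketch below is purely illustrative of the reported shape of the flaw: key material is discarded before encryption, leaving a far smaller space for attackers to search. The reduction step shown is a hypothetical stand-in, not the real cipher.

```typescript
// Illustrative only -- not the actual TETRA or TCCA E2EE algorithm.
// The reported weakness has this shape: a 128-bit key is compressed to
// 56 bits of effective key material before any data is encrypted.
function reduceKey(key128: Uint8Array): Uint8Array {
  if (key128.length !== 16) throw new Error("expected a 128-bit (16-byte) key");
  return key128.slice(0, 7); // hypothetical: keep only 7 bytes (56 bits)
}

// The security consequence is the size of the brute-force search space:
console.log("advertised strength: 2^128 keys");
console.log(`effective strength:  2^56 = ${2 ** 56} keys`); // feasible for modern hardware
```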

Global impact

TETRA radios are extensively employed by law enforcement agencies in Belgium, Scandinavian nations, Eastern European countries, and Middle Eastern nations including Iran, Iraq, Lebanon, and Syria. Defense ministries in Bulgaria, Kazakhstan, and Syria, along with intelligence services from Poland, Finland, Lebanon, and Saudi Arabia also employ these systems. However, it remains unclear how many entities use the vulnerable E2EE implementation.

Disclosure challenges

The research reveals a concerning lack of transparency regarding security limitations. While some manufacturers include vulnerability information in brochures, others only address it in internal communications or don't mention it at all. A leaked product bulletin indicated that encryption key length is "subject to export control regulations," but it's uncertain whether end users are properly informed about potential security risks.

The findings will be presented at the Black Hat security conference, highlighting ongoing challenges in securing critical communications infrastructure used by law enforcement and military organizations worldwide.

Millions Face Potential Harm After Experts Uncovered a Vast Network of 5,000+ Fake Pharmacy Sites

Security experts have exposed "PharmaFraud," a criminal network of more than 5,000 fraudulent online pharmacies. The operation puts millions of consumers at risk by selling unsafe counterfeit medications while also stealing their private data. 

The fraudulent campaign mimics legitimate online pharmacies and specifically targets individuals seeking discreet access to medications such as erectile dysfunction treatments, antibiotics, steroids, and weight-loss drugs. What makes this operation particularly dangerous is its use of advanced deception techniques, including AI-generated health content, fabricated customer reviews, and misleading advertisements to establish credibility with potential victims. 

These sites are designed to circumvent basic security indicators by omitting legitimate business credentials and requiring payments through cryptocurrency, which makes transactions virtually untraceable. The operation extends beyond simply selling fake drugs—it actively harvests sensitive medical information, personal details, and financial data that can be exploited in subsequent fraud schemes. 

Health and financial risks

Even when products are delivered, there's no guarantee of safety or effectiveness—medications may be expired, contaminated, or completely fake, creating health risks that extend far beyond financial losses. The report highlights that these fraudulent sites often bypass prescription requirements entirely, allowing dangerous medications to reach consumers without proper medical oversight. 

The broader cyberthreat landscape has seen escalation, with financial scams increasing by 340% in just three months, often using fake advertisements and chatbot interfaces to impersonate legitimate legal or investment services. Tech support scams appearing as browser pop-ups have also risen sharply, luring users into contacting fraudulent help services.

Safety tips 

To avoid these scams, consumers should be vigilant about several key warning signs: 

  • Websites that offer prescription medications without requiring valid prescriptions.
  • Missing or unclear contact information and business registration details.
  • Absence of verifiable physical addresses.
  • Unusually low prices and limited-time offers.
  • Payment requests specifically for cryptocurrency.

Essential security measures include verifying that websites use secure checkout processes with HTTPS protocols and trusted payment gateways. Users should also deploy antivirus software to detect malware that may be embedded in fraudulent medical sites, enable firewalls to block suspicious traffic from known scam domains, and install endpoint protection across multiple devices for comprehensive security. 
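
As a concrete example of the HTTPS advice, here is a small TypeScript (Node.js) sketch that checks whether a URL uses HTTPS and whether its TLS certificate validates against the system trust store. The domain shown is a placeholder, and a passing check is only one signal among many: scam sites can hold valid certificates too.

```typescript
import https from "node:https";

// Check that a site uses HTTPS and presents a certificate the system
// trust store accepts. This does NOT prove a pharmacy is legitimate.
function checkSite(url: string): void {
  const { protocol, hostname } = new URL(url);
  if (protocol !== "https:") {
    console.log(`${url}: not HTTPS -- never enter payment details here`);
    return;
  }
  // The default agent rejects expired, self-signed, or mismatched certificates.
  https
    .get(url, (res) => {
      console.log(`${hostname}: certificate OK, HTTP status ${res.statusCode}`);
      res.resume(); // drain the response so the socket closes
    })
    .on("error", (err) => console.log(`${hostname}: TLS check failed (${err.message})`));
}

checkSite("https://example.com"); // placeholder domain for illustration
```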

Consumers should maintain healthy skepticism toward unsolicited health advice, product reviews, or miracle cure claims found through advertisements, emails, or social media links. When in doubt, consumers should verify pharmacy legitimacy through official regulatory channels before sharing any personal or financial information.

Here's Why We Need Child Influencer Laws in a Monetised Content Society

Safeguarding children who are featured as influencers or content creators online has become an increasingly urgent concern in recent years. U.S. child labor laws like the Coogan Law were designed to protect child actors, but the rise of social media has created an environment where many minors—sometimes as family breadwinners—are now regularly producing monetized content. This shift raises new legal and ethical questions regarding consent, financial exploitation, and the long-term impact on children’s wellbeing. 

Recent popular documentaries, such as Hulu’s "Devil in the Family" and Netflix’s "Bad Influence," have brought to light extreme cases of abuse and exploitation involving child influencers. These shows highlight not only physical and emotional abuse but also the dangers posed when children’s most private moments are shared for profit. Central concerns include whether children can meaningfully consent to being featured, how difficult it is for them to refuse their parents, and who ultimately controls the digital footprint these young people accumulate. 

State legislatures are starting to take action in response to these harms. In 2023, Illinois took the lead by amending its child labor laws to define vlogging as work, requiring parents to document their children's participation in content and to set aside a percentage of the resulting revenue in trust funds for the child. Other states, including Minnesota, Montana, California, and Utah, have enacted similar laws with unique provisions. For example, Minnesota prohibits children under 14 from engaging in content creation as labor, while Utah only mandates compensation for children once families earn a threshold amount from content. 

A key feature of some new state laws is the “right to be forgotten,” allowing individuals who were featured as minors to have their content removed later in life. While this empowers former child influencers, it can sometimes conflict with law enforcement needs for evidence. Even so, these laws mark important progress toward recognizing and addressing the specific risks faced by children in the influencer economy, mainly by treating online content creation as work and prioritizing financial safeguards. 

However, child experts stress that legislation alone cannot solve all problems associated with child influencing. Effective protection requires a collaborative approach: tech platforms should enforce clear and accessible privacy options, families must educate themselves and respect children’s right to consent, and policymakers should continue to advocate for balanced regulations. Ultimately, safeguarding the emotional and psychological wellbeing of child influencers—and considering the lasting effects of exposing personal lives online—must remain the top priority.

FBI Warns Chrome Users Against Downloading Unofficial Updates

If you use Windows, chances are Chrome is your default browser. Despite Microsoft's ongoing efforts to lure users to Edge and the rising threat of AI browsers, Google's browser remains dominant. However, Chrome is a victim of its own success. Because attackers know you are likely to have it installed, it is the ideal entry point for them to gain access to your PC and your data. 

That is why you are seeing a series of zero-day alerts and emergency updates. This is also why the FBI is warning about the major threat posed by fraudulent Chrome updates. As part of the "ongoing #StopRansomware effort to publish advisories for network defenders that detail various ransomware variants and ransomware threat actors," the FBI and CISA, America's cyber defence agency, have issued their latest warning. 

The latest advisory addresses the recent rise in Interlock ransomware attacks. And, while the majority of the advice is aimed at individuals in charge of securing corporate networks and enforcing IT policies, it also includes a caution for PC users. Ransomware assaults require an entry point, or "initial access." And if you have a PC (or smartphone) connected to your employer's network, you are affected. The advisory also recommends that organisations "train users to spot social engineering attempts."

In the case of Interlock, two of these initial access methods leverage the same lures that cybercriminals employ to target your personal accounts, as well as the data and security credentials on your own devices. You should be looking out for these anyway. One technique is ClickFix, which is easy to recognise once you know it: a notice or popup encourages you to paste copied content into a Windows command prompt or Run dialog and execute it, typically by impersonating a technical issue, a secure website, or a file that you need to open. Any such instruction is always an attack and should be ignored. 
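
For readers who want to see why ClickFix works, here is a deliberately harmless TypeScript sketch of the lure's structure (the copied command is a placeholder): the page stuffs the clipboard, then coaches the victim through pasting and running it.

```typescript
// Sketch of the ClickFix pattern with a harmless placeholder command.
// A fake "verification" page silently copies a command to the clipboard,
// then instructs the victim to paste and run it -- that paste-and-run
// step, performed by the victim, is the actual attack.
const button = document.createElement("button");
button.textContent = "I am not a robot -- click to verify";
button.onclick = async () => {
  await navigator.clipboard.writeText('echo "a real lure would copy a malicious command here"');
  // The social-engineering step, dressed up as troubleshooting:
  alert("Verification failed. Press Win+R, press Ctrl+V, then press Enter.");
};
document.body.append(button);
```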

Fake Chrome installs and updates have become commonplace on both Android smartphones and Windows PCs. As with ClickFix, the guidance is explicit: never use links in emails or texts for upgrades or new installs, and always get updates and programs from official websites or stores. Keep in mind that Chrome downloads updates automatically and prompts you to restart your browser to complete the installation. Updates come to you; you never need to search for them or click on random links.

World Leaks Outfit Linked to Dell Test Lab Intrusion

Dell Technologies has acknowledged a serious security compromise affecting its Customer Solution Centers platform, the latest high-profile intrusion by the World Leaks extortion outfit. 

The breach occurred earlier this month and targeted Dell's isolated demonstration environment, which is designed to showcase commercial solutions to enterprise customers. However, the company says that critical customer data and operating systems remain secure. 

The attack targeted Dell's Customer Solution Centers infrastructure, a controlled environment used for product presentations and proof-of-concept testing for commercial users. Threat actors were able to breach this platform, which follows stringent network segmentation guidelines to keep it isolated from production systems, according to Dell's official statement. 

The platform "is intentionally separated from customer and partner systems, as well as Dell's networks and is not used in the provision of services to Dell customers," according to Dell, which underlined the purposeful isolation of the compromised environment. Multiple isolation levels and clear warnings that forbid users from uploading private or sensitive data to the demonstration environment are features of the company's security architecture. 

The breach investigation discovered that the stolen data mostly consisted of fake test information, publicly available datasets used for demonstrations, Dell scripts, system data, and testing results. The only authentic data exposed appears to be an out-of-date contact list with little operational value, severely limiting the possible impact on Dell's company operations and customer relationships. 

Security review 

Reports suggest that Dell's security response demonstrates how well its multi-layered defence architecture can limit the potential harm caused by advanced cyberattacks. While confirming that partner systems, production networks, and customer data repositories were unaffected by the incident, the company's security team is still investigating the breach vectors. 

The breach's limited scope reflects Dell's strong data management processes and network segmentation strategies, which effectively prevented lateral movement into vital company systems. Dell's practice of using synthetic data for demonstrations was critical in limiting the potential damage, as attackers accessed fabricated information rather than sensitive consumer or company data.

This incident shows the expanding landscape of cyber threats, as attackers increasingly target demonstration and testing environments as potential entry points into larger corporate networks, making robust security architecture vital for organisational protection.

Patient Care Technology Disruptions Linked With the CrowdStrike Outage, Study Finds

A little more than a year ago, nearly 8.5 million Windows-based IT systems went down due to a simple error made during a routine software update. Computers were unable to reboot for several hours due to a bug from CrowdStrike, a cybersecurity business whose products are used to detect and respond to security attacks. Many of the systems needed further manual patches, which prolonged the outage.

The estimated financial toll? Anywhere between $5 billion and $10 billion for Fortune 500 firms – and close to $2 billion for the healthcare sector specifically.

A new report reveals that the negative repercussions for healthcare organisations have gone far beyond the financial. A study published in JAMA Network Open by the University of California San Diego found that the incident triggered measurable disruptions in a large proportion of US hospitals, including technical issues that affected basic operations, research activities, and direct patient care. The researchers discovered that immediately following the CrowdStrike update on July 19, 759 hospitals (out of 2232 with available data) had measurable service disruptions. That represents more than one-third of the hospitals studied.

Of a total of 1098 service outages across those organisations, 21.8% were patient-facing and had a direct impact on patient care. Just over 15% were relevant to healthcare operations, and 5.3% affected research activities. The remaining 57% were classified as either irrelevant or unknown. 

“Patient-facing services spanned imaging platforms, prehospital medicine health record systems, patient transfer portals, access to secure documentation, and staff portals for viewing patient details,” the researchers explained. “In addition to staff portals, we saw outages in patient access platforms across diverse hospital systems; these platforms, when operating as usual, allow patients to schedule appointments, contact health care practitioners, access laboratory results, and refill prescriptions.” 

Additionally, some hospitals experienced outages in laboratory information systems (LIS), behavioural health apps, and patient monitoring systems like foetal monitors and cardiac telemetry devices. Software in development or pre-deployment stages, informational pages, educational resources for medical and nursing students, or donation pages for institutions were primarily impacted by the outages classified as irrelevant or unknown.

While the majority of hospital services were restored within six hours, 3.9% of hospitals had outages lasting longer than 48 hours. Outages lasting more than two full days were most common in hospitals in South Carolina, Maryland, and New Jersey, while Southern US organizations—including those in Tennessee, North Carolina, Louisiana, Alabama, Texas, and Florida—were among the quickest to recover.

The incident served as a stark reminder that human error is and always will be a serious threat to even the most resilient-seeming technologies, while also highlighting the extraordinarily fragile nature of the modern, hyperconnected healthcare ecosystem. CrowdStrike criticised the UCSD research methods and findings, but it also acknowledged and apologised to its customers and other impacted parties for the disruption and promised to be focused on enhancing the resilience of its platform.

Healthcare Firms Face Major Threats from Risk Management and Legacy Tech, Report Finds

With healthcare facilities scrambling to pinpoint and address their top cyber threats, Fortified Health Security's report provides some guidance on where to begin. The report identifies five major security gaps in healthcare organisations: inadequate asset inventories, a lack of unified risk management strategies, insufficient focus on supply-chain vulnerabilities, a preference for installing new technology over maintaining legacy systems, and poor employee training.

Major cyberattacks in recent years have demonstrated how these threats are linked. Weak supply-chain oversight is an especially critical issue given the interconnected framework of the healthcare ecosystem, which includes hospitals, pharmacies, and specialty-care institutions.

The 2024 Change Healthcare hack highlighted the industry's reliance on a few obscure but ubiquitous vendors. Outdated asset inventories exacerbate these flaws, making it more difficult to repair the damage after a supply-chain attack. And these attacks frequently target the very legacy technologies that have been overlooked in favour of new products.

While securing old systems remains a persistent challenge for healthcare organisations, Fortified discovered that it was the most significant area for improvement in the previous year, followed by recovery process improvements, response planning, post-incident communications, and threat analysis maturity.

Identity management, risk assessment maturity, and leadership involvement were further areas that needed improvement. The first of these is particularly critical, since many attacks start with credentials that have been stolen or falsified. 

A spokesperson stated that Fortified's study is based on client interactions, including incident engagements and security ratings derived from the Cybersecurity Framework, that took place between 2023 and June 2025. All of Fortified's clients are in North America, ranging from major university medical centres and integrated delivery networks to small community hospitals.

Online Criminals Steal $500K Crypto Via Malicious AI Browser Extension

A Russian blockchain engineer lost over $500,000 worth of cryptocurrencies in a sophisticated cyberattack, highlighting the persisting and increasing threats posed by hostile open-source packages. Even seasoned users can be duped into installing malicious software by attackers using public repositories and ranking algorithms, despite the developer community's growing knowledge and caution.

The incident was discovered in June 2025, when the victim, an experienced developer who had recently reinstalled his operating system and only employed essential, well-known applications, noticed his crypto assets had been drained, despite rigorous attention to cybersecurity. 

The researchers linked the breach to a Visual Studio Code-compatible extension called "Solidity Language" for the Cursor AI IDE, a productivity-boosting tool for smart contract developers. The extension, published via the Open VSX registry, masqueraded as a legitimate syntax-highlighting tool but was actually a vehicle for remote code execution. After installation, the rogue extension ran a JavaScript file called extension.js, which connected to a malicious web server to download and run PowerShell scripts. 

These scripts, in turn, installed the genuine remote management tool ScreenConnect, allowing the perpetrators to maintain remote access to the compromised PC. The attackers used this access to execute further VBScripts, which delivered additional payloads such as the Quasar open-source backdoor and a stealer module capable of syphoning credentials and wallet passphrases from browsers, email clients, and cryptocurrency wallets. 

The masquerade was effective: the malicious extension appeared near the top of search results in the extension marketplace, thanks to a ranking mechanism that prioritised recency and perceived activity over plain download counts. The attackers also plagiarised descriptions from legitimate items, thus blurring the distinction between genuine and fraudulent offerings. When the bogus extension failed to deliver the promised capabilities, the user concluded it was a glitch, allowing the malware to remain undetected. 

In an additional twist, after the malicious item was removed from the store, the threat actors swiftly uploaded a new clone called "solidity," employing advanced impersonation techniques. The malicious publisher's name differed by only one character: an uppercase "I" instead of a lowercase "l," a discrepancy nearly impossible to spot given typical font rendering. The bogus extension's download count was artificially inflated to two million in a bid to outrank the real program, making the correct choice difficult for users.
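
A codepoint-level comparison exposes the substitution that font rendering hides. The publisher names in the TypeScript sketch below are hypothetical stand-ins, not the actual account names:

```typescript
// Hypothetical names: an uppercase "I" (U+0049) stands in for a lowercase
// "l" (U+006C). Near-identical visually in many fonts, but trivially
// distinguishable by comparing codepoints.
const genuine = "solidity-labs";  // contains a lowercase l
const impostor = "soIidity-labs"; // contains an uppercase I

console.log(genuine === impostor); // false
for (let i = 0; i < genuine.length; i++) {
  if (genuine[i] !== impostor[i]) {
    const a = genuine.codePointAt(i)!.toString(16).padStart(4, "0").toUpperCase();
    const b = impostor.codePointAt(i)!.toString(16).padStart(4, "0").toUpperCase();
    console.log(`index ${i}: U+${a} ("${genuine[i]}") vs U+${b} ("${impostor[i]}")`);
  }
}
// -> index 2: U+006C ("l") vs U+0049 ("I")
```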

The effort did not end there; similar attack tactics were discovered in further malicious packages on both the Open VSX registry and npm, which targeted blockchain developers via extensions and packages with recognisable names. Each infection chain followed a well-known pattern: executing PowerShell scripts, downloading further malware, and communicating with attacker-controlled command-and-control servers. This incident highlights the ongoing threat of supply-chain attacks in the open-source ecosystem.

Major Breach at Medical Billing Giant Leaks Data of 5.4 Million People

Episource, the medical billing behemoth, has warned millions of Americans that a hack earlier this year resulted in the theft of their private and medical data. According to a listing with the United States Department of Health and Human Services, one of the year's largest healthcare breaches affects around 5.4 million people. 

Episource, which is owned by Optum, a subsidiary of health insurance giant UnitedHealth Group, offers billing adjustment services to doctors, hospitals, and other healthcare-related organisations. To process claims through patients' health insurance, the company handles large amounts of personal and medical data.

In notices filed in California and Vermont on Friday last week, Episource stated that a criminal was able to "see and take copies" of patient and member data from its systems during the weeklong breach that ended on February 6. 

Private information stolen includes names, postal and email addresses, and phone numbers, as well as protected health data such as medical record numbers and information on doctors, diagnoses, drugs, test results, imaging, care, and other treatments. The stolen data also includes health insurance information, such as health plans, policies, and member numbers. 

Episource would not elaborate on the nature of the issue, but Sharp Healthcare, one of the organisations that worked with Episource and was impacted by the intrusion, notified its clients that the Episource hack was triggered by ransomware. This is the latest cybersecurity incident to affect UnitedHealth in recent years.

Change Healthcare, one of the top companies in the U.S. healthcare industry, which conducts billions of health transactions each year, was attacked by a ransomware gang in February 2024, resulting in the theft of personal and health information for over 190 million Americans. The cyberattack resulted in the largest healthcare data breach in US history. Several months later, UnitedHealth's Optum division exposed to the internet an internal chatbot used by staff to enquire about claims.

Axis Max Life Cyberattack: A Warning to the Indian Insurance Sector

On July 2, 2025, Max Financial Services revealed a cybersecurity incident targeting its subsidiary, Axis Max Life Insurance, India's fifth-largest life insurer. This incident raises severe concerns regarding data security and threat detection in the Indian insurance sector. 

The breach was discovered by an unknown third party who notified Axis Max Life Insurance of the data access, while exact technical specifics are still pending public release. In response, the company started: 

  • Evaluation of internal security 
  • Log analysis 
  • Consulting with cybersecurity specialists for investigation and remediation 

Data leaked during the breach 

The firm acknowledged that some client data could have been accessed, but no specific data types or quantities were confirmed at the time of the report. Given the sensitive nature of insurance data, the exposed data could include: 

  • Personally identifiable information (PII). 
  • Financial/insurance policy data. 
  • Contact and health information (common for life insurers). 

This follows a recent trend of PII-focused assaults on Indian insurers (e.g., Niva Bupa, Star Health, HDFC Life), indicating an increased threat to consumer data. 

Key takeaways

Learning of a breach from an anonymous third party constitutes a serious failure in internal threat identification and monitoring. Implement real-time threat detection across endpoints, servers, and cloud platforms with SIEM, UEBA, and EDR/XDR to ensure that the organisation identifies breaches before external actors do. 

Agents, partners, and tech vendors are frequently included in insurance ecosystems, with each serving as a possible point of compromise. Extend Zero Trust principles to all third-party access, requiring tokenised, time-limited access and regular security evaluations of suppliers with data credentials. 

Mitigation tips 

  • Establish strong data inventory mapping and access logging, particularly in systems that store personally identifiable information (PII) and financial records. 
  • Have a pre-established IR crisis communication architecture that is linked with legal, regulatory, and consumer response channels that can be activated within hours. 
  • Continuous vulnerability scanning, least privilege policies, and red teaming should be used to identify exploitable holes at both the technical and human layers. 
  • Employ continuous security education, mandate incident reporting processes, and use behavioural monitoring to detect policy violations or insider abuse early.

Deepfakes Explained: How They Operate and How to Safeguard Yourself

In May of this year, an anonymous person called and texted elected lawmakers and business executives pretending to be a senior White House official. U.S. senators were among the recipients who believed they were speaking with White House chief of staff Susie Wiles. In reality, though, it was a phoney. 

The scammer employed AI-generated deepfake software to replicate Wiles' voice. Such software is inexpensive and easily accessible, and needs only a public speech clip to imitate its target convincingly. 

Why are deepfakes so convincing? 

Deepfakes are alarming because of how authentic they appear. AI models can analyse public photographs or recordings of a person (for example, from social media or YouTube) and then create a fake that mimics their face or tone very accurately. As a result, many people overestimate their ability to detect fakes. In an iProov poll, 43% of respondents stated they couldn't tell the difference between a real video and a deepfake, and nearly one-third had no idea what a deepfake was, highlighting a vast pool of potential victims.

Deepfakes rely on trust: the victim recognises a familiar face or voice, and alarms do not sound. These scams also rely on haste and secrecy (for example, 'I need this wire transfer now—do not tell anyone'). When emotional manipulation is combined with visual and auditory realism, it is no surprise that even professionals have been duped. In one widely reported case, a finance employee was deceived into transferring $25 million after a video call with deepfaked colleagues; he noticed something odd—the call stopped abruptly, and he never communicated directly with colleagues afterwards—but only realised it was a scam after the money was stolen. 

Stay vigilant 

Given the difficulty in visually recognising a sophisticated deepfake, the focus switches to verification. If you receive an unexpected request by video call, phone, or voicemail, especially if it involves money, personal data, or anything high-stakes, take a step back. Verify the individual's identity using a separate channel.

For example, if you receive a call that appears to be from a family member in distress, hang up and call them back at their known number. If your supervisor requests that you buy gift cards or transfer payments, confirm in person or through an official company channel. This is neither impolite nor paranoid; rather, it is an essential precaution today. 

Create secret safewords or verification questions with loved ones for emergencies (something a deepfake impostor would not know). Be wary of what you post publicly. If possible, limit the amount of high-quality videos or voice recordings you provide, as these are used to design deepfakes.

2.2 Million People Impacted by Ahold Delhaize Data Breach

Ahold Delhaize, the Dutch grocery company, reported this week that a ransomware attack on its networks last year resulted in a data breach that affected more than 2.2 million customers. 

The cybersecurity breach was discovered in November 2024, when numerous US pharmacies and grocery chains controlled by Ahold Delhaize reported network troubles. The incident affected Giant Food pharmacies, Hannaford supermarkets, Food Lion, The Giant Company, and Stop & Shop.

In mid-April 2025, the Inc Ransom ransomware group claimed responsibility for the attack. Shortly after, Ahold Delhaize acknowledged that the hackers had probably stolen data from some of its internal business systems.

Since then, Ahold Delhaize has determined that personal data was compromised, and those affected are currently being notified. The stolen files included internal employment records for both current and former Ahold Delhaize USA businesses. The organization told the Maine Attorney General’s Office that 2,242,521 people are affected.

The compromised information differs from person to person; it may include name, contact information, date of birth, Social Security number, passport number, driver's license number, financial account information, health information, and employment-related information. Affected consumers will receive free credit monitoring and identity protection services for two years. 

The attackers published around 800 GB of data allegedly stolen from Ahold Delhaize on their Tor-based leak website, indicating that the corporation did not pay a ransom. Inc Ransom claimed to have stolen 6 TB of data from the company.

Cyberattacks on the retail industry, notably supermarkets, have increased in recent months. In April, cybercriminals believed to be affiliated with the Scattered Spider group targeted UK retailers Co-op, Harrods, and M&S. 

Earlier this month, United Natural Foods (UNFI), the primary distributor for Amazon's Whole Foods and many other North American grocery shops, was targeted by a hack that disrupted company operations and resulted in grocery shortages. According to UNFI, there is no evidence that personal or health information was compromised, and no ransomware group claimed responsibility for the attack.

New Report Ranks Best And Worst Generative AI Tools For Privacy

Most generative AI companies use client data to train their chatbots. For this, they may use private or public data. Some services take a more flexible and non-intrusive approach to gathering customer data. Not so much for others. A recent analysis from data removal firm Incogni weighs the benefits and drawbacks of AI in terms of protecting your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy practices against 11 distinct criteria, which addressed the following questions: 

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information is gathered by the AI apps? 

The research involved Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI performed well on certain questions but not so well on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

However, Le Chat emerged as the best privacy-friendly AI service overall. It did well in the transparency category, despite losing a few points. Additionally, it only collects a small amount of data and achieves excellent scores for additional privacy concerns unique to AI. 

Second place went to ChatGPT. Researchers at Incogni were somewhat concerned about how user data interacts with the service and how OpenAI trains its models. However, ChatGPT explains the company's privacy standards in detail, lets you know what happens to your data, and gives you explicit instructions on how to restrict how your data is used. Grok took third place, followed by Claude and Pi. Each performed reasonably well in terms of protecting user privacy overall, with some issues in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

With some providers, you can prevent your prompts from being used to train the models. This is true for Grok, Mistral AI, Copilot, and ChatGPT. However, based on their privacy policies and other resources, other services do not appear to offer a way to stop this kind of data collection; these include Gemini, DeepSeek, Pi AI, and Meta AI. Anthropic, for its part, stated that it never uses user input for model training. 

Ultimately, a clear and understandable privacy policy significantly helps in assisting you in determining what information is being gathered and how to opt out.