
Leaked Data Exposes Daily Lives of North Korean IT Workers in Remote Work Scams

 

A recent data leak has shed rare light on the hidden world of North Korean IT workers who carry out remote work scams worldwide. The revelations not only expose the highly organized operations of these state-sponsored workers but also offer an unusual glimpse into their demanding work culture and limited personal lives.  

According to the leak, North Korean IT operatives rely on a mix of fraudulent digital identities and sophisticated tools to infiltrate global companies. Using fake IDs, resumes, and accounts on platforms such as Google, GitHub, and Slack, they are able to secure remote jobs undetected. To conceal their location, they employ VPNs and remote access programs like AnyDesk, while AI-powered deepfakes and writing assistants assist in polishing resumes, generating fake profiles, and handling interviews or workplace communication in English. 

The documents reveal an intense work environment. Workers are typically expected to log a minimum of 14 hours per day, with strict quotas to meet. Failure to achieve these targets often results in even longer working hours. Supervisors keep close watch, employing surveillance measures like screen recordings and tight control over personal communications to ensure productivity and compliance. 

Despite the pressure, fragments of normalcy emerge in the leaked records. Spreadsheets point to organized social activities such as volleyball tournaments, while Slack messages show employees celebrating birthdays, exchanging jokes, and sharing memes. Some leaked recordings even caught workers playing multiplayer games like Counter-Strike, suggesting attempts to balance their grueling schedules with occasional leisure. 

The stakes behind these scams are far from trivial. According to estimates from the United Nations and the U.S. government, North Korea’s IT worker schemes generate between $250 million and $600 million annually. This revenue plays a direct role in funding the country’s ballistic missile programs and other weapons of mass destruction, underscoring the geopolitical consequences of what might otherwise appear as simple cyber fraud.  

The leaked data also highlights the global scale of the operation. Workers are not always confined to North Korea itself; many operate from China, Russia, and Southeast Asian nations to evade detection. Over time, the scheme has grown more sophisticated, with increasing reliance on AI and expanded targeting of companies across industries worldwide. 

A critical component of these scams lies in the use of so-called “laptop farms” based in countries like the United States. Here, individuals—sometimes unaware of their role—receive corporate laptops and install remote access software. This setup enables North Korean operatives to use the hardware as if they were legitimate employees, further complicating efforts to trace the fraud back to Pyongyang. 

Ultimately, the leak provides a rare inside view of North Korea’s state-directed cyber workforce. It underscores the regime’s ability to merge strict discipline, advanced digital deception, and even glimpses of ordinary life into a program that not only exploits global companies but also fuels one of the world’s most pressing security threats.

US Lawmakers Raise Concerns Over AI Airline Ticket Pricing Practices

 

Airline controversies often make headlines, and recent weeks have seen no shortage of them. Southwest Airlines faced passenger backlash after a leaked survey hinted at possible changes to its Rapid Rewards program. Delta Air Lines also reduced its Canadian routes in July amid a travel boycott, prompting mixed reactions from U.S. states dependent on Canadian tourism. 

Now, a new and more contentious issue involving Delta has emerged—one that merges the airline industry’s pricing strategies with artificial intelligence (AI), raising alarm among lawmakers and regulators. The debate centers on the possibility of airlines using AI to determine “personalized” ticket prices based on individual passenger data. 

Such a system could adjust fares in real time during searches and bookings, potentially charging some customers more—particularly those perceived as wealthier or in urgent need of travel—while offering lower rates to others. Factors influencing AI-driven pricing could include a traveler’s zip code, age group, occupation, or even recent online searches suggesting urgency, such as looking up obituaries. 

Critics argue this approach essentially monetizes personal information to maximize airline profits, while raising questions about fairness, transparency, and privacy. U.S. Transportation Secretary Sean Duffy voiced concerns on August 5, stating that any attempt to individualize airfare based on personal attributes would prompt immediate investigation. He emphasized that pricing seats according to income or personal identity is unacceptable. 

Delta Air Lines has assured lawmakers that it has never used, tested, or planned to use personal data to set individual ticket prices. The airline acknowledged its long-standing use of dynamic pricing, which adjusts fares based on competition, fuel costs, and demand, but stressed that personal information has never been part of the equation. While Duffy accepted Delta’s statement “at face value,” several Democratic senators, including Richard Blumenthal, Mark Warner, and Ruben Gallego, remain skeptical and are pressing for legislative safeguards. 

This skepticism is partly fueled by past comments from Delta President Glen Hauenstein, who in December suggested that AI could help predict how much passengers are willing to pay for premium services. Although Delta has promised not to implement AI-based personal pricing, the senators want clarity on the nature of the data being collected for fare determination. 

In response to these concerns, Democratic lawmakers Rashida Tlaib and Greg Casar have introduced a bill aimed at prohibiting companies from using AI to set prices or wages based on personal information. This would include preventing airlines from raising fares after detecting sensitive online activity. Delta’s partnership with AI pricing firm Fetcherr—whose clients include several major global airlines—has also drawn attention. While some carriers view AI pricing as a profit-boosting tool, others, like American Airlines CEO Robert Isom, have rejected the practice, citing potential damage to consumer trust. 

For now, AI-driven personal pricing in air travel remains a possibility rather than a reality in the U.S. Whether it will be implemented—or banned outright—depends on the outcome of ongoing political and public scrutiny. Regardless, the debate underscores a growing tension between technological innovation and consumer protection in the airline industry.

Venice Film Festival Cyberattack Leaks Personal Data of Accredited Participants

 

The Venice Film Festival has reportedly been hit by a cyberattack, resulting in the leak of sensitive personal data belonging to accredited attendees. According to The Hollywood Reporter, the breach exposed information including names, email addresses, contact numbers, and tax details of individuals registered for this year’s event. The affected group includes both festival participants and members of the press who had received official accreditation. News of the incident was communicated through an official notification. 

The report states that unauthorized actors gained access to the festival’s servers on July 7. In response, the event’s IT team acted swiftly to contain the breach. Their immediate measures included isolating compromised systems, securing affected infrastructure, and notifying relevant authorities. Restoration work was launched promptly to minimize disruption. Those impacted by the incident have been advised to contact the festival at privacy@labiennale.org for more information and guidance. 

Organizers assured that the breach would not affect payment processing, ticketing, or booking systems. This means that preparations for the upcoming 82nd edition of the Venice Film Festival will continue as scheduled, with the event set to run from August 27 to September 9, 2025, in Venice, Italy. As in previous years, the program will feature an eclectic mix of global cinema, spanning independent works, arthouse creations, and major Hollywood productions. 

The 2025 lineup boasts notable names in international filmmaking. Hollywood will be represented by directors such as Luca Guadagnino, Guillermo del Toro, Yorgos Lanthimos, Kathryn Bigelow, Benny Safdie, and Noah Baumbach. Baumbach’s new film Jay Kelly features a star pairing of George Clooney and Adam Sandler, alongside a supporting cast that includes Laura Dern, Greta Gerwig, Riley Keough, Billy Crudup, Eve Hewson, Josh Hamilton, and Patrick Wilson. 

Following last year’s Queer, Guadagnino returns with After The Hunt, a morally complex drama starring Ayo Edebiri, Julia Roberts, and Andrew Garfield, screening in the Out of Competition category. Benny Safdie will present The Smashing Machine, featuring Dwayne Johnson and Emily Blunt in a tense sports drama — his first solo directorial effort after his collaborations with brother Josh on acclaimed films like Uncut Gems and Good Time. 

Festival director Alberto Barbera has hinted at a strong awards season presence for several films in the lineup. He cited The Smashing Machine, Kathryn Bigelow’s latest feature, and Guillermo del Toro’s adaptation of Frankenstein as potential Oscar contenders. Despite the cyberattack, the Venice Film Festival remains on track to deliver one of the year’s most anticipated cinematic showcases.

Cybercriminals Escalate Client-Side Attacks Targeting Mobile Browsers

 

Cybercriminals are increasingly turning to client-side attacks as a way to bypass traditional server-side defenses, with mobile browsers emerging as a prime target. According to the latest “Client-Side Attack Report Q2 2025” by security researchers c/side, these attacks are becoming more sophisticated, exploiting the weaker security controls and higher trust levels associated with mobile browsing. 

Client-side attacks occur directly on the user’s device — typically within their browser or mobile application — instead of on a server. C/side’s research, which analyzed compromised domains, autonomous crawling data, AI-powered script analysis, and behavioral tracking of third-party JavaScript dependencies, revealed a worrying trend. Cybercriminals are injecting malicious code into service workers and the Progressive Web App (PWA) logic embedded in popular WordPress themes. 

When a mobile user visits an infected site, attackers hijack the browser viewport using a full-screen iframe. Victims are then prompted to install a fake PWA, often disguised as adult content APKs or cryptocurrency apps, hosted on constantly changing subdomains to evade takedowns. These malicious apps are designed to remain on the device long after the browser session ends, serving as a persistent backdoor for attackers. 

Beyond persistence, these apps can harvest login credentials by spoofing legitimate login pages, intercept cryptocurrency wallet transactions, and drain assets through injected malicious scripts. Some variants can also capture session tokens, enabling long-term account access without detection. 

To avoid exposure, attackers employ fingerprinting and cloaking tactics that prevent the malicious payload from triggering in sandboxed environments or automated security scans. This makes detection particularly challenging. 

Mobile browsers are a favored target because their sandboxing is weaker compared to desktop environments, and runtime visibility is limited. Users are also more likely to trust full-screen prompts and install recommended apps without questioning their authenticity, giving cybercriminals an easy entry point. 

To combat these threats, c/side advises developers and website operators to monitor and secure third-party scripts, a common delivery channel for malicious code. Real-time visibility into browser-executed scripts is essential, as relying solely on server-side protections leaves significant gaps. 
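One widely used safeguard for third-party scripts is Subresource Integrity (SRI), where the page pins the expected cryptographic hash of a script so the browser refuses to execute a tampered copy served from a CDN. The report doesn't name specific tooling, so the following is a minimal illustrative sketch of generating an SRI value in Python (the script contents and CDN URL are hypothetical):

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value using sha384,
    the commonly recommended digest for SRI."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# Example: pin a known-good copy of a third-party script.
# The script body and URL below are purely illustrative.
script = b"console.log('analytics stub');"
integrity = sri_hash(script)
print(f'<script src="https://cdn.example.com/analytics.js" '
      f'integrity="{integrity}" crossorigin="anonymous"></script>')
```

If the hosted copy of the script is ever modified — maliciously or otherwise — the browser blocks it until the tag is updated with a matching hash, which forces a review step into exactly the delivery channel c/side flags.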

End-users should remain vigilant when installing PWAs, especially those from unfamiliar sources, and treat unexpected login flows — particularly those appearing to come from trusted providers like Google — with skepticism. As client-side attacks continue to evolve, proactive measures on both the developer and user fronts are critical to safeguarding mobile security.

New York Lawmaker Proposes Bill to Regulate Gait Recognition Surveillance

 

New York City’s streets are often packed with people rushing to work, running errands, or simply enjoying the day. For many residents, walking is faster than taking the subway or catching a taxi. However, a growing concern is emerging — the way someone walks could now be tracked, analyzed, and used to identify them. 

City Councilmember Jennifer Gutierrez is seeking to address this through new legislation aimed at regulating gait recognition technology. This surveillance method can identify people based on the way they move, including their walking style, stride length, and posture. In some cases, it even factors in other unique patterns, such as vocal cadence. 

Gutierrez’s proposal would classify a person’s gait as “personal identifying information,” giving it the same protection as highly sensitive data, including tax or medical records. Her bill also requires that individuals be notified if city agencies are collecting this type of information. She emphasized that most residents are unaware their movements could be monitored, let alone stored for future analysis. 

According to experts, gait recognition technology can identify a person from as far as 165 feet away, even if they are walking away from the camera. This capability makes it an appealing tool for law enforcement but raises significant privacy questions. While Gutierrez acknowledges its potential in solving crimes, she stresses that everyday New Yorkers should not have their personal characteristics tracked without consent. 

Public opinion is divided. Privacy advocates argue the technology poses a serious risk of misuse, such as mass tracking without warrants or transparency. Supporters of its use believe it can be vital for security and public safety when handled with proper oversight. 

Globally, some governments have already taken steps to regulate similar surveillance tools. The European Union enforces strict rules on biometric data collection, and certain U.S. states have introduced laws to address privacy risks. However, experts warn that advancements in technology often move faster than legislation, making it difficult to implement timely safeguards. 

The New York City administration is reviewing Gutierrez’s bill, while the NYPD’s use of gait recognition for criminal investigations would remain exempt under the proposed law. The debate continues over whether this technology’s benefits outweigh the potential erosion of personal privacy in one of the world’s busiest cities.

South Dakota Researchers Develop Secure IoT-Based Crop Monitoring System

 

At the 2025 annual meeting of the American Society of Agricultural and Biological Engineers, researchers from South Dakota State University unveiled a groundbreaking system designed to help farmers increase crop yields while reducing costs. This innovative technology combines sensors, biosensors, the Internet of Things (IoT), and artificial intelligence to monitor crop growth and deliver actionable insights. 

Unlike most projects that rely on simulated post-quantum security in controlled lab environments, the SDSU team, led by Professor Lin Wei and Ph.D. student Manish Shrestha, implemented robust, real-world security in a complete sensor-to-cloud application. Their work demonstrates that advanced, future-ready encryption can operate directly on small IoT devices, eliminating the need for large servers to safeguard agricultural data. 

The team placed significant emphasis on protecting the sensitive information collected by their system. They incorporated advanced encryption and cryptographic techniques to ensure the security and integrity of the vast datasets gathered from the field. These datasets included soil condition measurements—such as temperature, moisture, and nutrient availability—alongside early indicators of plant stress, including nutrient deficiencies, disease presence, and pest activity. Environmental factors were also tracked to provide a complete picture of field health. 
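The report doesn't detail the team's exact cryptography, but the data-integrity side of a sensor-to-cloud pipeline like this can be illustrated with a keyed MAC: each reading is tagged on the device so the receiving side can detect tampering in transit. A minimal sketch, with illustrative field names and a placeholder key (real deployments would provision and rotate per-device keys, and this covers integrity only, not the post-quantum encryption the team describes):

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device-secret-provisioned-at-install"  # placeholder

def tag_reading(reading: dict) -> dict:
    """Attach an HMAC-SHA256 tag to a sensor reading so the
    cloud side can verify it was not altered in transit."""
    payload = json.dumps(reading, sort_keys=True).encode("utf-8")
    tag = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return {"reading": reading, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    payload = json.dumps(message["reading"], sort_keys=True).encode("utf-8")
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = tag_reading({"soil_temp_c": 21.4, "moisture_pct": 33.0})
assert verify_reading(msg)

# A reading altered after tagging fails verification.
tampered = {"reading": {"soil_temp_c": 30.0, "moisture_pct": 33.0},
            "tag": msg["tag"]}
assert not verify_reading(tampered)
```

The design point is that even lightweight devices can afford this check: HMAC-SHA256 runs comfortably on microcontroller-class hardware, which is consistent with the team's claim that security need not be offloaded to large servers.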

Once processed, this data was presented to farmers in a user-friendly format, enabling them to make informed management decisions without exposing their operational information to potential threats. This could include optimizing irrigation schedules, applying targeted fertilization, or implementing timely pest and disease control measures, all while ensuring data privacy.  

Cybersecurity’s role in agricultural technology emerged as a central topic at the conference, with many experts recognizing that safeguarding digital farming systems is as critical as improving productivity. The SDSU project attracted attention for addressing this challenge head-on, highlighting the importance of building secure infrastructure for the rapidly growing amount of agricultural data generated by smart farming tools.  

Looking ahead, the research team plans to further refine their crop monitoring system. Future updates may include faster data processing and a shift to solar-powered batteries, which would reduce maintenance needs and extend device lifespan. These improvements aim to make the technology even more efficient, sustainable, and farmer-friendly, ensuring that agricultural innovation remains both productive and secure in the face of evolving cyber threats.

UK's Online Safety Act Faces Criticism That It Fails to Make Children Safer Online


The implementation of a new law meant to protect children online in the UK has drawn criticism from politicians, free-speech campaigners, tech companies, content creators, digital rights advocacy groups, and others. The Online Safety Act (OSA) came into force on July 25. The legislation aims to protect children from accessing harmful content on the internet. So why is it proving so contentious?

Safe internet for kids?

However, the act also poses potential privacy risks. Certain provisions require companies operating websites accessible in the UK to prevent users under 18 from accessing harmful content such as pornography and material relating to eating disorders, self-harm, or suicide. The act also mandates that companies give minors only age-appropriate access to other categories of material, including abusive, hateful, or bullying content.

Tech companies' role

In compliance with the OSA's provisions, platforms have introduced age verification steps to confirm the ages of users on their sites or apps. These include platforms such as X, Discord, Bluesky, and Reddit; porn sites such as YouPorn and Pornhub; and the music streaming service Spotify, which now requires users to provide face scans to view explicit content.

As a result, VPN providers have seen a major surge in UK subscriptions over the past few weeks. Proton VPN reported an 1,800% rise in daily UK sign-ups, according to the BBC.

As the UK is one of the first democratic countries after Australia to enforce such strict content regulations on tech companies, the law has attracted widespread criticism and become a closely watched test case, one whose outcome may shape online safety regulation in other countries such as India.

About OSA rules

To make the UK the ‘safest place’ in the world to be online, the OSA was signed into law in 2023. It includes provisions requiring social media platforms to remove illegal content and to implement transparency and accountability measures. The British government's website states, however, that the act's strictest provisions are aimed at promoting the online safety of children under 18.

The provisions apply even to companies based outside the UK. Companies had until July 24, 2025, to assess whether their websites were likely to be accessed by children and to complete an evaluation of the potential harm to children.

CISA Urges Immediate Patching of Critical SysAid Vulnerabilities Amid Active Exploits

 

The Cybersecurity and Infrastructure Security Agency (CISA) has issued a critical alert about two high-risk vulnerabilities in SysAid’s IT service management (ITSM) platform that are being actively exploited by attackers. These security flaws, identified as CVE-2025-2775 and CVE-2025-2776, can enable unauthorized actors to hijack administrator accounts without requiring credentials. 

Discovered in December 2024 by researchers at watchTowr Labs, the two vulnerabilities stem from XML External Entity (XXE) injection issues. SysAid addressed these weaknesses in March 2025 through version 24.4.60 of its On-Premises software. However, the urgency escalated when proof-of-concept code demonstrating how to exploit the flaws was published just a month later, highlighting how easily bad actors could access sensitive files on affected systems. 
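Neither CISA nor watchTowr's public advisory is reproduced here, but XXE flaws in general arise when an XML parser resolves attacker-supplied DTD or entity declarations, which can be abused to read local files on the server. A common mitigation is to reject any document carrying a DTD before it reaches the parser at all. The sketch below is an illustration of that general technique, not SysAid's actual fix:

```python
import re
import xml.etree.ElementTree as ET

# Any <!DOCTYPE ...> or <!ENTITY ...> declaration is a red flag:
# legitimate API payloads rarely need a DTD at all.
_DTD_PATTERN = re.compile(rb"<!(DOCTYPE|ENTITY)", re.IGNORECASE)

def parse_untrusted_xml(data: bytes) -> ET.Element:
    """Parse XML only after rejecting documents that contain DTD or
    entity declarations, the vehicle for XXE payloads."""
    if _DTD_PATTERN.search(data):
        raise ValueError("DTD/entity declarations are not allowed")
    return ET.fromstring(data)

# A plain payload parses normally.
safe = parse_untrusted_xml(b"<ticket><id>42</id></ticket>")
print(safe.find("id").text)  # -> 42

# A classic XXE probe is refused before parsing.
xxe = (b'<?xml version="1.0"?>'
       b'<!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]>'
       b"<r>&x;</r>")
try:
    parse_untrusted_xml(xxe)
except ValueError as err:
    print("rejected:", err)
```

Hardened parser libraries (such as Python's defusedxml) implement essentially this policy; the point is that the check happens before any entity could be resolved.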

Although CISA has not provided technical specifics about the ongoing attacks, it added the vulnerabilities to its Known Exploited Vulnerabilities Catalog. Under Binding Operational Directive 22-01, all Federal Civilian Executive Branch (FCEB) agencies are required to patch their systems by August 12. CISA also strongly recommends that organizations in the private sector act swiftly to apply the necessary updates, regardless of the directive’s federal scope. 

“These vulnerabilities are commonly exploited by malicious cyber actors and present serious threats to government systems,” CISA stated in its warning. SysAid’s On-Prem solution is deployed on an organization’s internal infrastructure, allowing IT departments to manage help desk tickets, assets, and other services. According to monitoring from Shadowserver, several dozen SysAid installations remain accessible online, particularly in North America and Europe, potentially increasing exposure to these attacks. 

Although CISA has not linked these specific flaws to ransomware campaigns, the SysAid platform was previously exploited in 2023 by the FIN11 cybercrime group, which used another vulnerability (CVE-2023-47246) to distribute Clop ransomware in zero-day attacks. Responding to the alert, SysAid reaffirmed its commitment to cybersecurity. “We’ve taken swift action to resolve these vulnerabilities through security patches and shared the relevant information with CISA,” a company spokesperson said. “We urge all customers to ensure their systems are fully up to date.” 

SysAid serves a global clientele of over 5,000 organizations and 10 million users across 140 countries. Its user base spans from startups to major enterprises, including recognized brands like Coca-Cola, IKEA, Honda, Xerox, Michelin, and Motorola.

UK Army Probes Leak of Special Forces Identities in Grenadier Guards Publication

 

The British Army has initiated an urgent investigation following the public exposure of sensitive information identifying members of the UK Special Forces. General Sir Roly Walker, Chief of the General Staff, has directed a comprehensive review into how classified data was shared, after it was found that a regimental newsletter had published names and postings of elite soldiers over a period of more than ten years. 

The internal publication, created by the Grenadier Guards Regimental Association, is believed to have revealed the identities and current assignments of high-ranking officers serving in confidential roles. Several names were reportedly accompanied by the abbreviation “MAB,” a known military code linked to Special Forces. Security experts have expressed concern that such identifiers could be easily deciphered by hostile actors, significantly raising the risk to those individuals. 

The revelation has triggered backlash within the Ministry of Defence, with Defence Secretary John Healey reportedly outraged by the breach. The Ministry had already issued warnings about this very issue, yet the publication remained online until it was finally edited last week. The breach adds to growing concern over operational security lapses in elite British military units.  

This latest disclosure follows closely on the heels of another incident in which the identities of Special Forces soldiers involved in missions in Afghanistan were exposed through a separate data leak. That earlier breach had been shielded by a legal order for nearly two years, emphasizing the persistent nature of such security vulnerabilities. 

The protection of Special Forces members’ identities is a critical requirement due to the covert and high-risk nature of their work. Publicly exposing their names can not only endanger lives but also jeopardize ongoing intelligence missions and international collaborations. The leaked material is also said to have included information about officers working within the Cabinet Office’s National Security Secretariat—an agency that advises the Prime Minister on national defence—and even a soldier assigned to General Walker’s own operational staff. 

While the Grenadier Guards’ publication has now removed the sensitive content, another regiment had briefly published similar details before promptly deleting them. Still, the extended availability of the Grenadier data has raised questions about oversight and accountability in how military associations manage sensitive information.  

General Walker, a former commander of the Grenadier Guards, announced that he has mandated an immediate review of all information-sharing practices between the army and regimental associations. His directive aims to ensure that stronger protocols are in place to prevent such incidents in the future, while still supporting the positive role these associations play for veterans and serving members alike. 

The Defence Ministry has not released details on whether those named in the leak will be relocated or reassigned. However, security analysts say the long-term consequences of the breach could be serious, including potential threats to the personnel involved and operational risks to future Special Forces missions. As investigations continue, the British Army is now under pressure to tighten internal controls and better protect its most confidential information from digital exposure.

Legal Battle Over Meta’s AI Training Likely to Reach Europe’s Top Court

 


The ongoing debate around Meta’s use of European data to train its artificial intelligence (AI) systems is far from over. While Meta has started training its large language models (LLMs) using public content from Facebook and Instagram, privacy regulators in Europe are still questioning whether this is lawful, and the issue may soon reach the European Court of Justice (ECJ).

Meta began training its AI using public posts made by users in the EU shortly after getting the go-ahead from several privacy watchdogs. This approval came just before Meta launched AI-integrated products, including its smart glasses, which rely heavily on understanding cultural and regional context from online data.

However, some regulators and consumer groups are not convinced the approval was justified. A German consumer organization had attempted to block the training through an emergency court appeal. Although the request was denied, that was only a temporary decision. The core legal challenges, including one led by Hamburg’s data protection office, are still expected to proceed in court.

Hamburg’s commissioner, who initially supported blocking the training, later withdrew a separate emergency measure under Europe’s data protection law. He stated that while the training has been allowed to continue for now, it’s highly likely that the final ruling will come from the EU’s highest court.

The controversy centers on whether Meta has a strong enough legal basis, known as "legitimate interest," to use personal data for AI training. Meta’s argument was accepted by Irish regulators, who oversee Meta’s EU operations, on the condition that strict privacy safeguards are in place.


What Does ‘Legitimate Interest’ Mean Under GDPR?

Under the General Data Protection Regulation (GDPR), companies must have a valid reason to collect and use personal data. One of the six legal bases allowed is called “legitimate interest.” 

This means a company can process someone’s data if it’s necessary for a real business purpose, as long as it does not override the privacy rights of the individual.

In the case of AI model training, companies like Meta claim that building better products and improving AI performance qualifies as a legitimate interest. However, this is debated, especially when public data includes posts with personal opinions, cultural expressions, or identity-related content.

Data protection regulators must carefully balance:

1. The company’s business goals

2. The individual’s right to privacy

3. The potential long-term risks of using personal data for AI systems


Some experts argue that this sets a broader precedent. If Meta can train its AI using public data under the concept of legitimate interest, other companies may follow. This has raised hopes among many European AI firms that have felt held back by unclear or strict regulations.

Industry leaders say that regulatory uncertainty, particularly around the GDPR and the upcoming AI Act, has been one of the biggest barriers to innovation in the region. Others believe the current developments signal a shift toward supporting responsible AI development while protecting users’ rights.

Despite approval from regulators and support from industry voices, legal clarity is still missing. Many legal experts and companies agree that only a definitive ruling from the European Court of Justice can settle whether using personal data for AI training in this way is truly lawful.


Why Major Companies Are Still Falling to Basic Cybersecurity Failures

 

In recent weeks, three major companies—Ingram Micro, United Natural Foods Inc. (UNFI), and McDonald’s—faced disruptive cybersecurity incidents. Despite operating in vastly different sectors—technology distribution, food logistics, and fast food retail—all three breaches stemmed from poor security fundamentals, not advanced cyber threats. 

Ingram Micro, a global distributor of IT and cybersecurity products, was hit by a ransomware attack in early July 2025. The company’s order systems and communication channels were temporarily shut down. Though systems were restored within days, the incident highlights a deeper issue: Ingram had access to top-tier security tools, yet failed to use them effectively. This wasn’t a tech failure—it was a lapse in execution and internal discipline. 

Just two weeks earlier, UNFI, the main distributor for Whole Foods, suffered a similar ransomware attack. The disruption caused significant delays in food supply chains, exposing the fragility of critical infrastructure. In industries that rely on real-time operations, cyber incidents are not just IT issues—they’re direct threats to business continuity. 

Meanwhile, McDonald’s experienced a different type of breach. Researchers discovered that its AI-powered hiring tool, McHire, could be accessed using a default admin login and a weak password—“123456.” This exposed sensitive applicant data, potentially impacting millions. The breach wasn’t due to a sophisticated hacker but to oversight and poor configuration. All three cases demonstrate a common truth: major companies are still vulnerable to basic errors. 
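The McHire exposure reportedly came down to a default account left active with one of the most common passwords in existence. Even a small denylist check at account setup catches this class of mistake; the sketch below is a minimal illustration (the sample list and length rule are illustrative, not a complete password policy):

```python
# A tiny sample of the most common breached passwords. Real deployments
# screen against large breach corpora containing millions of entries.
COMMON_PASSWORDS = {"123456", "password", "12345678", "qwerty", "admin"}

def is_acceptable_password(password: str, min_length: int = 12) -> bool:
    """Reject passwords that are too short or appear on the denylist
    of known-common credentials."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True

assert not is_acceptable_password("123456")   # the password in the McHire case
assert not is_acceptable_password("admin")
assert is_acceptable_password("f9#Lq2-wXz8!rT")
```

Equally important is disabling or re-credentialing vendor default accounts before go-live: no denylist helps if a known username/password pair ships enabled.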

Threat actors like SafePay and Pay2Key are capitalizing on these gaps. SafePay infiltrates networks through stolen VPN credentials, while Pay2Key, allegedly backed by Iran, is now offering incentives for targeting U.S. firms. These groups don’t need advanced tools when companies are leaving the door open. Although Ingram Micro responded quickly—resetting credentials, enforcing MFA, and working with external experts—the damage had already been done. 

Preventive action, such as stricter access control, routine security audits, and proper use of existing tools, could have stopped the breach before it started. These incidents aren’t isolated—they’re indicative of a larger issue: a culture that prioritizes speed and convenience over governance and accountability. 

Security frameworks like NIST or CMMC offer roadmaps for better protection, but they must be followed in practice, not just on paper. The lesson is clear: when organizations fail to take care of cybersecurity basics, they put systems, customers, and their own reputations at risk. Prevention starts with leadership, not technology.

Doctors Warned Over Use of Unapproved AI Tools to Record Patient Conversations

 


Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.

This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.

Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.

The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs, a phenomenon known as “hallucination”, which can lead to serious errors in medical records or decision-making.

The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.

Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.

Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.

Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.

On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.

As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.

Dior Confirms Hack: Personal Data Stolen, Here’s What to Do


Christian Dior, the well-known luxury fashion brand, recently experienced a cyberattack that may have exposed customer information. The brand, owned by the French company LVMH, announced that an outsider had managed to break into part of its customer database. This has raised concerns about the safety of personal information, especially among shoppers in the UK.

Although no bank or card information was stolen, Dior said the hackers were able to access names, email addresses, phone numbers, mailing addresses, purchase records, and marketing choices of customers. Even though financial details remain safe, experts warn that this kind of personal data could still be used for scams that trick people into giving away more information.


How and When the Breach Happened

The issue was first noticed on May 7, 2025, when Dior’s online system in South Korea detected unusual activity involving customer records. Their technical team quickly responded by shutting down the affected servers to prevent more damage.

A week later, on May 14, French news sources reported the incident, and the following day, Dior publicly confirmed the breach on its websites. The company explained that while no payment data was involved, some customer details were accessed.


What Dior Is Doing Now

Following the European data protection rules, Dior acted quickly by resetting passwords, isolating the impacted systems, and hiring cybersecurity experts to investigate the attack. They also began informing customers where necessary and reassured the public that they are working on making their systems more secure.

Dior says it plans to improve security by increasing the use of two-factor login processes and monitoring accounts more closely for unusual behavior. The company says it takes customer privacy very seriously and is sorry for any trouble this may cause.


Why Luxury Brands Are Often Targeted

High-end brands like Dior are popular targets for cybercriminals because they cater to wealthy customers and run large digital operations. Earlier this month, UK companies such as Marks & Spencer and Co-op also reported customer data issues, showing that online attacks in the retail world are becoming more common.


What Customers Can Do to Stay Safe

If you’re a Dior customer, there are simple steps you can take to protect yourself:

1. Be careful with any messages that claim to be from Dior. Don’t click on links unless you are sure the message is real. Always visit Dior’s website directly.

2. Change your Dior account password to something new and strong. Avoid using the same password on other websites.

3. Turn on two-factor login for extra protection if available.

4. Watch your bank and credit card activity regularly for any unusual charges.

5. Be wary of fake ads or offers claiming big discounts from Dior, especially on social media.


Taking a few minutes now to secure your account could save you from a lot of problems later.

iHeartMedia Cyberattack Exposes Sensitive Data Across Multiple Radio Stations

 

iHeartMedia, the largest audio media company in the United States, has confirmed a significant data breach following a cyberattack on several of its local radio stations. In official breach notifications sent to affected individuals and state attorney general offices in Maine, Massachusetts, and California, the company disclosed that cybercriminals accessed sensitive customer information between December 24 and December 27, 2024. Although iHeartMedia did not specify how many individuals were affected, the breach appears to have involved data stored on systems at a “small number” of stations. 

The exact number of compromised stations remains undisclosed. With a network of 870 radio stations and a reported monthly audience of 250 million listeners, the potential scope of this breach is concerning. According to the breach notification letters, the attackers “viewed and obtained” various types of personal information. The compromised data includes full names, passport numbers, other government-issued identification numbers, dates of birth, financial account information, payment card data, and even health and health insurance records. 

Such a comprehensive data set makes the victims vulnerable to a wide array of cybercrimes, from identity theft to financial fraud. The combination of personal identifiers and health or insurance details increases the likelihood of victims being targeted by tailored phishing campaigns. With access to passport numbers and financial records, cybercriminals can attempt identity theft or engage in unauthorized transactions and wire fraud. As of now, the stolen data has not surfaced on dark web marketplaces, but the risk remains high. 

No cybercrime group has claimed responsibility for the breach as of yet. However, the level of detail and sensitivity of the data accessed suggests the attackers had a specific objective and executed the breach with precision.

In response, iHeartMedia is offering one year of complimentary identity theft protection services to impacted individuals. The company has also established a dedicated hotline for those seeking assistance or more information. While these actions are intended to mitigate potential fallout, they may offer limited relief given the nature of the exposed information. 

This incident underscores the increasing frequency and severity of cyberattacks on media organizations and the urgent need for enhanced cybersecurity protocols. For iHeartMedia, transparency and timely support for affected customers will be key in managing the aftermath of this breach. 

As investigations continue, more details may emerge regarding the extent of the compromise and the identity of those behind the attack.

Brave Browser’s New ‘Cookiecrumbler’ Tool Aims to Eliminate Annoying Cookie Consent Pop-Ups

 

While the General Data Protection Regulation (GDPR) was introduced with noble intentions—to protect user privacy and control over personal data—its practical side effects have caused widespread frustration. For many internet users, GDPR has become synonymous with endless cookie consent pop-ups and hours of compliance training. Now, Brave Browser is stepping up with a new solution: Cookiecrumbler, a tool designed to eliminate the disruptive cookie notices without compromising web functionality. 

Cookiecrumbler is not Brave’s first attempt at combating these irritating banners. The browser has long offered pop-up blocking capabilities. However, the challenge hasn’t been the blocking itself—it’s doing so while preserving website functionality. Many websites break or behave unexpectedly when these notices are blocked improperly. Brave’s new approach promises to fix that by taking cookie blocking to a new level of sophistication.  

According to a recent announcement, Cookiecrumbler combines large language models (LLMs) with human oversight to automate and refine the detection of cookie banners across the web. This hybrid model allows the tool to scale effectively while maintaining precision. By running on Brave’s backend servers, Cookiecrumbler crawls websites, identifies cookie notices, and generates custom rules tailored to each site’s layout and language. One standout feature is its multilingual capability. Cookie notices often vary not just in structure but in language and legal formatting based on the user’s location. 

Cookiecrumbler accounts for this by using geo-targeted vantage points, enabling it to view websites as a local user would, making detection far more effective. The developers highlight several reasons for using LLMs in this context: cookie banners typically follow predictable language patterns, the work is repetitive, and it’s relatively low-risk. The cost of each crawl is minimal, allowing the team to test different models before settling on smaller, efficient ones that provide excellent results with fine-tuning. Importantly, human reviewers remain part of the process. While AI handles the bulk detection, humans ensure that the blocking rules don’t accidentally interfere with important site functions. 

These reviewers refine and validate Cookiecrumbler’s suggestions before they’re deployed. Even better, Brave is releasing Cookiecrumbler as an open-source tool, inviting integration by other browsers and developers. This opens the door for tools like Vivaldi or Firefox to adopt similar capabilities. 
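To make the idea concrete, the detection step can be approximated with simple pattern matching. The sketch below is a hand-rolled heuristic, not Brave's actual LLM-based pipeline, and the phrase list is an illustrative assumption:

```python
import re

# Common consent-banner phrasings (illustrative subset; a real system
# would learn these per site and per language rather than hard-code them).
BANNER_PATTERNS = [
    r"\bwe use cookies\b",
    r"\baccept all\b",
    r"\bmanage (?:your )?preferences\b",
    r"\bnous utilisons des cookies\b",  # French
    r"\bwir verwenden cookies\b",       # German
]

def looks_like_cookie_banner(text: str) -> bool:
    """Return True if an element's text matches known consent phrasing."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in BANNER_PATTERNS)

print(looks_like_cookie_banner("We use cookies to improve your experience."))  # True
print(looks_like_cookie_banner("Wir verwenden Cookies auf dieser Website."))   # True
print(looks_like_cookie_banner("Breaking news: markets rally."))               # False
```

The multilingual patterns also hint at why geo-targeted crawling matters: the same banner renders in different languages depending on where the visitor appears to be.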

Looking ahead, Brave plans to integrate Cookiecrumbler directly into its browser, but only after completing thorough privacy reviews to ensure it aligns with the browser’s core principle of user-centric privacy. Cookiecrumbler marks a significant step forward in balancing user experience and privacy compliance—offering a smarter, less intrusive web.

Fourlis Group Confirms €20 Million Loss from IKEA Ransomware Attack

 

Fourlis Group, the retail operator responsible for IKEA stores across Greece, Cyprus, Romania, and Bulgaria, has revealed that a ransomware attack targeting its systems in late November 2024 led to significant financial losses. The cyber incident, which coincided with the busy Black Friday shopping period, disrupted critical parts of the business and caused damages estimated at €20 million (around $22.8 million). 

The breach initially surfaced as unexplained technical problems affecting IKEA’s e-commerce platforms. Days later, on December 3, the company confirmed that the disruptions were due to an external cyberattack. The attack affected digital infrastructure used for inventory restocking, online transactions, and broader retail operations, mainly impacting IKEA’s business. Other brands under the Fourlis umbrella, including Intersport and Holland & Barrett, were largely unaffected.  

According to CEO Dimitris Valachis, the company experienced a loss of approximately €15 million in revenue by the end of 2024, with an additional €5 million impact spilling into early 2025. Fourlis decided not to comply with the attackers’ demands and instead focused on system recovery through support from external cybersecurity professionals. The company also reported that it successfully blocked a number of follow-up attacks attempted after the initial breach. 

Despite the scale of the attack, an internal investigation supported by forensic analysts found no evidence that customer data had been stolen or exposed. The incident caused only a brief period of data unavailability, which was resolved swiftly. As part of its compliance obligations, Fourlis reported the breach to data protection authorities in all four affected countries, reassuring stakeholders that personal information remained secure. Interestingly, no known ransomware group has taken responsibility for the attack. This may suggest that the attackers were unable to extract valuable data or are holding out hope for an undisclosed settlement—though Fourlis maintains that no ransom was paid. 

The incident highlights the growing risks faced by digital retail ecosystems, especially during peak sales periods when system uptime is critical. As online platforms become more central to retail operations, businesses like Fourlis must invest heavily in cybersecurity defenses. Their experience reinforces the importance of swift response strategies, external threat mitigation support, and robust data protection practices to safeguard operations and maintain customer trust in the face of evolving cyber threats.

Ransomware Attacks Surge in Q1 2025 as Immutable Backup Emerges as Critical Defense

Ransomware attacks have seen a dramatic rise in the first quarter of 2025, with new research from Object First revealing an 84% increase compared to the same period in 2024. This alarming trend highlights the growing sophistication and frequency of ransomware campaigns, with nearly two-thirds of organizations reporting at least one attack in the past two years. 

The findings suggest that ransomware is no longer a matter of “if” but “when” for most businesses. Despite the increased threat, Object First’s study offers a silver lining. A large majority—81% of IT decision-makers—now recognize that immutable backup storage is the most effective defense against ransomware. Immutable storage ensures that once data is written, it cannot be changed or deleted, offering a critical safety net when other security measures fail. This form of storage plays a key role in enabling organizations to recover their data without yielding to ransom demands. 

However, the report also highlights a concerning gap between awareness and action. While most IT professionals acknowledge the benefits of immutable backups, only 59% of organizations have actually implemented such storage. Additionally, just 58% maintain multiple copies of their data in separate locations, falling short of the recommended 3-2-1 backup strategy. This gap leaves many companies dangerously exposed. The report also shows that ransomware actors are evolving their methods. A staggering 96% of organizations that experienced ransomware attacks in the last two years had their backup systems targeted at least once. Even more concerning, 10% of them had their backup storage compromised in every incident. 

These findings demonstrate how attackers now routinely seek to destroy recovery options, increasing pressure on victims to pay ransoms. Many businesses still place heavy reliance on traditional IT security hardening. In fact, 61% of respondents believe this approach is sufficient. But ransomware attackers are adept at bypassing such defenses using phishing emails, stolen credentials, and remote access tools. That’s why Object First recommends adopting a “breach mentality”—an approach that assumes an eventual breach and focuses on limiting damage. 

A Zero Trust architecture, paired with immutable backup, is essential. Organizations are urged to segment networks, restrict user access to essential data only, and implement multi-factor authentication. As cloud services grow, many companies are also turning to immutable cloud storage for flexible, scalable protection. Together, these steps offer a stronger, more resilient defense against today’s aggressive ransomware landscape.
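The 3-2-1 rule mentioned above (at least three copies, on at least two different media, with at least one offsite) can be checked mechanically against a backup inventory. A minimal sketch, using a hypothetical inventory format:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    medium: str      # e.g. "disk", "tape", "cloud"
    location: str    # e.g. "hq", "dr-site", "aws-us-east-1"
    offsite: bool
    immutable: bool  # object lock / WORM enabled

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """3-2-1 rule: >=3 copies, on >=2 distinct media, >=1 offsite."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

inventory = [
    BackupCopy("disk", "hq", offsite=False, immutable=False),
    BackupCopy("disk", "dr-site", offsite=True, immutable=False),
    BackupCopy("cloud", "aws-us-east-1", offsite=True, immutable=True),
]
print(satisfies_3_2_1(inventory))  # True
```

A check like this can flag the awareness-versus-action gap the report describes; whether any copy has immutability enabled is the further question the survey suggests too few organizations can answer yes to.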

Why Securing Online Accounts is Critical in Today’s Cybersecurity Landscape

 

In an era where cybercriminals are increasingly targeting passwords through phishing attacks, data breaches, and other malicious tactics, securing online accounts has never been more important. Relying solely on single-factor authentication, such as a password, is no longer sufficient to protect sensitive information. Multi-factor authentication (MFA) has emerged as a vital tool for enhancing security by requiring verification from multiple sources. Among the most effective MFA methods are hardware security keys, which provide robust protection against unauthorized access.

What Are Hardware Security Keys?

A hardware security key is a small physical device designed to enhance account security using public key cryptography. This method generates a pair of keys: a private key, which never leaves the device and is used to sign login challenges, and a public key, which the online service stores and uses to verify those signatures. Because the private key is sealed inside the hardware, it is nearly impossible for hackers to access or replicate. Unlike SMS-based authentication, which is vulnerable to interception, hardware security keys offer a direct, offline authentication method that significantly reduces the risk of compromise.
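The challenge-response flow behind a security key can be sketched with a toy example. Real keys use algorithms such as ECDSA or Ed25519 with the private key sealed in tamper-resistant hardware; the textbook-sized RSA below is for illustration only and should never be used in practice.

```python
import hashlib

# Toy RSA key pair (illustrative only; far too small to be secure).
p, q = 61, 53
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, kept on the device

def sign(challenge: bytes) -> int:
    """Device side: sign a hash of the server's random challenge."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: verify with the registered public key (e, n)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

challenge = b"login-nonce-20250101"   # hypothetical per-login nonce
sig = sign(challenge)
print(verify(challenge, sig))         # True
```

Because the server issues a fresh challenge for every login, a captured signature cannot be replayed elsewhere, which is what makes this scheme far more phishing-resistant than SMS codes.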

Hardware security keys are compatible with major online platforms, including Google, Microsoft, Facebook, GitHub, and many financial institutions. They connect to devices via USB, NFC, or Bluetooth, ensuring compatibility with a wide range of hardware. Popular options include Yubico’s YubiKey, Google’s Titan Security Key, and Thetis. Setting up a hardware security key is straightforward. Users simply register the key with an online account that supports security keys. For example, in Google’s security settings, users can enable 2-Step Verification and add a security key.

Once linked, logging in requires inserting or tapping the key, making the process both highly secure and faster than receiving verification codes via email or SMS. When selecting a security key, compatibility is a key consideration. Newer devices often require USB-C keys, while older ones may need USB-A or NFC options. Security certifications also matter—FIDO U2F provides basic security, while FIDO2/WebAuthn offers advanced protection against phishing and unauthorized access. Some security keys even include biometric authentication, such as fingerprint recognition, for added security.

Prices for hardware security keys typically range from $30 to $100. It’s recommended to purchase a backup key in case the primary key is lost. Losing a security key does not mean being locked out of accounts, as most platforms allow backup authentication methods, such as SMS or authentication apps. However, having a secondary security key ensures uninterrupted access without relying on less secure recovery methods.

Maintaining Strong Online Security Habits

While hardware security keys provide excellent protection, maintaining strong online security habits is equally important. This includes creating complex passwords, being cautious with email links and attachments, and avoiding oversharing personal information on social media. For those seeking additional protection, identity theft monitoring services can offer alerts and assistance in case of a security breach.

By using a hardware security key alongside other cybersecurity measures, individuals can significantly reduce their risk of falling victim to online attacks. These keys not only enhance security but also ensure convenient and secure access to their most important accounts. As cyber threats continue to evolve, adopting advanced tools like hardware security keys is a proactive step toward safeguarding your digital life.

Smart Meter Privacy Under Scrutiny as Warnings Reach Millions in UK

 


According to a campaign group that has criticized government net zero policies, smart meters may become the next step in "snooping" on household energy consumption. Ministers are discussing the possibility of sharing household energy usage with third parties who can assist customers in finding cheaper energy deals and lower carbon tariffs from competitors. 

The European watchdog responsible for protecting personal data has warned that high-tech monitors tracking households' energy use are likely to pose a major privacy risk. A recent report released by the European Data Protection Supervisor (EDPS) states that smart meters, which were due to be installed in every home in the UK by 2021, will be used not only to monitor energy consumption but also to collect a great deal more data. 

According to the EDPS, "while the widespread rollout of smart meters will bring some substantial benefits, it will also provide us with the opportunity to collect huge amounts of personal information." Smart meters have been claimed to be a means of spying on homes by net zero campaigners. A privacy dispute has broken out in response to government proposals that will allow energy companies to harvest household smart meter data to promote net zero energy. 

In the UK, the Telegraph newspaper reports that the government is consulting on the idea of letting consumers share their energy usage with third parties who can direct them to lower-cost deals and lower carbon tariffs from competing suppliers. The Telegraph quoted Neil Record, the former Bank of England economist and current chairman of Net Zero Watch, as saying that smart meters could have serious privacy implications. 

According to him, energy companies collect a large amount of consumer information, which is why he advised the public to remain vigilant about the growing number of external entities gaining access to household information. Record further explained that, once these measures are authorized, those parties would be able to view detailed information about household activity in real time. 

Record even stated that the public might not fully comprehend the extent to which the data is being shared, or the possible consequences of this access. Nick Hunn, founder of the wireless technology consulting firm WiFore, also commented on the matter, highlighting the original intent behind the smart meter rollout. He noted that the initiative was designed to enable consumers to access their energy usage data, thereby empowering them to make informed decisions about energy consumption and associated costs. Even so, getting to net zero targets will be impossible without smart meters. 

They allow energy companies to get real-time data on how much energy households are using and can be used to manage demand as needed. Using smart meters, for instance, households can be rewarded for cutting energy use during peak hours, thereby reducing the need for new gas-fired power plants. Energy firms can also offer free electricity to households when wind energy is abundant. With smart meters as a means of managing household energy usage, the Government aims to install them in three-quarters of all households by the end of 2025, at a cost of £13.5 billion. 

A recent study by WiFore revealed that approximately four million smart meters in homes are broken. As Hunn put it: "This is essentially what was intended at the beginning of the rollout of smart meters: that consumers would be able to see what energy data was affecting them so that they could make rational decisions about how much they were spending and how much they were using."

U.S. soldier linked to BSNL data breach: Arrest reveals cybercrime

 

The arrest of Cameron John Wagenius, a U.S. Army communications specialist, has unveiled potential connections to a significant data breach targeting India’s state-owned telecom provider, BSNL. The breach highlights the global reach of cybercrime networks and raises concerns about the security of sensitive data across continents. 

Wagenius, stationed in South Korea, was apprehended on December 20, 2024, for allegedly selling hacked data from U.S. telecom companies. According to cybersecurity experts, he may also be the individual behind the alias “kiberphant0m” on a dark web marketplace. In May 2023, “kiberphant0m” reportedly attempted to sell 278 GB of BSNL’s critical data, including subscriber details, SIM numbers, and server snapshots, for $5,000. Indian authorities confirmed that one of BSNL’s servers was breached in May 2023. 

While the Indian Computer Emergency Response Team (CERT-In) reported the intrusion, the identity of the perpetrator remained elusive until Wagenius’s arrest. Efforts to verify the hacker’s access to BSNL servers through Telegram communication and sample data proved inconclusive. The breach exposes vulnerabilities in telecom providers’ security measures, as sensitive data such as health records, payment details, and government-issued identification was targeted. 

Additionally, Wagenius is accused of selling call records of prominent U.S. political figures and data from telecom providers across Asia. The arrest also sheds light on Wagenius’s links to a broader criminal network led by Connor Riley Moucka. Moucka and his associates reportedly breached multiple organizations, extorting millions of dollars and selling stolen data. Wagenius’s involvement with this network underscores the organized nature of cybercrime operations targeting telecom infrastructure. 

Cybersecurity researchers, including Allison Nixon of Unit 221B, identified Wagenius as the individual behind illicit sales of BSNL data. However, she clarified that these activities differ from state-sponsored cyberattacks by groups such as Salt Typhoon, a Chinese-linked advanced persistent threat actor known for targeting major U.S. telecom providers. The case has also exposed challenges in prosecuting international cybercriminals. Indian authorities have yet to file a First Information Report (FIR) or engage with U.S. counterparts on Wagenius’s case, limiting legal recourse. 

Experts suggest leveraging international treaties and cross-border collaboration to address such incidents. As the investigation unfolds, the breach serves as a stark reminder of the growing threat posed by insider actions and sophisticated cybercriminal networks. It underscores the urgent need for robust data protection measures and international cooperation to counter cybercrime.