
CISA Urges Immediate Patching of Critical SysAid Vulnerabilities Amid Active Exploits

 

The Cybersecurity and Infrastructure Security Agency (CISA) has issued a critical alert about two high-risk vulnerabilities in SysAid’s IT service management (ITSM) platform that are being actively exploited by attackers. These security flaws, identified as CVE-2025-2775 and CVE-2025-2776, can enable unauthorized actors to hijack administrator accounts without requiring credentials. 

Discovered in December 2024 by researchers at watchTowr Labs, the two vulnerabilities stem from XML External Entity (XXE) injection issues. SysAid addressed these weaknesses in March 2025 through version 24.4.60 of its On-Premises software. However, the urgency escalated when proof-of-concept code demonstrating how to exploit the flaws was published just a month later, highlighting how easily bad actors could access sensitive files on affected systems. 
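Neither CISA nor SysAid has published payload details, but the class of flaw is well understood: an XXE payload declares an external entity that a permissive parser resolves, potentially disclosing local files. A minimal Python sketch of the attack shape and a common mitigation follows; the function name and the DOCTYPE denylist check are illustrative, not SysAid's actual fix:

```python
import xml.etree.ElementTree as ET

# A classic XXE payload: if the parser resolved the external entity,
# the contents of a local file would be pulled into the document.
XXE_PAYLOAD = """<?xml version="1.0"?>
<!DOCTYPE data [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>
<data>&xxe;</data>"""

def parse_untrusted_xml(text: str) -> ET.Element:
    """Refuse any document that declares a DTD before parsing.

    Rejecting DOCTYPE/ENTITY declarations outright is a common hardening
    step for untrusted input; Python's stdlib ElementTree additionally
    refuses to expand undefined entities on its own.
    """
    if "<!DOCTYPE" in text or "<!ENTITY" in text:
        raise ValueError("DTD declarations are not allowed in untrusted XML")
    return ET.fromstring(text)
```

Benign documents parse normally, while anything carrying an inline DTD is rejected before the parser ever sees it.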

Although CISA has not provided technical specifics about the ongoing attacks, it added the vulnerabilities to its Known Exploited Vulnerabilities Catalog. Under Binding Operational Directive 22-01, all Federal Civilian Executive Branch (FCEB) agencies are required to patch their systems by August 12. CISA also strongly recommends that organizations in the private sector act swiftly to apply the necessary updates, regardless of the directive’s federal scope. 

“These vulnerabilities are commonly exploited by malicious cyber actors and present serious threats to government systems,” CISA stated in its warning. SysAid’s On-Prem solution is deployed on an organization’s internal infrastructure, allowing IT departments to manage help desk tickets, assets, and other services. According to monitoring from Shadowserver, several dozen SysAid installations remain accessible online, particularly in North America and Europe, potentially increasing exposure to these attacks. 

Although CISA has not linked these specific flaws to ransomware campaigns, the SysAid platform was previously exploited in 2023 by the FIN11 cybercrime group, which used another vulnerability (CVE-2023-47246) to distribute Clop ransomware in zero-day attacks. Responding to the alert, SysAid reaffirmed its commitment to cybersecurity. “We’ve taken swift action to resolve these vulnerabilities through security patches and shared the relevant information with CISA,” a company spokesperson said. “We urge all customers to ensure their systems are fully up to date.” 

SysAid serves a global clientele of over 5,000 organizations and 10 million users across 140 countries. Its user base spans from startups to major enterprises, including recognized brands like Coca-Cola, IKEA, Honda, Xerox, Michelin, and Motorola.

UK Army Probes Leak of Special Forces Identities in Grenadier Guards Publication

 

The British Army has initiated an urgent investigation following the public exposure of sensitive information identifying members of the UK Special Forces. General Sir Roly Walker, Chief of the General Staff, has directed a comprehensive review into how classified data was shared, after it was found that a regimental newsletter had published names and postings of elite soldiers over a period of more than ten years. 

The internal publication, created by the Grenadier Guards Regimental Association, is believed to have revealed the identities and current assignments of high-ranking officers serving in confidential roles. Several names were reportedly accompanied by the abbreviation “MAB,” a known military code linked to Special Forces. Security experts have expressed concern that such identifiers could be easily deciphered by hostile actors, significantly raising the risk to those individuals. 

The revelation has triggered backlash within the Ministry of Defence, with Defence Secretary John Healey reportedly outraged by the breach. The Ministry had already issued warnings about this very issue, yet the publication remained online until it was finally edited last week. The breach adds to growing concern over operational security lapses in elite British military units.  

This latest disclosure follows closely on the heels of another incident in which the identities of Special Forces soldiers involved in missions in Afghanistan were exposed through a separate data leak. That earlier breach had been shielded by a legal order for nearly two years, emphasizing the persistent nature of such security vulnerabilities. 

The protection of Special Forces members’ identities is a critical requirement due to the covert and high-risk nature of their work. Publicly exposing their names can not only endanger lives but also jeopardize ongoing intelligence missions and international collaborations. The leaked material is also said to have included information about officers working within the Cabinet Office’s National Security Secretariat—an agency that advises the Prime Minister on national defence—and even a soldier assigned to General Walker’s own operational staff. 

While the Grenadier Guards’ publication has now removed the sensitive content, another regiment had briefly published similar details before promptly deleting them. Still, the extended availability of the Grenadier data has raised questions about oversight and accountability in how military associations manage sensitive information.  

General Walker, a former commander of the Grenadier Guards, announced that he has mandated an immediate review of all information-sharing practices between the army and regimental associations. His directive aims to ensure that stronger protocols are in place to prevent such incidents in the future, while still supporting the positive role these associations play for veterans and serving members alike. 

The Defence Ministry has not released details on whether those named in the leak will be relocated or reassigned. However, security analysts say the long-term consequences of the breach could be serious, including potential threats to the personnel involved and operational risks to future Special Forces missions. As investigations continue, the British Army is now under pressure to tighten internal controls and better protect its most confidential information from digital exposure.

Legal Battle Over Meta’s AI Training Likely to Reach Europe’s Top Court

 


The ongoing debate around Meta’s use of European data to train its artificial intelligence (AI) systems is far from over. While Meta has started training its large language models (LLMs) using public content from Facebook and Instagram, privacy regulators in Europe are still questioning whether this is lawful, and the issue may soon reach the European Court of Justice (ECJ).

Meta began training its AI using public posts made by users in the EU shortly after getting the go-ahead from several privacy watchdogs. This approval came just before Meta launched AI-integrated products, including its smart glasses, which rely heavily on understanding cultural and regional context from online data.

However, some regulators and consumer groups are not convinced the approval was justified. A German consumer organization had attempted to block the training through an emergency court appeal. Although the request was denied, that was only a temporary decision. The core legal challenges, including one led by Hamburg’s data protection office, are still expected to proceed in court.

Hamburg’s commissioner, who initially supported blocking the training, later withdrew a separate emergency measure under Europe’s data protection law. He stated that while the training has been allowed to continue for now, it’s highly likely that the final ruling will come from the EU’s highest court.

The controversy centers on whether Meta has a strong enough legal reason, known as “legitimate interest,” to use personal data for AI training. Meta’s argument was accepted by Irish regulators, who oversee Meta’s EU operations, on the condition that strict privacy safeguards are in place.


What Does ‘Legitimate Interest’ Mean Under GDPR?

Under the General Data Protection Regulation (GDPR), companies must have a valid reason to collect and use personal data. One of the six legal bases allowed is called “legitimate interest.” 

This means a company can process someone’s data if it’s necessary for a real business purpose, as long as it does not override the privacy rights of the individual.

In the case of AI model training, companies like Meta claim that building better products and improving AI performance qualifies as a legitimate interest. However, this is debated, especially when public data includes posts with personal opinions, cultural expressions, or identity-related content.

Data protection regulators must carefully balance:

1. The company’s business goals

2. The individual’s right to privacy

3. The potential long-term risks of using personal data for AI systems


Some experts argue that this sets a broader precedent. If Meta can train its AI using public data under the concept of legitimate interest, other companies may follow. This has raised hopes among many European AI firms that have felt held back by unclear or strict regulations.

Industry leaders say that regulatory uncertainty, particularly around the GDPR and the upcoming AI Act, has been one of the biggest barriers to innovation in the region. Others believe the current developments signal a shift toward supporting responsible AI development while protecting users’ rights.

Despite approval from regulators and support from industry voices, legal clarity is still missing. Many legal experts and companies agree that only a definitive ruling from the European Court of Justice can settle whether using personal data for AI training in this way is truly lawful.


Why Major Companies Are Still Falling Victim to Basic Cybersecurity Failures

 

In recent weeks, three major companies—Ingram Micro, United Natural Foods Inc. (UNFI), and McDonald’s—faced disruptive cybersecurity incidents. Despite operating in vastly different sectors—technology distribution, food logistics, and fast food retail—all three breaches stemmed from poor security fundamentals, not advanced cyber threats. 

Ingram Micro, a global distributor of IT and cybersecurity products, was hit by a ransomware attack in early July 2025. The company’s order systems and communication channels were temporarily shut down. Though systems were restored within days, the incident highlights a deeper issue: Ingram had access to top-tier security tools, yet failed to use them effectively. This wasn’t a tech failure—it was a lapse in execution and internal discipline. 

Just two weeks earlier, UNFI, the main distributor for Whole Foods, suffered a similar ransomware attack. The disruption caused significant delays in food supply chains, exposing the fragility of critical infrastructure. In industries that rely on real-time operations, cyber incidents are not just IT issues—they’re direct threats to business continuity. 

Meanwhile, McDonald’s experienced a different type of breach. Researchers discovered that its AI-powered hiring tool, McHire, could be accessed using a default admin login and a weak password—“123456.” This exposed sensitive applicant data, potentially impacting millions. The breach wasn’t due to a sophisticated hacker but to oversight and poor configuration. All three cases demonstrate a common truth: major companies are still vulnerable to basic errors. 
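The McHire failure, a live default account guarded by “123456,” is exactly the kind of lapse a simple denylist check at credential-setup time would catch. A hedged sketch follows; the word list and length threshold are illustrative, not any vendor's actual policy:

```python
# A tiny denylist of default and commonly guessed passwords. Real
# deployments use much larger lists (e.g. known-breached passwords).
COMMON_PASSWORDS = {"123456", "password", "admin", "12345678", "qwerty"}

def is_weak_password(password: str, min_length: int = 12) -> bool:
    """Return True if the password is too short or on the denylist."""
    return len(password) < min_length or password.lower() in COMMON_PASSWORDS
```

Running this check at account creation, and on every default account shipped with a product, would have flagged the McHire credential immediately.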

Threat actors like SafePay and Pay2Key are capitalizing on these gaps. SafePay infiltrates networks through stolen VPN credentials, while Pay2Key, allegedly backed by Iran, is now offering incentives for targeting U.S. firms. These groups don’t need advanced tools when companies are leaving the door open. Although Ingram Micro responded quickly—resetting credentials, enforcing MFA, and working with external experts—the damage had already been done. 

Preventive action, such as stricter access control, routine security audits, and proper use of existing tools, could have stopped the breach before it started. These incidents aren’t isolated—they’re indicative of a larger issue: a culture that prioritizes speed and convenience over governance and accountability. 

Security frameworks like NIST or CMMC offer roadmaps for better protection, but they must be followed in practice, not just on paper. The lesson is clear: when organizations fail to take care of cybersecurity basics, they put systems, customers, and their own reputations at risk. Prevention starts with leadership, not technology.

Doctors Warned Over Use of Unapproved AI Tools to Record Patient Conversations

 


Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.

This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.

Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.

The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs, a phenomenon known as “hallucination,” which can lead to serious errors in medical records or decision-making.

The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.

Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.

Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.

Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.

On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.

As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.

Dior Confirms Hack: Personal Data Stolen, Here’s What to Do


Christian Dior, the well-known luxury fashion brand, recently experienced a cyberattack that may have exposed customer information. The brand, owned by the French company LVMH, announced that an outsider had managed to break into part of its customer database. This has raised concerns about the safety of personal information, especially among shoppers in the UK.

Although no bank or card information was stolen, Dior said the hackers were able to access names, email addresses, phone numbers, mailing addresses, purchase records, and marketing choices of customers. Even though financial details remain safe, experts warn that this kind of personal data could still be used for scams that trick people into giving away more information.


How and When the Breach Happened

The issue was first noticed on May 7, 2025, when Dior’s online system in South Korea detected unusual activity involving customer records. Their technical team quickly responded by shutting down the affected servers to prevent more damage.

A week later, on May 14, French news sources reported the incident, and the following day, Dior publicly confirmed the breach on its websites. The company explained that while no payment data was involved, some customer details were accessed.


What Dior Is Doing Now

Following the European data protection rules, Dior acted quickly by resetting passwords, isolating the impacted systems, and hiring cybersecurity experts to investigate the attack. They also began informing customers where necessary and reassured the public that they are working on making their systems more secure.

Dior says it plans to improve security by increasing the use of two-factor login processes and monitoring accounts more closely for unusual behavior. The company says it takes customer privacy very seriously and is sorry for any trouble this may cause.


Why Luxury Brands Are Often Targeted

High-end brands like Dior are popular targets for cybercriminals because they cater to wealthy customers and run large digital operations. Earlier this month, UK retailers Marks & Spencer and Co-op also reported customer data incidents, showing that online attacks in the retail world are becoming more common.


What Customers Can Do to Stay Safe

If you’re a Dior customer, there are simple steps you can take to protect yourself:

1. Be careful with any messages that claim to be from Dior. Don’t click on links unless you are sure the message is real. Always visit Dior’s website directly.

2. Change your Dior account password to something new and strong. Avoid using the same password on other websites.

3. Turn on two-factor login for extra protection if available.

4. Watch your bank and credit card activity regularly for any unusual charges.

5. Be wary of fake ads or offers claiming big discounts from Dior, especially on social media.


Taking a few minutes now to secure your account could save you from a lot of problems later.

iHeartMedia Cyberattack Exposes Sensitive Data Across Multiple Radio Stations

 

iHeartMedia, the largest audio media company in the United States, has confirmed a significant data breach following a cyberattack on several of its local radio stations. In official breach notifications sent to affected individuals and state attorney general offices in Maine, Massachusetts, and California, the company disclosed that cybercriminals accessed sensitive customer information between December 24 and December 27, 2024. Although iHeartMedia did not specify how many individuals were affected, the breach appears to have involved data stored on systems at a “small number” of stations. 

The exact number of compromised stations remains undisclosed. With a network of 870 radio stations and a reported monthly audience of 250 million listeners, the potential scope of this breach is concerning. According to the breach notification letters, the attackers “viewed and obtained” various types of personal information. The compromised data includes full names, passport numbers, other government-issued identification numbers, dates of birth, financial account information, payment card data, and even health and health insurance records. 

Such a comprehensive data set makes the victims vulnerable to a wide array of cybercrimes, from identity theft to financial fraud. The combination of personal identifiers and health or insurance details increases the likelihood of victims being targeted by tailored phishing campaigns. With access to passport numbers and financial records, cybercriminals can attempt identity theft or engage in unauthorized transactions and wire fraud. As of now, the stolen data has not surfaced on dark web marketplaces, but the risk remains high. 

No cybercrime group has claimed responsibility for the breach as of yet. However, the level of detail and sensitivity in the data accessed suggests the attackers had a specific objective and carried out the intrusion with precision. 

In response, iHeartMedia is offering one year of complimentary identity theft protection services to impacted individuals. The company has also established a dedicated hotline for those seeking assistance or more information. While these actions are intended to mitigate potential fallout, they may offer limited relief given the nature of the exposed information. 

This incident underscores the increasing frequency and severity of cyberattacks on media organizations and the urgent need for enhanced cybersecurity protocols. For iHeartMedia, transparency and timely support for affected customers will be key in managing the aftermath of this breach. 

As investigations continue, more details may emerge regarding the extent of the compromise and the identity of those behind the attack.

Brave Browser’s New ‘Cookiecrumbler’ Tool Aims to Eliminate Annoying Cookie Consent Pop-Ups

 

While the General Data Protection Regulation (GDPR) was introduced with noble intentions—to protect user privacy and control over personal data—its practical side effects have caused widespread frustration. For many internet users, GDPR has become synonymous with endless cookie consent pop-ups and hours of compliance training. Now, Brave Browser is stepping up with a new solution: Cookiecrumbler, a tool designed to eliminate the disruptive cookie notices without compromising web functionality. 

Cookiecrumbler is not Brave’s first attempt at combating these irritating banners. The browser has long offered pop-up blocking capabilities. However, the challenge hasn’t been the blocking itself—it’s doing so while preserving website functionality. Many websites break or behave unexpectedly when these notices are blocked improperly. Brave’s new approach promises to fix that by taking cookie blocking to a new level of sophistication.  

According to a recent announcement, Cookiecrumbler combines large language models (LLMs) with human oversight to automate and refine the detection of cookie banners across the web. This hybrid model allows the tool to scale effectively while maintaining precision. By running on Brave’s backend servers, Cookiecrumbler crawls websites, identifies cookie notices, and generates custom rules tailored to each site’s layout and language. One standout feature is its multilingual capability. Cookie notices often vary not just in structure but in language and legal formatting based on the user’s location. 

Cookiecrumbler accounts for this by using geo-targeted vantage points, enabling it to view websites as a local user would, making detection far more effective. The developers highlight several reasons for using LLMs in this context: cookie banners typically follow predictable language patterns, the work is repetitive, and it’s relatively low-risk. The cost of each crawl is minimal, allowing the team to test different models before settling on smaller, efficient ones that provide excellent results with fine-tuning. Importantly, human reviewers remain part of the process. While AI handles the bulk detection, humans ensure that the blocking rules don’t accidentally interfere with important site functions. 
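Cookiecrumbler itself pairs LLMs with human review, but the underlying observation, that consent banners follow predictable language patterns, can be illustrated with a far cruder keyword heuristic. The phrase list below is ours, not Brave's, and a real detector would need per-language lists and DOM-level context:

```python
# Phrases that recur across cookie consent banners in English.
CONSENT_PHRASES = (
    "accept all cookies",
    "we use cookies",
    "cookie preferences",
    "manage consent",
)

def looks_like_cookie_banner(text: str) -> bool:
    """Flag a text fragment as a likely cookie consent notice."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in CONSENT_PHRASES)
```

An LLM-based classifier generalizes where a fixed phrase list fails, across languages, paraphrases, and regional legal wording, which is precisely why Brave reaches for one.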

These reviewers refine and validate Cookiecrumbler’s suggestions before they’re deployed. Even better, Brave is releasing Cookiecrumbler as an open-source tool, inviting integration by other browsers and developers. This opens the door for tools like Vivaldi or Firefox to adopt similar capabilities. 

Looking ahead, Brave plans to integrate Cookiecrumbler directly into its browser, but only after completing thorough privacy reviews to ensure it aligns with the browser’s core principle of user-centric privacy. Cookiecrumbler marks a significant step forward in balancing user experience and privacy compliance—offering a smarter, less intrusive web.