
Zimbra Zero-Day Exploit Used in ICS File Attacks to Steal Sensitive Data

 

Security researchers have discovered that hackers exploited a zero-day vulnerability in Zimbra Collaboration Suite (ZCS) earlier this year using malicious calendar attachments to steal sensitive data. The attackers embedded harmful JavaScript code inside .ICS files—typically used to schedule and share calendar events—to target vulnerable Zimbra systems and execute commands within user sessions. 

The flaw, identified as CVE-2025-27915, affected ZCS versions 9.0, 10.0, and 10.1. It stemmed from inadequate sanitization of HTML content in calendar files, allowing cybercriminals to inject arbitrary JavaScript code. Once executed, the code could redirect emails, steal credentials, and access confidential user information. Zimbra patched the issue on January 27 through updates (ZCS 9.0.0 P44, 10.0.13, and 10.1.5), but at that time, the company did not confirm any active attacks. 

StrikeReady, a cybersecurity firm specializing in AI-based threat management, detected the campaign while monitoring unusually large .ICS files containing embedded JavaScript. Their investigation revealed that the attacks began in early January, predating the official patch release. In one notable instance, the attackers impersonated the Libyan Navy’s Office of Protocol and sent a malicious email targeting a Brazilian military organization. The attached .ICS file included Base64-obfuscated JavaScript designed to compromise Zimbra Webmail and extract sensitive data. 

Analysis of the payload showed that it was programmed to operate stealthily and execute in asynchronous mode. It created hidden fields to capture usernames and passwords, tracked user actions, and automatically logged out inactive users to trigger data theft. The script exploited Zimbra’s SOAP API to search through emails and retrieve messages, which were then sent to the attacker every four hours. It also added a mail filter named “Correo” to forward communications to a ProtonMail address, gathered contacts and distribution lists, and even hid user interface elements to avoid detection. The malware delayed its execution by 60 seconds and only reactivated every three days to reduce suspicion. 

StrikeReady could not conclusively link the attack to any known hacking group but noted that similar tactics have been associated with a small number of advanced threat actors, including those linked to Russia and the Belarusian state-sponsored group UNC1151. The firm shared technical indicators and a deobfuscated version of the malicious code to aid other security teams in detection efforts. 

Zimbra later confirmed that while the exploit had been used, the scope of the attacks appeared limited. The company urged all users to apply the latest patches, review existing mail filters for unauthorized changes, inspect message stores for Base64-encoded .ICS entries, and monitor network activity for irregular connections. The incident highlights the growing sophistication of targeted attacks and the importance of timely patching and vigilant monitoring to prevent zero-day exploitation.
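
For teams acting on the advice to inspect message stores for Base64-encoded .ICS entries, the sketch below shows one way such a sweep might look. It is a hedged illustration, not Zimbra tooling: the export path, size threshold, and patterns are assumptions, and a real review should also cover mail filters and webmail preferences.

```python
# Hedged illustration: sweep exported mail data for suspicious .ICS calendar
# files that are unusually large or contain embedded script/Base64 payloads.
# The export path, size threshold, and patterns are assumptions, not values
# from the Zimbra advisory.
import base64
import re
from pathlib import Path

SIZE_THRESHOLD = 100 * 1024                           # calendar invites are normally a few KB
SCRIPT_RE = re.compile(rb"<script|javascript:", re.IGNORECASE)
BASE64_RUN_RE = re.compile(rb"[A-Za-z0-9+/=]{400,}")  # long Base64-looking runs

def inspect_ics(path: Path) -> list[str]:
    findings = []
    data = path.read_bytes()
    if len(data) > SIZE_THRESHOLD:
        findings.append(f"unusually large ({len(data)} bytes)")
    if SCRIPT_RE.search(data):
        findings.append("embedded script markup")
    for run in BASE64_RUN_RE.findall(data):
        try:
            decoded = base64.b64decode(run, validate=True)
        except ValueError:
            continue
        if SCRIPT_RE.search(decoded):
            findings.append("Base64-encoded script payload")
            break
    return findings

if __name__ == "__main__":
    export_root = Path("/var/mail-export")            # assumed location of exported messages
    if not export_root.is_dir():
        raise SystemExit(f"export directory {export_root} not found")
    for ics in export_root.rglob("*.ics"):
        issues = inspect_ics(ics)
        if issues:
            print(f"{ics}: {', '.join(issues)}")
```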

Telstra Denies Scattered Spider Data Breach Claims Amid Ransom Threats

 

Telstra, one of Australia’s leading telecommunications companies, has denied claims made by the hacker group Scattered Spider that it suffered a massive data breach compromising nearly 19 million personal records. The company issued a statement clarifying that its internal systems remain secure and that the data in question was scraped from publicly available sources rather than stolen. In a post on X (formerly Twitter), Telstra emphasized that no passwords, banking details, or sensitive identification data such as driver’s licenses or Medicare numbers were included in the dataset. 

The claims originated from a dark web post published on October 3 by a group calling itself Scattered Lapsus$ Hunters, an offshoot of Scattered Spider. The group alleged it had stolen more than 100GB of personally identifiable information, including names and physical addresses, and warned that company executives should negotiate to avoid further data exposure. The attackers claimed the alleged breach took place in July 2023 and threatened to release the data publicly if a ransom was not paid by October 13, 2025. They also asserted possession of over 16 million records contained in a file named telstra.sql, which they said was part of a larger collection of 19 million records. 

In a surprising twist, the ransom note also mentioned Salesforce, the global cloud computing company, demanding negotiations begin with its executives. Salesforce swiftly rejected the demand, issuing a statement on October 8 declaring that it “will not engage, negotiate with, or pay any extortion demand,” aligning with global cybersecurity guidelines that discourage ransom payments. 

Scattered Lapsus$ Hunters has made similar claims about breaches involving several major corporations, including Qantas, IKEA, and Google AdSense. Cybersecurity intelligence platforms like Cyble Vision have documented multiple previous instances of alleged Telstra data breaches, some dating back to 2022. In one notable case, a threat actor called UnicornLover67 claimed to possess a dataset containing over 47,000 Telstra employee records, including email addresses and hashed passwords. Telstra has previously confirmed smaller breaches linked to third-party service providers, most recently in 2022, affecting around 132,000 customers. 

However, cybersecurity analysts remain uncertain whether the current claims represent a fresh breach or a recycling of old data. Experts suggest that previously leaked or publicly available datasets may have been repurposed to appear as new evidence of compromise. This possibility aligns with Telstra’s statement that no recent intrusion has occurred. 

The investigation into the alleged breach remains ongoing as the ransom deadline approaches. While Telstra continues to assert that its systems are uncompromised, the persistence of repeated breach claims underscores the growing challenge of misinformation and data reuse in the cybercrime landscape. The Cyber Express has reached out to Telstra for further updates and will continue to monitor the situation as new details emerge.

AI Adoption Outpaces Cybersecurity Awareness as Users Share Sensitive Data with Chatbots

 

The global surge in the use of AI tools such as ChatGPT and Gemini is rapidly outpacing efforts to educate users about the cybersecurity risks these technologies pose, according to a new study. The research, conducted by the National Cybersecurity Alliance (NCA) in collaboration with cybersecurity firm CybNet, surveyed over 6,500 individuals across seven countries, including the United States. It found that 65% of respondents now use AI in their everyday lives—a 21% increase from last year—yet 58% said they had received no training from employers on the data privacy and security challenges associated with AI use. 

“People are embracing AI in their personal and professional lives faster than they are being educated on its risks,” said Lisa Plaggemier, Executive Director of the NCA. The study revealed that 43% of respondents admitted to sharing sensitive information, including company financial data and client records, with AI chatbots, often without realizing the potential consequences. The findings highlight a growing disconnect between AI adoption and cybersecurity preparedness, suggesting that many organizations are failing to educate employees on how to use these tools responsibly. 

The NCA-CybNet report aligns with previous warnings about the risks posed by AI systems. A survey by software company SailPoint earlier this year found that 96% of IT professionals believe AI agents pose a security risk, while 84% said their organizations had already begun deploying the technology. These AI agents—designed to automate tasks and improve efficiency—often require access to sensitive internal documents, databases, or systems, creating new vulnerabilities. When improperly secured, they can serve as entry points for hackers or even cause catastrophic internal errors, such as one case where an AI agent accidentally deleted an entire company database. 

Traditional chatbots also come with risks, particularly around data privacy. Despite assurances from companies, most chatbot interactions are stored and sometimes used for future model training, meaning they are not entirely private. This issue gained attention in 2023 when Samsung engineers accidentally leaked confidential data to ChatGPT, prompting the company to ban employee use of the chatbot. 

The integration of AI tools into mainstream software has only accelerated their ubiquity. Microsoft recently announced that AI agents will be embedded into Word, Excel, and PowerPoint, meaning millions of users may interact with AI daily—often without any specialized training in cybersecurity. As AI becomes an integral part of workplace tools, the potential for human error, unintentional data sharing, and exposure to security breaches increases. 

While the promise of AI continues to drive innovation, experts warn that its unchecked expansion poses significant security challenges. Without comprehensive training, clear policies, and safeguards in place, individuals and organizations risk turning powerful productivity tools into major sources of vulnerability. The race to integrate AI into every aspect of modern life is well underway—but for cybersecurity experts, the race to keep users informed and protected is still lagging far behind.

Where Your Data Goes After a Breach and How to Protect Yourself

 

Data breaches happen every day—and they’re almost never random. Most result from deliberate, targeted cyberattacks or the exploitation of weak security systems that allow cybercriminals to infiltrate networks and steal valuable data. These breaches can expose email addresses, passwords, credit card details, Social Security numbers, medical records, and even confidential business documents. While it’s alarming to think about, understanding what happens after your data is compromised is key to knowing how to protect yourself.  

Once your information is stolen, it essentially becomes a commodity traded for profit. Hackers rarely use the data themselves. Instead, they sell it—often bundled with millions of other records—to other cybercriminals who use it for identity theft, fraud, or extortion. In underground networks, stolen information has its own economy, with prices fluctuating depending on how recent or valuable the data is. 

The dark web is the primary marketplace for stolen information. Hidden from regular search engines, it provides anonymity for sellers and buyers of credit cards, logins, and personal identifiers. Beyond that, secure messaging platforms such as Telegram and Signal are also used to trade stolen data discreetly, thanks to their encryption and privacy features. Some invite-only forums on the surface web also serve as data exchange hubs, while certain hacktivists or whistleblowers may release stolen data publicly to expose unethical practices. Meanwhile, more sophisticated cybercriminal groups operate privately, sharing or selling data directly to trusted clients or other hacker collectives. 

According to reports from cybersecurity firm PrivacyAffairs, dark web markets offer everything from bank login credentials to passports and crypto wallets. Payment card data—often used in “carding” scams—remains one of the most traded items. Similarly, stolen social media and email accounts are in high demand, as they allow attackers to launch phishing campaigns or impersonate victims. Even personal documents such as birth certificates or national IDs are valuable for identity theft schemes. 

Although erasing your personal data from the internet entirely is nearly impossible, there are ways to limit your exposure. Start by using strong, unique passwords managed through a reputable password manager, and enable multi-factor authentication wherever possible. A virtual private network (VPN) adds another layer of protection by encrypting your internet traffic and preventing data collection by third parties. 

It’s also wise to tighten your social media privacy settings and avoid sharing identifiable details such as your workplace, home address, or relationship status. Be cautious about what information you provide to websites and services—especially when signing up or making purchases. Temporary emails, one-time payment cards, and P.O. boxes can help preserve your anonymity online.  

If you discover that your data was part of a breach, act quickly. Monitor all connected accounts for suspicious activity, reset compromised passwords, and alert your bank or credit card provider if financial details were involved. For highly sensitive leaks, such as stolen ID numbers, consider freezing your credit report to prevent identity fraud. Data monitoring services can also help by tracking the dark web for mentions of your personal information.
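
One practical building block for that kind of monitoring is a breach-exposure check. The minimal sketch below uses the public Pwned Passwords range API, which relies on k-anonymity so that only the first five characters of a password's SHA-1 hash ever leave your machine; it illustrates the idea rather than replacing a full monitoring service.

```python
# Minimal sketch: check whether a password appears in known breach corpora
# using the public Pwned Passwords range API (k-anonymity: only the first five
# characters of the SHA-1 hash are sent to the service).
import hashlib
import requests

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    import getpass
    pw = getpass.getpass("Password to check: ")
    hits = times_pwned(pw)
    print(f"Seen {hits} times in known breaches" if hits else "Not found in known breaches")
```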

In today’s digital world, data is currency—and your information is one of the most valuable assets you own. Staying vigilant, maintaining good cyber hygiene, and using privacy tools are your best defenses against becoming another statistic in the global data breach economy.

WestJet Confirms Cyberattack Exposed Passenger Data but No Financial Details

 

WestJet has confirmed that a cyberattack in June compromised certain passenger information, though the airline maintains that the breach did not involve sensitive financial or password data. The incident, which took place on June 13, was attributed to a “sophisticated, criminal third party,” according to a notice issued by the airline to U.S. residents earlier this week. 

WestJet stated that its internal precautionary measures successfully prevented the attackers from gaining access to credit and debit card details, including card numbers, expiry dates, and CVV codes. The airline further confirmed that no user passwords were stolen. However, the company acknowledged that some passengers’ personal information had been exposed. The compromised data included names, contact details, information and documents related to reservations and travel, and details regarding the passengers’ relationship with WestJet. 

“Containment is complete, and additional system and data security measures have been implemented,” WestJet said in an official release. The airline emphasized that analysis of the incident is still ongoing and that it continues to strengthen its cybersecurity framework to safeguard customer data. 

As part of its response plan, WestJet is contacting affected customers to offer support and guidance. The airline has partnered with Cyberscout, a company specializing in identity theft protection and fraud assistance, to help impacted individuals with remediation services. WestJet has also published advisory information on its website to assist passengers who may be concerned about their data.  

In its statement, the airline reassured customers that swift containment measures limited the breach’s impact. “Our cybersecurity teams acted immediately to contain the situation and secure our systems. We take our responsibility to protect customer information very seriously,” the company said. 

WestJet confirmed that it is working closely with law enforcement agencies, including the U.S. Federal Bureau of Investigation (FBI) and the Canadian Centre for Cyber Security. The airline also notified U.S. credit reporting agencies—TransUnion, Experian, and Equifax—along with the attorneys general of several U.S. states, Transport Canada, the Office of the Privacy Commissioner of Canada, and relevant provincial and international data protection authorities. 

While WestJet maintains that the exposed information does not appear to include sensitive financial or authentication details, cybersecurity experts note that personal identifiers such as names and contact data can still pose privacy and fraud risks if misused. The airline’s transparency and engagement with regulatory agencies reflect an effort to mitigate potential harm and restore public trust. 

The company reiterated that it remains committed to improving its security posture through enhanced monitoring, employee training, and the implementation of additional cybersecurity controls. The investigation into the breach continues, and WestJet has promised to provide further updates as new information becomes available. 

The incident highlights the ongoing threat of cyberattacks against the aviation industry, where companies hold large volumes of personal and travel-related data. Despite the rise in security investments, even well-established airlines remain attractive targets for sophisticated cybercriminals. WestJet’s quick response and cooperation with authorities underscore the importance of rapid containment and transparency in handling such data breaches.

Sam Altman Pushes for Legal Privacy Protections for ChatGPT Conversations

 

Sam Altman, CEO of OpenAI, has reiterated his call for legal privacy protections for ChatGPT conversations, arguing they should be treated with the same confidentiality as discussions with doctors or lawyers. “If you talk to a doctor about your medical history or a lawyer about a legal situation, that information is privileged,” Altman said. “We believe that the same level of protection needs to apply to conversations with AI.”  

Currently, no such legal safeguards exist for chatbot users. In a July interview, Altman warned that courts could compel OpenAI to hand over private chat data, noting that a federal court has already ordered the company to preserve all ChatGPT logs, including deleted ones. This ruling has raised concerns about user trust and OpenAI’s exposure to legal risks. 

Experts are divided on whether Altman’s vision could become reality. Peter Swire, a privacy and cybersecurity law professor at Georgia Tech, explained that while companies seek liability protection, advocates want access to data for accountability. He noted that full privacy privileges for AI may only apply in “limited circumstances,” such as when chatbots explicitly act as doctors or lawyers. 

Mayu Tobin-Miyaji, a law fellow at the Electronic Privacy Information Center, echoed that view, suggesting that protections might be extended to vetted AI systems operating under licensed professionals. However, she warned that today’s general-purpose chatbots are unlikely to receive such privileges soon. Mental health experts, meanwhile, are urging lawmakers to ban AI systems from misrepresenting themselves as therapists and to require clear disclosure when users are interacting with bots.  

Privacy advocates argue that transparency, not secrecy, should guide AI policy. Tobin-Miyaji emphasized the need for public awareness of how user data is collected, stored, and shared. She cautioned that confidentiality alone will not address the broader safety and accountability issues tied to generative AI. 

Concerns about data misuse are already affecting user behavior. After a May court order requiring OpenAI to retain ChatGPT logs indefinitely, many users voiced privacy fears online. Reddit discussions reflected growing unease, with some advising others to “assume everything you post online is public.” While most ChatGPT conversations currently center on writing or practical queries, OpenAI’s research shows an increase in emotionally sensitive exchanges. 

Without formal legal protections, users may hesitate to share private details, undermining the trust Altman views as essential to AI’s future. As the debate over AI confidentiality continues, OpenAI’s push for privacy may determine how freely people engage with chatbots in the years to come.

Why CEOs Must Go Beyond Backups and Build Strong Data Recovery Plans

 

Organizations today need fast, effective answers when their data is at risk. Relying solely on backups is no longer enough to guarantee business continuity in the face of cyberattacks, hardware failures, human error, or natural disasters. Every CEO must take responsibility for ensuring that their organization has a comprehensive data recovery plan that extends far beyond simple backups. 

Backups are not foolproof. They can fail, be misconfigured, or become corrupted, leaving organizations exposed at critical moments. Modern attackers also increasingly target backup systems directly, which can leave nothing to restore when it is needed most. Even when functioning correctly, traditional backups usually run on a daily schedule rather than in real time, putting businesses at risk of losing hours of valuable work. Recovery time is equally critical, as lengthy downtime caused by delays in data restoration can severely damage both reputation and revenue.  

Businesses often overestimate the security that traditional backups provide, only to discover their shortcomings when disaster strikes. A strong recovery plan should include proactive measures such as regular testing, simulated ransomware scenarios, and disaster recovery drills to ensure preparedness. Without this, the organization risks significant disruption and financial losses. 

The consequences of poor planning extend beyond operational setbacks. For companies handling sensitive personal or financial data, legal and compliance requirements demand advanced protection and recovery systems. Failure to comply can lead to legal penalties and fines in addition to reputational harm. To counter modern threats, organizations should adopt solutions like immutable backups, air-gapped storage, and secure cloud-based systems. While migrating to cloud storage may seem time-consuming, it offers resilience against physical damage and ensures that data cannot be lost through hardware failures alone. 

An effective recovery plan must be multi-layered. Following the 3-2-1 backup rule—keeping three copies of data, on two different media, with one offline—is widely recognized as best practice. Cloud-based disaster recovery platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) should also be considered to provide automated failover and minimize downtime. Beyond technology, employee awareness is essential. IT and support staff should be well-trained and recovery protocols tested quarterly to confirm readiness. 
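
As a rough illustration of what regular testing can mean in practice, the following sketch checks a set of backup copies against the 3-2-1 rule. The paths, media labels, and freshness window are assumptions to be replaced with an organization's real backup targets.

```python
# Illustrative 3-2-1 check: three copies, two media types, one offsite.
# The locations below are hypothetical placeholders.
import time
from dataclasses import dataclass
from pathlib import Path

MAX_AGE_HOURS = 26   # assumed freshness window for a daily backup job

@dataclass
class BackupCopy:
    name: str
    path: Path
    medium: str      # e.g. "local-disk", "nas", "cloud"
    offsite: bool

COPIES = [
    BackupCopy("primary", Path("/backups/daily/latest.tar.gz"), "local-disk", False),
    BackupCopy("nas",     Path("/mnt/nas/daily/latest.tar.gz"), "nas",        False),
    BackupCopy("cloud",   Path("/mnt/cloud-sync/latest.tar.gz"), "cloud",     True),
]

def is_fresh(copy: BackupCopy) -> bool:
    return copy.path.exists() and (time.time() - copy.path.stat().st_mtime) < MAX_AGE_HOURS * 3600

def check_3_2_1(copies: list[BackupCopy]) -> list[str]:
    fresh = [c for c in copies if is_fresh(c)]
    problems = []
    if len(fresh) < 3:
        problems.append(f"only {len(fresh)} fresh copies (need 3)")
    if len({c.medium for c in fresh}) < 2:
        problems.append("fresh copies span fewer than 2 media types")
    if not any(c.offsite for c in fresh):
        problems.append("no fresh offsite copy")
    return problems

if __name__ == "__main__":
    issues = check_3_2_1(COPIES)
    print("3-2-1 check passed" if not issues else "3-2-1 check FAILED: " + "; ".join(issues))
```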

Communication plays a vital role in data recovery planning. How an organization communicates a disruption to clients can directly influence how much trust is retained. While some customers may inevitably be lost, a clear and transparent communication strategy can help preserve the majority. CEOs should also evaluate cyber insurance options to mitigate financial risks tied to recovery costs. 

Ultimately, backups are just snapshots of data, while a recovery plan acts as a comprehensive playbook for survival when disaster strikes. CEOs who neglect this responsibility risk severe financial losses, regulatory penalties, and even business closure. A well-designed, thoroughly tested recovery plan not only minimizes downtime but also protects revenue, client trust, and the long-term future of the organization.

OpenAI Patches ChatGPT Gmail Flaw Exploited by Hackers in Deep Research Attacks

 

OpenAI has fixed a security vulnerability that could have allowed hackers to manipulate ChatGPT into leaking sensitive data from a victim’s Gmail inbox. The flaw, uncovered by cybersecurity company Radware and reported by Bloomberg, involved ChatGPT’s “deep research” feature. This function enables the AI to carry out advanced tasks such as web browsing and analyzing files or emails stored in services like Gmail, Google Drive, and Microsoft OneDrive. While useful, the tool also created a potential entry point for attackers to exploit.  

Radware discovered that if a user requested ChatGPT to perform a deep research task on their Gmail inbox, hackers could trigger the AI into executing malicious instructions hidden inside a carefully designed email. These hidden commands could manipulate the chatbot into scanning private messages, extracting information such as names or email addresses, and sending it to a hacker-controlled server. The vulnerability worked by embedding secret instructions within an email disguised as a legitimate message, such as one about human resources processes. 

The proof-of-concept attack was challenging to develop, requiring a detailed phishing email crafted specifically to bypass safeguards. However, if triggered under the right conditions, the vulnerability acted like a digital landmine. Once ChatGPT began analyzing the inbox, it would unknowingly carry out the malicious code and exfiltrate data “without user confirmation and without rendering anything in the user interface,” Radware explained. 

This type of exploit is particularly difficult for conventional security tools to catch. Since the data transfer originates from OpenAI’s own infrastructure rather than the victim’s device or browser, standard defenses like secure web gateways, endpoint monitoring, or browser policies are unable to detect or block it. This highlights the growing challenge of AI-driven attacks that bypass traditional cybersecurity protections. 

In response to the discovery, OpenAI stated that developing safe AI systems remains a top priority. A spokesperson told PCMag that the company continues to implement safeguards against malicious use and values external research that helps strengthen its defenses. According to Radware, the flaw was patched in August, with OpenAI acknowledging the fix in September.

The findings emphasize the broader risk of prompt injection attacks, where hackers insert hidden commands into web content or messages to manipulate AI systems. Both Anthropic and Brave Software recently warned that similar vulnerabilities could affect AI-enabled browsers and extensions. Radware recommends protective measures such as sanitizing emails to remove potential hidden instructions and enhancing monitoring of chatbot activities to reduce exploitation risks.
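
As a rough sketch of the email-sanitization idea, the snippet below strips markup and invisible characters from an HTML email body before the text reaches an assistant. It illustrates the general approach rather than Radware's tooling, and text hidden purely through CSS would still require a fuller HTML-aware sanitizer.

```python
# Hedged sketch: reduce hidden prompt-injection surface in an email before it
# is passed to an AI assistant.
import re
import unicodedata

def sanitize_email_html(html_body: str) -> str:
    # Remove script/style blocks entirely; their content is never user-visible.
    text = re.sub(r"(?is)<(script|style)\b.*?</\1\s*>", " ", html_body)
    # Strip all remaining tags, keeping the visible text between them.
    text = re.sub(r"(?s)<[^>]+>", " ", text)
    # Drop zero-width and other invisible "format" characters (Unicode
    # category Cf) sometimes used to hide instructions from human readers.
    text = "".join(ch for ch in text if unicodedata.category(ch) != "Cf")
    # Collapse whitespace so the assistant sees clean prose. Note: text hidden
    # purely via CSS (e.g. display:none) needs an HTML-aware sanitizer; this
    # sketch only covers the simplest cases.
    return re.sub(r"\s+", " ", text).strip()

if __name__ == "__main__":
    raw = "<p>HR update\u200b</p><style>.x{color:red}</style><script>exfiltrate()</script>"
    print(sanitize_email_html(raw))   # -> "HR update"
```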

Insight Partners Ransomware Attack Exposes Data of Thousands of Individuals

 

Insight Partners, a New York-based venture capital and private equity firm, is notifying thousands of individuals that their personal information was compromised in a ransomware attack. The firm initially disclosed the incident in February, confirming that the intrusion stemmed from a sophisticated social engineering scheme that gave attackers access to its systems. Subsequent investigations revealed that sensitive data had also been stolen, including banking details, tax records, personal information of current and former employees, as well as information connected to limited partners, funds, management companies, and portfolio firms. 

The company stated that formal notification letters are being sent to all affected parties, with complimentary credit monitoring and identity protection services offered as part of its response. It clarified that individuals who do not receive a notification letter by the end of September 2025 can assume their data was not impacted. According to filings with California’s attorney general, which were first reported by TechCrunch, the intrusion occurred in October 2024. Attackers exfiltrated data before encrypting servers on January 16, 2025, in what appears to be the culmination of a carefully planned ransomware campaign. Insight Partners explained that the attacker gained access to its environment on or around October 25, 2024, using advanced social engineering tactics. 

Once inside, the threat actor began stealing data from affected servers. Months later, at around 10:00 a.m. EST on January 16, the same servers were encrypted, effectively disrupting operations. While the firm has confirmed the theft and encryption, no ransomware group has claimed responsibility for the incident so far. A separate filing with the Maine attorney general disclosed that the breach impacted 12,657 individuals. The compromised information poses risks ranging from financial fraud to identity theft, underscoring the seriousness of the incident. 

Despite the scale of the attack, Insight Partners has not yet responded to requests for further comment on how it intends to manage recovery efforts or bolster its cybersecurity posture going forward. Insight Partners is one of the largest venture capital firms in the United States, with over $90 billion in regulatory assets under management. Over the past three decades, it has invested in more than 800 software and technology startups globally, making it a key player in the tech investment ecosystem. 

The breach marks a significant cybersecurity challenge for the firm as it balances damage control, regulatory compliance, and the trust of its investors and partners.

Digital Twins: Benefits and the Cybersecurity Risks They Bring

 

Digital twins—virtual digital counterparts of physical objects, people, or processes—are rapidly being adopted by organizations as tools for simulation, testing, and decision-making. The concept traces its roots to NASA’s physical replicas of spacecraft in the 1960s, but today’s digital twins have evolved into sophisticated frameworks that bridge physical and digital systems, offering the power to predict real-world outcomes and inform business strategy. 

David Shaw, Intuitus Corp. CEO and Digital Twin Consortium (DTC) working group co-chair, notes that these systems now do much more than simply mirror physical systems; they actively link both worlds, enabling predictive analytics at scale. 

Greg Porter, Principal Solutions Architect at Sev1Tech, describes digital twin technology as still emerging, but increasingly central to business innovation. The technology's key advantage lies in the ability to simulate future scenarios and outcomes without disrupting the actual physical assets, allowing companies to test changes, interventions, or potential failures in a risk-free environment.

Industry applications are diverse: in healthcare, digital twins can model the effects of new medications or surgical procedures before implementation, while other organizations use digital twins to map employee interactions with physical assets, providing insights into cybersecurity attack surfaces and operational efficiencies. The cost to implement these systems varies widely, from a few hundred dollars for basic models to multi-million-dollar deployments for complex, mission-critical infrastructures. 

However, while digital twins unlock new capabilities in prototyping, testing, and risk management, they also introduce significant cybersecurity risks. Porter warns that, particularly in “full-loop” digital twin environments—where data flows both from the physical system into the digital twin and back again—organizations open a new attack vector from the digital realm directly into physical assets. If the digital twin infrastructure is insecure, threat actors could manipulate data in ways that affect real-world systems, potentially leading to loss of control or catastrophic outcomes. 

Kayne McGladrey, CISO in residence at Hyperproof, highlights that intellectual property theft is another rising threat; access to a digital twin could allow attackers to reverse-engineer sensitive business processes or product designs, providing competitors or nation-state actors with a strategic advantage. In sectors such as aerospace, defense, and critical infrastructure, the consequences of such breaches could be both severe and far-reaching. 

Mitigation tips 

To secure digital twins, organizations must implement robust data controls, segmenting and monitoring digital twin environments to prevent lateral movement by attackers. McGladrey recommends adopting “classic cybersecurity” measures with some enhancements: deploying phishing-resistant multi-factor authentication, tightly controlling user access, and maintaining comprehensive activity logs to support forensic investigation if an incident occurs. These steps, he notes, are not overly complex but do require deliberate planning to ensure that the security of both digital and physical assets is maintained. 
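
A minimal sketch of the access-control and audit-logging advice might look like the following; the role names, environment variables, and actuator call are hypothetical placeholders rather than anything described by the DTC or Hyperproof.

```python
# Illustrative sketch: gate write-backs from a digital twin to its physical
# counterpart behind an explicit allow-list and keep an append-only audit log
# to support later forensic review.
import json
import logging
import time

logging.basicConfig(filename="twin_audit.log", level=logging.INFO, format="%(message)s")

WRITE_ROLES = {"twin-operator"}          # assumed roles allowed to affect hardware

def audit(event: str, **fields) -> None:
    logging.info(json.dumps({"ts": time.time(), "event": event, **fields}))

def send_to_actuator(setpoint: float) -> None:
    pass  # placeholder: real deployments would go through a vetted OT gateway

def apply_setpoint(user: str, roles: set[str], setpoint: float) -> bool:
    if not roles & WRITE_ROLES:
        audit("setpoint_denied", user=user, setpoint=setpoint)
        return False
    audit("setpoint_applied", user=user, setpoint=setpoint)
    send_to_actuator(setpoint)           # hypothetical call into the physical system
    return True

if __name__ == "__main__":
    print(apply_setpoint("alice", {"twin-viewer"}, 42.0))    # False: denied and logged
    print(apply_setpoint("bob", {"twin-operator"}, 42.0))    # True: applied and logged
```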

As digital twin adoption accelerates, organizations must weigh their operational benefits against the new risks they introduce. By understanding the full scope of both opportunities and threats, and by embedding strong cybersecurity principles from the outset, businesses can harness digital twins’ transformative potential without exposing themselves to undue risk.

DevOps Data Breaches Expose Microsoft, Schneider Electric, Mercedes-Benz, and New York Times

 

Source code forms the backbone of every digital enterprise, and platforms such as GitHub and Atlassian are trusted to safeguard this critical data. Yet, organizations must remember that under the Shared Responsibility Model, users retain accountability for the security of their data. Even the smallest mistake can trigger a devastating cascade, from large-scale leaks of proprietary code to stolen credentials and severe reputational and financial consequences. 

Recent breaches across industries highlight how valuable DevOps environments have become to cybercriminals. Companies as diverse as Mercedes-Benz, The New York Times, and Schneider Electric have all suffered from security lapses, showing that innovation without adequate protection leaves no organization immune. The growing threat landscape underscores the scale of the problem, with cyberattacks occurring roughly every 39 seconds worldwide. IBM has observed a 56% increase in active ransomware groups, while Cybersecurity Ventures predicts that cybercrime costs will rise from $10.5 trillion in 2025 to more than $15 trillion by 2029. The CISO’s Guide to DevOps Threats further identifies technology, fintech, and media as the sectors most at risk, with 59% of ransomware activity concentrated in the United States. Data breaches typically ripple beyond the initial target, affecting partners, customers, and supply chains. 

The ransomware group HellCat has demonstrated how exposed credentials can become a doorway to widespread damage. By exploiting stolen Atlassian Jira logins, they infiltrated global enterprises including Schneider Electric, Orange Group, Telefonica, Jaguar Land Rover, and Ascom. Schneider Electric alone had 40GB of data stolen in 2024, including user records, email addresses, and sensitive project information, with a ransom demand of $125,000. Telefonica was breached twice in 2025, losing over 100GB of internal documents and communications. Similar compromises at Jaguar Land Rover and Ascom revealed thousands of employee records and sensitive corporate data, illustrating how poor credential management fuels recurring attacks. 

Mismanaged access tokens also pose severe risks. Mercedes-Benz faced exposure when an employee accidentally embedded a GitHub token in a public repository, potentially granting attackers access to confidential assets like API keys and database credentials. Threat actors have also weaponized GitHub itself, using trojanized proof-of-concept code and malicious npm dependencies to exfiltrate hundreds of thousands of WordPress credentials and cloud keys. Even unexpected groups, such as fans of Disney’s discontinued Club Penguin, exploited exposed Confluence logins to access corporate files and developer resources. The New York Times confirmed that leaked credentials on a third-party code platform exposed 270GB of internal data, though it reported no operational disruption. 
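
A lightweight pre-commit secret scan is one way to catch the token-in-a-public-repository mistake before it ships. The sketch below is illustrative only: the patterns cover GitHub personal access token prefixes and a generic key-assignment heuristic, and are by no means exhaustive.

```python
# Minimal secret-scanning sketch: flag GitHub personal access tokens and
# generic high-entropy key assignments before they reach a public repository.
import re
import sys
from pathlib import Path

PATTERNS = {
    "github token": re.compile(r"\b(ghp|gho|ghu|ghs|ghr)_[A-Za-z0-9]{36}\b"),
    "github fine-grained token": re.compile(r"\bgithub_pat_[A-Za-z0-9_]{22,}\b"),
    "generic key assignment": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
}

def scan(path: Path) -> list[str]:
    hits = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return hits
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append(f"{path}: possible {name}: {match.group(0)[:12]}…")
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    findings = [hit for p in root.rglob("*") if p.is_file() for hit in scan(p)]
    print("\n".join(findings) or "no obvious secrets found")
    sys.exit(1 if findings else 0)
```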

The cumulative impact of these incidents is staggering, with terabytes of stolen data, millions of records exposed, and reputational harm that far exceeds immediate costs. As regulatory penalties intensify and compliance standards grow stricter, the financial fallout of DevOps data breaches is likely to escalate further, leaving organizations with little choice but to prioritize security at the core of their operations.

Restaurant Brands International Faces Cybersecurity Flaws as Ethical Hackers Expose Data Security Risks

 

Restaurant Brands International (RBI), the parent company of Burger King, Tim Hortons, and Popeyes, has come under scrutiny after two ethical hackers uncovered major cybersecurity flaws across its digital systems. The researchers, known by their handles BobDaHacker and BobTheShoplifter, revealed how weak security practices left RBI’s global operations, spanning more than 30,000 outlets, dangerously exposed. Their findings, once detailed in a blog that has since been archived, highlight critical oversights in RBI’s approach to data security.  

Among the most concerning discoveries was a password hard-coded into the HTML of an equipment ordering site, a lapse that would typically raise alarms in even the most basic security audits. In another instance, the hackers found that the drive-through tablet system used the password “admin,” a default credential considered one of the most insecure in the industry. Such weak safeguards left RBI vulnerable to unauthorized access, calling into question the company’s investment in even the most fundamental cybersecurity measures. 

The hackers went further, demonstrating access to employee accounts, internal configurations, and raw audio files from drive-through conversations. These recordings, sometimes containing fragments of personal information, were later processed by artificial intelligence to evaluate customer interactions and staff performance. While the hackers emphasized that they did not retain or misuse any data, their ability to reach such sensitive systems underscores the potential risks had malicious actors discovered the same flaws. 

Their probe also extended into unexpected areas, such as software linked to bathroom rating screens in restaurants. While they joked about leaving fake reviews remotely, the researchers remained committed to responsible disclosure, ensuring no disruption to RBI’s operations. Nevertheless, the ease with which they navigated these systems illustrates how deeply embedded vulnerabilities had gone unnoticed. 

Other problems included APIs that allowed unrestricted sign-ups, plain-text emails containing passwords, and methods to escalate privileges to administrator access across platforms. These oversights are precisely the kinds of gaps that established security practices, from credential management and access control to ransomware and malware prevention, are meant to close. The ethical hackers described RBI's overall digital defenses as "catastrophic," humorously comparing them to a paper Whopper wrapper in the rain. 
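
One small, concrete countermeasure to the default-credential findings is to refuse to start a service while placeholder secrets are still configured. The sketch below is a generic illustration under assumed environment-variable names, not a description of RBI's systems.

```python
# Hedged sketch: a startup check that refuses to boot while default or
# placeholder credentials are configured, the class of weakness (an "admin"
# tablet login, a password shipped in page HTML) described above.
import os
import sys

DEFAULT_CREDENTIALS = {"admin", "password", "changeme", "1234", ""}

def validate_runtime_secrets() -> list[str]:
    problems = []
    password = os.environ.get("APP_ADMIN_PASSWORD", "")   # assumed variable name
    if password.lower() in DEFAULT_CREDENTIALS:
        problems.append("APP_ADMIN_PASSWORD is a default/placeholder value")
    elif len(password) < 12:
        problems.append("APP_ADMIN_PASSWORD is shorter than 12 characters")
    if os.environ.get("EMBED_CREDS_IN_CLIENT", "false").lower() == "true":
        problems.append("credentials must never be shipped in client-side HTML/JS")
    return problems

if __name__ == "__main__":
    issues = validate_runtime_secrets()
    if issues:
        print("refusing to start:\n  " + "\n  ".join(issues))
        sys.exit(1)
    print("credential sanity checks passed")
```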

Although RBI reportedly addressed the vulnerabilities after being informed, the company has not publicly acknowledged the hackers or commented on the severity of the issues. This lack of transparency raises concerns about whether the incident will lead to lasting security reforms or if it will be treated as a quick fix before moving on. For a multinational corporation handling sensitive customer interactions daily, the revelations serve as a stark warning about the consequences of neglecting cybersecurity fundamentals.

Blackpool Credit Union Cyberattack Exposes Customer Data in Cork

 

A Cork-based credit union has issued a warning to its customers after a recent cyberattack exposed sensitive personal information. Blackpool Credit Union confirmed that the breach occurred late last month and subsequently notified members through a formal letter. Investigators determined that hackers may have gained access to personal records, including names, contact information, residential addresses, dates of birth, and account details. While there is no evidence that any funds were stolen or PINs compromised, concerns remain that the stolen data could be misused. 

The investigation raised the possibility that cybercriminals may publish the stolen records on underground marketplaces such as the dark web. This type of exposure increases the risk of identity theft or secondary scams, particularly phishing attacks in which fraudsters impersonate trusted organizations to steal additional details from unsuspecting victims. Customers were urged to remain vigilant and to treat any unsolicited communication requesting personal or financial information with caution. 

The Central Bank of Ireland has been briefed on the situation and is monitoring developments. It has advised any members with concerns to reach out directly to Blackpool Credit Union through its official phone line. Meanwhile, a spokesperson for the credit union assured the public that services remain operational and that members can continue to access assistance in person, by phone, or through email. The organization emphasized that safeguarding customer data remains a priority and expressed regret over the incident. Impacted individuals will be contacted directly for follow-up support. 

The Irish League of Credit Unions reinforced the importance of caution, noting that legitimate credit unions will never ask members to verify accounts through text messages or unsolicited communications. Fraudsters often exploit publicly available details to appear convincing, setting up sophisticated websites and emails to lure individuals into disclosing confidential information. Customers were reminded to independently verify the authenticity of any suspicious outreach and to rely on official registers when dealing with financial services.  

Experts warn that people who have already fallen victim to scams are more likely to be targeted again. Attackers often pressure individuals into making hasty decisions, using the sense of urgency to trick them into disclosing sensitive information or transferring money. Customers were encouraged to take their time before responding to unexpected requests and to trust their instincts if something feels unusual or out of place.

The Central Bank reiterated its awareness of the breach and confirmed that it is in direct communication with Blackpool Credit Union regarding the response measures. Members seeking clarification were again directed to the credit union’s official helpline for assistance.

GitHub Supply Chain Attack ‘GhostAction’ Exposes Over 3,000 Secrets Across Ecosystems

 

A newly uncovered supply chain attack on GitHub, named GhostAction, has compromised more than 3,300 secrets across multiple ecosystems, including PyPI, npm, DockerHub, GitHub, Cloudflare, and AWS. The campaign was first identified by GitGuardian researchers, who traced initial signs of suspicious activity in the FastUUID project on September 2, 2025. The attack relied on compromised maintainer accounts, which were used to commit malicious workflow files into repositories. These GitHub Actions workflows were configured to trigger automatically on push events or manual dispatch, enabling the attackers to extract sensitive information. 

Once executed, the malicious workflow harvested secrets from GitHub Actions environments and transmitted them to an attacker-controlled server through a curl POST request. In FastUUID’s case, the attackers accessed the project’s PyPI token, although no malicious package versions were published before the compromise was detected and contained. Further investigation revealed that the attack extended well beyond a single project. Researchers found similar workflow injections across at least 817 repositories, all exfiltrating data to the same domain. To maximize impact, the attackers enumerated secret variables from existing legitimate workflows and embedded them into their own files, ensuring multiple types of secrets could be stolen. 
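
Defenders can hunt for this pattern in their own repositories. The sketch below is a heuristic illustration, not GitGuardian's detection logic: it flags workflow steps that expand repository secrets on the same line as an outbound curl or wget call, or that dump the entire secrets context.

```python
# Heuristic audit of GitHub Actions workflows for secret-exfiltration patterns.
import re
from pathlib import Path

SECRET_REF = re.compile(r"\$\{\{\s*secrets\.[A-Za-z0-9_]+\s*\}\}")
OUTBOUND = re.compile(r"\b(curl|wget)\b.*https?://", re.IGNORECASE)

def audit_workflows(repo_root: Path) -> list[str]:
    findings = []
    wf_dir = repo_root / ".github" / "workflows"
    if not wf_dir.is_dir():
        return findings
    for wf in wf_dir.glob("*.y*ml"):
        text = wf.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if SECRET_REF.search(line) and OUTBOUND.search(line):
                findings.append(f"{wf}:{lineno}: secret expanded in an outbound curl/wget call")
        if "toJSON(secrets)" in text:
            findings.append(f"{wf}: dumps the entire secrets context via toJSON(secrets)")
    return findings

if __name__ == "__main__":
    for finding in audit_workflows(Path(".")) or ["no suspicious workflow steps found"]:
        print(finding)
```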

GitGuardian publicly disclosed the findings on September 5, raising issues in 573 affected repositories and notifying security teams at GitHub, npm, and PyPI. By that time, about 100 repositories had already identified the unauthorized commits and reverted them. Soon after the disclosures, the exfiltration endpoint used by the attackers went offline, halting further data transfers. 

The scope of the incident is significant, with researchers estimating that roughly 3,325 secrets were exposed. These included API tokens, access keys, and database credentials spanning several major platforms. At least nine npm packages and 15 PyPI projects remain directly affected, with the risk that compromised tokens could allow the release of malicious or trojanized versions if not revoked. GitGuardian noted that some companies had their entire SDK portfolios compromised, with repositories in Python, Rust, JavaScript, and Go impacted simultaneously. 

While the attack bears some resemblance to the s1ngularity campaign reported in late August, GitGuardian stated that it does not see a direct connection between the two. Instead, GhostAction appears to represent a distinct, large-scale attempt to exploit open-source ecosystems through stolen maintainer credentials and poisoned automation workflows. The findings underscore the growing challenges in securing supply chains that depend heavily on public code repositories and automated build systems.

Czechia Warns of Chinese Data Transfers and Espionage Risks to Critical Infrastructure

 

Czechia’s National Cyber and Information Security Agency (NÚKIB) has issued a stark warning about rising cyber espionage campaigns linked to China and Russia, urging both government institutions and private companies to strengthen their security measures. The agency classified the threat as highly likely, citing particular concerns over data transfers to China and remote administration of assets from Chinese territories, including Hong Kong and Macau. According to the watchdog, these operations are part of long-term efforts by foreign states to compromise critical infrastructure, steal sensitive data, and undermine public trust. 

The agency’s concerns are rooted in China’s legal and regulatory framework, which it argues makes private data inherently insecure. Laws such as the National Intelligence Law of 2017 require all citizens and organizations to assist intelligence services, while the 2015 National Security Law and the 2013 Company Law provide broad avenues for state interference in corporate operations. Additionally, regulations introduced in 2021 obligate technology firms to report software vulnerabilities to government authorities within two days while prohibiting disclosure to foreign organizations. NÚKIB noted that these measures give Chinese state actors sweeping access to sensitive information, making foreign businesses and governments vulnerable if their data passes through Chinese systems. 

Hong Kong and Macau also fall under scrutiny in the agency’s assessment. In Hong Kong, the 2024 Safeguarding National Security Ordinance integrates Chinese security laws into its own legal system, broadening the definition of state secrets. Macau’s 2019 Cybersecurity Law grants authorities powers to monitor data transmissions from critical infrastructure in real time, with little oversight to prevent misuse. NÚKIB argues that these developments extend the Chinese government’s reach well beyond its mainland jurisdiction. 

The Czech warning gains credibility from recent attribution efforts. Earlier this year, Prague linked cyberattacks on its Ministry of Foreign Affairs to APT31, a group tied to China’s Ministry of State Security, in a campaign active since 2022. The government condemned the attacks as deliberate attempts to disrupt its institutions and confirmed a high degree of certainty about Chinese involvement, based on cooperation among domestic and international intelligence agencies. 

These warnings align with broader global moves to limit reliance on Chinese technologies. Countries such as Germany, Italy, and the Netherlands have already imposed restrictions, while the Five Eyes alliance has issued similar advisories. For Czechia, the implications are serious: NÚKIB highlighted risks across devices and systems such as smartphones, cloud services, photovoltaic inverters, and health technology, stressing that disruptions could have wide-reaching consequences. The agency’s message reflects an ongoing effort to secure its digital ecosystem against foreign influence, particularly as geopolitical tensions deepen in Europe.

Massive Database of 250 Million Identity Records Leaked Online for Public Access


Around a quarter of a billion identity records were left publicly accessible, exposing people in seven countries: Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey. 

According to experts from Cybernews, three misconfigured servers hosted on IP addresses registered in the UAE and Brazil contained what the researchers described as "government-level" identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses. 

Cybernews experts who found the leak said the databases shared similar naming conventions and structure, which pointed to a common source, but they could not identify the actor responsible for running the servers. 

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens of South Africa, Egypt, and Turkey, as the databases covering those countries contained full-spectrum identity data. 

The exposure could subject affected individuals to a range of threats, including phishing campaigns, scams, financial fraud, and other abuses.

The databases are no longer publicly accessible, which is a good sign. 

This is not the first time a massive database of citizen data on the order of 250 million records has been exposed online. In an earlier incident, Cybernews research found a leak that may have affected nearly the entire population of Brazil.

In that case, a misconfigured Elasticsearch instance exposed details such as names, sex, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers, the identifier used for taxpayers in Brazil. 
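
For administrators auditing their own infrastructure, a quick probe can reveal whether an Elasticsearch node answers unauthenticated requests at all. The sketch below is a minimal check intended only for hosts you are authorized to test; the host list is a placeholder.

```python
# Quick sketch: a misconfigured Elasticsearch node typically answers
# unauthenticated HTTP requests on port 9200 with cluster metadata.
import json
import urllib.error
import urllib.request

HOSTS = ["127.0.0.1"]          # replace with hosts you are authorized to test

def check_open_elasticsearch(host: str, port: int = 9200, timeout: float = 3.0):
    url = f"http://{host}:{port}/"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8", errors="replace"))
    except (urllib.error.URLError, ValueError, OSError):
        return None                         # closed, filtered, auth-protected, or not Elasticsearch
    if "cluster_name" in body and "version" in body:
        return body.get("cluster_name")
    return None

if __name__ == "__main__":
    for host in HOSTS:
        cluster = check_open_elasticsearch(host)
        if cluster:
            print(f"{host}: UNAUTHENTICATED Elasticsearch exposed (cluster '{cluster}')")
        else:
            print(f"{host}: no open Elasticsearch detected")
```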

Browser-Based Attacks in 2025: Key Threats Security Teams Must Address

 

In 2025, the browser has become one of the primary battlefields for cybercriminals. Once considered a simple access point to the internet, it now serves as the main gateway for employees into critical business applications and sensitive data. This shift has drawn attackers to target browsers directly, exploiting them as the weakest link in a highly connected and decentralized work environment. With enterprises relying heavily on SaaS platforms, online collaboration tools, and cloud applications, the browser has transformed into the focal point of modern cyberattacks, and security teams must rethink their defenses to stay ahead. 

The reason attackers focus on browsers is not because of the technology itself, but because of what lies beyond them. When a user logs into a SaaS tool, an ERP system, or a customer database, the browser acts as the entryway. Incidents such as the Snowflake customer data breach and ongoing attacks against Salesforce users demonstrate that attackers no longer need to compromise entire networks; they simply exploit the session and gain direct access to enterprise assets. 

Phishing remains one of the most common browser-driven threats, but it has grown increasingly sophisticated. Attackers now rely on advanced Attacker-in-the-Middle kits that steal not only passwords but also active sessions, rendering many common forms of multi-factor authentication ineffective. These phishing campaigns are often cloaked with obfuscation and hosted on legitimate SaaS infrastructure, making them difficult to detect. In other cases, attackers deliver malicious code through deceptive mechanisms such as ClickFix, which disguises harmful commands as verification prompts. Variants like FileFix are spreading across both Windows and macOS, frequently planting infostealer malware designed to harvest credentials and session cookies. 
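
One partial, server-side mitigation for stolen-session replay is to bind each session to a fingerprint of the client that created it, as in the hedged sketch below. It will not stop a live Attacker-in-the-Middle proxy, but it makes reusing an exported cookie from new infrastructure easier to detect; the fingerprint inputs and in-memory store are illustrative assumptions.

```python
# Hedged sketch: bind session tokens to a client fingerprint and re-check the
# binding on every request, so replay of a stolen token from elsewhere fails.
import hashlib
import hmac
import secrets

SESSIONS: dict[str, str] = {}     # session_id -> client fingerprint (in-memory demo)
SERVER_KEY = secrets.token_bytes(32)

def fingerprint(client_ip: str, user_agent: str) -> str:
    material = f"{client_ip}|{user_agent}".encode()
    return hmac.new(SERVER_KEY, material, hashlib.sha256).hexdigest()

def create_session(client_ip: str, user_agent: str) -> str:
    session_id = secrets.token_urlsafe(32)
    SESSIONS[session_id] = fingerprint(client_ip, user_agent)
    return session_id

def validate_session(session_id: str, client_ip: str, user_agent: str) -> bool:
    expected = SESSIONS.get(session_id)
    if expected is None:
        return False
    if not hmac.compare_digest(expected, fingerprint(client_ip, user_agent)):
        # Binding mismatch: treat as possible token theft, revoke and alert.
        SESSIONS.pop(session_id, None)
        return False
    return True

if __name__ == "__main__":
    sid = create_session("203.0.113.7", "Mozilla/5.0")
    print(validate_session(sid, "203.0.113.7", "Mozilla/5.0"))   # True: same client
    print(validate_session(sid, "198.51.100.9", "curl/8.5"))     # False: replay attempt
```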

Another growing risk comes from malicious OAuth integrations, where attackers trick users into approving third-party applications that secretly provide them with access to corporate systems. This method proved devastating in recent Salesforce-related breaches, where hackers bypassed strong authentication and gained long-term access to enterprise environments. Similarly, compromised or fraudulent browser extensions represent a silent but dangerous threat. These can capture login details, hijack sessions, or inject malicious scripts, as highlighted in the Cyberhaven incident in late 2024. 

File downloads remain another effective attack vector. Malware-laced documents, often hidden behind phishing portals, continue to slip past traditional defenses. Meanwhile, stolen credentials still fuel account takeovers in cases where multi-factor authentication is weak, absent, or improperly enforced. Attackers exploit these gaps using ghost logins and bypass techniques, highlighting the need for real-time browser-level monitoring. 

As attackers increasingly exploit the browser as a central point of entry, organizations must prioritize visibility and control at this layer. By strengthening browser security, enterprises can reduce identity exposure, close MFA gaps, and limit the risks of phishing, malware delivery, and unauthorized access. The browser has become the new endpoint of enterprise defense, and protecting it is no longer optional.

Disney to Pay $10 Million Fine in FTC Settlement Over Child Data Collection on YouTube

 

Disney has agreed to pay millions of dollars in penalties to resolve allegations brought by the Federal Trade Commission (FTC) that it unlawfully collected personal data from young viewers on YouTube without securing parental consent. Federal law under the Children’s Online Privacy Protection Act (COPPA) requires parental approval before companies can gather data from children under the age of 13. 

The case, filed by the U.S. Department of Justice on behalf of the FTC, accused Disney Worldwide Services Inc. and Disney Entertainment Operations LLC of failing to comply with COPPA by not properly labeling Disney videos on YouTube as “Made for Kids.” This mislabeling allegedly allowed the company to collect children’s data for targeted advertising purposes. 

“This case highlights the FTC’s commitment to upholding COPPA, which ensures that parents, not corporations, control how their children’s personal information is used online,” said FTC Chair Andrew N. Ferguson in a statement. 

As part of the settlement, Disney will pay a $10 million civil penalty and implement stricter mechanisms to notify parents and obtain consent before collecting data from underage users. The company will also be required to establish a panel to review how its YouTube content is designated. According to the FTC, these measures are intended to reshape how Disney manages child-directed content on the platform and to encourage the adoption of age verification technologies. 

The complaint explained that Disney opted to designate its content at the channel level rather than individually marking each video as “Made for Kids” or “Not Made for Kids.” This approach allegedly enabled the collection of data from child-directed videos, which YouTube then used for targeted advertising. Disney reportedly received a share of the ad revenue and, in the process, exposed children to age-inappropriate features such as autoplay.  

The FTC noted that YouTube first introduced mandatory labeling requirements for creators, including Disney, in 2019 following an earlier settlement over COPPA violations. Despite these requirements, Disney allegedly continued mislabeling its content, undermining parental safeguards. 

“The order penalizes Disney’s abuse of parental trust and sets a framework for protecting children online through mandated video review and age assurance technology,” Ferguson added. 

The settlement arrives alongside an unrelated investigation launched earlier this year by the Federal Communications Commission (FCC) into alleged hiring practices at Disney and its subsidiary ABC. While separate, the two cases add to the regulatory pressure the entertainment giant is facing. 

The Disney case underscores growing scrutiny of how major media and technology companies handle children’s privacy online, particularly as regulators push for stronger safeguards in digital environments where young audiences are most active.

Jaguar Land Rover Cyberattack Breaches Data and Halts Global Production

Jaguar Land Rover (JLR), the UK’s largest automaker and a subsidiary of Tata Motors, has confirmed that the recent cyberattack on its systems has not only disrupted global operations but also resulted in a data breach. The company revealed during its ongoing investigation that sensitive information had been compromised, although it has not yet specified whether the data belonged to customers, suppliers, or employees. JLR stated that it will directly contact anyone impacted once the scope of the breach is confirmed. 

The incident has forced JLR to shut down its IT systems across the globe in an effort to contain the ransomware attack. Production has been halted at its Midlands and Merseyside factories in the UK, with workers told they cannot return until at least next week. Other plants outside the UK have also been affected, with some industry insiders warning that it could take weeks before operations return to normal. The disruption has spilled over to suppliers and retailers, some of whom are unable to access databases used for registering vehicles or sourcing spare parts. 

The automaker has reported the breach to all relevant authorities, including the UK’s Information Commissioner’s Office. A JLR spokesperson emphasized that third-party cybersecurity experts are assisting in forensic investigations and recovery efforts, while the company works “around the clock” to restore services safely. The spokesperson also apologized for the ongoing disruption and reiterated JLR’s commitment to transparency as the inquiry continues. 

Financial pressure is mounting as the costs of the prolonged shutdown escalate. Shares of Tata Motors dropped 0.9% in Mumbai following the disclosure, reflecting investor concerns about the impact on the company’s bottom line. The disruption comes at a challenging time for JLR, which is already dealing with falling profits and delays in the launch of new electric vehicle models. 

The attack appears to be part of a growing trend of aggressive cyber campaigns targeting global corporations. A group of English-speaking hackers, linked to previously documented attacks on retailers such as Marks & Spencer, has claimed responsibility for the JLR breach. Screenshots allegedly showing the company’s internal IT systems were posted on a Telegram channel associated with hacker groups including Scattered Spider, Lapsus$, and ShinyHunters. 

Cybersecurity analysts warn that the automotive industry is becoming a prime target due to its reliance on connected systems and critical supply chains. Attacks of this scale not only threaten operations but also risk exposing valuable intellectual property and sensitive personal data. As JLR races to restore its systems, the incident underscores the urgent need for stronger resilience measures in the sector.

AI Image Attacks: How Hidden Commands Threaten Chatbots and Data Security

 



As artificial intelligence becomes part of daily workflows, attackers are exploring new ways to exploit its weaknesses. Recent research has revealed a method where seemingly harmless images uploaded to AI systems can conceal hidden instructions, tricking chatbots into performing actions without the user’s awareness.


How hidden commands emerge

The risk lies in how AI platforms process images. To reduce computing costs, most systems shrink images before analysis, a step known as downscaling. During this resizing, certain pixel patterns, deliberately placed by an attacker, can form shapes or text that the model interprets as user input. While the original image looks ordinary to the human eye, the downscaled version quietly delivers instructions to the system.

This technique is not entirely new. Academic studies as early as 2020 suggested that scaling algorithms such as bicubic or bilinear resampling could be manipulated to reveal invisible content. What is new is the demonstration of this tactic against modern AI interfaces, proving that such attacks are practical rather than theoretical.


Why this matters

Multimodal systems, which handle both text and images, are increasingly connected to calendars, messaging apps, and workplace tools. A hidden prompt inside an uploaded image could, in theory, request access to private information or trigger actions without explicit permission. One test case showed that calendar data could be forwarded externally, illustrating the potential for identity theft or information leaks.

The real concern is scale. As organizations integrate AI assistants into daily operations, even one overlooked vulnerability could compromise sensitive communications or financial data. Because the manipulation happens inside the preprocessing stage, traditional defenses such as firewalls or antivirus tools are unlikely to detect it.


Building safer AI systems

Defending against this form of “prompt injection” requires layered strategies. For users, simple precautions include checking how an image looks after resizing and confirming any unusual system requests. For developers, stronger measures are necessary: restricting image dimensions, sanitizing inputs before models interpret them, requiring explicit confirmation for sensitive actions, and testing models against adversarial image samples.
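
The "check how an image looks after resizing" advice can be automated with a few lines of Pillow, as in the sketch below; the target dimensions are an assumption, since real systems use their own preprocessing sizes.

```python
# Simple sketch: downscale an upload with common resampling filters and save
# the previews for human review before the image reaches an AI pipeline.
from PIL import Image   # pip install pillow

TARGET = (224, 224)     # assumed model input size

def save_downscale_previews(path: str) -> None:
    img = Image.open(path).convert("RGB")
    for name, resample in [("bilinear", Image.Resampling.BILINEAR),
                           ("bicubic", Image.Resampling.BICUBIC),
                           ("nearest", Image.Resampling.NEAREST)]:
        preview = img.resize(TARGET, resample=resample)
        preview.save(f"preview_{name}.png")
        # Inspect these files: text or shapes that appear only after
        # downscaling are a strong sign of a scaling-dependent payload.

if __name__ == "__main__":
    save_downscale_previews("upload.png")   # hypothetical uploaded file
```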

Researchers stress that piecemeal fixes will not be enough. Only systematic design changes such as enforcing secure defaults and monitoring for hidden instructions can meaningfully reduce the risks.

Images are no longer guaranteed to be safe when processed by AI systems. As attackers learn to hide commands where only machines can read them, users and developers alike must treat every upload with caution. By prioritizing proactive defenses, the industry can limit these threats before they escalate into real-world breaches.