
Akira Ransomware Claims 23GB Data Theft in Alleged Apache OpenOffice Breach

 

The Akira ransomware group has reportedly claimed responsibility for breaching Apache OpenOffice, asserting that it stole 23 gigabytes of sensitive internal data from the open-source software foundation. 

The announcement was made on October 29 through Akira’s dark web leak site, where the group threatened to publish the stolen files if its ransom demands were not met. Known for its double-extortion tactics, Akira typically exfiltrates confidential data before encrypting victims’ systems to increase pressure for payment. 

Apache OpenOffice, a long-standing project under the Apache Software Foundation, provides free productivity tools that rival commercial platforms such as Microsoft Office. Its suite includes Writer, Calc, Impress, Draw, Base, and Math, and it supports more than 110 languages across major operating systems. The software is widely used by educational institutions, small businesses, and individuals around the world. 

Despite the severity of the claims, early reports indicate that the public download servers for OpenOffice remain unaffected, meaning users’ software installations are currently considered safe. 
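For readers who want extra assurance, the standard safeguard is to verify any downloaded installer against the checksums the project publishes on its official download pages. Below is a minimal Python sketch of that check; the file name and checksum value are placeholders to be replaced with the real ones, not details taken from this incident.

```python
import hashlib

# Placeholder values: substitute the installer you actually downloaded and
# the SHA-256 checksum published on the official openoffice.org pages.
INSTALLER = "Apache_OpenOffice_installer.exe"
PUBLISHED_SHA256 = "0" * 64

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large installers never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(INSTALLER) == PUBLISHED_SHA256.lower():
    print("Checksum matches the published value.")
else:
    print("Checksum mismatch: do not run this installer.")
```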

Details of the Alleged Breach 

According to Akira’s post, the data set includes personal details of employees such as home addresses, phone numbers, birth dates, driver’s licenses, Social Security numbers, and credit card information. The hackers also claim to have financial documents, internal communications, and detailed technical reports related to application bugs and development work. 

In their online statement, the group said, “We will upload 23 GB of corporate documents soon,” implying the data could soon be released publicly. As of November 1, the Apache Software Foundation has not confirmed or denied the breach. Representatives have declined to comment, and independent investigators have not yet verified the authenticity of the stolen data. 

Experts caution that, if genuine, the leak could expose staff to identity theft and phishing attacks. However, the open-source nature of the software itself likely limits risks to the product’s source code. 

Akira’s Growing Threat 

Akira emerged in March 2023 and operates as a ransomware-as-a-service network, offering its tools to affiliates in exchange for a share of the profits. The group has executed hundreds of attacks across North America, Europe, and Asia, reportedly extorting tens of millions of dollars from victims. Akira’s malware variants target both Windows and Linux systems, including VMware ESXi environments. 

In some cases, the hackers have even used compromised webcams for added intimidation. The group communicates in Russian on dark web forums and is known to avoid attacking computers configured with Russian-language keyboards. 

The alleged Apache OpenOffice incident comes amid a surge in ransomware attacks on open-source projects. Security experts are urging volunteer-based organizations to adopt stronger defenses, better data hygiene, and more robust incident response protocols. 

Until the claim is verified or disproved, users and contributors to Apache OpenOffice are advised to stay alert for suspicious activity and ensure that backups are secure and isolated from their main systems.

Ransomware Surge Poses Geopolitical and Economic Risks, Warns Joint Cybersecurity Report

 

A new joint report released this week by Northwave Cyber Security and Marsh, a division of Marsh McLennan, warns that ransomware attacks targeting small and medium-sized businesses have sharply increased, creating serious geopolitical, economic, and national security concerns. Northwave Cyber Security, a leading European cyber resilience firm, and Marsh, one of the world’s largest insurance brokers and risk advisers, analyzed thousands of cyber incidents across Europe and Israel to reveal how ransomware threats are turning into a structured global industry. 

The report finds that many ransomware operators, often linked to Russia, Iran, North Korea, and China, have intensified their attacks on small and mid-sized businesses that form the backbone of Western economies. Instead of focusing only on large corporations or government agencies, these groups are increasingly targeting vulnerable firms in sectors such as IT services, retail, logistics, and construction. 

Peter Teishev, head of the Special Risks Department at Marsh Israel, said the threat landscape has changed significantly. “As ransomware attacks become more sophisticated and decentralized, organizations must shift from responding after incidents to building proactive defense strategies,” he explained. 

He added that Israel has faced particularly high levels of cyberattacks over the past two years, making preparedness a national priority. The report estimates that global ransom payments reached nearly €700 million in 2024, with the average ransom demand standing at €172,000, or about 2 percent of the victim’s annual revenue. That ratio implies typical targets with annual revenues of roughly €8.6 million, squarely in the small and mid-sized bracket the report describes. 

In Europe, ransomware incidents increased by 34 percent in the first half of 2025 compared with the same period in 2024. Northwave and Marsh attribute this rapid growth to the rise of Ransomware-as-a-Service (RaaS) models, which allow criminal groups to rent out their hacking tools to others, turning ransomware into a profitable business. 

When authorities disrupt such groups, they often split and rebrand, continuing their activities under new identities. 

Recent attacks in Israel highlight the geopolitical aspects of ransomware. The Israel National Cyber Directorate (INCD) recently warned of a wave of intrusions against IT service providers, likely linked to Iran. 

One major incident targeted Shamir Medical Center in Tzrifin, where hackers leaked sensitive patient emails. Although an Eastern European ransomware group initially claimed responsibility, Israeli investigators later traced the attack to Iranian actors. 

Cyber experts say this collaboration between state-sponsored hackers and criminal groups shows how ransomware is now used as a tool of hybrid warfare to disrupt healthcare, energy, and transport systems for political purposes. 

The report also discusses divisions among hacker networks following Russia’s invasion of Ukraine. Some ransomware groups sided with Moscow and joined state-backed operations against NATO and EU countries. Others opposed this alignment, which led to the breakup of the infamous Conti Group. 

The exposure of more than 60,000 internal chat logs in what became known as ContiLeaks revealed the internal workings of the ransomware industry and forced several groups to reorganize under new names. Even with these internal divisions, ransomware operations have become more competitive and unpredictable. 

According to Marsh and Northwave, this has made it harder to anticipate their next moves. At the same time, cyber insurance prices fell globally by about 12 percent in the last quarter, making protection more accessible for many organizations. 

The report concludes that ransomware is no longer only a criminal enterprise but also an instrument of global power politics that can undermine economic stability and national security. As Teishev summarized, “The threat is growing, but so is the ability to prepare. The next phase of cybersecurity will focus not on recovery but on resilience.”

Privacy Laws Struggle to Keep Up with Meta’s ‘Luxury Surveillance’ Glasses


Meta’s newest smart glasses have reignited concerns about privacy, as many believe the company is inching toward a world where constant surveillance becomes ordinary. 

Introduced at Meta’s recent Connect event, the glasses reflect the kind of future that science fiction has long warned about, where everyone can record anyone at any moment and privacy nearly disappears. This is not the first time the tech industry has tried to make wearable cameras mainstream. 

More than ten years ago, Google launched Google Glass, which quickly became a public failure. People mocked its users as “Glassholes,” criticizing how easily the device could invade personal space. The backlash revealed that society was not ready for technology that quietly records others without their consent. 

Meta appears to have taken a different approach. By partnering with Ray-Ban, the company has created glasses that look fashionable and ordinary. Small cameras are placed near the nose bridge or along the outer rims, and a faint LED light is the only sign that recording is taking place. 

The glasses include a built-in display, voice-controlled artificial intelligence, and a wristband that lets the wearer start filming or livestreaming with a simple gesture. All recorded footage is instantly uploaded to Meta’s servers. 

Even with these improvements in design, the legal and ethical issues remain. Current privacy regulations are too outdated to deal with the challenges that come with such advanced wearable devices. 

Experts believe that social pressure and public disapproval may still be stronger than any law in discouraging misuse. As Meta promotes its vision of smart eyewear, critics warn that what is really being made normal is a culture of surveillance. 

The sleek design and luxury branding may make the technology more appealing, but the real risk lies in how easily people may accept being watched everywhere they go.

Cybercriminals Target Fans Ahead of 2026 FIFA World Cup, Norton Warns

 

With the 2026 FIFA World Cup still months away, cybersecurity experts are already warning fans to stay alert as fraudsters begin exploiting the global excitement surrounding the tournament. According to cybersecurity firm Norton, a wave of early scams is emerging, aimed at deceiving soccer enthusiasts and stealing their money and personal data. 

The tournament, set to take place across the United States, Canada, and Mexico next summer, presents a lucrative opportunity for cybercriminals. 

“Every major event attracts cybercriminals. They exploit the distraction and excitement of fans to make them more vulnerable,” said Iskander Sanchez-Rola, Director of AI and Innovation at Norton. 

Experts say online threats range from counterfeit ticket offers and phishing campaigns to fake sweepstakes and manipulated search results. Fraudsters are reportedly creating fake websites that mimic official World Cup pages to distribute malware or collect sensitive information. 

Others are setting up bogus social media accounts promoting exclusive ticket deals or giveaways to lure victims. 

Norton’s analysis highlights several prevalent scam types: 

Manipulated Search Results: Fake ticketing and merchandise sites appearing high in search results to spread malware. 

Fake Sweepstakes and Promotions: Fraudulent offers designed to capture personal data under the guise of contests. 

Counterfeit Tickets: Illegitimate sales on social media or private channels that leave fans without valid entry after payment. 

Phishing Emails: Messages imitating FIFA or partner brands to trick users into downloading malicious files. 

Travel Booking Scams: Sham websites offering discounted accommodations that disappear after receiving payments. 

Security professionals urge fans to exercise caution. Norton advises checking URLs carefully for misspellings or strange domain names, purchasing tickets only through verified platforms, and avoiding money transfers to private accounts. 
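As a rough illustration of what “checking URLs carefully” can look like in practice, the sketch below compares a link’s registered domain against a small allowlist and flags near-misses using edit distance. The allowlist, threshold, and domain parsing are simplifying assumptions for the example, not Norton’s actual detection method.

```python
from urllib.parse import urlparse

# Illustrative allowlist; verify the genuine official domains independently.
OFFICIAL_DOMAINS = {"fifa.com", "norton.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def check_url(url: str) -> str:
    host = urlparse(url).hostname or ""
    # Crude registered-domain extraction (ignores suffixes like co.uk).
    domain = ".".join(host.split(".")[-2:])
    if domain in OFFICIAL_DOMAINS:
        return f"{domain}: matches a known official domain"
    for official in OFFICIAL_DOMAINS:
        if edit_distance(domain, official) <= 2:
            return f"{domain}: suspicious near-miss of {official}"
    return f"{domain}: unknown domain, verify before entering any data"

print(check_url("https://tickets.f1fa.com/worldcup2026"))  # flags the near-miss
```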

Users are also encouraged to enable two-factor authentication and use password managers for added protection. Authorities warn that such scams will likely escalate as the tournament nears. Fans are urged to remain vigilant, verify every offer, and immediately report any suspected fraud to official channels or local law enforcement.

$1 Million WhatsApp Hack That Never Happened: Inside Pwn2Own’s Biggest Mystery

 

The world of ethical hacking saw an unexpected turn at the Pwn2Own Ireland 2025 competition, where an eagerly anticipated attempt to exploit WhatsApp Messenger for a record $1 million prize was withdrawn at the last moment. Pwn2Own rewards researchers who responsibly discover and disclose zero-day vulnerabilities, and this year’s final day promised a high-stakes demonstration. 

The researcher known as Eugene, representing Team Z3, had been expected to reveal a zero-click remote code execution exploit for WhatsApp. Such an exploit would have marked a major security finding and carried the largest single reward ever offered by the contest. Instead, organizers confirmed that Team Z3 pulled the entry, saying the research was not ready for public demonstration. 

Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative (ZDI), which runs the Pwn2Own events, said Team Z3 withdrew because they did not feel their work was ready. Childs added that Meta remains interested in receiving any valid findings, and that ZDI analysts will perform an initial assessment before passing material to Meta engineers for triage. 

The withdrawal sparked speculation across security forums and social media about whether a viable exploit had existed at all. Meta offered a measured response, telling press outlets that it was disappointed Team Z3 did not present a viable exploit, but that it was in contact with ZDI and the researchers to understand the submitted research and to triage the lower-risk issues it had received. 

The company reiterated that it welcomes valid reports through its bug bounty program and values collaboration with the security community. When approached, Eugene told SecurityWeek that the matter would remain private between Meta, ZDI, and the researcher, declining further comment. No public demonstration took place, and the million-dollar prize remained unclaimed. 

The episode highlights the pressures researchers face at high profile competitions, the importance of coordinated disclosure, and the fine line between proving a vulnerability and ensuring it can be safely handled. For vendors, competitions like Pwn2Own continue to be a vital source of intelligence about real world security risks, even when the most dramatic moments fail to materialize.

India Moves to Mandate Labels on AI-Generated Content Across Social Media

India’s Ministry of Electronics and Information Technology has proposed new regulations that would make it compulsory for all social media platforms to clearly label artificial intelligence (AI)-generated or “synthetic” content. 

Under the draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, users would be required to self-declare whether their uploaded posts contain AI-generated material. 

If users fail to disclose this, platforms themselves will need to proactively detect and tag such content. The labels must occupy at least 10% of the content’s visible area and would apply to all media formats, including text, video, audio, and images, not just photorealistic deepfakes.
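As a toy illustration of that 10% threshold (the draft’s exact measurement rules are not spelled out here, so the parameters below are hypothetical), a platform-side check could reduce to simple area arithmetic:

```python
# Hypothetical compliance check for the draft rule's 10%-of-visible-area
# label requirement; all parameters are illustrative assumptions.
def label_meets_threshold(frame_w: int, frame_h: int,
                          label_w: int, label_h: int,
                          threshold: float = 0.10) -> bool:
    return (label_w * label_h) / (frame_w * frame_h) >= threshold

# A 640x360 label on a 1920x1080 frame covers about 11.1% of the area.
print(label_meets_threshold(1920, 1080, 640, 360))  # True
```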

“Deepfakes are harming society by misusing people’s likeness and spreading misinformation,” said IT Minister Ashwini Vaishnaw, stressing the need to help users distinguish between “synthetic” and “real” content online. 

Officials said the draft rules are intended to restore trust in digital information by ensuring that manipulated or computer-generated content is prominently tagged or embedded with unique metadata identifiers. 

The proposed amendment also defines synthetically generated information as content that is “artificially or algorithmically created, generated, modified, or altered using a computer resource in a way that it appears authentic or true.” 

This marks a policy shift from the government’s earlier position, which had maintained that existing laws against impersonation and misinformation were adequate. The latest proposal reflects growing public and parliamentary concern over the social and political impact of deepfakes and manipulated media. 

The Ministry has invited public and industry feedback on the draft amendment until November 6, 2025, with officials noting that major social platforms have acknowledged they already possess the technical tools to comply with such requirements.

Agentic AI Demands Stronger Digital Trust Systems

 

As agentic AI becomes more common across industries, companies face a new cybersecurity challenge: how to verify and secure systems that operate independently, make decisions on their own, and appear or disappear without human involvement. 

Consider a financial firm where an AI agent activates early in the morning to analyse trading data, detect unusual patterns, and prepare reports before the markets open. Within minutes, it connects to several databases, completes its task, and shuts down automatically. This type of autonomous activity is growing rapidly, but it raises serious concerns about identity and trust. 

“Many organisations are deploying agentic AI without fully thinking about how to manage the certificates that confirm these systems’ identities,” says Chris Hickman, Chief Security Officer at Keyfactor. 

“The scale and speed at which agentic AI functions are far beyond what most companies have ever managed.” 

AI agents are unlike human users who log in with passwords or devices tied to hardware. They are temporary and adaptable, able to start, perform complex jobs, and disappear without manual authentication. 

This fluid nature makes it difficult to manage digital certificates, which are essential for maintaining trusted communication between systems. 

Greg Wetmore, Vice President of Product Development at Entrust, explains that AI agents act like both humans and machines. 

“When an agent logs into a system or updates data, it behaves like a human user. But when it interacts with APIs or cloud platforms, it looks more like a software component,” he says. 

This dual behaviour requires a flexible security model. AI agents need stable certificates that prove their identity and temporary credentials that control what they are allowed to do. 

These permissions must be revocable in real time if the system behaves unexpectedly. The challenge becomes even greater when AI agents begin interacting with each other. Without proper cryptographic controls, one system could impersonate another. 

“Once agents start sharing information, certificate management becomes absolutely essential,” Hickman adds. 
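A minimal sketch of the dual-credential idea described above, assuming an HMAC-signed short-lived token and an in-memory revocation list that stand in for real PKI certificates and a shared revocation store:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SIGNING_KEY = secrets.token_bytes(32)  # stands in for a CA- or KMS-backed key
REVOKED = set()                        # real systems need a shared revocation store

def issue_credential(agent_id: str, scopes: list, ttl_s: int = 300):
    """Mint a short-lived, HMAC-signed credential for an agent identity."""
    claims = {"sub": agent_id, "scopes": scopes,
              "exp": time.time() + ttl_s, "jti": secrets.token_hex(8)}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}", claims["jti"]

def verify(token: str):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or corrupted token
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time() or claims["jti"] in REVOKED:
        return None  # expired, or revoked in real time
    return claims

token, jti = issue_credential("trading-report-agent", ["read:market-data"])
print(verify(token))   # valid: the claims come back
REVOKED.add(jti)       # revoke immediately on unexpected behaviour
print(verify(token))   # None: the credential no longer works
```

In production, the identity half would be an X.509 certificate issued by the organisation’s PKI, with the short-lived authorisation layered on top, matching the split Wetmore describes.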

Complicating matters further, three major changes are hitting cryptography at once. Certificate lifespans are being shortened to 47 days, post-quantum algorithms are nearing adoption, and organisations must now manage a far larger number of certificates due to AI automation. 

“We’re seeing huge changes in cryptography after decades of stability,” Hickman notes. “It’s a lot to handle for many teams.” 
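Some back-of-envelope arithmetic shows why the 47-day change alone is “a lot to handle”. Assuming renewal at two-thirds of a certificate’s lifetime (a common operational rule of thumb, not a figure from the report), an estate of 10,000 certificates goes from a few dozen renewals a day to several hundred:

```python
# Back-of-envelope renewal load as certificate lifetimes shrink.
# The two-thirds renewal point is an assumed operational rule of thumb.
def renewals_per_day(cert_count: int, lifetime_days: int,
                     renew_fraction: float = 2 / 3) -> float:
    return cert_count / (lifetime_days * renew_fraction)

for lifetime in (398, 90, 47):
    rate = renewals_per_day(10_000, lifetime)
    print(f"{lifetime:>3}-day certs: ~{rate:.0f} renewals/day")
# 398-day certs: ~38 renewals/day
#  90-day certs: ~167 renewals/day
#  47-day certs: ~319 renewals/day
```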

Keyfactor’s research reveals that almost half of all organisations have not begun preparing for post-quantum encryption, and many still lack a clearly defined role for managing cryptography. 

This lack of governance poses serious risks, especially when certificate management is handled by IT departments without deep security expertise. Still, experts believe the situation can be managed with existing tools. 

“Agentic AI fits well within established security models such as zero trust,” Wetmore explains. “The technology to issue strong identities, enforce policies, and limit access already exists.” 

According to Sebastian Weir, AI Practice Leader at IBM UK and Ireland, many companies are now focusing on building security into AI projects from the start. 

“While AI development can be up to four times faster, the first version of code often contains many more vulnerabilities. Organisations are learning to consider security early instead of adding it later,” he says.

Financial institutions are among those leading the shift, building identity systems that blend the stability of long-term certificates with the flexibility of short-term authorisations. 

Hickman points out that Public Key Infrastructure (PKI) already supports similar scale in IoT environments, managing billions of certificates worldwide. 

He adds, “PKI has always been about scale. The same principles can support agentic AI if implemented properly.” 

The real focus now, according to experts, should be on governance and orchestration. 

“Scalability depends on creating consistent and controllable deployment patterns. Orchestration frameworks and governance layers ensure transparency and auditability,” says Weir. 

Poorly managed AI agents can cause significant damage. Some have been known to delete vital data or produce false financial information due to misconfiguration.

This makes it critical for companies to monitor agent behaviour closely and apply zero-trust principles where every interaction is verified. 

Securing agentic AI does not require reinventing cybersecurity. It requires applying proven methods to a new, fast-moving environment. 

“We already know that certificates and PKI work. An AI agent can have one certificate for identity and another for authorisation. The key is in how you manage them,” Hickman concludes. 

As businesses accelerate their use of AI, the winners will be those that design trust into their systems from the beginning. By investing in certificate lifecycle management and clear governance, they can ensure that every AI agent operates safely and transparently. Those who ignore this step risk letting their systems act autonomously in the dark, without the trust and control that modern enterprises demand.

Why Deleting Cookies Doesn’t Protect Your Privacy

Most internet users know that cookies are used to monitor their browsing activity, but few realize that deleting them does not necessarily protect their privacy. A newer and more advanced method known as browser fingerprinting is now being used to identify and track people online. 

Browser fingerprinting works differently from cookies. Instead of saving files or scripts on your device, it quietly gathers detailed information from your browser and computer settings. This includes your operating system, installed fonts, screen size, browser version, plug-ins, and other configuration details. Together, these elements create a unique digital signature, often as distinct as a real fingerprint. 
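To make the mechanism concrete, the sketch below shows the core trick in miniature: deterministically serialize a handful of browser attributes and hash them into a stable identifier. The attribute values are made up, and real fingerprinting scripts collect far more signals, but the principle is the same.

```python
import hashlib
import json

# Made-up attribute values; real scripts also probe canvas rendering,
# WebGL, audio processing, hardware concurrency, and more.
attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080x24",
    "timezone": "Europe/Berlin",
    "fonts": ["Arial", "Calibri", "Segoe UI"],
    "plugins": ["PDF Viewer"],
}

# Serialize deterministically, then hash: the same configuration always
# yields the same ID, and nothing is stored on the user's device,
# so there is nothing to delete or clear.
canonical = json.dumps(attributes, sort_keys=True).encode()
print(hashlib.sha256(canonical).hexdigest()[:16])  # stable identifier
```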

Each time you open a website, your browser automatically sends information so that the page can load correctly. Over time, advertisers and data brokers have learned to use this information to monitor your online movements. Because this process does not rely on files stored on your computer, it cannot be deleted or cleared, making it much harder to detect or block. 

Research from the Electronic Frontier Foundation (EFF) through its Cover Your Tracks project shows that most users have unique fingerprints among hundreds of thousands of samples. 

Similarly, researchers at Friedrich-Alexander University in Germany have been studying this technique since 2016 and found that many browsers retain the same fingerprint for long periods, allowing for continuous tracking. 

Even modern browsers such as Chrome and Edge reveal significant details about your system through the User-Agent string. This data, when combined with other technical information, allows websites to recognize your device even after you clear cookies or use private browsing. 

To reduce exposure, experts recommend using privacy-focused browsers such as Brave, which offers built-in fingerprinting protection through its Shields feature. It blocks trackers, cookies, and scripts while allowing users to control what information is shared. 

A VPN can also help by hiding your IP address, but it does not completely prevent fingerprinting. In short, clearing cookies or using Incognito mode provides limited protection. 

True online privacy requires tools and browsers specifically designed to reduce digital tracking. As browser fingerprinting becomes more common, understanding how it works and how to limit it is essential for anyone concerned about online privacy.