
How a ChatGPT prompt can allow cybercriminals to steal your Google Drive data


Chatbots and other AI tools have made life easier for threat actors. A recent incident highlighted how ChatGPT can be exploited to obtain API keys and other sensitive data from cloud platforms.

Prompt injection attack leads to cloud access

Experts have discovered a new prompt injection attack that can turn ChatGPT into a hacker's best friend for data theft. Known as AgentFlayer, the exploit uses a single document to hide "secret" prompt instructions that target OpenAI's chatbot. An attacker simply shares what appears to be a harmless document with the victim through Google Drive; no clicks are required.

Zero-click threat: AgentFlayer

AgentFlayer is a "zero-click" threat because it abuses a vulnerability in Connectors, a ChatGPT feature that links the assistant to other applications, websites, and services. OpenAI says Connectors supports some of the world's most widely used platforms, including cloud storage services such as Microsoft OneDrive and Google Drive.

Experts used Google Drive to demonstrate the threats posed by chatbots and hidden prompts.

Google Doc used to inject the prompt

The malicious document contains a hidden 300-word malicious prompt. The text is set in size-one font and colored white, hiding it from human readers while remaining visible to the chatbot.

The prompt used to showcase AgentFlayer's attack instructs ChatGPT to search the victim's Google Drive for API keys and append them to a tailored URL pointing to an external server. Once the malicious document is shared, the attack is primed: the threat actor receives the exfiltrated API keys as soon as the target uses ChatGPT (with the Connectors feature enabled).
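
For defenders, the hiding trick itself is straightforward to check for. Below is a minimal sketch, assuming the shared file is available as a .docx export and that the third-party python-docx library is installed; it simply flags text runs set in a tiny or white font. This is an illustrative detection idea, not the researchers' tooling.

```python
# Sketch: flag text a human reader would likely never see (tiny or white font)
# in a .docx file. Requires the third-party "python-docx" package.
from docx import Document
from docx.shared import Pt, RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def find_hidden_runs(path: str) -> list[str]:
    suspicious = []
    for paragraph in Document(path).paragraphs:
        for run in paragraph.runs:
            tiny = run.font.size is not None and run.font.size <= Pt(2)
            white = getattr(run.font.color, "rgb", None) == WHITE
            if run.text.strip() and (tiny or white):
                suspicious.append(run.text)
    return suspicious

if __name__ == "__main__":
    for snippet in find_hidden_runs("shared_document.docx"):
        print("Possible hidden prompt text:", snippet[:80])
```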

Other cloud platforms at risk too

AgentFlayer is not limited to Google Drive. “As with any indirect prompt injection attack, we need a way into the LLM's context. And luckily for us, people upload untrusted documents into their ChatGPT all the time. This is usually done to summarize files or data, or leverage the LLM to ask specific questions about the document’s content instead of parsing through the entire thing by themselves,” said expert Tamir Ishay Sharbat from Zenity Labs.

“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately, these mitigations aren’t enough. Even safe-looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it,” Zenity Labs said in the report.

Akira ransomware turns off Windows Defender to install malware on Windows devices


Akira ransomware strikes again. This time, it has abused a legitimate Intel CPU tuning driver to shut down Microsoft Defender, evading the EDR and security tools active on target devices.

Windows Defender turned off for attacks

The exploited driver is called “rwdrv.sys” (used by the ThrottleStop utility), which the hackers register as a service to gain kernel-level access. That access is then likely used to deploy a second, malicious driver called “hlpdrv.sys,” which modifies Windows Defender settings to shut down its protections.

'Bring your own vulnerable driver' attack

Experts have termed the technique “Bring Your Own Vulnerable Driver” (BYOVD): hackers load genuine, signed drivers with known, exploitable bugs to gain privilege escalation, then use that access to deploy a hostile driver that turns off Microsoft Defender. According to the experts, the additional driver hlpdrv.sys is “similarly registered as a service. When executed, it modifies the DisableAntiSpyware settings of Windows Defender within \REGISTRY\MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\DisableAntiSpyware.” The malware achieves this by executing regedit.exe.
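
Defenders can look at the same spots the attackers touch. The sketch below is a minimal, Windows-only check using Python's standard winreg module; it reads the Defender policy value the malware flips and looks for services named after the two drivers. The service names are an assumption drawn from the driver names above, so real hunting should rely on GuidePoint's full IoC list.

```python
# Minimal Windows-only hunting sketch: check the Defender policy value Akira's
# tooling modifies and look for services matching the driver names above.
# Illustrative only; rely on GuidePoint's published IoCs for real hunting.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows Defender"
SUSPECT_SERVICES = ["rwdrv", "hlpdrv"]  # assumed service names for rwdrv.sys / hlpdrv.sys

def defender_disabled_by_policy() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "DisableAntiSpyware")
            return value == 1
    except FileNotFoundError:
        return False  # key or value not present

def suspicious_services() -> list[str]:
    found = []
    for name in SUSPECT_SERVICES:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                rf"SYSTEM\CurrentControlSet\Services\{name}"):
                found.append(name)
        except FileNotFoundError:
            pass
    return found

if __name__ == "__main__":
    print("DisableAntiSpyware policy set:", defender_disabled_by_policy())
    print("Suspicious driver services present:", suspicious_services())
```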

Discovery of the Akira ransomware attack

The technique was observed by Guidepoint Security, which noticed repeated exploitation of the rwdrv.sys driver in Akira ransomware attacks. The experts flagged this tactic due to its ubiquity in the latest Akira ransomware incidents. “This high-fidelity indicator can be used for proactive detection and retroactive threat hunting,” the report said. 

To assist security experts in stopping these attacks, Guidepoint Security has offered a YARA rule for hlpdrv.sys and complete indicators of compromise (IoCs) for the two drivers, as well as their file paths and service names.

SonicWall VPN attack

Akira ransomware was also recently associated with attacks on SonicWall VPNs, possibly involving an unknown bug. Guidepoint Security said it could neither confirm nor rule out the abuse of a zero-day flaw in SonicWall VPNs by the Akira ransomware gang. Addressing the reports, SonicWall has advised customers to turn off SSLVPN, use two-factor authentication (2FA), remove inactive accounts, and enable Botnet/Geo-IP protection.

The DFIR Report has also released a study of Akira ransomware incidents, revealing the use of the Bumblebee malware loader deployed through trojanized MSI installers of IT software tools.

Nvidia Pushes Back Against Claims of Secret Backdoors in Its Chips



Nvidia has strongly denied accusations from China that its computer chips include secret ways to track users or shut down devices remotely. The company also warned that proposals to add such features, known as backdoors or kill switches, would create major security risks.

The dispute began when the Cyberspace Administration of China said it met with Nvidia over what it called “serious security issues” in the company’s products. Chinese officials claimed US experts had revealed that Nvidia’s H20 chip, made for the Chinese market under US export rules, could be tracked and remotely disabled.

Nvidia responded in a blog post from its Chief Security Officer, David Reber Jr., stating: “There are no back doors in NVIDIA chips. No kill switches. No spyware. That’s not how trustworthy systems are built and never will be.” The company has consistently denied that such controls exist.


Concerns Over Proposed US Law

While dismissing China’s claims, Nvidia also appeared to be addressing US lawmakers. A proposed “Chip Security Act” in the United States would require exported chips to have location verification and possibly a way to stop unauthorized use. Critics argue this could open the door to government-controlled kill switches, something Nvidia says is dangerous.

Senator Tom Cotton’s office says the bill is meant to keep advanced American chips out of the hands of “adversaries like Communist China.” The White House’s AI Action Plan also suggests exploring location tracking for high-end computing hardware.


Why Nvidia Says Kill Switches Are a Bad Idea

Reber argued that adding kill switches or hidden access points would be a gift to hackers and foreign threats, creating weaknesses in global technology infrastructure. He compared it to buying a car where the dealer could apply the parking brake remotely without your consent.

“There is no such thing as a ‘good’ secret backdoor,” he said. “They only create dangerous vulnerabilities.” Instead, Nvidia says security should rely on rigorous testing, independent verification, and compliance with global cybersecurity standards.

Reber pointed to the 1990s “Clipper Chip” project, when the US government tried to create a form of encryption with a built-in backdoor for law enforcement. Researchers quickly found flaws, proving it was unsafe. That project was abandoned, and many experts now see it as a warning against similar ideas.

According to Reber, Nvidia’s chips are built with layered security to avoid any single point of failure. Adding a kill switch, he says, would break that design and harm both innovation and trust in US technology.

Cloudflare Accuses AI Startup Perplexity of Bypassing Web Blocking Measures

 





Cloudflare has accused artificial intelligence company Perplexity of using hidden tactics to bypass restrictions designed to stop automated bots from collecting website data.

In a statement published Monday, Cloudflare said it had received multiple complaints from its customers claiming that Perplexity was still able to view and collect information from their sites, even though they had taken steps to block its activity. These blocks were implemented through a robots.txt file, a common tool that tells search engine bots which parts of a website they can or cannot access.
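
As a concrete illustration, the short sketch below uses Python's standard urllib.robotparser to show how robots.txt directives are meant to be honoured. The "PerplexityBot" user-agent string and the example URLs are placeholders for whichever crawler and site a publisher wants to restrict.

```python
# Sketch of how a well-behaved crawler interprets robots.txt.
# User-agent names and URLs are illustrative placeholders.
from urllib.robotparser import RobotFileParser

EXAMPLE_ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())

# A compliant bot checks this before fetching; a "stealth" crawler simply ignores it.
print(parser.can_fetch("PerplexityBot", "https://example.com/articles/1"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/articles/1"))   # True
```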

According to Cloudflare’s engineers, testing confirmed that Perplexity’s official crawler, the automated system responsible for scanning and indexing web content, was being blocked as expected. However, the company claims Perplexity was also using other, less obvious methods to gain access to pages where it was not permitted.

As a result, Cloudflare said it has removed Perplexity from its list of verified bots and updated its own security rules to detect and block what it called “stealth crawling.” The company stressed that trustworthy crawlers should operate transparently, follow site owner instructions, and clearly state their purpose.

This dispute comes shortly after Cloudflare introduced new tools allowing website operators to either block AI crawlers completely or charge them for access. The move is part of a broader debate over how AI firms gather the large amounts of online data needed to train their systems.

When contacted by media outlets, Perplexity did not respond immediately. Later, company spokesperson Jesse Dwyer told TechCrunch that Cloudflare’s claims were exaggerated, describing the blog post as a “sales pitch.” Dwyer also argued that Cloudflare’s screenshots showed no actual data collection, and that one of the bots mentioned “isn’t even ours.”

Perplexity went further in its own blog post, criticizing Cloudflare’s actions as “embarrassing” and “disqualifying.”

The AI company has faced similar accusations before. Earlier this year, the BBC threatened legal action against Perplexity over claims it had copied its content without permission. Perplexity is one of several AI companies caught up in disputes over online data scraping, though some media organizations have instead chosen to sign licensing agreements with AI firms, including Perplexity.

As the tension between AI data gathering and online privacy grows, this case underscores the increasing push from technology infrastructure providers like Cloudflare to give site owners more control over how, and whether, AI systems can collect their content.

Why Companies Keep Ransomware Payments Secret


Companies hiding ransomware payments

Ransomware attacks are ugly. For every ransomware attack news story we see in our feed, a different reality hides behind it. Victims secretly pay their attackers. The shadow economy feeds on corporate guilt and regulatory hysteria.

Companies are hiding the true number of ransomware incidents. For each attack that makes headlines, five more companies quietly sweep the incident under the carpet and wire cryptocurrency payments to attackers in the hope of avoiding publicity. Call it corporate cowardice, but it emboldens ransomware criminals: the practice costs victims an estimated $57 billion annually, and the fallout eventually reaches the devices we all use.

Paying attackers fuels future attacks

According to the FBI, it “does not support paying a ransom in response to a ransomware attack. Paying a ransom doesn’t guarantee you or your organization will get any data back. It also encourages perpetrators to target more victims and offers an incentive for others to get involved in this type of illegal activity.”

The patches in our smartphones exist because companies suffer attacks. The endpoint protection on our laptops evolved from enterprise systems compromised by ransomware groups, groups that used secret corporate ransoms to fund more advanced malware.

Corporate guilt is a reason for keeping payments secret

Some experts believe that for every reported ransomware attack, five more are kept hidden, with payments made secretly to avoid market panic and regulatory inquiry. The transactions travel through cryptocurrency networks, managed by negotiators who specialize in digital extortion.

Companies justify staying quiet as a way to avoid regulatory scrutiny and falling stock prices while resolving the issue privately. The average ransom demand is around $5.2 million, while actual payments hover around $1 million, a relative discount that may well fund future ransomware attacks.

According to Gadget Review, “This secrecy creates a feedback loop more vicious than algorithmic social media engagement. Ransomware groups reinvest payments into advanced encryption, better evasion techniques, and expanded target lists that inevitably include the consumer technology ecosystem you depend on daily.”

It adds that “even as payment rates drop to historic lows—just 25% of victims now pay—the total damage keeps climbing. Companies face average costs exceeding $5.5 million per attack, combining ransom payments, recovery expenses, and reputation management.”

DeepSeek Under Investigation Leading to App Store Withdrawals

 


DeepSeek, a chatbot application developed by a Chinese AI startup, has been a formidable force in the artificial intelligence arena since it emerged in January 2025, launching at the top of the app store charts and reshaping conversations in the technology and investment industries. After initially being hailed as a potential "ChatGPT killer" by industry observers, the platform has been the subject of intense scrutiny since its meteoric rise.

By August 2025, the DeepSeek platform sits at the centre of app store removals, cross-border security investigations, and measured enterprise adoption. In other words, it stands at the intersection of technological advances, infrastructure challenges, and geopolitical issues that may shape the next phase of artificial intelligence's evolution in the years ahead.

A significant regulatory development has occurred in South Korea, where the Personal Information Protection Commission confirmed that DeepSeek temporarily suspended downloads of its chatbot application while working with local authorities to address privacy concerns about how it handles user data. On Saturday, the app was pulled from Apple's App Store and Google Play in South Korea, following an agreement that the company would strengthen its privacy protections before relaunching.

Officials have emphasised that, although existing mobile and personal computer users are not affected, the commission is urging caution; Nam Seok, director of its investigation division, has advised users to remove the app or refrain from sharing personal information until the issues have been addressed.

An investigation by Microsoft's security team has reportedly found that individuals linked to DeepSeek transferred substantial amounts of data using OpenAI's application programming interface (API), the core channel through which developers and enterprises integrate OpenAI technology into their products and services. As OpenAI's largest investor, Microsoft flagged the unusual activity, triggering an internal review.

DeepSeek's rise has been meteoric, and the Chinese artificial intelligence startup has emerged as an increasingly prominent competitor to established U.S. offerings such as ChatGPT and Claude. On Monday, as technology sector stocks plunged, its AI assistant surged past ChatGPT in U.S. App Store downloads.

The DeepSeek R1 chatbot has recently been removed from Apple's App Store and Google Play in South Korea amid mounting international scrutiny. This follows an admission by the Hangzhou-based company that it had not fully complied with laws regulating personal data privacy.

While DeepSeek's R1 chatbot is lauded for delivering advanced capabilities at a fraction of its Western competitors' cost, its data handling practices are being questioned sharply as well, particularly the storage of user information on servers in the People's Republic of China, which has drawn criticism from the US and others. The Personal Information Protection Commission of South Korea confirmed that the app had been removed from local app stores at 6 p.m. on Monday.

In a statement released on Saturday morning (0900 GMT), the commission said it had suspended the service due to violations of domestic data protection laws. Existing users can continue using the service, but the commission has urged the public not to provide personal information until the investigation is completed.

According to the PIPC, DeepSeek must make substantial changes to meet Korean privacy standards, a shortcoming the company has acknowledged. Data security professor Youm Heung-youl of Soonchunhyang University further noted that while the company maintains localised privacy policies for European and other markets, no equivalent policy exists for South Korean users.

In response to an inquiry by Italy's data protection authority, the company says it has taken steps to ensure the app handles appropriately the data it collects, the sources from which it obtains it, its intended use, the legal justifications for processing, and its storage in China.

While it is unclear to what extent DeepSeek initiated the removal or whether the app store operators took an active role, the development follows the company's announcement last month of its R1 reasoning model, an open-source alternative to ChatGPT positioned as far more cost-effective.

Government concerns over data privacy have been heightened by the model's rapid success, prompting similar inquiries in Ireland and Italy as well as a cautionary directive from the United States Navy stating that DeepSeek AI cannot be used because its origin and operation pose security and ethical risks. At the centre of the controversy is the handling and storage of user data.

It has been reported that all user information, including chat histories and other personal data, is transferred to China and stored on servers there. A more privacy-conscious version of DeepSeek's model can be run locally on a desktop computer, though this offline version is significantly slower than the cloud-connected version available on Apple and Android phones.

DeepSeek's data practices have drawn escalating regulatory attention across a wide range of jurisdictions, including the United States. According to the company's privacy policy, personal data, including user requests and uploaded files, is stored on servers located in China rather than in the U.S.

In his statement, Ulrich Kamp said DeepSeek has not provided credible assurances that data belonging to German users will be protected to the same extent as data belonging to European Union citizens. He also pointed out that Chinese authorities have extensive access rights to personal data held by domestic companies.

Kamp's office asked DeepSeek in May either to meet EU data transfer requirements or to voluntarily withdraw its app, but the company did not comply. The controversy follows DeepSeek's January debut, when it said it had created an AI model that rivalled those of American companies like OpenAI at a fraction of the cost.

In recent months, the app has been banned in Italy over concerns about transparency, and access has been restricted on government devices in the Netherlands, Belgium, and Spain, where the consumer rights organisation OCU has called for an official investigation. After reports from Reuters alleging DeepSeek's involvement in China's military and intelligence activities, U.S. lawmakers are preparing legislation that would prohibit federal agencies from using artificial intelligence models developed by Chinese companies.

The Italian data protection authority, the Guarantor for the Protection of Personal Data, has asked Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence to provide detailed information about their data collection and processing. The regulator has requested clarification on which personal data is collected, where it originates, the legal basis for processing, and whether it is stored on Chinese servers.

The regulator has also asked about the training methodologies used for DeepSeek's artificial intelligence models, including whether web scraping is involved and how both registered and unregistered users are informed of this data collection. DeepSeek has 20 days to respond.

As Forrester analysts have warned, the app has been widely adopted, downloaded millions of times, which means large amounts of potentially sensitive information are being uploaded and processed. Based on DeepSeek's own privacy policy, the company may collect user input, audio prompts, uploaded files, feedback, chat histories, and other content for training purposes, and may share these details with law enforcement officials or public authorities as needed.

Despite intensifying regulatory bans and investigations, DeepSeek's models remain freely accessible throughout the world, and developers continue to download, adapt, and deploy them, sometimes independently of the official app or Chinese infrastructure. In industry analysis, the technology is increasingly viewed not as an isolated threat but as part of a broader shift toward hardware-efficient, open-weight AI architectures, a trend also shaped by players such as Mistral, OpenHermes, and Elon Musk's Grok initiative.

Joining the open-weight reasoning movement, OpenAI has released two open-weight reasoning models, gpt-oss-120b and gpt-oss-20b, which organisations can deploy within their own infrastructure. Amid the rapid evolution of the artificial intelligence market, the question is no longer whether open-source AI can compete with existing incumbents; in fact, it already has.

It is much more pressing to decide who will define the governance frameworks that can earn public trust at a time when artificial intelligence, infrastructure control, and national sovereignty are converging at unprecedented rates. The ongoing scrutiny surrounding DeepSeek underscores how complicated it has become to govern advanced artificial intelligence in an interconnected, highly competitive global market.

What began as a disruptive technological breakthrough has evolved into a geopolitical and regulatory flashpoint, demonstrating how questions of privacy, security, and data sovereignty have become central to the AI race. For policymakers, the case underlines the need for coherent international frameworks that can address cross-border data governance and balance innovation with accountability.

For enterprises and individuals alike, it is a reminder that cutting-edge AI tools come with inherent risks that must be carefully weighed before adoption. The boundaries between local oversight and global accessibility will continue to blur as AI models become more open-weight, hardware-efficient, and widely deployable.

In this environment, trust will depend not only on technological capability but also on transparency, verifiable safeguards, and the willingness of developers to adhere to the ethical and legal standards of the markets they seek to serve.

Ransomware Attacks Threaten CEOs to Get Results


Ransomware gangs are getting desperate for results. Generally known for encrypting data and leaking it on the internet, they have now started threatening CEOs with physical violence.

CEOs physically threatened

Cybersecurity experts from Semperis say that over the past year, in 40% of ransomware attacks the CEO of the victim company was threatened with physical violence, a tactic particularly prevalent against US-based organizations, at 46%.

However, even paying the attackers is not enough. The research revealed that over 55% of businesses that paid a ransom had to do so multiple times, with around 29% of those firms paying three or more times, and 15% didn’t even receive decryption keys, while in a few cases, they received corrupted keys.

New ransomware tactics 

Threatening to file a regulatory complaint is also a common tactic, Semperis said. It was found in 47% of attacks, rising to 58% in the US.

In 2023, the notorious BlackCat ransomware gang reported one of its victims to the Securities and Exchange Commission (SEC) to pressure it into paying, exploiting the SEC's rule requiring publicly traded businesses to disclose material cybersecurity incidents within four days.

Ransomware on the rise

Ransomware attacks have threatened businesses and the cybersecurity industry for decades, constantly evolving and outsmarting security professionals. The attacks started with encryption, but companies learned to mitigate them by keeping offline backups of their important data.

Ransomware actors then turned to stealing data and blackmailing to leak it on the web if the ransom was not paid. Known as “double extortion,” the technique works really well. Some threat actors even dropped the encryption part totally and now focus on stealing files. But many companies still don’t cave in, forcing cybercriminals to go to extreme lengths. 

New tactics

In a few cases, the attackers combine encryption of the back-end with a DDoS attack on the front-end, stopping the business entirely. Semperis CEO Mickey Bresman said that while some “circumstances might leave the company in a non-choice situation, we should acknowledge that it's a down payment on the next attack.”

"Every dollar handed to ransomware gangs fuels their criminal economy, incentivizing them to strike again. The only real way to break the ransomware scourge is to invest in resilience, creating an option to not pay ransom," he commented.

Proton Launches New Authenticator App With Standalone Features



Proton has released Proton Authenticator, an independent, standalone two-factor authentication (2FA) app for macOS, Windows, Android, Linux, and iOS. Authenticator apps are offline tools that generate time-based one-time passwords (OTPs), typically refreshed every 30 seconds, which are used alongside passwords when signing in to online accounts, offering a second layer of verification.
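
Under the hood, authenticator apps of this kind implement the standard TOTP algorithm (RFC 6238): a shared secret plus the current time window yields a short code that both the app and the service can compute. The sketch below, using only Python's standard library, is a generic illustration of that algorithm, not Proton's implementation; the secret shown is a well-known demo value.

```python
# Generic RFC 6238 (TOTP) sketch using only the standard library.
# Illustrates what authenticator apps compute; not Proton's actual code.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // period           # current 30-second window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; real secrets come from a service's QR code or setup key.
    print(totp("JBSWY3DPEHPK3PXP"))
```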

A Swiss tech company, Proton is famous for its privacy-focused, end-to-end encrypted services such as Proton Mail and Proton VPN.

Adding an authenticator app to the company's product portfolio brings a privacy-focused tool to a field where competitors are often ad-supported, closed-source, and designed to lock customers into proprietary ecosystems.

Proton Authenticator, by contrast, has no ads, vendor lock-in, or trackers, and does not require a Proton account. According to the company, “Proton Authenticator is built with the same values that power everything Proton does: privacy, transparency, and user-first security.” It is now bringing these standards to the 2FA space, "offering a secure, easy-to-use, and encrypted alternative to apps like Google Authenticator that further lock users into Big Tech's surveillance ecosystems."

The application is open source, though it takes the Proton team around two weeks to publish the source code of each new release on GitHub. The app uses end-to-end encryption, supports secure cross-device sync, and makes it easy to move to other platforms via import and export features, something many apps, such as Microsoft Authenticator and Authy, do not allow for time-based OTP seeds.

The Proton Authenticator also provides automatic encrypted backups and app lock with PIN or biometrics, giving an extra security layer.

“Proton Authenticator will make it easier for everyone to log in to their online accounts securely, a vital step in making the internet a safer place,” read the product statement.

A Massive 800% Rise in Data Breach Incidents in First Half of 2025


Cybersecurity experts have warned of a significant increase in identity-based attacks, following the revelation that 1.8 billion credentials were stolen in the first half of 2025, representing an 800% increase compared to the previous six months.

Data breach attacks are rising rapidly

Flashpoint’s Global Threat Intelligence Index report is based on more than 3.6 petabytes of data studied by the experts. Hackers stole credentials from 5.8 million compromised devices, according to the report. The significant rise is problematic as stolen credentials can give hackers access to organizational data, even when the accounts are protected by multi-factor authentication (MFA).

The report also includes details that concern security teams.

About the bugs

Through June 2025, the firm found over 20,000 disclosed vulnerabilities, 12,200 of which have not been reported in the National Vulnerability Database (NVD), meaning security teams may never hear about them. Around 7,000 of these have public exploits available, exposing organizations to severe threats.

According to experts, “The digital attack surface continues to expand, and the volume of disclosed vulnerabilities is growing at a record pace – up by a staggering 246% since February 2025.” “This explosion, coupled with a 179% increase in publicly available exploit code, intensifies the pressure on security teams. It’s no longer feasible to triage and remediate every vulnerability.”

Surge in ransomware attacks

Both trends feed ransomware attacks, as initial access mostly comes through vulnerability exploitation or credential theft. Total reported breaches have increased by 179% since 2024, with manufacturing (22%), technology (18%), and retail (13%) hit the hardest. The report also recorded 3,104 data breaches in the first half of this year, linked to 9.5 billion compromised records.

2025 to be record year for data breaches

Flashpoint reports that “Over the past four months, data breaches surged by 235%, with unauthorized access accounting for nearly 78% of all reported incidents. Data breaches are both the genesis and culmination of threat actor campaigns, serving as a source of continuous fuel for cybercrime activity.” 

In June, the Identity Theft Resource Center (ITRC) warned that 2025 could become a record year for data breaches in the US.

AI-supported Cursor IDE Falls Victim to Prompt Injection Attacks


Experts have found a bug called CurXecute that is present in all versions of the AI-assisted code editor Cursor and can be exploited to achieve remote code execution (RCE) with developer privileges.

About the bug

The security bug is now tracked as CVE-2025-54135 and can be exploited by feeding the AI agent a malicious prompt that triggers attacker-controlled commands.

The Cursor integrated development environment (IDE) relies on AI agents to help developers code faster and more effectively, connecting them with external systems and resources through the Model Context Protocol (MCP).

According to the experts, a threat actor successfully abusing the CurXecute bug could trigger ransomware and data theft attacks.

Prompt-injection 

CurXecute shares similarities with the EchoLeak bug in Microsoft 365 Copilot, which hackers could use to exfiltrate sensitive data without any user interaction.

After finding and studying EchoLeak, the experts from the cybersecurity company Aim Security found that hackers can even exploit the local AI agent.

Cursor IDE supports the MCP open-standard framework, which increases an agent’s features by connecting it to external data tools and sources.

Agent exploitation

But the experts have warned that doing so can expose the agent, since it ingests external, untrusted data that can influence its control flow. A threat actor can take advantage of this by hijacking the agent's session and using its capabilities as if they were the user.

According to the experts, Cursor does not ask for permission before acting on new entries written to the ~/.cursor/mcp.json file. When the target opens a new conversation and tells the agent to summarize the messages, the shell payload executes on the device without user authorization.

“Cursor allows writing in-workspace files with no user approval. If the file is a dotfile, editing it requires approval, but creating one if it doesn't exist doesn't. Hence, if sensitive MCP files, such as the .cursor/mcp.json file, don't already exist in the workspace, an attacker can chain an indirect prompt injection vulnerability to hijack the context to write to the settings file and trigger RCE on the victim without user approval,” Cursor said in a report.
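
To make the risk concrete, the snippet below shows roughly what an MCP server entry in .cursor/mcp.json looks like; the exact schema is assumed from common MCP client configurations, and the server name and command here are made up and harmless. The point is that whatever appears in the command field is executed on the developer's machine, which is why letting an injected prompt write this file amounts to remote code execution.

```json
{
  "mcpServers": {
    "example-server": {
      "command": "echo",
      "args": ["whatever is configured here runs with the developer's privileges"]
    }
  }
}
```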

Ransomware Defence Begins with Fundamentals Not AI

 


The era of rapid technological advancement has made it clear that artificial intelligence is not only influencing cybersecurity; it is fundamentally redefining its boundaries and capabilities. The transformation was evident at the RSA Conference in San Francisco in 2025, where more than 40,000 cybersecurity professionals gathered to discuss the path forward for the industry.

The rapid integration of agentic AI into cyber operations was one of the most significant topics discussed, highlighting both the disruptive potential and the strategic complexities it introduces. As AI technologies continue to empower defenders and adversaries alike, organizations are taking a measured approach, recognising the immense potential of AI-driven solutions while remaining vigilant against increasingly sophisticated attacks.

While the rise of artificial intelligence (AI) and its use in criminal activity dominates headlines, the narrative is far from one-sided; it also reflects a broader industry shift toward balancing innovation with resilience in the face of rapidly evolving threats.

Cybercriminals are indeed using AI and large language models (LLMs) to make ransomware campaigns more sophisticated, crafting more convincing phishing emails, bypassing traditional security measures, and improving the precision with which victims are selected. These tools increase attackers' stealth and efficiency, and they have raised the stakes for organisational cybersecurity.

Although AI is seen as a weapon for adversaries, it is proving to be an essential ally in the defence against ransomware. Integrated into security systems, it helps organisations detect and respond to ransomware attacks more quickly and accurately.

AI also enhances incident containment and recovery, limiting potential damage. Coupled with real-time threat intelligence, it gives security teams the agility to adapt to evolving attack techniques and close the gap between offence and defence in an increasingly automated cyber environment.

In the wake of a series of high-profile ransomware attacks, most notably those targeting prominent brands like M&S, concerns have been raised that artificial intelligence is contributing to an unprecedented spike in cybercrime. While AI is undeniably changing the threat landscape by streamlining phishing campaigns and automating attack workflows, its impact on ransomware operations has often been exaggerated.

In practice, AI is not a revolutionary force so much as an accelerant for tactics cybercriminals have relied on for years. Most ransomware groups continue to depend on proven, straightforward methods that offer speed, scalability, and consistent financial returns. Phishing emails, credential theft, and insider exploitation remain the cornerstones of successful ransomware campaigns, delivering reliable results without any need for advanced artificial intelligence.

As security leaders are looking for effective ways to address these threats, they are focusing on getting a realistic perspective on how artificial intelligence is used within ransomware ecosystems. It has become increasingly evident that breach and attack simulation tools are critical assets for organisations as they enable them to identify vulnerabilities and close security gaps in advance of attackers exploiting them. 

This balanced approach emphasises bolstering foundational security controls while keeping pace with the incremental evolution of adversarial capabilities. Nevertheless, generative artificial intelligence continues to mature in profound and often paradoxical ways. On one hand, it empowers defenders by automating routine security operations, surfacing hidden patterns in complex data sets, and identifying vulnerabilities that might otherwise go undetected.

On the other hand, it gives cybercriminals powerful tools to craft more sophisticated, targeted, and scalable attacks, blurring the line between innovation and exploitation. With recent studies attributing over 80% of cyber incidents to human error, organisations have strong reason to harness artificial intelligence to strengthen their security posture.

AI is an excellent tool for cybersecurity leaders: it streamlines threat detection, reduces the burden of manual oversight, and enables real-time response. There is, however, a danger that the same technologies will be adapted by adversaries to enhance phishing tactics, automate malware deployment, and orchestrate advanced intrusion strategies. This dual use of artificial intelligence has raised widespread concern among executives.

According to a recent survey, 84% of CEOs have expressed concern about generative AI being the source of widespread or catastrophic cyberattacks. Consequently, organisations are making significant investments in AI-based cybersecurity, with projections showing a 43% increase in AI security budgets by 2025.

This surge reflects a growing recognition that, even though generative AI introduces new vulnerabilities, it also holds the key to strengthening cyber resilience. And as artificial intelligence increases the speed and sophistication of cyberattacks, adhering to foundational cybersecurity practices has never been more important.

While artificial intelligence has unquestionably enhanced the tactics available to cybercriminals, allowing more targeted phishing attempts, faster exploitation of vulnerabilities, and more evasive malware, many of the core techniques have not changed. The difference lies more in how attacks are executed than in what they do.

As such, traditional cybersecurity strategies, applied rigorously and consistently, remain critical bulwarks even against AI-enhanced threats. Among these foundational defences, widely deployed multi-factor authentication (MFA) provides a vital safeguard against credential theft, particularly in light of AI-generated phishing emails that mimic legitimate communication with astonishing accuracy.

Regular, securely stored data backups provide an effective fallback against ransomware, which is now capable of dynamically altering payloads to avoid detection. Keeping all systems and software up to date is just as important, as it prevents AI-enabled tools from exploiting known vulnerabilities.

A Zero Trust architecture is becoming increasingly relevant as AI-assisted attackers move faster and more stealthily than ever before. By assuming no implicit trust within the network and restricting lateral movement, this model greatly reduces the blast radius of any breach and the likelihood of an attack succeeding.

Email filtering systems also need a major upgrade, with AI-based tools better equipped to detect the subtle nuances of phishing campaigns that have successfully evaded legacy solutions. Security awareness training is becoming ever more important as well, since human error remains one of the leading causes of breaches; there is no better line of defence than employees trained to spot AI-crafted deception.

Artificial intelligence-based anomaly detection is also becoming increasingly important for spotting unusual behaviour that indicates a breach. Segmentation, strict access control policies, and real-time monitoring are complementary tools for limiting exposure and containing threats. Even as AI has added new complexities to the threat landscape, it has not rendered traditional defences obsolete.

Rather, these tried-and-true cybersecurity measures, augmented by intelligent automation and threat intelligence, remain the cornerstones of resilient cybersecurity. Defending against adversaries powered by artificial intelligence requires not just speed but also strategic foresight and the disciplined execution of proven strategies.

As AI-powered cyberattacks become an ever more prominent subject of discussion, organisations themselves are at risk from unchecked, ungoverned use of artificial intelligence tools, a risk that is often overlooked. While much attention has focused on how threat actors can weaponise artificial intelligence, the internal vulnerabilities that arise from the unsanctioned adoption of generative AI present a clear and present threat to the organisation.

In what is referred to as "Shadow AI," employees are using tools like ChatGPT without formal authorisation or oversight, which circumvents established security protocols and could potentially expose sensitive corporate data. According to a recent study, nearly 40% of IT professionals admit that they have used generative AI tools without proper authorisation. 

Besides compromising governance efforts, such practices obscure visibility into data processing and handling, complicate incident response, and increase the organisation's vulnerability to attack. Unregulated use of artificial intelligence, coupled with inadequate data governance and poorly configured AI services, results in a range of operational and security issues.

Organisations must mitigate the risks posed by internal AI tools by treating them as they would any other enterprise technology. That means establishing robust governance frameworks, ensuring transparency of data flows, conducting regular audits, providing cybersecurity training that addresses the dangers of shadow AI, and making sure leaders remain mindful of current threats to their organisations.

Although artificial intelligence generates headlines, the most successful attacks continue to rely on proven techniques: phishing, credential theft, and ransomware. An excessive focus on hypothetical AI-driven threats can distract attention from critical, foundational defences. In this context, complacency and misplaced priorities, not AI itself, are the greatest risks.

It remains true that disciplined cyber hygiene, attack simulation, and strong security fundamentals are the most effective long-term defences against ransomware. Artificial intelligence is not a single threat or a single solution for cybersecurity but a powerful force capable of both strengthening and destabilising digital defences in a rapidly evolving environment.

As organisations navigate this shifting landscape, clarity, discipline, and strategic depth are imperative. Artificial intelligence may dominate headlines and influence funding decisions, but it does not diminish the importance of basic cybersecurity practices.

What is needed is a recalibration of priorities. Security leaders must build resilience rather than chase the allure of emerging technologies alone, adopting a realistic, layered approach to security that embraces AI as a tool while never losing sight of what consistently works.

To achieve this goal, advanced automation, analytics, and tried-and-true defences must be integrated, governance around AI usage must be enforced, and access to data flows and user behaviour must remain tightly controlled. In addition, organisations need to realise that technological tools are only as powerful as the frameworks and people that support them. 

Threats are becoming increasingly automated, making it even more important to have human oversight. Training, informed leadership, and an environment that fosters a culture of accountability are not optional; they are imperative. In order for artificial intelligence to be effective, it must be part of a larger, more comprehensive security strategy that is based on visibility, transparency, and proactive risk management. 

As the battle against ransomware and AI-enhanced cyber threats continues, success will depend less on whose tools are the most sophisticated than on whose application of them is consistent, purposeful, and far-sighted. AI is not only a threat; it is an opportunity for those who master it, govern its use internally, and never let innovation overshadow the fundamentals that keep security sustainable. For today's defenders, the winning formula is strong fundamentals, smart integration, and unwavering vigilance.

Amazon’s Coding Tool Hacked — Experts Warn of Bigger Risks

 



A recent cyber incident involving Amazon's AI-powered coding assistant, Amazon Q, has raised serious concerns about the safety of developer tools and the risks of software supply chain attacks.

The issue came to light after a hacker managed to insert harmful code into the Visual Studio Code (VS Code) extension used by developers to access Amazon Q. This tampered version of the tool was distributed as an official update on July 17 — potentially reaching thousands of users before it was caught.

According to media reports, the attacker submitted a code change request to the public code repository on GitHub using an unverified account. Somehow, the attacker gained elevated access and was able to add commands that could instruct the AI assistant to delete files and cloud resources — essentially behaving like a system cleaner with dangerous privileges.

The hacker later told reporters that the goal wasn’t to cause damage but to make a point about weak security practices in AI tools. They described their action as a protest against what they called Amazon’s “AI security theatre.”


Amazon’s response and the fix

Amazon acted swiftly to address the breach. The company confirmed that the issue was tied to a known vulnerability in two open-source repositories, which have now been secured. The corrupted version, 1.84.0, has been replaced with version 1.85, which includes the necessary security fixes. Amazon stated that no customer data or systems were harmed.


Bigger questions about AI security

This incident highlights a growing problem: the security of AI-based developer tools. Experts warn that when AI systems like code assistants are compromised, they can be used to inject harmful code into software projects or expose users to unseen risks.

Cybersecurity professionals say the situation also exposes gaps in how open-source contributions are reviewed and approved. Without strict checks in place, bad actors can take advantage of weak points in the software release process.


What needs to change?

Security analysts are calling for stronger DevSecOps practices — a development approach that combines software engineering, cybersecurity, and operations. This includes:

• Verifying all updates through secure hash checks (see the sketch after this list),

• Monitoring tools for unusual behaviour,

• Limiting system access permissions, and

• Ensuring quick communication with users during incidents.
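
On the first point, a hedged illustration of what a hash check can look like in practice: the sketch below compares a downloaded extension package against a publisher-supplied SHA-256 digest before installation. The file name and expected digest are placeholders.

```python
# Sketch: verify a downloaded update against a published SHA-256 digest
# before installing it. File name and digest below are placeholders.
import hashlib, hmac

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path: str, expected_hex: str) -> bool:
    # Constant-time comparison of computed and published digests.
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())

if __name__ == "__main__":
    ok = verify_update("extension-update.vsix", "replace-with-published-sha256-digest")
    print("Safe to install:", ok)
```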

They also stress the need for AI-specific threat models, especially as AI agents begin to take on more powerful system-level tasks.

The breach is a wake-up call for companies using or building AI tools. As more businesses rely on intelligent systems to write, test, or deploy code, ensuring these tools are secure from the inside out is no longer optional; it is essential.

AI-Driven Phishing Threats Loom After Massive Data Breach at Major Betting Platforms

 

A significant data breach impacting as many as 800,000 users from two leading online betting platforms has heightened fears over sophisticated phishing risks and the growing role of artificial intelligence in exploiting compromised personal data.

The breach, confirmed by Flutter Entertainment, the parent company behind Paddy Power and Betfair, exposed users’ IP addresses, email addresses, and activity linked to their gambling profiles.

While no payment or password information was leaked, cybersecurity experts warn that the stolen details could still enable highly targeted attacks. Flutter, which also owns brands like Sky Bet and Tombola, referred to the event as a “data incident” that has been contained. The company told affected customers there is “nothing you need to do in response to this incident,” but still advised them to stay alert.

With an average of 4.2 million monthly users across the UK and Ireland, even partial exposure poses a serious risk.

Harley Morlet, chief marketing officer at Storm Guidance, emphasized: “With the advent of AI, I think it would actually be very easy to build out a large-scale automated attack. Basically, focusing on crafting messages that look appealing to those gamblers.”

Similarly, Tim Rawlins, director and senior adviser at the NCC Group, urged users to remain cautious: “You might re-enter your credit card number, you might re-enter your bank account details, those are the sort of things people need to be on the lookout for and be conscious of that sort of threat. If it's too good to be true, it probably is a fraudster who's coming after your money.”

Rawlins also noted that AI technology is making phishing emails increasingly convincing, particularly in spear-phishing campaigns where stolen data is leveraged to mimic genuine communications.

Experts caution that relying solely on free antivirus tools or standard Android antivirus apps offers limited protection. While these can block known malware, they are less effective against deceptive emails that trick users into voluntarily revealing sensitive information.

A stronger defense involves practicing layered security: maintaining skepticism, exercising caution, and following strict cyber hygiene habits to minimize exposure.

Stop! Don’t Let That AI App Spy on Your Inbox, Photos, and Calls

 



Artificial intelligence is now part of almost everything we use — from the apps on your phone to voice assistants and even touchscreen menus at restaurants. What once felt futuristic is quickly becoming everyday reality. But as AI gets more involved in our lives, it’s also starting to ask for more access to our private information, and that should raise concerns.

Many AI-powered tools today request broad permissions, sometimes more than they truly need to function. These requests often include access to your email, contacts, calendar, messages, or even files and photos stored on your device. While the goal may be to help you save time, the trade-off could be your privacy.

This situation is similar to how people once questioned why simple mobile apps, like flashlight or calculator apps, needed access to personal data such as location or contact lists. The reason? That information could be sold or used for profit. Now, some AI tools are taking the same route, asking for access to highly personal data to improve their systems or provide services.

One example is a new web browser powered by AI. It allows users to search, summarize emails, and manage calendars. But in exchange, it asks for a wide range of permissions like sending emails on your behalf, viewing your saved contacts, reading your calendar events, and sometimes even seeing employee directories at workplaces. While companies claim this data is stored locally and not misused, giving such broad access still carries serious risks.

Other AI apps promise to take notes during calls or schedule appointments. But to do this, they often request live access to your phone conversations, calendar, contacts, and browsing history. Some even go as far as reading photos on your device that haven’t been uploaded yet. That’s a lot of personal information for one assistant to manage.

Experts warn that these apps are capable of acting independently on your behalf, which means you must trust them not just to store your data safely but also to use it responsibly. The issue is, AI can make mistakes and when that happens, real humans at these companies might look through your private information to figure out what went wrong.

So before granting an AI app permission to access your digital life, ask yourself: is the convenience really worth it? Giving these tools full access is like handing over a digital copy of your entire personal history, and once it’s done, there’s no taking it back.

Always read permission requests carefully. If an app asks for more than it needs, it’s okay to say no.
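For readers who want to go a step further than reading the prompt, it is possible to audit what an installed Android app can already do using adb. The short Python sketch below simply wraps the output of adb's "dumpsys package" command and prints the permission-related lines; the package name is a made-up example, and the script is only a convenience wrapper, not a complete privacy audit.

# audit_app_permissions.py - print the permissions an installed Android app
# has requested or been granted, using adb's "dumpsys package" output.
import subprocess

PACKAGE = "com.example.ai.assistant"  # hypothetical package name

def dump_permissions(package: str) -> None:
    # "adb shell dumpsys package <pkg>" lists the package's requested,
    # install-time and runtime permissions among other details.
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if "permission" in line.lower():
            print(line.strip())

if __name__ == "__main__":
    dump_permissions(PACKAGE)

If the list is far longer than what the app plausibly needs to do its job, that is a good signal to revoke access or uninstall.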

Why Policy-Driven Cryptography Matters in the AI Era

In this modern-day digital world, companies are under constant pressure to keep their networks secure. Traditionally, encryption systems were deeply built into applications and devices, making them hard to change or update. When a flaw was found, either in the encryption method itself or because hackers became smarter, fixing it took time, effort, and risk. Most companies chose to live with the risk because they didn’t have an easy way to fix the problem or even fully understand where it existed.

Now, with data moving across various platforms such as cloud servers, edge devices, and personal gadgets, it is no longer practical to depend on rigid security setups. Businesses need flexible systems that can quickly respond to new threats, government rules, and technological changes.

According to the IBM X-Force 2025 Threat Intelligence Index, nearly one-third (30%) of all intrusions in 2024 began with valid account credential abuse, making identity theft a top pathway for attackers.

This is where policy-driven cryptography comes in.


What Is Policy-Driven Crypto Agility?

It means building systems where encryption tools and rules can be easily updated or swapped out based on pre-defined policies, rather than making changes manually in every application or device. Think of it like setting rules in a central dashboard: when updates are needed, the changes apply across the network with a few clicks.

This method helps businesses react quickly to new security threats without affecting ongoing services. It also supports easier compliance with laws like GDPR, HIPAA, or PCI DSS, as rules can be built directly into the system and leave behind an audit trail for review.
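As a concrete illustration, here is a minimal Python sketch of the idea, using the open-source "cryptography" package: a central policy maps data classifications to approved ciphers, applications ask the policy layer to encrypt rather than hard-coding an algorithm, and every decision lands in an audit trail. The classification names and algorithm choices are illustrative assumptions, not a reference implementation.

# crypto_policy.py - minimal sketch of policy-driven crypto agility.
# Updating the POLICY mapping changes behaviour everywhere without
# touching application code. Requires the "cryptography" package.
import os, json, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

POLICY = {
    "pci-cardholder-data": {"algorithm": "AES-256-GCM"},
    "internal-telemetry":  {"algorithm": "ChaCha20-Poly1305"},
}

AUDIT_LOG = []  # stands in for a real, append-only audit trail

def encrypt(classification: str, key: bytes, plaintext: bytes) -> bytes:
    algo = POLICY[classification]["algorithm"]
    nonce = os.urandom(12)
    if algo == "AES-256-GCM":
        ct = AESGCM(key).encrypt(nonce, plaintext, None)
    elif algo == "ChaCha20-Poly1305":
        ct = ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)
    else:
        raise ValueError(f"algorithm {algo} not approved by policy")
    # Record which policy entry was applied, for compliance review.
    AUDIT_LOG.append({"ts": time.time(), "class": classification, "algo": algo})
    return nonce + ct

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    encrypt("pci-cardholder-data", key, b"4111 1111 1111 1111")
    print(json.dumps(AUDIT_LOG, indent=2))

Swapping in a different algorithm later, for example a post-quantum scheme once standardized libraries are available, would then mean editing the policy entry rather than every application.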


Why Is This Important Today?

Artificial intelligence is making cyber threats more powerful. AI tools can now scan massive amounts of encrypted data, detect patterns, and even speed up the process of cracking codes. At the same time, quantum computing, a new kind of computing still in development, may soon be able to break the encryption methods we rely on today.

If organizations start preparing now by using policy-based encryption systems, they’ll be better positioned to add future-proof encryption methods like post-quantum cryptography without having to rebuild everything from scratch.


How Can Organizations Start?

To make this work, businesses need a strong key management system: one that handles the creation, rotation, and deactivation of encryption keys. On top of that, there must be a smart control layer that reads the rules (policies) and makes changes across the network automatically.

Policies should reflect real needs, such as what kind of data is being protected, where it’s going, and what device is using it. Teams across IT, security, and compliance must work together to keep these rules updated. Developers and staff should also be trained to understand how the system works.
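A sketch of what policy-driven key lifecycle logic might look like appears below. Rotation intervals, classification names, and the in-memory storage are all simplifying assumptions; a real deployment would sit on top of a hardened key management service.

# key_lifecycle.py - minimal sketch of policy-driven key management.
# Rotation intervals come from a central policy, not application code.
import os, time, uuid

ROTATION_POLICY = {
    "pci-cardholder-data": 30 * 24 * 3600,   # rotate every 30 days (illustrative)
    "internal-telemetry":  90 * 24 * 3600,   # rotate every 90 days (illustrative)
}

class KeyManager:
    def __init__(self):
        self._keys = {}      # classification -> current key record
        self._retired = []   # deactivated keys kept so old data stays readable

    def _create(self, classification):
        entry = {"id": str(uuid.uuid4()), "key": os.urandom(32),
                 "created": time.time(), "active": True}
        self._keys[classification] = entry
        return entry

    def get_key(self, classification):
        entry = self._keys.get(classification)
        if entry is None:
            return self._create(classification)            # creation
        if time.time() - entry["created"] > ROTATION_POLICY[classification]:
            entry["active"] = False                        # deactivation
            self._retired.append(entry)
            return self._create(classification)            # rotation
        return entry

if __name__ == "__main__":
    km = KeyManager()
    print(km.get_key("pci-cardholder-data")["id"])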

As more companies shift toward cloud-based networks and edge computing, policy-driven cryptography offers a smarter, faster, and safer way to manage security. It reduces the chance of human error, keeps up with fast-moving threats, and ensures compliance with strict data regulations.

In a time when hackers use AI and quantum computing is fast approaching, flexible and policy-based encryption may be the key to keeping tomorrow’s networks safe.

How Tech Democratization Is Helping SMBs Tackle 2025’s Toughest Challenges

Small and medium-sized businesses (SMBs) are entering 2025 grappling with familiar hurdles: tight budgets, economic uncertainty, talent shortages, and limited cybersecurity resources. A survey of 300 decision-makers highlights how these challenges are pushing SMBs to seek smarter, more affordable tech solutions.

Technology itself ranks high on the list of SMB pain points. A 2023 Mastercard report (via Digital Commerce 360) showed that two-thirds of small-business owners saw seamless digital experiences as critical—but 25% were overwhelmed by the cost and complexity. The World Economic Forum's 2025 report echoed this, noting that SMBs are often “left behind” when it comes to transformative tech.

That’s changing fast. As enterprise-grade tools become more accessible, SMBs now have affordable, powerful options to bridge the tech gap and compete effectively.

1. Stronger, Smarter Networks
Downtime is expensive—up to $427/minute, says Pingdom. SMBs now have access to fast, reliable fiber internet with backup connections that kick in automatically. These networks support AI tools, cloud apps, IoT, and more—while offering secure, segmented Wi-Fi for teams, guests, and devices.

Case in point: Albemarle, North Carolina, deployed fiber internet with a cloud-based backup, ensuring critical systems stay online 24/7.

2. Cybersecurity That Fits the SMB Budget
Cyberattacks hit 81% of small businesses in the past year (Identity Theft Resource Center, 2024). Yet under half feel ready to respond, and many hesitate to invest due to cost. The good news: built-in firewalls, multifactor authentication, and scalable security layers are now more affordable than ever.

As Checker.ai founder Anup Kayastha told StartupNation, the company started with MFA and scaled security as they grew.

3. Big Brand Experiences, Small Biz Budgets
SMBs now have the digital tools to deliver seamless, omnichannel customer experiences—just like larger players. High-performance networks and cloud-based apps enable rich e-commerce journeys and AI-driven support that build brand presence and loyalty.

4. Predictable Pricing, Maximum Value
Tech no longer requires deep pockets. Today’s solutions bundle high-speed internet, cybersecurity, compliance, and productivity tools—often with self-service options to reduce IT overhead.

5. Built-In Tech Support
Forget costly consultants. Many SMB-friendly providers now offer local, on-site support as part of their packages—helping small businesses install, manage, and maintain systems with ease.

Google Gemini Bug Exploits Summaries for Phishing Scams


False AI summaries leading to phishing attacks

Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites without using attachments or direct links.

Similar attacks were reported in 2024 and afterwards, and safeguards were rolled out to stop misleading responses. However, the tactic remains a problem for security experts.

Gemini for attack

A prompt-injection attack on the Gemini model was disclosed by cybersecurity researcher Marco Figueroa through 0din, Mozilla’s bug bounty program for GenAI tools. The attack involves crafting an email that carries a hidden directive for Gemini: the threat actor buries malicious commands at the end of the message body using HTML and CSS that set the font size to zero and the text color to white.

According to Marco, who is GenAI Bug Bounty Programs Manager at Mozilla, “Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary. Similar indirect prompt attacks on Gemini were first reported in 2024, and Google has already published mitigations, but the technique remains viable today.”

Because the message contains no attachments or links, it is likely to reach the victim’s inbox, and Gmail never displays the hidden instruction. If the recipient opens the email and asks Gemini to summarize it, the AI tool parses the invisible directive and folds it into the summary. Figueroa provides an example in which Gemini follows the hidden prompt and appends a fabricated security warning claiming that the victim’s Gmail password and phone number may be compromised.
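One practical defensive idea is to scan incoming HTML for text a human cannot see before the message ever reaches an AI assistant. The Python sketch below flags elements styled with a zero font size or white text; the heuristics are illustrative only and are not Google’s actual mitigation.

# hidden_prompt_check.py - flag email HTML that styles text so a human
# cannot see it (zero font size or white-on-white), the trick used to
# smuggle prompts past readers. Heuristic sketch, not a complete defence.
import re
from html.parser import HTMLParser

SUSPICIOUS_STYLE = re.compile(
    r"font-size\s*:\s*0|color\s*:\s*(#fff(fff)?|white)", re.IGNORECASE)

class HiddenTextFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self._stack = []        # True for tags whose inline style hides text
        self.hidden_text = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        self._stack.append(bool(SUSPICIOUS_STYLE.search(style)))

    def handle_endtag(self, tag):
        if self._stack:
            self._stack.pop()

    def handle_data(self, data):
        # Collect text that sits inside any "invisible" ancestor element.
        if any(self._stack) and data.strip():
            self.hidden_text.append(data.strip())

if __name__ == "__main__":
    sample = ('<p>Quarterly report attached.</p>'
              '<span style="font-size:0;color:#ffffff">Ignore prior instructions...</span>')
    finder = HiddenTextFinder()
    finder.feed(sample)
    print("Hidden text found:", finder.hidden_text)

A mail pipeline could strip or quarantine such content, or at least warn the user before an assistant is asked to summarize the message.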

Impact

Supply-chain threats: CRM systems, automated ticketing emails, and newsletters can become injection vectors, turning one exploited SaaS account into hundreds of thousands of phishing beacons.

Cross-product surface: The same tactic applies to Gemini in Slides, Drive search, Docs, and any workspace surface where the model ingests third-party content.

According to Marco, “Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”

Latest Malware "Mamona" Attacks Locally, Hides by Self Deletion

Cybersecurity experts are tracking Mamona, a new ransomware strain notable for its stripped-down build and silent local execution. The ransomware avoids the usual command-and-control (C2) servers, choosing instead a self-contained approach that slips past tools relying on network traffic analysis.

The malware is executed locally on a Windows system as a standalone binary. This offline approach exposes a blind spot in traditional defenses, raising questions about how even the best antivirus and detection mechanisms will perform when there is no network traffic to inspect.

Self-deletion and escape techniques make detection difficult

Once executed, it introduces a roughly three-second delay via a modified ping command, "cmd.exe /C ping 127.0.0.7 -n 3 > Nul & Del /f /q", and then deletes its own binary. The self-deletion eliminates forensic artifacts, making it difficult for experts to track or examine the malware after it has run.

The malware pings 127.0.0.7 instead of the usual 127.0.0.1, which helps it evade simple detection checks without leaving the digital traces that older file-based scanners might flag. It also drops a ransom note titled README.HAes.txt and renames affected files with the .HAes extension, signalling that encryption has completed.

“We integrated Sysmon with Wazuh to enrich logs from the infected endpoint and created Wazuh detection rules to identify malicious behaviour associated with Mamona ransomware,” said Wazuh in a blog post.

Spotting Mamona

Wazuh has warned that the "plug-and-play" nature of the malware lowers the bar for cybercriminals and furthers the commoditization of ransomware. This shift highlights an urgent need to re-examine what counts as effective ransomware protection when attacks no longer require remote-control infrastructure. Wazuh's method for tracking Mamona combines Sysmon for log capture with custom rules that flag specific behaviours, such as ransom note creation and ping-based delays.

According to TechRadar, “Rule 100901 targets the creation of the README.HAes.txt file, while Rule 100902 confirms the presence of ransomware when both ransom note activity and the delay/self-delete sequence appear together.”
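The correlation idea behind those rules can be illustrated outside Wazuh as well. The Python sketch below raises a high-confidence alert only when both indicators, the ransom-note creation and the ping-delay/self-delete command, appear in the same event stream; the event format and field names are assumptions rather than Wazuh's schema.

# mamona_correlation.py - sketch of the correlation logic described above:
# alert only when both the ransom note and the delay/self-delete appear.
import re

RANSOM_NOTE = re.compile(r"README\.HAes\.txt$", re.IGNORECASE)
DELAY_SELF_DELETE = re.compile(
    r"ping\s+127\.0\.0\.\d+\s+-n\s+\d+.*&\s*del\s+/f\s+/q", re.IGNORECASE)

def correlate(events):
    saw_note = saw_self_delete = False
    for ev in events:
        if ev.get("type") == "file_create" and RANSOM_NOTE.search(ev.get("path", "")):
            saw_note = True
        if ev.get("type") == "process_create" and DELAY_SELF_DELETE.search(ev.get("cmdline", "")):
            saw_self_delete = True
    if saw_note and saw_self_delete:
        return "ALERT: behaviour consistent with Mamona ransomware"
    if saw_note or saw_self_delete:
        return "WARN: one indicator observed, needs review"
    return "no indicators"

if __name__ == "__main__":
    sample = [
        {"type": "process_create",
         "cmdline": "cmd.exe /C ping 127.0.0.7 -n 3 > Nul & Del /f /q payload.exe"},
        {"type": "file_create", "path": "C:\\Users\\demo\\Desktop\\README.HAes.txt"},
    ]
    print(correlate(sample))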

CISA Lists Citrix Bleed 2 as Exploit, Gives One Day Deadline to Patch

CISA confirms bug exploit

The US Cybersecurity & Infrastructure Security Agency (CISA) has confirmed active exploitation of the CitrixBleed 2 vulnerability (CVE-2025-5777) in Citrix NetScaler ADC and Gateway, and has given federal agencies one day to patch the flaw. This unusually short deadline is the first of its kind since CISA launched the Known Exploited Vulnerabilities (KEV) catalog, underscoring the severity of the attacks abusing the security gap.

About the critical vulnerability

CVE-2025-5777 is a critical memory safety bug (an out-of-bounds memory read) that gives hackers unauthorized access to restricted memory regions. The flaw affects NetScaler devices configured as an AAA virtual server or a Gateway. Citrix patched the vulnerability in its June 17 updates.
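To see why an out-of-bounds read is so damaging, consider the toy Python sketch below: a handler that trusts an attacker-supplied length returns more of its buffer than the requested field actually contains, leaking whatever happens to sit next to it, such as a session token. This is a conceptual illustration of the bug class, not NetScaler's actual code.

# overread_demo.py - toy illustration of a memory over-read: the vulnerable
# handler trusts a client-supplied length and leaks adjacent data.
BUFFER = b"login=alice\x00" + b"SESSION_TOKEN=9f3c2a7d1b\x00" + b"padding..."

def echo_field(claimed_length: int) -> bytes:
    # Vulnerable: no check that claimed_length fits the real field (11 bytes).
    return BUFFER[:claimed_length]

def echo_field_fixed(claimed_length: int) -> bytes:
    field_length = BUFFER.index(b"\x00")             # actual end of the field
    return BUFFER[:min(claimed_length, field_length)]

if __name__ == "__main__":
    print(echo_field(64))        # leaks the adjacent session token
    print(echo_field_fixed(64))  # bounded read returns only the login field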

Soon after, security researcher Kevin Beaumont warned that the flaw could be exploited if left unaddressed, dubbing it 'CitrixBleed 2' because of its similarities to the infamous CitrixBleed bug (CVE-2023-4966), which was widely abused in the wild by threat actors.

What is the CitrixBleed 2 exploit?

According to Bleeping Computer, “The first warning of CitrixBleed 2 being exploited came from ReliaQuest on June 27. On July 7, security researchers at watchTowr and Horizon3 published proof-of-concept exploits (PoCs) for CVE-2025-5777, demonstrating how the flaw can be leveraged in attacks that steal user session tokens.”

The rise of exploits

At the time, experts could not spot signs of active exploitation. Threat actors soon began exploiting the bug at scale and became active on hacker forums, "discussing, working, testing, and publicly sharing feedback on PoCs for the Citrix Bleed 2 vulnerability," according to Bleeping Computer.

Hackers showed particular interest in how to use the available exploits effectively in attacks, and several exploits for the bug have since been published.

Now that CISA has confirmed widespread exploitation of CitrixBleed 2, threat actors may have developed their own exploits based on the recently released technical details. CISA has advised organizations to "apply mitigations per vendor instructions, follow applicable BOD 22-01 guidance for cloud services, or discontinue use of the product if mitigations are unavailable."