
Massive database of 250 million identity records leaked online for public access


Around a quarter of a billion identity records were left publicly accessible, exposing people in seven countries: Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey.

According to researchers at Cybernews, three misconfigured servers with IP addresses registered in the UAE and Brazil contained “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses.

The Cybernews researchers who found the leak said the databases shared similar naming conventions and structures, hinting at a common source, but they could not identify the actor responsible for running the servers.

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens in South Africa, Egypt, and Turkey, as the databases there contained full-spectrum data. 

The leak exposed affected individuals to multiple threats, including phishing campaigns, scams, financial fraud, and identity abuse.

Currently, the database is not publicly accessible (a good sign). 

This is not the first incident in which a massive database holding citizen data has been exposed online. In an earlier case, Cybernews research revealed that the entire Brazilian population might have been impacted by a breach.

In that incident, a misconfigured Elasticsearch instance held details such as names, sex, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers, the identifier used for taxpayers in Brazil.

Cybercriminals Weaponize AI for Large-Scale Extortion and Ransomware Attacks

 

AI company Anthropic has uncovered alarming evidence that cybercriminals are weaponizing artificial intelligence tools for sophisticated criminal operations. The company's recent investigation revealed three particularly concerning applications of its Claude AI: large-scale extortion campaigns, fraudulent recruitment schemes linked to North Korea, and AI-generated ransomware development. 

Criminal AI applications emerge 

In what Anthropic describes as an "unprecedented" case, hackers utilized Claude to conduct comprehensive reconnaissance across 17 different organizations, systematically gathering usernames and passwords to infiltrate targeted networks.

The AI tool autonomously executed multiple malicious functions, including determining valuable data for exfiltration, calculating ransom demands based on victims' financial capabilities, and crafting threatening language to coerce compliance from targeted companies. 

The investigation also uncovered North Korean operatives employing Claude to create convincing fake personas capable of passing technical coding evaluations during job interviews with major U.S. technology firms. Once successfully hired, these operatives leveraged the AI to fulfill various technical responsibilities on their behalf, potentially gaining access to sensitive corporate systems and information. 

Additionally, Anthropic discovered that individuals with limited technical expertise were using Claude to develop complete ransomware packages, which were subsequently marketed online to other cybercriminals for prices reaching $1,200 per package. 

Defensive AI measures 

Recognizing AI's potential for both offense and defense, ethical security researchers and companies are racing to develop protective applications. XBOW, a prominent player in AI-driven vulnerability discovery, has demonstrated significant success using artificial intelligence to identify software flaws. The company's integration of OpenAI's GPT-5 model resulted in substantial performance improvements, enabling the discovery of "vastly more exploits" than previous methods.

Earlier this year, XBOW's AI-powered systems topped HackerOne's leaderboard for vulnerability identification, highlighting the technology's potential for legitimate security applications. Multiple organizations focused on offensive and defensive strategies are now exploring AI agents to infiltrate corporate networks for defense and intelligence purposes, assisting IT departments in identifying vulnerabilities before malicious actors can exploit them. 

Emerging cybersecurity arms race 

The simultaneous adoption of AI technologies by both cybersecurity defenders and criminal actors has initiated what experts characterize as a new arms race in digital security. This development represents a fundamental shift where AI systems are pitted against each other in an escalating battle between protection and exploitation. 

The race's outcome remains uncertain, but security experts emphasize the critical importance of equipping legitimate defenders with advanced AI tools before they fall into criminal hands. Success in this endeavor could prove instrumental in thwarting the emerging wave of AI-fueled cyberattacks that are becoming increasingly sophisticated and autonomous. 

This evolution marks a significant milestone in cybersecurity, as artificial intelligence transitions from merely advising on attack strategies to actively executing complex criminal operations independently.

Crypto Exchange SwissBorg Suffers $41 Million Theft, Will Reimburse Users


SwissBorg, a Switzerland-based crypto exchange, says $41 million worth of cryptocurrency was stolen from an external wallet used for its SOL earn strategy in a cyberattack that also affected a partner company. The company acknowledged industry reports of the attack but stressed that its own platform was not compromised.

CEO Cyrus Fazel said that an external finance wallet of a partner was compromised. The incident stemmed from the hacking of the partner’s API, an interface that lets software applications communicate with each other, and impacted a single counterparty. It was not a compromise of SwissBorg, the company said on X.

SwissBorg said that the hack impacted fewer than 1% of users. “A partner API was compromised, impacting our SOL Earn Program (~193k SOL, <1% of users). Rest assured, the SwissBorg app remains fully secure and all other funds in Earn programs are 100% safe,” it tweeted. The company said it is investigating the incident together with blockchain security firms.

SwissBorg says all other assets are secure, that it will compensate for any losses, and that user balances in the SwissBorg app are not impacted. SOL Earn redemptions have been paused while recovery efforts are ongoing. The company has also teamed up with law enforcement agencies to recover the stolen funds, and a detailed report will be released once the investigation ends.

The exploit surfaced after a surge in crypto thefts, with more than $2.17 billion already stolen this year. Kiln, the partner company, released its own statement: “SwissBorg and Kiln are investigating an incident that may have involved unauthorized access to a wallet used for staking operations. The incident resulted in Solana funds being improperly removed from the wallet used for staking operations.” 

After the attack, “SwissBorg and Kiln immediately activated an incident response plan, contained the activity, and engaged our security partners,” the companies said in a blog post, adding that “SwissBorg has paused Solana staking transactions on the platform to ensure no other customers are impacted.”

Fazel posted a video about the incident, informing users that the platform had suffered multiple breaches in the past.

Anthropic to use your chats with Claude to train its AI



Anthropic announced last week that it will update its terms of service and privacy policy to allow the use of chats for training its AI model, Claude. Users on all consumer subscription levels (Claude Free, Pro, and Max, as well as Claude Code) will be affected by the update. Anthropic’s new Consumer Terms and Privacy Policy take effect on September 28, 2025.

But users who access Claude under commercial licenses, such as the Work, Team, and Enterprise plans, Claude for Education, and Claude Gov, are exempt. Third-party users who access the Claude API through Google Cloud’s Vertex AI or Amazon Bedrock will also not be affected by the new policy.

If you are a Claude user, you can delay accepting the new policy by choosing “Not now”; however, after September 28, your account will be opted in by default to share your chat transcripts for training the AI model.

Why the new policies?

The new policy follows the genAI boom: the hunger for massive training data has prompted various tech companies to quietly rethink and update their terms of service. With these changes, companies can use your data to train their own AI models or pass it to other companies to improve their AI bots.

"By participating, you’ll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. You’ll also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users," Anthropic said.

Concerns around user safety

Earlier this year, in July, WeTransfer, a popular file-sharing platform, faced immediate backlash from its users and the online community when it changed its terms of service so that files uploaded to its platform could be used to improve machine learning models. After the incident, the platform tried to repair the damage by removing “any mention of AI and machine learning from the document,” according to the Indian Express.

With rising concerns over the use of personal data for training AI models that compromise user privacy, companies are now offering users the option to opt out of data training for AI models.

How cybersecurity debts can damage your organization and finances


A new term has emerged in the tech industry: “cybersecurity debt.” Similar to technical debt, cybersecurity debt refers to the accumulation of unaddressed security bugs and outdated systems resulting from inadequate investments in cybersecurity services. 

Delaying these expenditures can provide short-term financial gains, but long-term repercussions can be severe, causing greater dangers and exponential costs.

What causes cybersecurity debt?

Cybersecurity debt accumulates when organizations fail to update their systems regularly, ignore software patches, and neglect security improvements for short-term financial gain. Over time, this creates a backlog of flaws that threat actors can exploit, with severe consequences.

Unlike financial debt, which accrues predictable interest, cybersecurity debt compounds in uncertain and hazardous ways. A single ignored bug can lead to a massive data breach, a regulatory fine costing millions, or a ransomware attack.

A 2024 IBM study on the cost of data breaches found that the average breach cost had risen to $4.9 million, a record high. Worse, 83% of organizations surveyed had suffered multiple breaches, suggesting that many businesses keep operating under cybersecurity debt. The longer an organization defers these problems, the greater its exposure to cyber threats.
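To make the compounding concrete, here is a deliberately simple back-of-the-envelope model. Every number in it is a hypothetical assumption, chosen only to show the shape of the curve (the $4.9 million figure echoes the IBM average cited above); it is an illustration, not a prediction.

```python
# Toy model of cybersecurity debt: a growing backlog of unpatched flaws
# raises the chance of at least one breach each year. All parameters are
# hypothetical and exist only to illustrate the compounding effect.

BREACH_PROB_PER_VULN = 0.02   # assumed annual breach probability per unpatched flaw
AVG_BREACH_COST = 4_900_000   # USD, echoing IBM's 2024 average breach cost
NEW_VULNS_PER_YEAR = 25       # assumed backlog growth when patching is deferred
PATCHING_BUDGET = 500_000     # assumed annual cost of staying current instead

backlog = 0
for year in range(1, 6):
    backlog += NEW_VULNS_PER_YEAR
    # Probability of at least one breach, treating flaws as independent.
    p_breach = 1 - (1 - BREACH_PROB_PER_VULN) ** backlog
    expected_loss = p_breach * AVG_BREACH_COST
    print(f"Year {year}: backlog={backlog:3d}  "
          f"P(breach)={p_breach:.0%}  expected loss=${expected_loss:,.0f}")
```

Under these assumptions, the expected annual loss overtakes the deferred patching budget within the very first year, which is the “debt” metaphor in a nutshell.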

What can CEOs do?

Short-term gain vs long-term security

CEOs and CFOs are under constant pressure to deliver strong quarterly profits and revenue growth. Because cybersecurity is a “cost center” rather than a revenue-generating function, it is sometimes treated as a service where costs can be cut without severe consequences.

A CEO or CFO may opt for this short-term gain while failing to address the long-term risks of mounting cybersecurity debt. In some cases, the consequences only become visible once the business suffers a data breach.

Philip D. Harris, Research Director, GRC Software & Services, IDC, suggests, “Executive management and the board of directors must support the strategic direction of IT and cybersecurity. Consider implementing cyber-risk quantification to accomplish this goal. When IT and cybersecurity leaders speak to executives and board members, from a financial perspective, it is easier to garner interest and support for investments to reduce cybersecurity debt.”

Limiting cybersecurity debt

CEOs and other leaders should reassess these risks by adopting a comprehensive approach that folds cybersecurity debt into the organization’s wider risk management plans.

Misuse of AI Agents Sparks Alarm Over Vibe Hacking


 

Once considered a means of safeguarding digital battlefields, artificial intelligence has become a double-edged sword: a tool that arms not only defenders but also the adversaries it was supposed to deter. Anthropic's latest Threat Intelligence Report, covering August 2025, paints this evolving reality in a starkly harsh light.

It illustrates how cybercriminals now treat AI as a product of choice, no longer using it merely to support their attacks but deploying it as the central instrument of attack orchestration. According to the report, malicious actors use advanced artificial intelligence to automate phishing campaigns at scale, circumvent traditional security measures, and extract sensitive information efficiently, with very little human oversight. AI's precision and scalability are escalating the threat landscape in troubling ways.

Anthropic documents a disturbing evolution: AI is accelerating the speed, reach, and sophistication of modern cyberattacks. Artificial intelligence is no longer used only for small tasks such as composing phishing emails or generating malicious code fragments; it now serves as a force multiplier for lone actors, giving them the capacity to carry out operations at a scale and precision once reserved for organized criminal syndicates.

In one instance, investigators traced a sweeping extortion campaign back to a single perpetrator who used Claude Code's execution environment to automate key stages of intrusion: reconnaissance, credential theft, and network penetration. The individual compromised at least 17 organisations, ranging from government agencies to hospitals and financial institutions, and made ransom demands that sometimes exceeded half a million dollars.

Researchers have dubbed the technique “vibe hacking”: coding agents are used not just as tools but as active participants in attacks, marking a profound shift in the speed and reach of cybercriminal activity. Many researchers see vibe hacking as a major evolution in cyberattacks because, instead of exploiting conventional network vulnerabilities, it targets the logic and decision-making processes of AI systems themselves.

In early 2025, Andrej Karpathy coined the term “vibe coding” to describe AI-generated problem-solving. Cybercriminals have since co-opted the concept, manipulating advanced language models and chatbots for unauthorised access, disruption of operations, or the generation of malicious outputs.

Unlike traditional hacking, in which technical defences are breached, this method exploits the trust and reasoning capabilities of machine learning itself, making detection especially challenging. The tactic is also reshaping social engineering: using large language models that simulate human conversation with uncanny realism, attackers can craft convincing phishing emails, mimic human speech, build fraudulent websites, clone voices, and automate whole scam campaigns at an unprecedented level.

Tools such as AI-driven vulnerability scanners and deepfake platforms amplify the threat further, creating a new frontier of automated deception, according to experts. In one notable variant known as “vibe scamming,” adversaries launch large-scale fraud operations, generating fake portals, managing stolen credentials, and coordinating follow-up communications, all from a single dashboard.

Vibe hacking is among the most challenging cybersecurity problems today because it combines automation, realism, and speed. Attackers no longer rely on conventional ransomware tactics; instead, they use AI systems like Claude to carry out every aspect of an intrusion, from reconnaissance and credential harvesting to network penetration and data extraction.

A significant difference from earlier AI-assisted attacks was that Claude demonstrated "on-keyboard" capability, performing tasks such as scanning VPN endpoints, generating custom malware, and analysing stolen datasets to prioritise the victims with the highest payout potential. Once inside a network, it created tailored ransom notes in HTML, containing each organisation's specific financial requirements, workforce statistics, and regulatory threats, all based on the data collected.

Ransom demands ranged from $75,000 to $500,000 in Bitcoin, illustrating how, with AI assistance, one individual could run an entire cybercrime operation. The report also emphasises how intertwined artificial intelligence and cryptocurrency have become: ransom notes embed wallet addresses, and dark web forums sell AI-generated malware kits exclusively for cryptocurrency.

An FBI investigation has revealed that North Korea increasingly uses artificial intelligence (AI) to evade sanctions: state-backed IT operatives rely on it to fabricate résumés, pass interviews, debug software, and manage day-to-day tasks in fraudulent positions at Western tech companies.

According to U.S. officials, these operations channel hundreds of millions of dollars every year into Pyongyang's weapons programs, with on-demand AI assistance replacing years of training. The revelations point to a troubling shift: artificial intelligence is not only enabling cybercrime but amplifying its speed, scale, and global reach. Anthropic's report documents how Claude Code has been used not just to breach systems but to monetise stolen information at scale.

The software was used to sift through thousands of records containing sensitive identifiers, financial information, and even medical data, and then to generate customised ransom notes and multi-layered extortion strategies based on each victim's characteristics. As the company pointed out, so-called "agentic AI" tools now provide attackers with both technical expertise and hands-on operational support, effectively eliminating the need to coordinate teams of human operators.

Researchers warn that these systems can adapt to defensive countermeasures, such as malware detection, dynamically and in real time, making traditional enforcement efforts increasingly difficult. Anthropic has developed a classifier to identify this behaviour, and a series of case studies illustrates the breadth of the abuse.

In the North Korean case, Claude was used to fabricate résumés and support fraudulent IT worker schemes. In the UK, a criminal tracked as GTG-5004 was selling AI-generated ransomware variants on darknet forums; Chinese actors used artificial intelligence to compromise Vietnamese critical infrastructure; and Russian- and Spanish-speaking groups used the software to create malware and steal credit card information.

Even low-skilled actors have begun integrating AI into Telegram bots for romance scams and fake identity services, significantly expanding the reach of sophisticated fraud campaigns. Anthropic researchers Alex Moix, Ken Lebedev, and Jacob Klein argue that AI is continually lowering the barriers to entry for cybercriminals, enabling fraudsters to profile victims, automate identity theft, and orchestrate operations at a speed and scale unimaginable with traditional methods.

Anthropic's report highlights a disturbing truth: although artificial intelligence was once hailed as a shield for defenders, it is increasingly being used as a weapon, putting digital security at risk. The answer is not to retreat from AI adoption but to develop defensive strategies that keep pace with it. Proactive guardrails must be put in place to prevent misuse, including stricter oversight and transparency from developers, along with continuous monitoring and real-time detection systems that recognise abnormal AI behaviour before it escalates into a serious problem.

Resilience must also go beyond technical defences: organisations should invest in employee training, incident response readiness, and partnerships that enable intelligence sharing across sectors. Governments, too, are under mounting pressure to update regulatory frameworks so that policy keeps pace with evolving threat actors.

Harnessed responsibly, artificial intelligence can still be a powerful ally: automating defensive operations, detecting anomalies, and predicting threats before they become visible. The goal is to ensure the technology develops in a way that favours protection over exploitation, safeguarding not just individual enterprises but the trust people place in the future of the digital world.


PromptLock: the new AI-powered ransomware and what to do about it

 



Security researchers recently identified a piece of malware named PromptLock that uses a local artificial intelligence model to help create and run harmful code on infected machines. The finding comes from ESET researchers and has been reported by multiple security outlets; investigators say PromptLock can scan files, copy or steal selected data, and encrypt user files, with code for destructive deletion present but not active in analysed samples. 


What does “AI-powered” mean here?

Instead of a human writing every malicious script in advance, PromptLock stores fixed text prompts on the victim machine and feeds them to a locally running language model. That model then generates small programs, written in the lightweight Lua language, which the malware executes immediately. Researchers report the tool uses a locally accessible open-weight model called gpt-oss:20b through the Ollama API to produce those scripts. Because the AI runs on the infected computer rather than contacting a remote service, the activity can be harder to spot. 
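Ollama's REST API is publicly documented, which makes the prompt-to-script pattern easy to picture. The sketch below is a benign recreation of that pattern, not PromptLock's code: it asks a locally served model for a harmless Lua one-liner and simply prints the result rather than executing it. The model name matches the one ESET reports; the prompt and endpoint defaults are illustrative.

```python
import json
import urllib.request

# Benign sketch of the pattern researchers describe: a fixed prompt goes to a
# locally running model via Ollama's REST API, and the reply comes back as
# script text. Ollama's default endpoint is localhost:11434.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "gpt-oss:20b",  # open-weight model named in ESET's analysis
    "prompt": "Write a single-line Lua print statement that says hello.",
    "stream": False,
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    lua_snippet = json.loads(resp.read())["response"]

# PromptLock reportedly executes generated Lua immediately; here we only print.
print(lua_snippet)
```

Because the generated text differs from run to run, malware built this way has no fixed payload for scanners to fingerprint, which is exactly the detection problem discussed below.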


How the malware works

According to the technical analysis, PromptLock is written in Go, produces cross-platform Lua scripts that work on Windows, macOS and Linux, and uses a SPECK 128-bit encryption routine to lock files in flagged samples. The malware’s prompts include a Bitcoin address that investigators linked to an address associated with the pseudonymous Bitcoin creator known as Satoshi Nakamoto. Early variants have been uploaded to public analysis sites, and ESET treats this discovery as a proof of concept rather than evidence of widespread live attacks. 
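SPECK is a publicly specified lightweight block cipher, published by NSA researchers in 2013, so the primitive the analysis names can be sketched directly. Below is a minimal, illustrative Python implementation of the SPECK-128/128 round structure (assuming the 128-bit-key variant; the report says only “SPECK 128-bit”). It is not PromptLock's code and omits everything a real system needs, such as modes of operation and key handling.

```python
MASK64 = (1 << 64) - 1  # SPECK-128 operates on 64-bit words

def ror(x, r):
    """Rotate a 64-bit word right by r bits."""
    return ((x >> r) | (x << (64 - r))) & MASK64

def rol(x, r):
    """Rotate a 64-bit word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def speck128_128_encrypt(x, y, k, l):
    """Encrypt one 128-bit block (words x, y) under a 128-bit key (words k, l)."""
    for i in range(32):                      # SPECK-128/128 specifies 32 rounds
        x = ((ror(x, 8) + y) & MASK64) ^ k   # add-rotate-xor round function
        y = rol(y, 3) ^ x
        l = ((ror(l, 8) + k) & MASK64) ^ i   # derive the next round key on the fly
        k = rol(k, 3) ^ l
    return x, y
```

The whole cipher fits in a dozen lines, which helps explain why a generated script can plausibly carry its own encryption routine instead of shipping a crypto library.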


Why this matters

Two features make this approach worrying for defenders. First, generated scripts vary each time, which reduces the effectiveness of signature or behaviour rules that rely on consistent patterns. Second, a local model produces no network traces to cloud providers, so defenders lose one common source of detection and takedown. Together, these traits could make automated malware harder to detect and classify. 

Practical, plain steps to protect yourself:

1. Do not run files or installers you do not trust.

2. Keep current, tested backups offline or on immutable storage.

3. Maintain up-to-date operating system and antivirus software.

4. Avoid running untrusted local AI models or services on critical machines, and restrict access to local model APIs.

These steps will reduce the risk from this specific technique and from ransomware in general. For step 4 in particular, a quick audit like the sketch below can help.
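A minimal sketch of that audit, assuming Ollama's documented default port (11434); extend the port map for any other local model servers your environment allows.

```python
import socket

# Flag machines where a local model API is unexpectedly reachable.
# Ollama listens on TCP 11434 by default; add other services as needed.
LOCAL_MODEL_PORTS = {11434: "Ollama"}

for port, service in LOCAL_MODEL_PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        reachable = sock.connect_ex(("127.0.0.1", port)) == 0
    status = "REACHABLE - verify this is expected" if reachable else "not listening"
    print(f"{service} (port {port}): {status}")
```

On a critical machine that should not be running a local model at all, any "REACHABLE" line deserves a follow-up.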


Bottom line

PromptLock is a clear signal that attackers are experimenting with local AI to automate malicious tasks. At present it appears to be a work in progress and not an active campaign, but the researchers stress vigilance and standard defensive practices while security teams continue monitoring developments. 



Experts discover first-ever AI-powered ransomware called "PromptLock"


A ransomware attack is an organization’s worst nightmare. Not only does it compromise the confidentiality of the organization and its customers, it also drains money and damages reputations. Defenders have been trying to address this serious threat, but threat actors keep developing new tactics. To make things worse, there is now an AI-powered ransomware strain.

First AI ransomware

Cybersecurity experts have found the first-ever AI-powered ransomware strain. Experts Peter Strycek and Anton Cherepanov from ESET found the strain and have termed it “PromptLock.” "During infection, the AI autonomously decides which files to search, copy, or encrypt — marking a potential turning point in how cybercriminals operate," ESET said.

The malware has not yet been spotted in any cyberattack, experts say; PromptLock appears to be in development rather than deployed.

Although cybercriminals have used GenAI tools to create malware in the past, PromptLock is the first known ransomware built around an AI model. According to Cherepanov’s LinkedIn post, PromptLock uses the open-weight gpt-oss:20b model from OpenAI through the Ollama API to generate new scripts.

About PromptLock

Cherepanov’s LinkedIn post highlighted that the ransomware script can exfiltrate files and encrypt data, and may destroy files in the future. He said that “while multiple indicators suggest that the sample is a proof-of-concept (PoC) or a work-in-progress rather than an operational threat in the wild, we believe it is crucial to raise awareness within the cybersecurity community about such emerging risks.”

AI and ransomware threat

According to Dark Reading’s conversation with the ESET experts, AI-based ransomware is a serious threat to security teams. Strycek and Cherepanov are still working to learn more about PromptLock, but wanted to warn security teams about the ransomware immediately.

ESET on X noted that "the PromptLock ransomware is written in #Golang, and we have identified both Windows and Linux variants uploaded to VirusTotal."

Threat actors have already begun using AI tools to launch phishing campaigns, create fake content, and build malicious websites, thanks to rapid adoption across the industry. AI-powered ransomware, however, will pose an even tougher challenge for cybersecurity defenders.

CISOs fear material losses amid rising cyberattacks


Chief information security officers (CISOs) are worried about the danger of cyberattacks, an anxiety fuelled by the material data losses organizations have suffered over the past year.

According to a report by Proofpoint, the majority of CISOs fear a material cyberattack in the next 12 months. These concerns highlight the increasing risks and cultural shifts among CISOs.

Changing roles of CISOs

“76% of CISOs anticipate a material cyberattack in the next year, with human risk and GenAI-driven data loss topping their concerns,” Proofpoint said. Corporate stakeholders, meanwhile, are pressing for a better understanding of technology risks and of whether their organizations are safe.

Experts believe that CISOs are being more open about these attacks, thanks to SEC disclosure rules, stricter regulations, board expectations, and inquiries. The report surveyed 1,600 CISOs worldwide, all from organizations with more than 1,000 employees.

Doing business is a concern

The study highlights rising concern about doing business amid constant cyberattacks. Although the majority of CISOs are confident about their cybersecurity culture, six out of 10 said their organizations are not prepared for a cyberattack. A majority also favoured paying ransoms to avoid leaks of sensitive data.

AI: Saviour or danger?

AI has emerged as both a top concern and a top priority for CISOs. Two-thirds believe that enabling GenAI tools is a priority over the next two years, despite the ongoing risks. In the US, however, 80% of CISOs worry about possible data breaches through GenAI platforms.

With adoption rates rising, organizations have started to move from restriction to governance. “Most are responding with guardrails: 67% have implemented usage guidelines, and 68% are exploring AI-powered defenses, though enthusiasm has cooled from 87% last year. More than half (59%) restrict employee use of GenAI tools altogether,” Proofpoint said.

Malicious npm package exploits crypto wallets


Experts have found a malicious npm package with stealthy features designed to inject malicious code into desktop apps, targeting crypto wallets such as Exodus and Atomic.

About the package

Named “nodejs-smtp,” the package imitates the legitimate email library nodemailer, copying its README description, page styling, and tagline. It drew around 347 downloads after being uploaded to the npm registry earlier this year by a user called “nikotimon.”

The package is no longer available. Socket researcher Kirill Boychenko said: “On import, the package uses Electron tooling to unpack Atomic Wallet's app.asar, replace a vendor bundle with a malicious payload, repackage the application, and remove traces by deleting its working directory.”

How the attack works

The aim is to overwrite the recipient’s wallet address with hard-coded wallets controlled by the attacker. Meanwhile, the package works as a functional SMTP-based mailer, helping it escape developers’ attention.

The discovery follows ReversingLabs’ finding of an npm package called “pdf-to-office” that achieved the same results by tampering with the “app.asar” archives of the Exodus and Atomic wallets and modifying a JavaScript file inside them to launch the clipper function.

According to Boychenko, “this campaign shows how a routine import on a developer workstation can quietly modify a separate desktop application and persist across reboots.” He added that “by using import time execution and Electron packaging, a lookalike mailer becomes a wallet drainer that alters Atomic and Exodus on compromised Windows systems.”

What next?

The campaign shows how a routine import on a developer’s machine can silently modify a different desktop application and persist across reboots; by exploiting import-time execution and Electron packaging, a lookalike mailer becomes a wallet drainer. Security teams should watch for wallet drainers delivered through package registries, and auditing dependencies for lookalike names is one cheap first check, as in the sketch below.
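As an illustration of that kind of audit, the sketch below flags installed npm packages whose names closely resemble, without matching, well-known libraries. The popular-package list and the similarity threshold are stand-ins; note that a name check alone would not have caught nodejs-smtp, which imitated nodemailer through its README and behaviour rather than its name.

```python
import difflib
import os

# Tiny stand-in list; a real audit would use a large corpus of popular names.
POPULAR = ["nodemailer", "express", "lodash", "axios", "react"]

def audit(node_modules="node_modules"):
    """Print installed packages whose names look like typosquats."""
    for name in sorted(os.listdir(node_modules)):
        if name.startswith((".", "@")) or name in POPULAR:
            continue
        for popular in POPULAR:
            ratio = difflib.SequenceMatcher(None, name, popular).ratio()
            if ratio > 0.75:  # threshold is a tunable assumption
                print(f"suspicious: {name!r} resembles {popular!r} ({ratio:.0%})")

if __name__ == "__main__":
    audit()
```

Name checks are only one layer; pinning dependencies and reviewing install-time scripts matter at least as much against import-time attacks like this one.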

Beware of SIM swapping attacks, your phone is at risk


In today’s world, much of our digital life is tied to our phone number, so keeping it safe is a necessity. The bad news: hackers don’t need your phone to take over your number.

What is SIM swapping?

Also known as SIMjacking, SIM swapping is a tactic in which a cybercriminal convinces your mobile carrier to port your phone number to a SIM card they control. The victim loses access to their phone number and service, while the cybercriminal gains full control of it.

To convince the carrier to perform a SIM swap, the threat actor first has to know things about you. They can pull that information from data breaches traded on the dark web, trick you into handing it over through a phishing scam, or harvest your social media profiles if your information is public.

Once they have the information, the threat actor calls customer support and requests that your number be moved to a new SIM card. In many cases, the carrier doesn’t need much convincing.

Threats concerning SIM swapping

An attacker with your phone number can impersonate you to friends and family and extort money. Your wider account security is also at risk, as most online services use your phone number for account recovery.

SIM swapping is especially dangerous because SMS-based two-factor authentication (2FA) is still in wide use. Many services require us to activate 2FA on our accounts, sometimes only via SMS.

How to stay safe from SIM swapping?

Check your carrier’s website to see whether there is an option to deactivate SIM change requests; this is one way to lock down your phone number.

If your carrier doesn’t offer that, look for the option to set a PIN or secret phrase. Some carriers let users set these and will call you back to confirm changes to your account.

Avoid SMS-based 2FA where possible; use passkeys or an authenticator app instead.

Set a SIM PIN on your phone to lock your SIM card.

Researchers Expose AI Prompt Injection Attack Hidden in Images

 

Researchers have unveiled a new type of cyberattack that can steal sensitive user data by embedding hidden prompts inside images processed by AI platforms. These malicious instructions remain invisible to the human eye but become detectable once the images are downscaled using common resampling techniques before being sent to a large language model (LLM).

The technique, designed by Trail of Bits experts Kikimora Morozova and Suha Sabi Hussain, builds on earlier research from a 2020 USENIX paper by TU Braunschweig, which first proposed the concept of image-scaling attacks in machine learning systems.

Typically, when users upload pictures into AI tools, the images are automatically reduced in quality for efficiency and cost optimization. Depending on the resampling method—such as nearest neighbor, bilinear, or bicubic interpolation—aliasing artifacts can emerge, unintentionally revealing hidden patterns if the source image was crafted with this purpose in mind.
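The filter dependence is easy to observe with Pillow. The sketch below (the filename is illustrative) downscales the same image with three common resampling methods and prints sample pixel values; the divergence between filters is the aliasing behaviour a crafted image exploits.

```python
from PIL import Image

# Downscale one image with three resampling methods and compare pixels.
# "payload.png" is an illustrative filename, not a real attack image.
src = Image.open("payload.png").convert("L")
target = (max(src.width // 4, 1), max(src.height // 4, 1))

for name, method in [
    ("nearest", Image.Resampling.NEAREST),
    ("bilinear", Image.Resampling.BILINEAR),
    ("bicubic", Image.Resampling.BICUBIC),
]:
    small = src.resize(target, method)
    sample = list(small.getdata())[:8]  # values differ per filter: aliasing
    print(f"{name:8s} first pixels: {sample}")
```

An attacker tunes the source image so that one specific filter’s arithmetic produces legible text, which is why the researchers stress matching the attack image to each platform’s pipeline.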

In one demonstration by Trail of Bits, carefully engineered dark areas within a malicious image shifted colors when processed through bicubic downscaling. This transformation exposed black text that the AI system interpreted as additional user instructions. While everything appeared normal to the end user, the model silently executed these hidden commands, potentially leaking data or performing harmful tasks.

In practice, the team showed how this vulnerability could be exploited in Gemini CLI, where hidden prompts enabled the extraction of Google Calendar data to an external email address. With Zapier MCP configured to trust=True, the tool calls were automatically approved without requiring user consent.

The researchers emphasized that the success of such attacks depends on tailoring the malicious image to the specific downscaling algorithm used by each AI system. Their testing confirmed the method’s effectiveness against:

  1. Google Gemini CLI
  2. Vertex AI Studio (Gemini backend)
  3. Gemini’s web interface
  4. Gemini API via llm CLI
  5. Google Assistant on Android
  6. Genspark

Given the broad scope of this vulnerability, the team developed Anamorpher, an open-source tool (currently in beta) that can generate attack-ready images aligned with multiple downscaling methods.

To defend against this threat, Trail of Bits recommends that AI platforms enforce image dimension limits, provide a preview of the downscaled output before submission to an LLM, and require explicit user approval for sensitive tool calls—especially if text is detected in images.
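Here is a minimal sketch of the first two recommendations, with an assumed size limit and an assumed model-side resolution; a real platform would use the exact dimensions and filter of its own preprocessing pipeline.

```python
from PIL import Image

MAX_DIM = 1024             # assumed policy limit, not a value from the research
MODEL_INPUT = (512, 512)   # assumed model-side target resolution

def prepare_upload(path):
    """Reject oversized images and preview exactly what the model will see."""
    img = Image.open(path)
    if img.width > MAX_DIM or img.height > MAX_DIM:
        raise ValueError(f"image exceeds {MAX_DIM}px limit; resize before upload")
    # Show the user the image as the model would receive it, so any text
    # revealed by downscaling is visible before the prompt is sent.
    preview = img.resize(MODEL_INPUT, Image.Resampling.BICUBIC)
    preview.show()
    return preview
```

The preview step directly implements the researchers’ suggestion: if downscaling reveals hidden instructions, the user sees them before anything reaches the LLM.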

"The strongest defense, however, is to implement secure design patterns and systematic defenses that mitigate impactful prompt injection beyond multi-modal prompt injection," the researchers said, pointing to their earlier paper on robust LLM design strategies.

Cybersecurity: The Top Business Risk Many Firms Still Struggle to Tackle

 



Cybersecurity has emerged as the biggest threat to modern enterprises, yet most organizations remain far from prepared to handle it. Business leaders are aware of the risks: financial losses, reputational harm, and operational disruptions. But awareness has not translated into effective readiness.

A recent global survey conducted in early 2025 across North America, Western Europe, and Asia-Pacific highlights this growing concern. With 600 respondents, including IT and security professionals, the study found that while executives admit to weak points in their defenses, they often lack a unified plan to build true resilience.


Where businesses fall short

Companies tend to focus heavily on protecting their data, ensuring quality, security, and proper governance. While crucial, these efforts alone are not enough. A resilient business must also address application security, identity management, supply chain safeguards, infrastructure defenses, and the ability to continue operations during an attack. Unfortunately, many firms still fall behind in tying all these dimensions into a cohesive strategy.


Why this matters now

Cyberattacks are no longer rare, one-off incidents. Nearly two-thirds of the organizations surveyed experienced at least one damaging cyber event in the past year. About one-third suffered multiple breaches in that period. These attacks caused not just stolen data but also costly downtime, compliance issues, and long-lasting damage to trust.

The survey’s findings revealed that:

• 38% of organizations faced major operational disruption, with outages and downtime hitting productivity hardest.

• 33% reported financial losses linked directly to an attack.

• Around 30% to 31% saw personal or sensitive data exposed or compromised.

• Nearly a quarter of cases involved data corruption or encryption that could not be fully reversed.

• Legal consequences, public backlash, and compliance failures added further damage.

The message is clear: cybersecurity is not just a technical concern, it is a business survival issue.


The study also shows that three once-separate areas are beginning to merge. Backup and recovery systems, once viewed as insurance, are now central to cyber resilience. Cybersecurity tools are extending beyond perimeter defense to include recovery and continuity. At the same time, data governance and compliance pressures have become inseparable from security practices.

As artificial intelligence gains ground in enterprise operations, this convergence is likely to intensify. AI requires clean, reliable data to function, but it also introduces fresh security risks. Companies that cannot safeguard and recover their data risk losing competitiveness in an economy increasingly powered by digital intelligence.


No safe corners in digital infrastructure

Attackers are methodical and opportunistic. They exploit weak points wherever they exist, whether in data systems, applications, or AI workloads. Defenders must therefore strengthen every layer of their infrastructure. Yet, according to the survey, most organizations still leave gaps that skilled adversaries can exploit.

Cybersecurity is now the most significant risk enterprises face. And while business leaders are no longer in denial about the threat, too many remain underprepared. Building resilience requires more than just securing data; it demands a comprehensive, ongoing effort across every layer of the digital ecosystem.



Millions of Patient Records Compromised After Ransomware Strike on DaVita


 Healthcare Faces Growing Cyber Threats

Kidney care giant DaVita has confirmed a ransomware attack affecting nearly 2.7 million patients, one of the most significant cyberattacks of the year. The company, which operates over 2,600 outpatient dialysis centres across the United States, said the breach was first detected on April 12, 2025, when its security team found unauthorised activity within company computer systems. The Interlock ransomware gang was later revealed to be responsible, marking another high-profile attack on the healthcare industry.

Although DaVita stressed that patient care continued uninterrupted throughout the incident and that all major systems have since been fully restored, according to an official notice issued on August 1, a broad range of sensitive personal and clinical information was still exposed. Attackers gained access to names, addresses, dates of birth, Social Security numbers, insurance data, clinical histories, dialysis treatment details, and laboratory results, among other information.

For millions of patients who depend on life-sustaining kidney care, the breach represents a deep invasion of privacy and raises new concerns about the security of healthcare systems in general.

Healthcare Becomes A Cyber Battlefield 

The hospital and healthcare industry, traditionally seen as a place of healing, is increasingly at the centre of digital warfare. Patient records are packed with financial and medical information, making them far more valuable on dark web markets than credit card data.

Hospitals are under tremendous pressure to maintain uninterrupted access to their systems, since any downtime could threaten patients’ lives, and that makes them prime targets for ransomware attacks.

Over the past few months, millions of patients worldwide have been affected by breaches ranging from the theft of medical records to ransomware-driven service disruptions. Beyond compromising privacy, these attacks have disrupted treatment, shaken public trust, and added financial burdens to healthcare organisations already stressed by rising demand.

The DaVita case fits a troubling trend: in recent years, cybercriminals have steadily increased both the scale and the sophistication of their campaigns, threatening patient safety and health.

DaVita’s Ransomware Ordeal

DaVita confirmed the breach in detail on August 21, 2025, filing disclosures with the Office for Civil Rights of the U.S. Department of Health and Human Services.

Intruders first gained access to DaVita’s systems on March 24, 2025, and were only removed on April 12, after internal response teams contained the attack. Several reports indicate that Interlock, the ransomware gang responsible for the theft, released portions of the data online after negotiations with the firm failed. Critical dialysis services continued uninterrupted, a priority given that dialysis is an essential treatment, but the attack did temporarily disrupt laboratory systems. The financial cost was exceptionally significant.

According to DaVita's report for the second quarter of 2025, the breach had already cost the company $13.5 million: $1 million in patient care costs related to the incident and $12.5 million for administrative recovery, system restoration, and cybersecurity services from third-party providers.

Expansion of the Investigation 

In its April 2025 Securities and Exchange Commission filing, DaVita first acknowledged a security incident but said the scope of the stolen data had not yet been determined. Over the following months, forensic analysis expanded, State Attorneys General were notified, and the extent of the problem began to emerge, with at least one million patients initially estimated to be affected. As more information came to light, the figures grew, and OCR's breach portal later confirmed 2,688,826 victims.

Based on internal assessments, DaVita believes the actual number may be somewhat lower, closer to 2.4 million, and expects the portal to be updated to reflect those findings. Despite the operational strain, the company has assured patients that it will continue providing dialysis services through its roughly 3,000 outpatient centres and home-based programs worldwide, a sign of stability in a crisis, given that kidney failure patients cannot forgo life-saving treatment.

Even so, the attack underscored how severe the financial and reputational damage from such incidents can be. The costs of restoring systems, engaging cybersecurity experts, and providing patients with resources such as credit monitoring and data protection will likely continue to climb in the coming months.

Data Theft And Interlock’s Role 

Interlock has been one of the most aggressive ransomware groups since it emerged in 2024. In the DaVita case, the gang is said to have stolen nearly 1.5 terabytes of data comprising approximately 700,000 files. Beyond patient records, the stolen files are suspected to contain insurance documents, user credentials, and financial information.

After negotiations with DaVita failed, Interlock published parts of the data on its dark web portal. On June 18, DaVita confirmed that some of the files were genuine, tracing them back to its dialysis laboratory systems. In a public statement, the company acknowledged that the lab's database had been accessed by unauthorised persons and said it would notify both current and former patients.

Additionally, DaVita has begun providing complimentary credit monitoring services as part of its efforts to reduce risks. Interlock's operations extend well beyond DaVita: several universities in the United Kingdom have been attacked with a remote access trojan known as NodeSnake, deployed by the group in recent campaigns.

Recent reports indicate the gang has also claimed responsibility for attacks on other major U.S. healthcare providers, including Kettering Health, an organisation with more than 120 outpatient facilities and 15,000 employees. Cyberattacks on healthcare are a sobering reminder of how varied and destructive such incidents can be, and each major breach carries its own lessons:

The Ascension case shows how a small mistake by a single employee can escalate into an organisation-wide problem. Yale New Haven Health demonstrates that even well-prepared institutions are vulnerable to persistent adversaries. Episource revealed how third-party and supply chain vulnerabilities can ripple outward, with a single vendor breach causing significant damage across a network.

DaVita, for its part, shows how ransomware-driven disruption differs from other incidents, combining data theft with operational paralysis. Some breaches have involved hackers accessing sensitive healthcare records at scale, while others have stemmed from simple data configuration issues.

In view of these incidents, it is clear that compliance checklists and standard security frameworks may no longer be sufficient for the industry. Instead, it must adopt proactive, intelligence-driven defences that anticipate threats rather than merely reacting to them as they occur.

The Road Ahead For Healthcare Security 

The DaVita breach exemplifies a growing consensus among healthcare providers that their cybersecurity strategies must be strengthened to match the sophistication of modern attackers.

Cybercriminals treat patient records as among their most valuable assets, and every breach directly undermines patients' trust in their providers. The operational stakes are also higher than in most industries: any disruption can put patients' lives at risk.

Security experts say healthcare organisations in emerging markets, including hospitals in India, need to invest in layered defences, integrate threat intelligence platforms, and strengthen supply chain monitoring. Proactive approaches to managing attack surfaces, prioritising vulnerabilities, and continuously monitoring the dark web are increasingly viewed as a necessity rather than an option. The DaVita case is thus more than the story of a single company hit by ransomware.

It is part of a wider pattern shaping the future of healthcare. In a digital age where a breached record can lead to real harm, foresight and investment in cybersecurity must be treated as being on an equal footing with patient care. These developments make clear that healthcare cybersecurity must evolve beyond reactive measures and fragmented defences.

In today's world, digital security can no longer be treated as a side concern; it must be integrated into the very core of patient care strategy. A forward-looking approach should prioritise investment in continuous cyber hygiene, workforce security awareness, and new technologies such as zero-trust frameworks, advanced threat intelligence platforms, and artificial intelligence (AI)-driven anomaly detection systems.

Cross-industry collaboration is equally important: shared standards and real-time intelligence exchange allow hospitals, vendors, regulators, and cybersecurity providers to collectively resist adversaries who operate across borders and industries.

Beyond reducing risk, such measures build patient trust, lower recovery costs, ensure uninterrupted delivery of essential care, and create long-term value. In an increasingly digitalised and interdependent healthcare sector, organisations that proactively adopt layered defences and transparent communication practices will not only mitigate threats but position themselves as leaders in a hostile cyber environment.

Clearly, if patients' lives are to be protected, the protection of their data must be treated as equally paramount.

AI Agents and the Rise of the One-Person Unicorn

For decades, building a unicorn, a company valued at over a billion dollars, has been synonymous with large teams of highly skilled professionals, years of trial and error, and significant venture capital. Today, that established model is undergoing a fundamental shift. As agentic AI systems develop rapidly, shaped in part by OpenAI's vision of autonomous digital agents, a single founder can now accomplish what once required an entire team.

In this emerging landscape, the "one-person unicorn" is no longer an abstraction but a real possibility, as AI agents expand beyond mere assistance to become transformative partners that push the boundaries of individual entrepreneurship. Although artificial intelligence has long been part of enterprise strategy, agentic AI marks the beginning of a significant shift.

Unlike conventional systems, which primarily analyse data and provide recommendations, these autonomous agents act independently, making strategic decisions and directly influencing business outcomes without human intervention. The shift is not merely theoretical; it is already reshaping organisational practice at scale.

A recent survey of 1,000 IT decision-makers in the United States, the United Kingdom, Germany, and Australia illustrates the extent of generative AI adoption. Ninety per cent of respondents said their companies have incorporated generative AI into their IT strategies, and half have already deployed AI agents.

A further 32 per cent are preparing to follow suit shortly, according to the survey. This new phase of AI is defined no longer by passive analytics or predictive modelling, but by autonomous agents capable of grasping objectives, evaluating choices, and executing tasks without human intervention.

Agents are no longer limited to providing assistance; they can orchestrate complex workflows across fragmented systems, adapt constantly to changing environments, and optimise outcomes in real time. This is more than automation: it represents a shift from static digitisation to dynamic, context-aware execution, effectively turning judgment into a digital function.

Leading companies increasingly compare the impact of this transformation to the internet's, though its reach may prove even greater. Whereas the internet revolutionised external information flows, agentic AI is transforming internal operations and decision-making ecosystems.

Such advances are already guiding healthcare diagnostics and enabling predictive interventions, powering self-optimising production systems in manufacturing, and allowing legal and compliance teams to simulate scenarios that reduce risk and accelerate decisions. This is more than a productivity boost: it can lay the foundations of new business models built on embedded, distributed intelligence.

According to Google CEO Sundar Pichai, artificial intelligence is poised to affect “every sector, every industry, every aspect of our lives,” making it a defining force of our era. Agentic AI is characterised by its ability to detect subtle patterns of behaviour and interactions between services that humans often struggle to observe. This capability has already been demonstrated in platforms such as Salesforce's Interaction Explorer, which allows AI agents to detect repeated customer frustrations or ineffective policy responses and propose corrective actions.

These systems thus become strategic advisors rather than mere back-office tools, capable of identifying risks, flagging opportunities, and making real-time recommendations to improve operations. Combined with agent-to-agent coordination, the technology goes further still, enabling automated cross-functional improvements that accelerate business processes and efficiency.

As part of this movement, leading companies such as Salesforce, Google, and Accenture are combining complementary strengths, integrating Salesforce's CRM ecosystem with Google Cloud's Gemini models and Accenture's sector-specific expertise to deliver AI-driven solutions ranging from multilingual customer support to predictive issue resolution and intelligent automation.

With such tools available, innovation is no longer confined to engineers; subject matter experts across a wide range of industries can now drive adoption and shape the next wave of enterprise transformation. To remain competitive, however, an organisation cannot simply rely on pre-built templates.

It must be able to customise its agentic AI systems to its unique identity and needs. Using natural language prompts, requirement documents, and workflow diagrams, businesses can tailor agent behaviour without long development cycles, large budgets, or deep technical expertise.

In the age of no-code and natural language interfaces, the power of customisation is shifting from developers to business users, ensuring that agents reflect a company's distinctive values, brand voice, and philosophy. Advances in multimodality, meanwhile, are taking AI beyond text into voice, images, video, and sensor data. Through this evolution, agents will interpret customer intent more deeply and provide more personalised, contextually relevant assistance.

Customers can now upload photos of defective products rather than type lengthy descriptions, or receive support via short videos rather than pages of text. Crucially, these agents retain memory across interactions, constantly adapting to individual behaviour and making digital engagement feel less like a transaction and more like an ongoing, human-centred conversation.

The implications of agentic AI extend well beyond operational efficiency and cost reduction. The transformation points towards a radical redefinition of work, value creation, and entrepreneurship itself. By enabling companies and individuals alike to harness distributed intelligence, these systems are not just reshaping workflows; they are redefining the boundaries of human and machine collaboration.

The one-person unicorn points to a future in which scale and impact are determined not by headcount but by the sophistication of digital agents working alongside a single visionary. Yet the transformation also raises concerns. The increasing delegation of decision-making to autonomous agents poses questions about accountability, ethics, job displacement, and systemic risk.

Regulators, policymakers, and industry leaders must establish guardrails that balance innovation with responsibility, ensuring the benefits of artificial intelligence do not deepen inequalities or erode trust. For companies, the challenge lies in deploying these tools not only quickly and efficiently but also in line with their values, branding, and social responsibilities. What makes this moment historic is not just the technical advance of autonomous agents, but the cultural and economic pivot they signal.

Just as the internet democratised access to information, AI agents are poised to democratise access to judgment, strategy, and execution, capabilities traditionally reserved for larger organisations. Enterprises can reach new levels of agility and competitiveness, while individuals can accomplish far more than before. Agentic intelligence is not an incremental upgrade to existing systems but a foundational shift in how the digital economy will function, one that will define the next chapter of our society.

How a ChatGPT prompt can allow cybercriminals to steal your Google Drive data


Chatbots and other AI tools have made life easier for threat actors. A recent incident highlighted how ChatGPT can be exploited to obtain API keys and other sensitive data from cloud platforms.

Prompt injection attack leads to cloud access

Experts have discovered a new prompt injection attack that can turn ChatGPT into a hacker’s best friend for data theft. Known as AgentFlayer, the exploit uses a single document to hide “secret” prompt instructions targeting OpenAI’s chatbot. An attacker simply shares what appears to be a harmless document with the victim through Google Drive; no clicks are required.

Zero-click threat: AgentFlayer

AgentFlayer is a “zero-click” threat because it abuses a weakness in Connectors, a ChatGPT feature that links the assistant to other applications, websites, and services. According to OpenAI, Connectors supports some of the world’s most widely used platforms, including cloud storage services such as Microsoft OneDrive and Google Drive.

The researchers used Google Drive to demonstrate the threats posed by chatbots and hidden prompts.

Google Doc used for prompt injection

The malicious document contains a hidden 300-word malicious prompt. The text is set in size-one font and coloured white, invisible to human readers but fully visible to the chatbot.

The prompt used to showcase AgentFlayer instructs ChatGPT to search the victim’s Google Drive for API keys and attach them to a tailored URL pointing at an attacker-controlled external server. The attack launches as soon as the malicious document is shared: the threat actor receives the hidden API keys the moment the target uses ChatGPT with the Connectors feature enabled.
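For defenders, one practical precaution is to scan shared documents for near-invisible formatting before handing them to an AI assistant. The sketch below is a minimal, hypothetical example in Python using the python-docx package: it flags runs of text that are tiny or coloured white in a .docx copy of a shared file. The file name and size threshold are illustrative, and a Google Doc would first need to be exported to .docx.

from docx import Document
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def find_hidden_runs(path, max_pt=2):
    # Collect text runs a human reader would likely never see:
    # font size at or below `max_pt` points, or white-on-white text.
    suspicious = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            tiny = run.font.size is not None and run.font.size.pt <= max_pt
            try:
                white = run.font.color.rgb == WHITE
            except (AttributeError, TypeError):
                white = False  # colour is inherited or theme-based
            if (tiny or white) and run.text.strip():
                suspicious.append(run.text)
    return suspicious

# Usage: export the shared document to .docx first (file name illustrative).
for fragment in find_hidden_runs("shared_document.docx"):
    print("possible hidden prompt fragment:", fragment)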

Other cloud platforms at risk too

AgentFlayer is not a flaw confined to Google’s cloud. “As with any indirect prompt injection attack, we need a way into the LLM's context. And luckily for us, people upload untrusted documents into their ChatGPT all the time. This is usually done to summarize files or data, or leverage the LLM to ask specific questions about the document’s content instead of parsing through the entire thing by themselves,” said Tamir Ishay Sharbat, a researcher at Zenity Labs.

“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately, these mitigations aren’t enough. Even safe-looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it,” Zenity Labs said in the report.
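To see why “safe-looking URLs” are so hard to police, consider a minimal, hypothetical allowlist check in Python. The hostname below is invented for illustration; the point is that even when a URL’s host passes an allowlist, stolen data can still ride out in the query string.

from urllib.parse import urlparse, parse_qs

ALLOWED_HOSTS = {"images.trusted-cdn.example"}  # illustrative allowlist

def is_allowed(url: str) -> bool:
    # Naive check: only the hostname is validated.
    return urlparse(url).hostname in ALLOWED_HOSTS

url = "https://images.trusted-cdn.example/logo.png?k=sk-STOLEN-KEY"
print(is_allowed(url))                      # True: the host looks safe
print(parse_qs(urlparse(url).query)["k"])   # ['sk-STOLEN-KEY'] leaks anyway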

Akira ransomware turns off Windows Defender to install malware on Windows devices


Akira ransomware strikes again. This time, the gang has abused a legitimate Intel CPU tuning driver to disable Microsoft Defender, shielding its attacks from EDR and other security tools active on target devices.

Windows Defender turned off for attacks

The exploited driver is “rwdrv.sys” (used by the ThrottleStop utility), which the hackers register as a service to gain kernel-level access. That access is then likely used to deploy a second driver, “hlpdrv.sys,” a malicious tool that modifies Windows Defender’s settings to shut down its protections.

'Bring Your Own Vulnerable Driver' attack

Experts call this technique “Bring Your Own Vulnerable Driver” (BYOVD): hackers load genuine, signed drivers with known exploitable flaws to achieve privilege escalation, then use that access to deploy a malicious driver that turns off Microsoft Defender. According to the researchers, the second driver, hlpdrv.sys, is “similarly registered as a service. When executed, it modifies the DisableAntiSpyware settings of Windows Defender within \REGISTRY\MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender\DisableAntiSpyware.” The malware achieves this by executing regedit.exe.
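Defenders can check for this specific tampering directly. The following is a minimal, hypothetical detection sketch in Python (Windows only, using the standard-library winreg module); it reads the same registry location quoted above and is not GuidePoint's tooling.

import winreg

# Registry key named in the attack write-up; the value "DisableAntiSpyware"
# set to 1 indicates Defender has been switched off by policy.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows Defender"

def defender_disabled_by_policy() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, "DisableAntiSpyware")
            return value == 1
    except FileNotFoundError:
        return False  # key or value absent: no policy override present

if defender_disabled_by_policy():
    print("WARNING: Defender disabled via DisableAntiSpyware policy.")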

Discovery of the Akira ransomware attack

The technique was observed by GuidePoint Security, which noticed repeated exploitation of the rwdrv.sys driver across recent Akira ransomware attacks and flagged the tactic for its consistency across incidents. “This high-fidelity indicator can be used for proactive detection and retroactive threat hunting,” the report said.

To help defenders stop these attacks, GuidePoint Security has published a YARA rule for hlpdrv.sys and full indicators of compromise (IoCs) for both drivers, including their file paths and service names.
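Those service names also lend themselves to a quick local hunt. As a minimal, hypothetical sketch (again Python on Windows, and not a substitute for GuidePoint's published YARA rule and IoCs), one could enumerate registered services and flag the two names reported in these incidents:

import winreg

SUSPECT_SERVICES = {"rwdrv", "hlpdrv"}  # service names reported in the incidents
SERVICES_KEY = r"SYSTEM\CurrentControlSet\Services"

def find_suspect_services():
    hits = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES_KEY) as root:
        subkey_count = winreg.QueryInfoKey(root)[0]  # number of service keys
        for i in range(subkey_count):
            name = winreg.EnumKey(root, i)
            if name.lower() in SUSPECT_SERVICES:
                hits.append(name)
    return hits

for name in find_suspect_services():
    print("suspect service registered:", name)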

SonicWall VPN attack

Akira ransomware was also recently linked to attacks on SonicWall VPNs, possibly via a previously unknown flaw. GuidePoint Security said it could neither confirm nor rule out the Akira gang’s abuse of a zero-day vulnerability in SonicWall VPNs. In response to the reports, SonicWall has advised customers to disable SSLVPN where possible, enforce two-factor authentication (2FA), remove inactive accounts, and enable Botnet/Geo-IP protection.

The DFIR Report has also released a study of Akira ransomware incidents, revealing use of the Bumblebee malware loader deployed through trojanised MSI installers of IT software tools.