Cybersecurity researchers have identified what appears to be the first AI-powered ransomware strain. ESET researchers Peter Strycek and Anton Cherepanov discovered the sample and named it “PromptLock.” "During infection, the AI autonomously decides which files to search, copy, or encrypt — marking a potential turning point in how cybercriminals operate," ESET said.
The malware has not been spotted in any real-world cyberattack so far, the researchers say; PromptLock appears to still be in development.
Although cybercriminals have used GenAI tools to create malware before, PromptLock is the first known ransomware built around an AI model. According to Cherepanov’s LinkedIn post, PromptLock uses OpenAI’s gpt-oss:20b model through the Ollama API to generate new scripts.
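The Ollama API itself is public and mundane; what is novel is a ransomware sample driving it. As a benign illustration only, the sketch below shows the standard call pattern against a locally hosted model (the endpoint is Ollama's documented default; the model name and prompt here are placeholders, not anything taken from the malware).

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for a single, non-streaming Ollama generation call."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model and return its text response."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full generated text in "response".
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a local Ollama instance with the model pulled, e.g. `ollama pull gpt-oss:20b`.
    print(generate("gpt-oss:20b", "Write a one-line shell command that lists .txt files."))
```

Because the model runs locally, none of this traffic ever leaves the infected host, which is part of what makes the technique hard to spot with network-level controls.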
Cherepanov’s LinkedIn post noted that the ransomware script can exfiltrate files and encrypt data, and that a file-destruction capability may be added later. He said that “while multiple indicators suggest that the sample is a proof-of-concept (PoC) or a work-in-progress rather than an operational threat in the wild, we believe it is crucial to raise awareness within the cybersecurity community about such emerging risks.”
Speaking to Dark Reading, the ESET researchers described AI-based ransomware as a serious threat for security teams. Strycek and Cherepanov are still investigating PromptLock, but they chose to warn defenders immediately rather than wait for a complete analysis.
ESET on X noted that "the PromptLock ransomware is written in #Golang, and we have identified both Windows and Linux variants uploaded to VirusTotal."
Threat actors have already embraced AI tools for phishing campaigns, using them to generate fake content and malicious websites as adoption spreads across the industry. AI-powered ransomware, however, would pose a far tougher challenge for defenders.
A recent cyber incident has brought to light how one weak link in software integrations can expose sensitive business information. Salesloft, a sales automation platform, confirmed that attackers exploited its Drift chat integration with Salesforce to steal tokens that granted access to customer environments.
Between August 8 and August 18, 2025, threat actors obtained OAuth and refresh tokens connected to the Drift–Salesforce integration. These tokens work like digital keys, allowing connected apps to access Salesforce data without repeatedly asking for passwords. Once stolen, the tokens were used to log into Salesforce accounts and extract confidential data.
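The "digital keys" analogy maps directly onto how bearer tokens work in practice: whoever holds a valid token can call the API, with no password prompt involved. A minimal sketch of a Salesforce REST query authorized this way (the instance URL, API version, and token below are placeholders for illustration):

```python
import json
import urllib.parse
import urllib.request

API_VERSION = "v58.0"  # placeholder; use your org's supported version


def build_query_url(instance_url: str, soql: str) -> str:
    """Build a Salesforce REST query URL for a SOQL statement."""
    return f"{instance_url}/services/data/{API_VERSION}/query?q={urllib.parse.quote(soql)}"


def run_query(instance_url: str, access_token: str, soql: str) -> dict:
    """Run a SOQL query; the bearer token alone authorizes the request."""
    req = urllib.request.Request(
        build_query_url(instance_url, soql),
        # This header is the whole authentication story: no password, no MFA.
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

This is exactly why a stolen token is so valuable to an attacker: it sidesteps login protections entirely until it expires or is revoked, which is why Salesloft revoked every token tied to the integration.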
According to Salesloft, the attackers specifically searched for credentials such as Amazon Web Services (AWS) keys, Snowflake access tokens, and internal passwords. The company said the breach only impacted customers who used the Drift–Salesforce connection, while other integrations were unaffected. As a precaution, all tokens for this integration were revoked, forcing customers to reauthenticate before continuing use.
Google’s Threat Intelligence team, which is monitoring the attackers under the name UNC6395, reported that the group issued queries inside Salesforce to collect sensitive details hidden in support cases. These included login credentials, API keys, and cloud access tokens. Investigators noted that while the attackers tried to cover their tracks by deleting query jobs, the activity still appears in Salesforce logs.
To disguise their operations, the hackers used anonymizing tools like Tor and commercial hosting services. Google also identified user-agent strings and IP addresses linked to the attack, which organizations can use to check their logs for signs of compromise.
Security experts are urging affected administrators to rotate credentials immediately, review Salesforce logs for unusual queries, and search for leaked secrets by scanning for terms such as “AKIA” (used in AWS keys), “Snowflake,” “password,” or “secret.” They also recommend tightening access controls on third-party apps, limiting token permissions, and shortening session times to reduce future risk.
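The secret-scanning advice above is easy to automate. Below is a hedged sketch of one way to sweep exported records for those indicators; the pattern names are hypothetical, and the AWS regex reflects the well-known `AKIA` access-key-ID prefix the experts cite. Real deployments would use a dedicated secret scanner and tune patterns for their environment.

```python
import re

# Hypothetical patterns built from the indicators mentioned above.
SECRET_PATTERNS = {
    # AWS access key IDs start with "AKIA" followed by 16 uppercase alphanumerics.
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # Plain keyword sweep for the other suggested search terms.
    "keyword": re.compile(r"\b(snowflake|password|secret)\b", re.IGNORECASE),
}


def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a record dump."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(text))
    return hits
```

Running this over exported support-case text surfaces candidate leaks for triage; every hit should be treated as a credential to rotate, not merely to redact.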
While some extortion groups have publicly claimed responsibility for the attack, Google stated there is no clear evidence tying them to this breach. The investigation is still ongoing, and attribution remains uncertain.
This incident underlines the broader risks of SaaS integrations. Connected apps are often given high levels of access to critical business platforms. If those credentials are compromised, attackers can bypass normal login protections and move deeper into company systems. As businesses continue relying on cloud applications, stronger governance of integrations and closer monitoring of token use are becoming essential.
According to a report by Proofpoint, the majority of CISOs fear a material cyberattack in the next 12 months, a finding that reflects both rising risk and a cultural shift among security leaders.
“76% of CISOs anticipate a material cyberattack in the next year, with human risk and GenAI-driven data loss topping their concerns,” Proofpoint said. Against that backdrop, corporate stakeholders are pushing for a clearer picture of their technology risks and how well protected they actually are.
Experts believe CISOs are becoming more open about these attacks, driven by SEC disclosure rules, stricter regulations, board expectations, and inquiries. The report surveyed 1,600 CISOs worldwide, all at organizations with more than 1,000 employees.
The study highlights growing unease about operating a business amid constant cyber incidents. Although the majority of CISOs are confident in their cybersecurity culture, six out of 10 said their organizations are not prepared for a cyberattack. A majority also said they would favor paying a ransom to prevent the leak of sensitive data.
AI has emerged as both a top concern and a top priority for CISOs. Two-thirds believe that enabling GenAI tools is a top priority over the next two years, despite the ongoing risks. In the US, 80% of CISOs worry about possible data breaches through GenAI platforms.
With adoption rates rising, organizations have started to move from restriction to governance. “Most are responding with guardrails: 67% have implemented usage guidelines, and 68% are exploring AI-powered defenses, though enthusiasm has cooled from 87% last year. More than half (59%) restrict employee use of GenAI tools altogether,” Proofpoint said.