First AI-Powered Ransomware ‘PromptLock’ Emerges, Using OpenAI’s gpt-oss:20b Model to Generate Attack Scripts
A newly uncovered ransomware strain is making headlines as the first to integrate an artificial intelligence model for malicious operations.
Named “PromptLock” by the ESET Research team, the malware employs OpenAI’s gpt-oss:20b model through the Ollama API to generate customized cross-platform Lua scripts, marking a significant evolution in ransomware development.
Although currently believed to be a proof-of-concept (PoC) with no evidence of active deployment, the architecture highlights how cybercriminals are beginning to embed local large language models (LLMs) into malware to create more adaptive and evasive threats.
Unlike traditional ransomware, which ships with pre-built malicious code, PromptLock generates its payloads at runtime. Written in Golang, with both Windows and Linux variants found on VirusTotal, it sends hard-coded prompts to a locally running AI model.
Network analysis revealed POST requests to a local Ollama API endpoint (172.42.0[.]253:8443), where the AI was instructed to act as a “Lua code generator.”
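For context, Ollama exposes a simple local HTTP generation API, so a request of the kind ESET observed can be sketched as follows. The endpoint path (`/api/generate`), payload fields, and prompt text below come from Ollama's public API documentation and are illustrative; they are not the malware's actual contents:

```python
import json

# Ollama's documented local generation endpoint (the default port is 11434;
# PromptLock reportedly used a hard-coded, non-default address instead).
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_generate_payload(model: str, prompt: str) -> bytes:
    """Serialize a POST body for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,    # e.g. "gpt-oss:20b"
        "prompt": prompt,  # in PromptLock's case, a hard-coded instruction
        "stream": False,   # ask for a single JSON response, not a stream
    }
    return json.dumps(payload).encode("utf-8")

# Hypothetical prompt standing in for the "Lua code generator" instruction.
body = build_generate_payload("gpt-oss:20b", "You are a Lua code generator. ...")
```

Because the model runs locally, none of this traffic ever leaves the host, which is part of what makes the technique hard to spot with network-perimeter tooling alone.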
The prompts directed the AI to generate malicious scripts capable of:
- System Enumeration – Collecting OS details, usernames, hostnames, and directories across Windows, Linux, and macOS.
- File System Inspection – Scanning drives, locating sensitive files, and identifying PII or confidential data.
- Data Exfiltration & Encryption – Running Lua-generated scripts to steal and encrypt information.
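The first of those capabilities, system enumeration, amounts to a handful of standard-library calls on any platform. The scripts PromptLock generates are Lua, so the following Python sketch is purely illustrative of the kind of data being collected:

```python
import getpass
import platform
import socket
from pathlib import Path

def enumerate_system() -> dict:
    """Collect basic host details: the kind of data PromptLock's
    generated scripts are prompted to gather."""
    try:
        user = getpass.getuser()
    except OSError:
        user = "unknown"  # no login name available in this environment
    return {
        "os": platform.system(),      # "Windows", "Linux", or "Darwin" (macOS)
        "release": platform.release(),
        "hostname": socket.gethostname(),
        "username": user,
        "home_dir": str(Path.home()),
    }

info = enumerate_system()
```

That this fits in a dozen lines is the point: a small local model can plausibly emit working variants of it on demand, each one slightly different.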
To encrypt files, the malware deploys the SPECK 128-bit block cipher, likely chosen for its small footprint and speed.
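SPECK is a publicly specified lightweight cipher, so its core is easy to illustrate. Below is a minimal sketch of Speck128/128 (128-bit block, 128-bit key, 32 rounds) following the designers' published specification; it shows the cipher family named in the report, not PromptLock's actual code:

```python
MASK64 = (1 << 64) - 1  # SPECK128 operates on 64-bit words

def ror(x, r):  # rotate a 64-bit word right by r bits
    return ((x >> r) | (x << (64 - r))) & MASK64

def rol(x, r):  # rotate a 64-bit word left by r bits
    return ((x << r) | (x >> (64 - r))) & MASK64

def speck128_128_expand(key):
    """Expand a (k0, l0) key pair into 32 round keys."""
    k, l = key
    round_keys = [k]
    for i in range(31):
        # The key schedule reuses the round function with i as the round key.
        l = ((ror(l, 8) + k) & MASK64) ^ i
        k = rol(k, 3) ^ l
        round_keys.append(k)
    return round_keys

def speck128_128_encrypt(x, y, round_keys):
    """Encrypt one 128-bit block held as two 64-bit words (x, y)."""
    for k in round_keys:
        x = ((ror(x, 8) + y) & MASK64) ^ k
        y = rol(y, 3) ^ x
    return x, y

# Official test vector from the SPECK specification:
rks = speck128_128_expand((0x0706050403020100, 0x0F0E0D0C0B0A0908))
ct = speck128_128_encrypt(0x6C61766975716520, 0x7469206564616D20, rks)
# ct == (0xA65D985179783265, 0x7860FEDF5C570D18)
```

The whole cipher is add-rotate-xor operations on two words, which is exactly why it appeals to malware authors: it is tiny, fast, and trivial to embed or even regenerate from a prompt.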
ESET researchers noted that PromptLock appears unfinished, with functions like data destruction defined but not yet implemented.
Adding to the sample's oddities, one prompt contained a Bitcoin address seemingly linked to Satoshi Nakamoto, likely a placeholder or a diversion tactic.
Despite being a PoC, ESET disclosed the malware due to its implications. “We believe it is our responsibility to inform the cybersecurity community about such developments,” the researchers stated.
Experts warn that as local LLMs grow more accessible, attackers may increasingly rely on AI to dynamically generate malware on compromised systems. This shift could redefine how security teams approach ransomware defense, making proactive monitoring essential.
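One simple proactive check that follows from this is inventorying which hosts are quietly running a local LLM service at all. The sketch below probes Ollama's default port; it is a heuristic only (PromptLock itself used a non-default address), and the port number is the one from Ollama's documentation:

```python
import socket

# Default port for Ollama's local API. PromptLock used a non-default
# address, so treat this as an inventory heuristic, not a detection rule.
OLLAMA_DEFAULT_PORT = 11434

def local_port_open(port: int, host: str = "127.0.0.1",
                    timeout: float = 0.5) -> bool:
    """Return True if something is listening on the given local port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# An unexpected local LLM service on an endpoint may merit a closer look.
llm_present = local_port_open(OLLAMA_DEFAULT_PORT)
```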