PromptLock: the new AI-powered ransomware and what to do about it
Security researchers recently identified a piece of malware named PromptLock that uses a local artificial intelligence model to help create and run harmful code on infected machines. The finding comes from ESET researchers and has been reported by multiple security outlets. Investigators say PromptLock can scan files, exfiltrate selected data, and encrypt user files; code for destructive deletion is present but was not active in the analysed samples.
What does “AI-powered” mean here?
Instead of a human writing every malicious script in advance, PromptLock stores fixed text prompts on the victim machine and feeds them to a locally running language model. That model then generates small programs, written in the lightweight Lua language, which the malware executes immediately. Researchers report the tool uses a locally accessible open-weight model called gpt-oss:20b through the Ollama API to produce those scripts. Because the AI runs on the infected computer rather than contacting a remote service, the activity can be harder to spot.
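To make the pattern concrete, the sketch below shows what "feeding a fixed prompt to a locally running model" looks like in practice: a small Go program posts a prompt to Ollama's local /api/generate endpoint and prints whatever text comes back. This is an illustration of the general mechanism ESET describes, not PromptLock's code; the model name comes from the research, while the prompt, address and error handling here are illustrative assumptions, and nothing returned is executed.

```go
// Minimal sketch: send a fixed prompt to a local Ollama endpoint and print the
// generated text. It illustrates the prompt -> local model -> script pattern only;
// the prompt is benign and the returned text is never executed.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Request body for Ollama's /api/generate endpoint (default port 11434).
	reqBody, _ := json.Marshal(map[string]any{
		"model":  "gpt-oss:20b",
		"prompt": "Write a short Lua script that prints the current date.", // illustrative, harmless prompt
		"stream": false,
	})

	resp, err := http.Post("http://127.0.0.1:11434/api/generate", "application/json", bytes.NewReader(reqBody))
	if err != nil {
		fmt.Println("no local model endpoint reachable:", err)
		return
	}
	defer resp.Body.Close()

	// Ollama returns a JSON object whose "response" field holds the generated text.
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	fmt.Println(out.Response) // the generated Lua text would appear here
}
```

In the reported samples the malware goes a step further and immediately runs the Lua it receives; the sketch deliberately stops at printing it.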
How the malware works
According to the technical analysis, PromptLock is written in Go, produces cross-platform Lua scripts that run on Windows, macOS and Linux, and, in the analysed samples, uses a 128-bit SPECK encryption routine to lock files. The malware’s prompts embed a Bitcoin address that investigators say appears to be associated with the pseudonymous Bitcoin creator Satoshi Nakamoto. Early variants have been uploaded to public analysis sites, and ESET treats the discovery as a proof of concept rather than evidence of widespread live attacks.
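For readers unfamiliar with it, SPECK is a publicly documented lightweight block cipher built from additions, rotations and XORs; the 128-bit variant operates on two 64-bit words per block. The sketch below shows a single SPECK round purely to illustrate what "a SPECK 128-bit encryption routine" refers to. It is not PromptLock's implementation, the sample values are arbitrary, and it omits key scheduling, block modes and file handling.

```go
// Minimal sketch of one SPECK-128 round (64-bit words, rotation amounts 8 and 3).
// Illustrative only: no key schedule, no mode of operation, no file encryption.
package main

import (
	"fmt"
	"math/bits"
)

// speckRound applies a single SPECK round to the two 64-bit block halves x and y
// using round key k: an add-rotate-xor (ARX) construction.
func speckRound(x, y, k uint64) (uint64, uint64) {
	x = bits.RotateLeft64(x, -8) // rotate right by 8
	x += y
	x ^= k
	y = bits.RotateLeft64(y, 3) // rotate left by 3
	y ^= x
	return x, y
}

func main() {
	x, y := uint64(0x0123456789abcdef), uint64(0xfedcba9876543210) // arbitrary sample block halves
	x, y = speckRound(x, y, 0x0f0e0d0c0b0a0908)                    // one round with an arbitrary round key
	fmt.Printf("%016x %016x\n", x, y)
}
```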
Why this matters
Two features make this approach worrying for defenders. First, the generated scripts vary from run to run, which weakens signature-based and behaviour-based rules that rely on consistent patterns. Second, a local model produces no network traffic to cloud AI providers, so defenders lose one common source of detection and takedown. Together, these traits could make automated malware harder to detect and classify.
Practical, plain steps to protect yourself:
1. Do not run files or installers you do not trust.
2. Keep current, tested backups offline or on immutable storage.
3. Maintain up-to-date operating system and antivirus software.
4. Avoid running untrusted local AI models or services on critical machines, and restrict access to local model APIs (a sketch for checking whether such an API is exposed follows this list).
These steps will reduce the risk from this specific technique and from ransomware in general.
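As a starting point for step 4, the sketch below probes the default Ollama listen address and reports whether anything answers. The port (11434) and timeout are assumptions based on Ollama's defaults; adapt the address for whatever local model services your environment actually allows, and firewall or disable the port on machines that should not be serving models.

```go
// Minimal sketch: probe the default local Ollama address and report whether a
// model API is listening. Port and timeout are assumptions; adjust as needed.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:11434" // default Ollama listen address; change for other model servers
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("no local model API detected on", addr)
		return
	}
	conn.Close()
	fmt.Println("something is listening on", addr, "- verify it is expected and access-restricted")
}
```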
Bottom line
PromptLock is a clear signal that attackers are experimenting with local AI to automate malicious tasks. At present it appears to be a work in progress rather than an active campaign, but the researchers urge vigilance and standard defensive practices while security teams continue to monitor developments.