
PromptLock: the new AI-powered ransomware and what to do about it

 



Security researchers recently identified a piece of malware named PromptLock that uses a local artificial intelligence model to help create and run harmful code on infected machines. The finding comes from ESET researchers and has been reported by multiple security outlets; investigators say PromptLock can scan files, copy or steal selected data, and encrypt user files, with code for destructive deletion present but not active in analysed samples. 


What does “AI-powered” mean here?

Instead of a human writing every malicious script in advance, PromptLock stores fixed text prompts on the victim machine and feeds them to a locally running language model. That model then generates small programs, written in the lightweight Lua language, which the malware executes immediately. Researchers report the tool uses a locally accessible open-weight model called gpt-oss:20b through the Ollama API to produce those scripts. Because the AI runs on the infected computer rather than contacting a remote service, the activity can be harder to spot. 
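For readers who have never called a local model, the sketch below shows how simple the mechanism is. It uses Ollama's documented /api/generate endpoint with a deliberately harmless prompt and assumes a default installation listening on 127.0.0.1:11434; PromptLock's own code is not public, so this is only an illustration of the plumbing, not the malware.

// Minimal sketch of a request to Ollama's local /api/generate endpoint.
// This is the ordinary, documented Ollama API; the point is to show how
// little code any local program needs to ask a local model for text.
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "net/http"
)

type generateRequest struct {
    Model  string `json:"model"`
    Prompt string `json:"prompt"`
    Stream bool   `json:"stream"`
}

type generateResponse struct {
    Response string `json:"response"`
}

func main() {
    // Default local Ollama endpoint; model name as reported in the research.
    // The prompt here is deliberately benign.
    reqBody, _ := json.Marshal(generateRequest{
        Model:  "gpt-oss:20b",
        Prompt: "Write a one-line Lua script that prints the current date.",
        Stream: false,
    })

    resp, err := http.Post("http://127.0.0.1:11434/api/generate",
        "application/json", bytes.NewReader(reqBody))
    if err != nil {
        fmt.Println("no local model API reachable:", err)
        return
    }
    defer resp.Body.Close()

    var out generateResponse
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        fmt.Println("unexpected response:", err)
        return
    }
    fmt.Println(out.Response) // generated text, which malware would go on to execute
}

The point is that any local program able to send an HTTP request to that port can obtain freshly generated text, including code, without ever touching the internet.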


How the malware works

According to the technical analysis, PromptLock is written in Go, produces cross-platform Lua scripts that run on Windows, macOS and Linux, and uses a SPECK 128-bit encryption routine to lock files in the analysed samples. The malware's prompts include a Bitcoin address that investigators linked to the pseudonymous Bitcoin creator known as Satoshi Nakamoto. Early variants have been uploaded to public analysis sites, and ESET treats the discovery as a proof of concept rather than evidence of widespread live attacks.
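SPECK is a published lightweight block cipher, so the primitive itself is well documented. Purely as a reference for defenders, here is a minimal Go sketch of one SPECK-128/128 block encryption; it is not PromptLock's code, the key and plaintext values are arbitrary examples, and it deliberately omits file handling, padding, modes of operation and key management.

// Minimal sketch of the SPECK-128/128 block cipher (128-bit block, 128-bit
// key, 32 rounds). Reference only; not taken from the malware.
package main

import (
    "fmt"
    "math/bits"
)

const rounds = 32 // SPECK-128/128 uses 32 rounds

// expandKey derives the per-round keys from a 128-bit key (two 64-bit words).
func expandKey(k0, k1 uint64) [rounds]uint64 {
    var rk [rounds]uint64
    a, b := k0, k1
    for i := 0; i < rounds; i++ {
        rk[i] = a
        b = (bits.RotateLeft64(b, -8) + a) ^ uint64(i) // rotate right by 8, add, mix in round index
        a = bits.RotateLeft64(a, 3) ^ b
    }
    return rk
}

// encryptBlock applies the SPECK round function to one 128-bit block (x, y).
func encryptBlock(x, y uint64, rk [rounds]uint64) (uint64, uint64) {
    for i := 0; i < rounds; i++ {
        x = (bits.RotateLeft64(x, -8) + y) ^ rk[i]
        y = bits.RotateLeft64(y, 3) ^ x
    }
    return x, y
}

func main() {
    // Example key and plaintext words (illustrative values only).
    rk := expandKey(0x0706050403020100, 0x0f0e0d0c0b0a0908)
    hi, lo := encryptBlock(0x6c61766975716520, 0x7469206564616d20, rk)
    fmt.Printf("ciphertext: %016x %016x\n", hi, lo)
}

The entire cipher fits in a few lines, which is part of what makes it attractive for small, portable code.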


Why this matters

Two features make this approach worrying for defenders. First, generated scripts vary each time, which reduces the effectiveness of signature or behaviour rules that rely on consistent patterns. Second, a local model produces no network traces to cloud providers, so defenders lose one common source of detection and takedown. Together, these traits could make automated malware harder to detect and classify. 

Practical, plain steps to protect yourself:

1. Do not run files or installers you do not trust.

2. Keep current, tested backups offline or on immutable storage.

3. Maintain up-to-date operating system and antivirus software.

4. Avoid running untrusted local AI models or services on critical machines, and restrict access to local model APIs (a quick check for an exposed local model API is sketched below).

These steps will reduce the risk from this specific technique and from ransomware in general. 
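On the last point, one quick way to tell whether a machine exposes a local model API is to probe Ollama's default listener. The check below is an illustrative sketch, not an official tool: it assumes the default address 127.0.0.1:11434 and the documented /api/tags endpoint, and the address should be adjusted for your own environment.

// Illustrative check: does this machine expose a local Ollama-style model API?
// Assumes Ollama's default port 11434; adjust the address as needed.
package main

import (
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: 2 * time.Second}

    // /api/tags is Ollama's documented endpoint for listing installed models.
    resp, err := client.Get("http://127.0.0.1:11434/api/tags")
    if err != nil {
        fmt.Println("no local model API answering on port 11434:", err)
        return
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    fmt.Println("local model API is reachable; installed models:")
    fmt.Println(string(body))
}

If such a service is genuinely needed, keep it bound to the loopback interface rather than the network; Ollama's OLLAMA_HOST setting controls the bind address.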


Bottom line

PromptLock is a clear signal that attackers are experimenting with local AI to automate malicious tasks. At present it appears to be a work in progress and not an active campaign, but the researchers stress vigilance and standard defensive practices while security teams continue monitoring developments. 



First AI-Powered Ransomware ‘PromptLock’ Emerges, Using OpenAI gpt-oss-20b Model for Encryption

 


A newly uncovered ransomware strain is making headlines as the first to integrate an artificial intelligence model for malicious operations.

Named “PromptLock” by the ESET Research team, the malware employs OpenAI’s gpt-oss:20b model through the Ollama API to generate customized cross-platform Lua scripts, marking a significant evolution in ransomware development.

Although PromptLock is currently believed to be a proof of concept (PoC) with no evidence of active deployment, its architecture highlights how cybercriminals are beginning to embed local large language models (LLMs) into malware to create more adaptive and evasive threats.

Unlike traditional ransomware, which ships with pre-built malicious code, PromptLock generates its payloads on demand. Written in Golang, with Windows and Linux variants found on VirusTotal, it sends hard-coded prompts to a locally running AI model.

Network analysis revealed POST requests to a local Ollama API endpoint (172.42.0[.]253:8443), where the AI was instructed to act as a “Lua code generator.”

The prompts directed the AI to generate malicious scripts capable of:
  1. System Enumeration – Collecting OS details, usernames, hostnames, and directories across Windows, Linux, and macOS (the sketch after this list illustrates the kind of data involved).
  2. File System Inspection – Scanning drives, locating sensitive files, and identifying PII or confidential data.
  3. Data Exfiltration & Encryption – Running Lua-generated scripts to steal and encrypt information.
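To make the first of those capabilities concrete, the sketch below shows the kind of host details a basic enumeration step collects. It is written in Go for consistency with the rest of this post rather than in Lua (the language PromptLock actually generates), and it is illustrative only.

// Illustrative only: the sort of basic host information an enumeration
// step gathers. Each call is harmless in isolation.
package main

import (
    "fmt"
    "os"
    "os/user"
    "runtime"
)

func main() {
    hostname, _ := os.Hostname()
    u, _ := user.Current()
    home, _ := os.UserHomeDir()

    fmt.Println("OS:       ", runtime.GOOS, runtime.GOARCH)
    fmt.Println("Hostname: ", hostname)
    if u != nil {
        fmt.Println("User:     ", u.Username)
    }
    fmt.Println("Home dir: ", home)

    // Listing a directory is the first step toward locating documents or PII.
    entries, err := os.ReadDir(home)
    if err == nil {
        fmt.Println("Top-level entries in home directory:", len(entries))
    }
}

Seen alone, none of these calls is suspicious; what matters is the combination of enumeration, file inspection and encryption driven by freshly generated scripts.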

To execute encryption, the malware deploys the SPECK 128-bit block cipher, apparently chosen for its light weight and portability.

ESET researchers noted that PromptLock appears unfinished, with functions like data destruction defined but not yet implemented.

Adding to its oddity, one prompt contained a Bitcoin address linked to Satoshi Nakamoto, likely a placeholder or diversion tactic.

Despite being a PoC, ESET disclosed the malware due to its implications. “We believe it is our responsibility to inform the cybersecurity community about such developments,” the researchers stated.

Experts warn that as local LLMs grow more accessible, attackers may increasingly rely on AI to dynamically generate malware on compromised systems. This shift could redefine how security teams approach ransomware defense, making proactive monitoring essential.