LameHug is a novel malware family that uses a large language model (LLM) to generate the commands it executes on compromised Windows systems.
Ukraine's national cyber incident response team (CERT-UA) identified LameHug and attributed the attacks to the Russia-backed threat group APT28 (also known as Sednit, Sofacy, Pawn Storm, Fancy Bear, STRONTIUM, Tsar Team, and Forest Blizzard). Written in Python, the malware communicates with the Qwen2.5-Coder-32B-Instruct LLM via the Hugging Face API, which allows it to generate commands in response to prompts.
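For context, querying that model through Hugging Face's hosted inference service takes only a few lines of Python. The sketch below is illustrative only, not code recovered from the malware; the prompt text, the `huggingface_hub` client usage, and the `HF_TOKEN` environment variable are assumptions made for the example:

```python
import os
from huggingface_hub import InferenceClient

# Illustrative sketch: ask the hosted Qwen2.5-Coder model to turn a natural-language
# request into a shell command. The prompt and token handling are assumptions for
# this example, not details taken from LameHug itself.
client = InferenceClient(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",
    token=os.environ.get("HF_TOKEN"),  # Hugging Face API token supplied by the caller
)

response = client.chat_completion(
    messages=[{
        "role": "user",
        "content": "Reply with a single Windows command that writes basic system "
                   "information to a file named info.txt, and nothing else.",
    }],
    max_tokens=64,
)

# The reply is a ready-to-run command string (e.g. something along the lines of
# 'systeminfo > info.txt') that a program could pass straight to a shell.
print(response.choices[0].message.content)
```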
Alibaba Cloud developed the LLM, which is open source and designed to produce code, reason, and follow coding-focused instructions. It can translate natural-language descriptions into executable code (in several languages) or shell commands. CERT-UA discovered LameHug after receiving reports on July 10 of malicious emails sent from compromised accounts that impersonated ministry officials and attempted to distribute the malware to executive government organisations.
The emails include a ZIP attachment that contains a LameHug loader. CERT-UA identified at least three variants: 'Attachment.pif,' 'AI_generator_uncensored_Canvas_PRO_v0.9.exe,' and 'image.py.'
The Ukrainian agency attributes this activity to the Russian threat group APT28 with medium confidence. In the reported attacks, LameHug was tasked with carrying out system reconnaissance and data theft using directives generated dynamically by the LLM. LameHug used these AI-generated instructions to gather system information and save it to a text file (info.txt), recursively search for documents in common user directories (Documents, Desktop, Downloads), and then exfiltrate the data over SFTP or HTTP POST.
LameHug is the first publicly documented malware that uses an LLM to carry out an attacker's tasks. From a technical standpoint, this could signal a new attack paradigm in which threat actors can adjust their techniques during a compromise without needing new payloads.
Furthermore, routing command-and-control traffic through Hugging Face infrastructure may make the communication stealthier, allowing the intrusion to go undetected for longer. Because its commands are generated dynamically rather than hardcoded, the malware can also evade security software and static analysis tools that look for hardcoded command strings, as the simple sketch after this paragraph illustrates. CERT-UA did not specify whether LameHug's execution of the LLM-generated commands was successful.
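The toy example below illustrates that detection point only; it performs no network calls and runs no commands, and the stand-in function and variable names are assumptions for the illustration:

```python
"""Toy illustration of why dynamically generated commands defeat static string signatures."""

# A hardcoded command like this is embedded in the file on disk, so a static
# scanner or string-based signature can match it directly.
HARDCODED_COMMAND = "systeminfo"


def simulated_llm_response() -> str:
    # Stand-in for text returned at runtime by a remote LLM endpoint: the command
    # is assembled only in memory, so scanning the file on disk reveals nothing.
    return "system" + "info"


if __name__ == "__main__":
    print("present in the file on disk:", HARDCODED_COMMAND)
    print("assembled only at runtime  :", simulated_llm_response())
```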