Artificial intelligence is increasingly shaping cyber security, but recent claims about “AI-powered” cybercrime often exaggerate how advanced these threats currently are. While AI is changing how both defenders and attackers operate, the evidence does not support the idea that cybercriminals are already running fully autonomous, self-directed AI attacks at scale.
For several years, AI has played a defining role in cyber security as organisations modernise their systems. Machine learning tools now assist with threat detection, log analysis, and response automation. At the same time, attackers are exploring how these technologies might support their activities. However, the capabilities of today’s AI tools are frequently overstated, creating a disconnect between public claims and operational reality.
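The defensive side of this picture is concrete: much of what detection tools do is statistical baselining over telemetry such as logs. As a minimal sketch of the idea (the function name, threshold, and sample counts here are illustrative, not taken from any particular product), flagging unusual activity can be as simple as scoring event counts against their own baseline:

```python
from statistics import mean, stdev

def flag_anomalous_counts(counts, threshold=2.5):
    """Flag time buckets whose event counts deviate strongly from the mean.

    A z-score above `threshold` marks the bucket as anomalous -- a toy
    stand-in for the statistical baselining many detection tools perform
    over log data at far larger scale.
    """
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []  # perfectly flat baseline: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical per-minute counts of failed logins; the spike at index 5
# stands out against an otherwise steady baseline.
counts = [4, 5, 3, 4, 6, 120, 5, 4, 3, 5]
print(flag_anomalous_counts(counts))  # → [5]
```

Real systems layer machine learning on top of baselines like this precisely because rules alone miss subtler deviations, which is why the technology has genuine defensive value even as its offensive capabilities are overstated.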
Recent attention has been driven by two high-profile reports. One study suggested that artificial intelligence is involved in most ransomware incidents, a conclusion that was later challenged by multiple researchers due to methodological concerns. The report was subsequently withdrawn, reinforcing the importance of careful validation. Another claim emerged when an AI company reported that its model had been misused by state-linked actors to assist in an espionage operation targeting multiple organisations.
According to the company’s account, the AI tool supported tasks such as identifying system weaknesses and assisting with movement across networks. However, experts questioned these conclusions due to the absence of technical indicators and the use of common open-source tools that are already widely monitored. Several analysts described the activity as advanced automation rather than genuine artificial intelligence making independent decisions.
There are documented cases of attackers experimenting with AI in limited ways. Some ransomware has reportedly used local language models to generate scripts, and certain threat groups appear to rely on generative tools during malware development. These examples demonstrate experimentation, not a widespread shift in how cybercrime is conducted.
Well-established ransomware groups already operate mature development pipelines and rely heavily on experienced human operators. AI tools may help refine existing code, speed up reconnaissance, or improve phishing messages, but they are not replacing human planning or expertise. Malware generated directly by AI systems is often untested, unreliable, and lacks the refinement gained through real-world deployment.
Even in reported cases of AI misuse, limitations remain clear. Some models have been shown to fabricate progress or generate incorrect technical details, making continuous human supervision necessary. This undermines the idea of fully independent AI-driven attacks.
There are also operational risks for attackers. Campaigns that depend on commercial AI platforms can fail instantly if access is restricted. Open-source alternatives reduce this risk but require more resources and technical skill while offering weaker performance.
The UK’s National Cyber Security Centre has acknowledged that AI will accelerate certain attack techniques, particularly vulnerability research. However, fully autonomous cyberattacks remain speculative.
The real challenge is avoiding distraction. AI will influence cyber threats, but not in the dramatic way some headlines suggest. Security efforts should prioritise evidence-based risk, improved visibility, and responsible use of AI to strengthen defences rather than amplify fear.