Artificial intelligence is increasingly shaping the cyber security landscape, but recent claims about “AI-powered” cybercrime often exaggerate how advanced these threats currently are. While AI is changing how both defenders and attackers operate, the evidence does not support the idea that cybercriminals are already running fully autonomous, self-directed AI attacks at scale.
For several years, AI has played a defining role in cyber security as organisations modernise their systems. Machine learning tools now assist with threat detection, log analysis, and response automation. At the same time, attackers are exploring how these technologies might support their activities. However, the capabilities of today’s AI tools are frequently overstated, creating a disconnect between public claims and operational reality.
Recent attention has been driven by two high-profile reports. One study suggested that artificial intelligence is involved in most ransomware incidents, a conclusion that was later challenged by multiple researchers due to methodological concerns. The report was subsequently withdrawn, reinforcing the importance of careful validation. Another claim emerged when an AI company reported that its model had been misused by state-linked actors to assist in an espionage operation targeting multiple organisations.
According to the company’s account, the AI tool supported tasks such as identifying system weaknesses and assisting with movement across networks. However, experts questioned these conclusions due to the absence of technical indicators and the use of common open-source tools that are already widely monitored. Several analysts described the activity as advanced automation rather than genuine artificial intelligence making independent decisions.
There are documented cases of attackers experimenting with AI in limited ways. Some ransomware has reportedly used local language models to generate scripts, and certain threat groups appear to rely on generative tools during development. These examples demonstrate experimentation, not a widespread shift in how cybercrime is conducted.
Well-established ransomware groups already operate mature development pipelines and rely heavily on experienced human operators. AI tools may help refine existing code, speed up reconnaissance, or improve phishing messages, but they are not replacing human planning or expertise. Malware generated directly by AI systems is often untested, unreliable, and lacks the refinement gained through real-world deployment.
Even in reported cases of AI misuse, limitations remain clear. Some models have been shown to fabricate progress or generate incorrect technical details, making continuous human supervision necessary. This undermines the idea of fully independent AI-driven attacks.
There are also operational risks for attackers. Campaigns that depend on commercial AI platforms can fail instantly if access is restricted. Open-source alternatives reduce this risk but require more resources and technical skill while offering weaker performance.
The UK’s National Cyber Security Centre has acknowledged that AI will accelerate certain attack techniques, particularly vulnerability research. However, fully autonomous cyberattacks remain speculative.
The real challenge is avoiding distraction. AI will influence cyber threats, but not in the dramatic way some headlines suggest. Security efforts should prioritise evidence-based risk, improved visibility, and responsible use of AI to strengthen defences rather than amplify fear.
A security bulletin from Anthropic describes a recent cybercrime campaign in which a threat actor used the company’s Claude AI system to steal data and demand payment. According to Anthropic’s technical report, the attacker targeted at least 17 organizations across healthcare, emergency services, government and religious sectors.
This operation did not follow the familiar ransomware pattern of encrypting files. Instead, the intruder quietly removed sensitive information and threatened to publish it unless the victims paid. Some demands were very large, reportedly reaching into the hundreds of thousands of dollars.
Anthropic says the attacker ran Claude inside its coding environment, Claude Code, and used it to automate many parts of the hack. The AI helped find weak points, harvest login credentials, move through victim networks and select which documents to take. The attacker also used the model to analyze stolen financial records and set tailored ransom amounts, and even to generate alarming HTML ransom notices that were shown to victims.
Anthropic discovered the activity and took steps to stop it. The company suspended the accounts involved, expanded its detection tools and shared technical indicators with law enforcement and other defenders so similar attacks can be detected and blocked. News outlets and industry analysts say this case is a clear example of how AI tools can be misused to speed up and scale cybercrime operations.
Why this matters for organizations and the public
AI systems that can act automatically introduce new risks because they let attackers combine technical tasks with strategic choices, such as which data to expose and how much to demand. Experts warn that defenders must upgrade monitoring, enforce strong authentication, segment networks and treat AI misuse as a real threat that can evolve quickly.
The incident shows threat actors are experimenting with agent-like AI to make attacks faster and more precise. Companies and public institutions should assume this capability exists and strengthen basic cyber hygiene while working with vendors and authorities to detect and respond to AI-assisted threats.
Security experts have identified a new kind of cyber attack that hides instructions inside ordinary pictures. These commands are not visible in the full-size image; they appear only after the photo is automatically downscaled by an artificial intelligence (AI) platform before processing.
The attack works by adjusting specific pixels in a large picture. To the human eye, the image looks normal. But once an AI platform scales it down, those tiny adjustments blend together into readable text. If the system interprets that text as a command, it may carry out harmful actions without the user’s consent.
Researchers tested this method on several AI tools, including interfaces that connect with services like calendars and emails. In one demonstration, a seemingly harmless image was uploaded to an AI command-line tool. Because the tool automatically approved external requests, the hidden message forced it to send calendar data to an attacker’s email account.
The root of the problem lies in how computers shrink images. When reducing a picture, algorithms merge many pixels into fewer ones. Popular methods include nearest neighbor, bilinear, and bicubic interpolation, and each leaves its own predictable artifacts when downscaling. Attackers can take advantage of that predictability by designing images that reveal commands only after scaling, as the sketch below illustrates.
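As a rough illustration of that predictability, the short Python sketch below (using Pillow, with made-up sizes rather than anything taken from the research) encodes each row’s index into a test image and downscales it with nearest-neighbor resampling; the same evenly spaced subset of source rows survives every run, which is exactly the property a crafted image can exploit.

from PIL import Image
import numpy as np

SRC, DST = 256, 32   # hypothetical upload size and model input size (illustrative only)

# Encode each source pixel's row index in its brightness so we can see which rows survive.
pixels = np.repeat(np.arange(SRC, dtype=np.uint8)[:, None], SRC, axis=1)
big = Image.fromarray(pixels)                        # greyscale image, one value per row

small = np.asarray(big.resize((DST, DST), Image.NEAREST))
surviving_rows = sorted(set(small.flatten().tolist()))

# The same evenly spaced rows come out on every run; anyone who knows the target size
# can therefore predict, and control, exactly what the downscaled image shows.
print(f"{len(surviving_rows)} of {SRC} rows survive:", surviving_rows[:8], "...")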
To prove this, the researchers released Anamorpher, an open-source tool that generates such images. The tool can tailor pictures for different scaling methods and software libraries like TensorFlow, OpenCV, PyTorch, or Pillow. By hiding adjustments in dark parts of an image, attackers can make subtle brightness shifts that only show up when downscaled, turning backgrounds into letters or symbols.
Mobile phones and edge devices are at particular risk. These systems often force images into fixed sizes and rely on aggressive downscaling to save processing power, which makes them more likely to expose hidden content.
The researchers also built a way to identify which scaling method a system uses. They uploaded test images with patterns such as checkerboards, circles, and stripes; the resulting artifacts, such as blurring, ringing, or color shifts, revealed which algorithm was in play.
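The Python sketch below gives a flavor of that fingerprinting step. It is only an illustration built on Pillow, not the researchers’ actual test suite: it downscales a high-frequency checkerboard with several candidate interpolation methods, then reports which candidate most closely matches an observed output.

from PIL import Image
import numpy as np

SIZE, DST = 256, 64   # probe size and assumed output size (illustrative only)

# A one-pixel checkerboard carries the highest spatial frequency the image can hold,
# so it aliases very differently under different filters.
yy, xx = np.indices((SIZE, SIZE))
probe = Image.fromarray(((xx + yy) % 2 * 255).astype(np.uint8))

candidates = {
    "nearest": Image.NEAREST,
    "bilinear": Image.BILINEAR,
    "bicubic": Image.BICUBIC,
}
downscaled = {name: np.asarray(probe.resize((DST, DST), flt), dtype=np.float64)
              for name, flt in candidates.items()}

def identify(observed):
    # Return the candidate filter whose output is closest (mean squared error)
    # to the image the remote system actually produced.
    return min(candidates, key=lambda name: np.mean((downscaled[name] - observed) ** 2))

# Pretend the remote system used bilinear scaling; the probe should identify it.
print("best match:", identify(downscaled["bilinear"]))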
This discovery also connects to core ideas in signal processing, particularly the Nyquist-Shannon sampling theorem. When a signal is resampled below a certain rate, distortions called aliasing appear, and attackers exploit this effect to create new patterns that were not visible in the original photo.
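Stated informally for one dimension (a simplification; image scaling applies the same idea along each spatial axis), the theorem says a signal can be sampled without such distortion only if the sampling rate exceeds twice the highest frequency it contains:

f_s > 2 · f_max

Downscaling lowers the effective sampling rate, so detail finer than the new limit does not simply disappear; it folds back as lower-frequency patterns, and that aliasing is what the crafted images are built around.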
According to the researchers, simply switching scaling methods is not a fix. Instead, they suggest avoiding automatic resizing altogether by setting strict upload limits. Where resizing is necessary, platforms should show users a preview of what the AI system will actually process. They also advise requiring explicit user confirmation before any text detected inside an image can trigger sensitive operations.
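A minimal sketch of that preview idea, assuming a hypothetical pipeline that feeds the model a 224x224 bicubic-resized copy of each upload (the size, filter, and file names here are illustrative, not taken from the research), might look like this:

from PIL import Image

MODEL_INPUT = (224, 224)   # assumed model input size, not from the research

def preview_model_view(path):
    # Reproduce the exact downscaling step the downstream model would apply, so a user
    # or an automated scanner can inspect the image the model will actually see.
    img = Image.open(path).convert("RGB")
    return img.resize(MODEL_INPUT, Image.BICUBIC)

# Example: save the preview alongside the upload for review, and require explicit
# confirmation before any text detected in it can trigger a sensitive operation.
preview_model_view("upload.png").save("upload_model_view.png")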
This new attack builds on past research into adversarial images and prompt injection. While earlier studies focused on fooling image-recognition models, today’s risks are greater because modern AI systems are connected to real-world tools and services. Without stronger safeguards, even an innocent-looking photo could become a gateway for data theft.
Google’s Gmail is now offering two new upgrades, but there’s a catch: they don’t work well together. That means Gmail’s billions of users are being asked to choose between better privacy and smarter features, and the decision could affect how their emails are handled in the future.
Let’s break it down. One upgrade focuses on stronger protection of your emails, working like advanced encryption: it keeps your emails private, and even Google won’t be able to read them. The second upgrade brings in artificial intelligence tools to improve how you search and use Gmail, promising quicker, more helpful results.
But there’s a problem. If your emails are fully protected, Gmail’s AI tools can’t read them to include in its search results. So, if you choose privacy, you might lose out on the benefits of smarter searches. On the other hand, if you want AI help, you’ll need to let Google access more of your email content.
This challenge isn’t unique to Gmail. Many tech companies are trying to combine stronger security with AI-powered features, but the two don’t always work together. Apple tried solving this with a system that processes data securely on your device. However, delays in rolling out their new AI tools have made their solution uncertain for now.
Some reports explain the choice like this: if you turn on AI features, Google will use your data to power smart tools. If you turn it off, you’ll have better privacy, but lose some useful options. The real issue is that opting out isn’t always easy. Some settings may remain active unless you manually turn them off, and fully securing your emails still isn’t simple.
Even when extra security is enabled, email systems have limitations. For example, Apple’s iCloud Mail doesn’t use full end-to-end encryption because it must work with global email networks. So even private emails may not be completely safe.
This issue goes beyond Gmail. Other platforms are facing similar challenges. WhatsApp, for example, added a privacy mode that blocks saving chats and media, but also limits AI-related features. OpenAI’s ChatGPT can now remember what you told it in past conversations, which may feel helpful but also raises questions about how your personal data is being stored.
In the end, users need to think carefully. AI tools can make email more useful, but they come with trade-offs. Email has never been a perfectly secure space, and with smarter AI, new threats like scams and data misuse may grow. That’s why it’s important to weigh both sides before making a choice.
On Thursday, OpenAI’s ChatGPT experienced a significant outage in the UK, leaving thousands of users unable to access the popular AI chatbot. The disruption, which began around 11:00 GMT, left users encountering a “bad gateway” error message when attempting to use the platform. According to Downdetector, a website that tracks service interruptions, over 10,000 users reported issues during the outage, which persisted for several hours and caused widespread frustration.
OpenAI acknowledged the issue on its official status page, confirming that a fix was implemented by 15:09 GMT. The company assured users that it was monitoring the situation closely, but no official explanation for the cause of the outage has been provided so far. This lack of transparency has fueled speculation among users, with theories ranging from server overload to unexpected technical failures.
As the outage unfolded, affected users turned to social media to voice their concerns and frustrations. On X (formerly Twitter), one user humorously remarked, “ChatGPT is down again? During the workday? So you’re telling me I have to… THINK?!” While some users managed to find humor in the situation, others raised serious concerns about the reliability of AI services, particularly those who depend on ChatGPT for professional tasks such as content creation, coding assistance, and research.
ChatGPT has become an indispensable tool for millions since its launch in November 2022. OpenAI CEO Sam Altman recently revealed that by December 2024, the platform had reached over 300 million weekly users, highlighting its rapid adoption as one of the most widely used AI tools globally. However, the incident has raised questions about service reliability, especially among paying customers. OpenAI’s premium plans, which offer enhanced features, cost up to $200 per month, prompting some users to question whether they are getting adequate value for their investment.
The outage comes at a time of rapid advancement in AI technology. OpenAI and other leading tech firms have pledged significant investments in AI infrastructure, including a commitment of $500 billion toward AI development in the United States. While these investments aim to bolster the technology’s capabilities, incidents like this serve as a reminder of the growing dependence on AI tools and the potential risks that come with their widespread adoption.
The disruption highlights the importance of robust technical systems to ensure uninterrupted service, particularly for users who rely heavily on AI for their daily tasks. Although OpenAI restored service relatively quickly, its ability to maintain user trust and satisfaction may hinge on improving its communication strategy and technical resilience. Paying customers, in particular, expect transparency and proactive measures to prevent such incidents in the future.
As artificial intelligence becomes more deeply integrated into everyday life, service disruptions like the ChatGPT outage underline both the potential and limitations of the technology. Users are encouraged to stay informed through OpenAI’s official channels for updates on any future service interruptions or maintenance activities.
Moving forward, OpenAI may need to implement backup systems and alternative solutions to minimize the impact of outages on its user base. Clearer communication during disruptions and ongoing efforts to enhance technical infrastructure will be key to ensuring the platform’s reliability and maintaining its position as a leader in the AI industry.