
Darknet AI Tool DIG AI Fuels Automated Cybercrime, Researchers Warn


Cybersecurity researchers have identified a new darknet-based artificial intelligence tool that allows threat actors to automate cyberattacks, generate malicious code and produce illegal content, raising concerns about the growing criminal misuse of AI. 

The tool, known as DIG AI, was uncovered by researchers at Resecurity and first detected on September 29, 2025. Investigators said its use expanded rapidly during the fourth quarter, particularly over the holiday season, as cybercriminals sought to exploit reduced vigilance and higher online activity. 

DIG AI operates on the Tor network and does not require user registration, enabling anonymous access. Unlike mainstream AI platforms, it has no content restrictions or safety controls, researchers said. 

The service offers multiple models, including an uncensored text generator, a text model believed to be based on a modified version of ChatGPT Turbo, and an image generation model built on Stable Diffusion. 

Resecurity said the platform is promoted by a threat actor using the alias “Pitch” on underground marketplaces, alongside listings for drugs and stolen financial data. The tool is offered for free with optional paid tiers that provide faster processing, a structure researchers described as a crime-as-a-service model. 

Analysts said DIG AI can generate functional malicious code, including obfuscated JavaScript backdoors that act as web shells. Such code can be used to steal user data, redirect traffic to phishing sites or deploy additional malware. 

While more complex tasks can take several minutes due to limited computing resources, paid options are designed to reduce delays. Beyond cybercrime, researchers warned the tool has been used to produce instructions for making explosives and illegal drugs. 

The image generation model, known as DIG Vision, was found capable of creating synthetic child sexual abuse material or altering real images, posing serious challenges for law enforcement and child protection efforts. 

Resecurity said DIG AI reflects a broader rise in so-called “dark” or jailbroken large language models, following earlier tools such as FraudGPT and WormGPT. 

Mentions of malicious AI tools on cybercrime forums increased by more than 200% between 2024 and 2025, the firm said. 

Researchers warned that as AI-driven attack tools become easier to access, they could be used to support large-scale cyber operations and real-world harm, particularly ahead of major global events scheduled for 2026.