Inside the Realm of Black Market AI Chatbots


While AI tools help organizations and online users with remarkable proficiency, this trending technology has an obvious dark side. One example is the emergence of notorious "evil" versions of AI chatbots.

A user may well gain access to one such "evil" version of OpenAI's ChatGPT. While these AI versions may not necessarily be legal in some parts of the world, they can also be pricey.

Gaining Access to Black Market AI Chatbots

Gaining access to these evil chatbot versions can be tricky. To do so, a user must find the right web forum with the right users: users who have posted about or marketed a private and powerful large language model (LLM). One can get in touch with them through encrypted messaging services like Telegram, where they might ask for a few hundred dollars in cryptocurrency in exchange for access to the LLM.

After gaining access, users can do nearly anything, especially the things prohibited in ChatGPT and Google's Bard: asking the AI how to make pipe bombs or cook meth, engaging in discussions about any illegal or morally questionable subject under the sun, or even using it to facilitate phishing schemes and other cybercrimes.

“We’ve got folks who are building LLMs that are designed to write more convincing phishing email scams or allowing them to code new types of malware because they’re trained off of the code from previously available malware […] Both of these things make the attacks more potent, because they’re trained off of the knowledge of the attacks that came before them,” says Dominic Sellitto, a cybersecurity and digital privacy researcher at the University of Buffalo.

These models are becoming more prevalent, more powerful, and more challenging to regulate. They also herald the opening of a new front in the war on cybercrime, one that extends far beyond text generators like ChatGPT and into the domains of audio, video, and graphics.

“We’re blurring the boundaries in many ways between what is artificially generated and what isn’t […] The same goes for the written text, and the same goes for images and everything in between,” explained Sellitto.

Phishing for Trouble

Phishing emails, which demand that a user immediately provide financial information to the Social Security Administration or their bank in order to resolve a fictitious crisis, cost American consumers close to $8.8 billion annually. These emails may contain seemingly innocuous links that actually download malware or viruses, allowing hackers to siphon sensitive data directly from the victim's computer.

Fortunately, these phishing emails have historically been quite easy to detect. If they have not already landed in a user's spam folder, they can be identified by their language: informal, grammatically incorrect wording that no legitimate financial firm would ever use.
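
As a rough illustration of why these language cues matter, consider a minimal keyword filter of the kind basic mail screening relies on. This is a hypothetical sketch: the cue list, weights, and threshold below are invented for the example and bear no resemblance to a production spam filter.

    import re

    # Hypothetical phishing "tells" (urgency, generic greetings,
    # account-verification demands); weights are illustrative only.
    CUES = {
        r"\burgent(ly)?\b": 2,
        r"\bverify your account\b": 3,
        r"\bdear (customer|user)\b": 2,
        r"\bact now\b": 2,
    }

    def phishing_score(text: str) -> int:
        """Sum the weights of every cue found in the email body."""
        lowered = text.lower()
        return sum(weight for pattern, weight in CUES.items()
                   if re.search(pattern, lowered))

    email = "Dear customer, verify your account urgently or it will be closed."
    print(phishing_score(email))  # 7 -- above a threshold of, say, 4, so flag it

A filter like this works only because sloppy, formulaic wording is a reliable tell; remove the tell, and the filter has nothing to catch.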

With ChatGPT, however, it is becoming difficult to spot any errors in a phishing email, a shift brought about by the generative AI boom.

“The technology hasn’t always been available on digital black markets […] It primarily started when ChatGPT became mainstream. There were some basic text generation tools that might have used machine learning, but nothing impressive,” explains Daniel Kelley, a former black hat computer hacker and cybersecurity consultant.

According to Kelley, these LLMs come in a variety of forms, including BlackHatGPT, WolfGPT, and EvilGPT. He claims that many of these models, despite their nefarious names, are actually just AI jailbreaks, a term used to describe the deft manipulation of existing LLMs such as ChatGPT to achieve desired results. These jailbroken models are then encapsulated within a customized user interface, creating the impression that the product is an entirely distinct chatbot rather than ChatGPT itself.

However, this does not make such AI models any less harmful. In fact, Kelley believes one particular model is both genuinely malicious and the real thing: WormGPT. According to one description on a forum promoting the model, it is an LLM made especially for cybercrime that “lets you do all sorts of illegal stuff and easily sell it online in the future.”

Both Kelley and Sellitto agree that WormGPT could be used in business email compromise (BEC) attacks, a kind of phishing technique in which attackers steal employees' information by pretending to be a higher-up or another authority figure. The language the model generates is remarkably clean, with precise grammar and sentence structure that makes these emails considerably more difficult to spot at first glance.

One must also take into account that, with easy access to the internet, virtually anyone can download these notorious AI models, making them simple to disseminate. It is akin to a service offering same-day shipping on firearms and ski masks, except that these firearms and ski masks are marketed to and built for criminals.