
Inside the Realm of Black Market AI Chatbots


While AI tools are helping organizations and online users in tremendously productive ways, this trending technology has an obvious dark side. One part of it is the emergence of notorious 'evil' versions of AI chatbots.

A user can, in fact, gain access to one such ‘evil’ version of OpenAI’s ChatGPT. Not only may these AI versions be illegal in some parts of the world, they can also be pricey.

Gaining Access to Black Market AI Chatbots

Gaining access to these evil chatbot versions can be tricky. To do so, a user must find the right web forum with the right users: users who have advertised a private, powerful large language model (LLM). One can get in touch with them through encrypted messaging services like Telegram, where they might ask for a few hundred dollars' worth of cryptocurrency for access to an LLM.

After gaining access, users can do practically anything, especially the things that ChatGPT and Google’s Bard prohibit: asking the AI how to make pipe bombs or cook meth, engaging in discussions about any illegal or morally questionable subject under the sun, or even using it to facilitate phishing schemes and other cybercrimes.

“We’ve got folks who are building LLMs that are designed to write more convincing phishing email scams or allowing them to code new types of malware because they’re trained off of the code from previously available malware[…]Both of these things make the attacks more potent, because they’re trained off of the knowledge of the attacks that came before them,” says Dominic Sellitto, a cybersecurity and digital privacy researcher at the University of Buffalo.

These models are becoming more prevalent, more powerful, and more challenging to regulate. They also herald the opening of a new front in the war on cybercrime, one that extends far beyond text generators like ChatGPT and into the domains of audio, video, and graphics.

“We’re blurring the boundaries in many ways between what is artificially generated and what isn’t[…]The same goes for the written text, and the same goes for images and everything in between,” explained Sellitto.

Phishing for Trouble

Phishing emails, which demand that a user provide their financial information immediately to the Social Security Administration or their bank in order to resolve a fictitious crisis, cost American consumers close to $8.8 billion annually. The emails may contain seemingly innocuous links that actually download malware or viruses, allowing hackers to take advantage of any sensitive data directly from the victim's computer.

Fortunately, these phishing emails have traditionally been quite easy to detect. If they have not already landed in a user’s spam folder, they can be identified by their language: informal, grammatically incorrect wording that no legitimate financial firm would ever use.
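For illustration, the kind of crude, language-based screening that has historically caught these emails can be sketched in a few lines of Python. The phrase list, scoring, and sample message below are invented for the example and are not drawn from any real filter:

```python
import re

# Hypothetical red-flag cues of the kind legacy filters and trained users
# rely on; the phrases and scoring here are invented for illustration.
URGENCY_PHRASES = [
    "act immediately",
    "verify your account",
    "account suspended",
    "confirm your identity",
    "within 24 hours",
]

def crude_phishing_score(body: str) -> int:
    """Count naive language red flags in an email body (sketch only)."""
    text = body.lower()
    score = sum(phrase in text for phrase in URGENCY_PHRASES)
    score += len(re.findall(r"!{2,}", body))          # repeated exclamation marks
    score += len(re.findall(r"\b[A-Z]{4,}\b", body))  # shouty ALL-CAPS words
    return score

sample = "Your account SUSPENDED!! Verify your account within 24 hours."
print(crude_phishing_score(sample))  # several flags fire on this sample
```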

However, with ChatGPT, it is becoming difficult to spot any such errors in phishing emails, a shift driven by the generative AI boom.

“The technology hasn’t always been available on digital black markets[…]It primarily started when ChatGPT became mainstream. There were some basic text generation tools that might have used machine learning but nothing impressive,” explains Daniel Kelley, a former black hat computer hacker and cybersecurity consultant.

According to Kelley, these LLMs come in a variety of forms, including BlackHatGPT, WolfGPT, and EvilGPT. He claimed that many of these models, despite their nefarious names, are actually just instances of AI jailbreaks, a term used to describe the deft manipulation of existing LLMs such as ChatGPT to achieve desired results. These models are then wrapped in a customized user interface, creating the impression that the user is talking to an entirely distinct chatbot.

However, this does not make such AI models any less harmful. In fact, Kelley believes one particular model, WormGPT, is both genuinely distinct and one of the most malicious. According to one description on a forum promoting the model, it is an LLM made especially for cybercrime that "lets you do all sorts of illegal stuff and easily sell it online in the future."

Both Kelley and Sellitto agree that WormGPT could be used in business email compromise (BEC) attacks, a kind of phishing attack in which the attacker impersonates a higher-up or another authority figure to trick employees into handing over information or money. The language the algorithm generates is remarkably clean, with precise grammar and sentence structure, making such messages considerably more difficult to spot at first glance.
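One common countermeasure to BEC gives a sense of the defender's side: flagging mail whose display name matches a known executive but whose sending address is outside the company's domain. The sketch below is a minimal illustration, and the executive names and domain are made up for the example:

```python
from email.utils import parseaddr

# Assumed executive names and company domain, invented for this sketch.
EXECUTIVES = {"jane doe", "john smith"}
COMPANY_DOMAIN = "example.com"

def looks_like_exec_impersonation(from_header: str) -> bool:
    """Flag a From header that pairs an executive's name with an outside domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower()
    return display_name.lower() in EXECUTIVES and domain != COMPANY_DOMAIN

print(looks_like_exec_impersonation('"Jane Doe" <jane.doe@freemail.net>'))  # True
print(looks_like_exec_impersonation('"Jane Doe" <jane.doe@example.com>'))   # False
```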

One must also take into account that anyone with an internet connection can obtain these notorious AI models, which makes them easy to disseminate. It is like a service offering same-day delivery of firearms and ski masks, except that these firearms and ski masks are marketed to, and built for, criminals.

WormGPT: AI Tool Developed for Cybercrime Actors


Cybersecurity experts have raised concerns about a rapidly emerging malicious AI tool: WormGPT. The tool has been developed specifically for cybercrime actors, to assist them in their operations and help them mount sophisticated attacks on an unprecedented scale.

While AI has made significant strides in many areas, it is increasingly apparent that the technology can be abused in the world of cybercrime. In contrast to helpful counterparts like OpenAI's ChatGPT, WormGPT has no built-in safeguards to prevent nefarious usage, raising concerns about the potential destruction it could cause in the digital environment.

What is WormGPT?

WormGPT, developed by anonymous creators, is an AI chatbot similar to OpenAI’s ChatGPT. The one aspect that differentiates it from other chatbots is that it lacks the protective measures that prevent exploitation, and this conspicuous lack of safeguards has alarmed cybersecurity experts and researchers. The malicious tool was brought to the notice of the cybersecurity community through the diligence of Daniel Kelley, a former hacker, and the prominent cybersecurity firm SlashNext, who found advertisements for WormGPT in the murky recesses of cybercrime forums, revealing a lurking danger.

How Does WormGPT Function? 

Apparently, hackers gain access to WormGPT via the dark web, where they acquire a web interface in which they can enter prompts and receive responses that closely resemble human language. The tool focuses mostly on business email compromise (BEC) attacks and phishing emails, two types of cyberattack that can have catastrophic results.

WormGPT aids hackers in crafting phishing emails that can persuade victims into taking actions that compromise their security. A noteworthy example is the fabrication of persuasive emails that appear to come from a company's CEO, demanding that an employee pay a fake invoice. Because it draws on a large corpus of human-written content, WormGPT's sophisticated writing is more convincing and can credibly mimic trusted figures in a business email exchange.

The Alarming Reach of WormGPT

One of the major concerns cybersecurity experts have about WormGPT is its reach. Since the tool is readily available on the dark web, more and more threat actors are using it to conduct malicious activity in cyberspace. That availability suggests that far-reaching, large-scale attacks are on their way, attacks that could affect individuals, organizations and even state agencies.

A Wake-up Call for the Tech Industry

The advent of WormGPT is a stark wake-up call for the IT sector and the broader cybersecurity community. While there is no denying that AI has advanced significantly, it has also created challenges that never existed before. As the designers of sophisticated AI systems like ChatGPT celebrate their achievements and widespread adoption, they also have a duty to address potential abuses of their innovations. WormGPT's lack of protections highlights how urgent it is to establish strong ethical standards and safeguards for AI technology.

FraudGPT: ChatGPT's Evil Face

 

Threat actors are promoting the FraudGPT artificial intelligence (AI) tool, which follows in the footsteps of WormGPT, on a number of Telegram channels and dark web marketplaces.

This is an AI bot designed solely for malicious purposes, such as crafting spear-phishing emails, developing cracking tools, carding, and so on, Netenrich security researcher Rakesh Krishnan noted in a report published Tuesday.

The cybersecurity company said that as of July 22, 2023, the subscription cost was $200 per month (or $1,000 for six months and $1,700 for a year). 

The actor, who uses the online moniker CanadianKingpin, claims that the ChatGPT alternative is "designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone's individuals with no boundaries."

The author also claims that the tool can be used to generate malicious code, develop undetectable malware, and uncover leaks and vulnerabilities, and that it has over 3,000 confirmed sales and reviews. The original large language model (LLM) used to design the system is currently unknown.

The development coincides with threat actors' growing reliance on ChatGPT-like AI tools to create adversarial variants explicitly designed to facilitate all forms of cybercriminal activity without any limitations.

"While organizations can create ChatGPT (and other tools) with ethical safeguards, it isn't a difficult feat to reimplement the same technology without those safeguards," Krishnan added. "Implementing a defence-in-depth strategy with all the security telemetry available for fast analytics has become all the more essential to finding these fast-moving threats before a phishing email can turn into ransomware or data exfiltration." 

Ari Jacoby, CEO of Deduce, Inc. and a cybersecurity specialist, believes that AI-powered fraud will render classic fraud-prevention systems obsolete, necessitating a new wave of detection and prevention to match the sophistication of these AI tools. Top of mind: employing AI for good by providing businesses with data-driven countermeasures. Second, instead of focusing on individual weaknesses, defenders should measure and monitor big-data patterns to identify waves of fraud.
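Jacoby's second point can be illustrated with a toy monitor that compares the latest aggregate count of a fraud signal against its recent baseline, rather than scoring each event in isolation. The metric, counts, and threshold below are invented for the example:

```python
import statistics

def wave_alert(hourly_counts: list[int], threshold: float = 3.0) -> bool:
    """Return True if the latest hour is a statistical outlier vs. history."""
    *history, latest = hourly_counts
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (latest - mean) / stdev > threshold

# Failed-login counts per hour; the sudden spike in the final hour is the
# kind of aggregate pattern that suggests a coordinated fraud wave.
counts = [40, 38, 45, 41, 39, 44, 42, 310]
print(wave_alert(counts))  # True
```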

Evil Unleashed: Meet WormGPT, ChatGPT's Wicked Twin

 


Artificial intelligence has taken the world by storm in recent years, led by OpenAI's chatbot: over 100 million users have signed up for ChatGPT since it launched last year, making it one of the ten most popular apps in the world. Microsoft and Google have followed with products of their own, Bing Chat and Google Bard. Now a new AI is in town: WormGPT. You could say it is here to make your life easier, but it is certainly not here to help you.

Despite the wriggly name, there is nothing amusing about WormGPT. It is a far more malicious and unethical tool than a traditional chatbot, designed without any ethical guardrails at all. Its selling point among criminals is that it boosts productivity, raises effectiveness, and lowers the entry barrier for the average cybercriminal.

WormGPT was created by a hacker as an AI model for producing malicious material, and it poses real danger to individuals and companies alike. It is imperative to note that WormGPT is different from its counterpart, ChatGPT, which is designed to help: where ChatGPT's intentions are benign, WormGPT is built to attack large numbers of people.

This "sophisticated AI model," independently verified by cyber security firm SlashNext, was malicious. SlashNext alleges that the model was trained using a wide range of data sources, with a specific focus on malware-related data as part of its data-gathering process. In the case of GPT-J programming language software, the risks associated with AI modules can be exemplified by the threat of harming even those not well-versed in them.

Researchers from the International Center for Computer Security conducted experiments using phishing emails to better understand the risks WormGPT poses. The model not only generated highly persuasive emails but showed strategic cunning in how it composed them. This indicates that sophisticated phishing attacks and business email compromise (BEC) are well within its reach.

In the last couple of years, experts, government officials, and even the creator of ChatGPT have recognized the dangers of AI tools such as ChatGPT and WormGPT. Their position is that the public must be protected from misuse of these technologies through regulation. Europol, the international organization that supports law enforcement authorities, has likewise warned about the misuse of large language models (LLMs) such as ChatGPT for fraud, impersonation, and social engineering.

The primary concern with AI tools such as ChatGPT is their ability to generate highly authentic text in response to a user prompt, which is precisely what makes them so appealing to bad actors.

This makes them extremely useful for phishing attacks. Phishing scams used to be easy to detect because of their obvious grammatical and spelling errors. Advances in artificial intelligence now provide a powerful tool for impersonating organizations and people in an extremely realistic manner, even for attackers with only a basic grasp of English.

WormGPT, a ChatGPT-style large language model (LLM), can now be acquired on the dark web for only $60 a month, making its services available without any ethical or moral limits. The chatbot is a degenerate strain of generative artificial intelligence; in other words, it is not subject to the filters that corporations such as Google, Facebook, and even OpenAI impose on its counterparts. NordVPN's IT security experts have already described WormGPT as the "evil twin" of ChatGPT.

It is probably the most powerful hacking tool available in the world at the moment. WormGPT was built by a skilled hacker on top of GPT-J, an open-source LLM released in 2021.

During its testing of WormGPT, SlashNext discovered some disturbing results. The phishing email the tool produced was so convincing that a human would find it very difficult to detect, and WormGPT went further still, combining all the elements of a phishing email in a remarkably sophisticated way to deceive potential victims.

As Adrianus Warmenhoven explained to us, WormGPT is the product of a series of cat-and-mouse games with OpenAI: the company kept tightening its restrictions, partly to shield itself from legal liability, and jailbreakers kept finding ways around them. One such method coaxed the LLM into weaving information about illegal activity into seemingly innocuous texts, such as family letters and other correspondence.

As the expert explained, cybercriminals will no longer be restricted to subverting OpenAI's models; with WormGPT, they no longer need to. They can evolve the technology to suit their own needs, and this, in turn, threatens to turn the world of artificial intelligence into a true wild west.

WormGPT may be the first AI chatbot that the majority of ne'er-do-wells use to assist them with their criminal acts, but it will not be the last: they will shortly be choosing from an array of ever-advancing, ever-improving models.

There is no doubt that artificial intelligence will also become an increasingly important tool for preventing AI-generated cybercrime in the coming years, setting up a race to see which side can wield it more proficiently.

The Doomsday Clock currently stands at 90 seconds to midnight, owing to humanity's rapid adoption of disruptive technologies, and a doomsday clock tracking our internet security might as well read the same. As two disruptive forces collide on the digital landscape, the only likely outcome is mutually assured destruction, so perhaps it's time we all climbed into our antivirus Anderson shelters and filled our bellies with Malwarebytes MREs.

AI Malware vs. AI Defences: WormGPT Cybercrime Tool Predicts a New Era

 

Cybercriminals are launching business email compromise (BEC) attacks with the assistance of generative AI technology, and one such tool is WormGPT, a black-hat alternative to GPT models designed for malicious ends.

SlashNext said that WormGPT was trained on a variety of data sources, with a concentration on malware-related data. Based on the input it receives, WormGPT can produce highly convincing phoney emails, generating language that closely resembles human writing.

Screenshots from a cybercrime forum show malicious actors exchanging ideas on how to utilise ChatGPT to support successful BEC attacks, demonstrating that even hackers who are not fluent in the target language can create convincing emails using generative AI.

The research team also assessed WormGPT's potential risks, concentrating particularly on BEC attacks. They prompted the tool to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice.

The findings showed that WormGPT was "strategically cunning," demonstrating its capacity to launch complex phishing and BEC operations, in addition to being able to use a convincing tone. 

The research study noted that the creation of such tools highlights the threat posed by generative AI technologies, including WormGPT, even in the hands of inexperienced hackers.

"It's like ChatGPT but has no ethical boundaries or limitations," the report said. The report also highlighted that hackers are developing "jailbreaks," specialised commands intended to trick generative AI interfaces into producing output that may involve revealing private data, creating offensive content, or even running malicious code. 

Some proactive cybercriminals are even going so far as to create their own attack-specific modules similar to ChatGPT itself. This development could make cyber defence much more challenging.

"Malicious actors can now launch these attacks at scale at zero cost, and they can do it with much more targeted precision than they could before," stated SlashNext CEO Patrick Harr. "If they aren't successful with the first BEC or phishing attempt, they can simply try again with retooled content." 

The growth of generative AI tools is adding complications and obstacles to cybersecurity operations, as well as highlighting the need for more effective defence systems against emerging threats. 

Harr believes that AI-aided BEC, malware, and phishing attacks may be best combated with AI-aided defence capabilities. He believes organisations will eventually rely on AI to handle the discovery, detection, and remediation of these threats, since there is no other way for humans to stay ahead of the game. The concern is not hypothetical: in April, despite the tool's directive to block malicious requests, a Forcepoint researcher persuaded ChatGPT to construct malware for locating and exfiltrating certain documents.
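A minimal sketch of the AI-versus-AI defence Harr describes is a text classifier trained on labelled mail. The tiny inline corpus below is invented, and a real deployment would train on a large labelled dataset; this example assumes scikit-learn is installed:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A toy labelled corpus: 1 = phishing/BEC, 0 = benign. Real systems train
# on millions of messages and combine this signal with other telemetry.
emails = [
    "Urgent: wire the invoice payment today, I am in a meeting",
    "Your mailbox is full, verify your password here",
    "Lunch on Thursday? The usual place works for me",
    "Minutes from yesterday's planning meeting attached",
]
labels = [1, 1, 0, 0]

# TF-IDF features over unigrams and bigrams feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Probability that a new message is phishing.
print(model.predict_proba(["Please verify your account password urgently"])[0][1])
```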

Meanwhile, developers' enthusiasm for ChatGPT and other large language model (LLM) tools has left most organisations entirely unable to guard against the vulnerabilities introduced by the emerging technology.