
FraudGPT: ChatGPT's Evil Face

 

Threat actors are promoting the FraudGPT artificial intelligence (AI) tool, which follows in the footsteps of WormGPT, on a number of Telegram channels and dark web marketplaces.

"This is an AI bot, solely designed for malicious purposes, such as designing spear phishing emails, developing cracking tools, carding, and so on," Netenrich security researcher Rakesh Krishnan noted in a report published Tuesday.

The cybersecurity company said that as of July 22, 2023, the subscription cost was $200 per month (or $1,000 for six months and $1,700 for a year). 

The actor, who goes by the online moniker CanadianKingpin, claims the ChatGPT alternative is "designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone's individuals with no boundaries."

The author also claims that the tool can be used to generate malicious code, develop undetectable malware, and uncover leaks and vulnerabilities, and that it has over 3,000 confirmed sales and reviews. The original large language model (LLM) used to design the system is currently unknown.

The development coincides with threat actors' growing reliance on ChatGPT-like AI technologies to create adversarial variants engineered to enable all manner of cybercriminal activity without restrictions.

"While organizations can create ChatGPT (and other tools) with ethical safeguards, it isn't a difficult feat to reimplement the same technology without those safeguards," Krishnan added. "Implementing a defence-in-depth strategy with all the security telemetry available for fast analytics has become all the more essential to finding these fast-moving threats before a phishing email can turn into ransomware or data exfiltration." 

Ari Jacoby, CEO of Deduce, Inc. and a cybersecurity specialist, believes that AI-powered fraud will render classic fraud-prevention systems obsolete, necessitating a new wave of detection and prevention to combat the sophistication these AI tools provide. Top of mind? Employing AI for good by arming businesses with data-driven countermeasures. Second, instead of focusing on individual weaknesses, defenders should measure and monitor big-data patterns to identify waves of fraud.
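As a rough illustration of the population-level monitoring Jacoby describes (rather than any system Deduce actually ships), a "wave" of fraud can be flagged when the rate of fraud signals spikes well above its trailing baseline. The window size and multiplier below are arbitrary assumptions.

```python
# Illustrative sketch only: flag a "wave" when the fraud-event count
# spikes above a multiple of its trailing-window average.
# Window size and multiplier are arbitrary assumptions.
from collections import deque

def detect_wave(event_counts: list[int], window: int = 3,
                multiplier: float = 3.0) -> list[int]:
    """Return indices where the count exceeds multiplier x the
    average of the previous `window` counts."""
    flagged = []
    history = deque(maxlen=window)
    for i, count in enumerate(event_counts):
        if len(history) == window:
            baseline = sum(history) / window
            if baseline > 0 and count > multiplier * baseline:
                flagged.append(i)
        history.append(count)
    return flagged
```

For example, with hourly counts `[10, 12, 11, 40, 10]` only the surge at index 3 is flagged, because it is the only hour whose volume dwarfs the recent baseline rather than a single account looking anomalous in isolation.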