
Malware Can Be Written With ChatGPT, as it Turns Out

Among its many other talents, the popular AI chatbot turns out to be capable of writing "polymorphic" malware that can wreak havoc on your computer.


OpenAI's multi-talented AI chatbot, ChatGPT, now has another skill to add to its LinkedIn profile: creating sophisticated "polymorphic" malware.

According to a new report from cybersecurity firm CyberArk, the chatbot is both skilled and resourceful when it comes to developing malicious programs that can cause a lot of trouble for your hardware.

In the battle against cybercrime, upcoming AI-powered tools are widely expected to change the game. The use of chatbots to create more complex types of malware, however, hasn't been discussed extensively yet, and many security professionals are raising concerns about the potential implications.

The researchers at CyberArk report that code developed with the help of ChatGPT displayed "advanced capabilities" that could "easily evade security products," the hallmark of a type of malware known as "polymorphic." For a summary of what that term means, cybersecurity firm CrowdStrike offers the following definition:

A polymorphic virus, sometimes referred to as a metamorphic virus, is a type of malware that can repeatedly change its appearance by altering its decryption routines and mutating its signature in the process. Consequently, many traditional cybersecurity tools, such as antivirus and antimalware solutions that rely on signature-based detection to identify and block threats, fail to recognize and stop it.

In essence, this kind of malware can cryptographically disguise its true identity, allowing it to slip past security measures built to detect known malicious signatures, since the mutated files no longer match anything those signature-based mechanisms recognize.
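To see why signature-based detection struggles against code that keeps changing, consider a toy sketch. This is purely an illustration of the defensive side, hash-based signature matching, and the "samples" are harmless placeholder strings; the signature value and function names are invented for the example.

```python
import hashlib

# Toy signature database: a scanner flags any file whose SHA-256 hash
# matches a known-bad entry. (Harmless placeholder bytes, not malware.)
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"example-payload-v1").hexdigest(),
}

def is_flagged(file_bytes: bytes) -> bool:
    """Return True if the file's hash matches a known signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SIGNATURES

original = b"example-payload-v1"
mutated = b"example-payload-v1 "  # a single byte appended

print(is_flagged(original))  # True: exact match against the database
print(is_flagged(mutated))   # False: same behavior, but a brand-new hash
```

Changing even one byte produces an entirely different hash, so a program that rewrites itself on every iteration never matches a stored signature. This is why defenders increasingly lean on behavioral analysis rather than signatures alone.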

ChatGPT does implement filters that are supposed to prevent malware creation, but the researchers found they could bypass these barriers simply by insisting that the chatbot follow the prompter's orders. Other experimenters have noticed something similar when trying to coax toxic content out of the chatbot: the platform often won't comply with a questionable request unless it is badgered into doing so. In this way, the CyberArk researchers were able to get ChatGPT to display specific pieces of malicious code, which they then used to construct a complex, defense-evading exploit.

The resulting program produces its malicious code on the fly: rather than shipping a fixed payload, it queries ChatGPT for fresh code at runtime, so the malware is continuously rewritten and never presents the same signature twice.

CyberArk's report also warns that security firms need to pay attention to the use of ChatGPT's API within malware, "since it poses significant challenges to them. Having said that, it's imperative to realize that this is not just a hypothetical scenario but a very real concern." Yikes, indeed.