
ChatGPT: Researcher Develops Malicious Data-stealing Malware Using AI


Ever since its introduction last year, ChatGPT has created a buzz among tech enthusiasts around the world with its ability to write articles, poems, movie scripts, and much more. The AI can even generate functional code when given clear, well-written instructions.

Although OpenAI has put security measures in place and the majority of users rely on the tool for harmless purposes, a new analysis suggests that threat actors can still use the AI to create malware.

According to a cybersecurity researcher, ChatGPT was used to create a zero-day attack capable of collecting data from a compromised device. Alarmingly, the malware evaded detection by every vendor on VirusTotal.

Forcepoint researcher Aaron Mulgrew says he decided early in the development process not to write any code himself and instead to rely solely on cutting-edge techniques often used by highly skilled threat actors, such as rogue nation-states.

Mulgrew, who describes himself as a "novice" at developing malware, said he chose Go as the implementation language not only because it was simple to use but also because he could manually debug the code if necessary. To evade detection, he also used steganography, a technique that conceals sensitive information within an ordinary file or message.

Creating Dangerous Malware Through ChatGPT 

Mulgrew found a loophole in ChatGPT's safeguards that allowed him to have the chatbot generate the malware code line by line and function by function.

After compiling the individual functions, he assembled an executable that steals data discreetly and which he believes is comparable to nation-state malware. What makes this alarming is that Mulgrew built such dangerous malware without advanced coding experience and without the help of a hacking team.

As told by Mulgrew, the malware poses as a screensaver app that launches automatically on Windows devices. Once launched, the malware looks for files such as Word documents, images, and PDFs and steals any data it can find.

The malware then fragments the stolen data and conceals the fragments within other photos on the device. Because those images are subsequently uploaded to a Google Drive folder, the data theft is difficult to detect.

Latest from OpenAI 

According to a report by Reuters, the European Data Protection Board (EDPB) has recently established a task force to address privacy issues relating to artificial intelligence (AI), with a particular focus on ChatGPT.

The action comes after recent moves by Italy and Germany's commissioner for data protection to regulate ChatGPT, raising the possibility that other nations may follow suit.