
How ChatGPT May Act as a Copilot for Security Experts

 

Security teams have been left to speculate about how generative AI will affect the threat landscape since GPT-4 was released this week. It is now widely known that GPT-3 can be used to create malware and ransomware code, and GPT-4 is reportedly 571X more powerful, which could result in a large increase in threats. 

While the long-term effects of generative AI are still unknown, a new study presented today by cybersecurity company Sophos shows that security teams can also use GPT-3 to thwart cyberattacks. 

Younghoo Lee, principal data scientist at Sophos AI, and other Sophos researchers used GPT-3's large language models to build a natural language query interface for hunting malicious activity across the telemetry of Sophos' XDR security tool, to detect spam emails, and to analyse potentially covert "living off the land" binary command lines. 

In general, Sophos' research suggests that generative AI has a crucial role to play in processing security events in the SOC, allowing defenders to better manage their workloads and identify threats more quickly. 

Detecting malicious activity 

The research comes as security teams increasingly struggle to handle the volume of alerts generated by tools throughout the network, with 70% of SOC teams reporting that their work managing IT threat alerts is emotionally affecting their personal lives. 

According to Sean Gallagher, senior threat researcher at Sophos, one of the growing issues within security operation centres is the sheer amount of 'noise' streaming in: many businesses are dealing with scarce resources, and there are simply too many notifications and detections to sift through. "Using tools like GPT-3, we've demonstrated that it's possible to streamline some labor-intensive processes and give defenders back vital time," he said. 

Utilising ChatGPT as a cybersecurity co-pilot 

In the study, the researchers built a natural language query interface with which a security analyst can screen the data gathered by security tools for malicious activity by typing queries in plain English. 

For instance, the user can enter a command like "show me all processes that were named powershell.exe and run by the root user" and the interface will produce the corresponding XDR-SQL query, without the analyst needing to know the underlying database structure. 

This method lets defenders filter data without using query languages like SQL and offers a "co-pilot" to ease the effort of manually hunting through threat data.
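
As a rough illustration of how such a natural-language-to-SQL interface can work, here is a minimal sketch using the legacy openai Python SDK (openai<1) and the GPT-3-era text-davinci-003 completion model. The table schema and prompt are hypothetical stand-ins, not Sophos' actual implementation:

    # Minimal sketch: translate a plain-English hunt query into SQL with GPT-3.
    # Assumes the legacy openai SDK (pip install "openai<1") and an API key in
    # the OPENAI_API_KEY environment variable. The schema below is hypothetical.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    SCHEMA = "processes(name TEXT, user TEXT, cmdline TEXT, start_time TEXT)"

    def to_sql(question: str) -> str:
        prompt = (
            f"Translate the question into a single SQL query.\n"
            f"Table schema: {SCHEMA}\n"
            f"Question: {question}\n"
            f"SQL:"
        )
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=128,
            temperature=0,  # deterministic output for repeatable hunts
        )
        return resp["choices"][0]["text"].strip()

    print(to_sql("show me all processes named powershell.exe run by the root user"))

In practice a production system would validate the generated query against the schema before executing it rather than running the model's output blindly.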

“We are already working on incorporating some of the prototypes into our products, and we’ve made the results of our efforts available on our GitHub for those interested in testing GPT-3 in their own analysis environments,” Gallagher stated. “In the future, we believe that GPT-3 may very well become a standard co-pilot for security experts.” 

It's important to note that the researchers also found that using GPT-3 to filter threat data was significantly more effective than using alternative machine learning models, and with GPT-4 and its greater processing capabilities now available, this filtering will likely get faster still. Although these pilots are still in their early stages, Sophos has published the findings of the spam filtering and command line analysis experiments on the SophosAI GitHub for other businesses to adapt.
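
For context on how a few-shot spam-detection experiment works in principle, here is a hypothetical sketch (not the published SophosAI code): the model is shown a handful of labelled examples in the prompt and asked to label a new message, again assuming the legacy openai SDK:

    # Minimal sketch of few-shot spam classification with GPT-3 (hypothetical;
    # the real experiments are on the SophosAI GitHub). Legacy openai SDK assumed.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    FEW_SHOT = (
        "Classify each email as SPAM or HAM.\n"
        "Email: You have won a $500 gift card, click here to claim.\nLabel: SPAM\n"
        "Email: Attached are the meeting notes from Tuesday.\nLabel: HAM\n"
    )

    def classify(email: str) -> str:
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=FEW_SHOT + f"Email: {email}\nLabel:",
            max_tokens=5,
            temperature=0,  # pick the single most likely label
        )
        return resp["choices"][0]["text"].strip()

    print(classify("Urgent: verify your account password within 24 hours"))

The appeal of this approach is that it needs only a few labelled examples rather than the large training sets conventional spam classifiers require.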

ChatGPT: When Cybercrime Meets the Emerging Technologies


The immense capability of ChatGPT has left the entire globe abuzz. Indeed, it solves both practical and abstract problems, writes and debugs code, and may even aid Alzheimer's disease screening. However, as with many new technologies, OpenAI's AI-powered chatbot is at high risk of abuse. 

How Can ChatGPT be Used Maliciously? 

Recently, researchers from Check Point Software discovered that ChatGPT can be used to create phishing emails. When combined with Codex, OpenAI's natural-language-to-code system, ChatGPT can also develop and disseminate malicious code. 

According to Sergey Shykevich, threat intelligence group manager at Check Point Software, “Our researchers built a full malware infection chain starting from a phishing email to an Excel document that has malicious VBA [Visual Basic for Application] code. We can compile the whole malware to an executable file and run it in a machine.” 

He adds that ChatGPT primarily produces “much better and more convincing phishing and impersonation emails than real phishing emails we see in the wild now.” 

Commenting on the same trend, Lorrie Faith Cranor, director and Bosch Distinguished Professor of the CyLab Security and Privacy Institute and FORE Systems Professor of computer science and of engineering and public policy at Carnegie Mellon University, says, “I haven’t tried using ChatGPT to generate code, but I’ve seen some examples from others who have. It generates code that is not all that sophisticated, but some of it is actually runnable code[…]There are other AI tools out there for generating code, and they are all getting better every day. ChatGPT is probably better right now at generating text for humans, and may be particularly well suited for generating things like realistic spoofed emails.” 

Moreover, the researchers have also discovered hackers using ChatGPT to create malicious tools such as info-stealers and dark web marketplace scripts. 

What AI Tools are More Worrisome? 

Cranor says “I think to use these [AI] tools successfully today requires some technical knowledge, but I expect over time it will become easier to take the output from these tools and launch an attack[…]So while it is not clear that what the tools can do today is much more worrisome than human-developed tools that are widely distributed online, it won’t be long before these tools are developing more sophisticated attacks, with the ability to quickly generate large numbers of variants.” 

Furthermore, complications could also arise from the inability to detect whether code was created using ChatGPT. “There is no good way to pinpoint that a specific software, malware, or even phishing email was written by ChatGPT because there is no signature,” says Shykevich. 

What Could be the Solution? 

One method OpenAI is pursuing is to “watermark” the output of GPT models, which could later be used to determine whether a given piece of text was created by AI or by a human. 
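
To make the idea concrete, here is a toy sketch of one watermarking scheme proposed in academic work on LLMs (the "green list" approach); this illustrates the general technique and is not OpenAI's actual method. Generation is biased toward a pseudo-random subset of the vocabulary chosen from the previous token, and detection measures how often that bias appears:

    # Toy sketch of "green list" watermark detection (hypothetical; based on
    # academic proposals for LLM watermarking, not OpenAI's actual scheme).
    # A watermarking sampler would nudge generation toward a pseudo-random
    # "green" subset of the vocabulary derived from the previous token; the
    # detector below measures how often that bias shows up.
    import hashlib

    GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step

    def is_green(prev_word: str, word: str) -> bool:
        # Pseudo-randomly assign `word` to the green list, seeded by `prev_word`.
        digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
        return digest[0] < 256 * GREEN_FRACTION

    def green_ratio(text: str) -> float:
        # Fraction of words that land on the green list given their predecessor.
        words = text.lower().split()
        pairs = list(zip(words, words[1:]))
        return sum(is_green(a, b) for a, b in pairs) / max(len(pairs), 1)

    # Human text hovers near the 0.5 baseline; watermarked generation, having
    # been nudged toward green words, scores measurably higher.
    print(green_ratio("ordinary human text should score near one half here"))

This is what allows statistical detection without any visible signature embedded in the text itself, which is exactly the gap Shykevich describes above.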

To safeguard companies and individuals from these AI-generated threats, Shykevich advises using appropriate cybersecurity measures. Existing safeguards still apply, but it is critical to keep upgrading and strengthening them. 

“Researchers are also working on ways to use AI to discover code vulnerabilities and detect attacks[…]Hopefully, advances on the defensive side will be able to keep up with advances on the attacker side, but that remains to be seen,” says Cranor. 

While ChatGPT and other AI-backed systems have the potential to fundamentally alter how people interact with technology, they also carry risk, particularly when put to malicious use. 

“ChatGPT is a great technology and has the potential to democratize AI,” adds Shykevich. “AI was kind of a buzzy feature that only computer science or algorithmic specialists understood. Now, people who aren’t tech-savvy are starting to understand what AI is and trying to adopt it in their day-to-day. But the biggest question is how would you use it—and for what purposes?”