Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label LLM security. Show all posts

How are LLMs with Endpoint Data Boost Cybersecurity


The issue of capturing weak signals across endpoints and predicting possible patterns of intrusion attempts is ideally suited for Large Language Models (LLMs). The objective is to mine attack data in order to improve LLMs and models and discover new threat patterns and correlations.

Recently, some of the top endpoint detection and response (EDR) and extended detection and response (XDR) vendors were seen taking on the challenge. 

Palo Alto Network’s chairman and CEO Nikesh Arora says, “We collect the most amount of endpoint data in the industry from our XDR. We collect almost 200 megabytes per endpoint, which is, in many cases, 10 to 20 times more than most of the industry participants. Why do you do that? Because we take that raw data and cross-correlate or enhance most of our firewalls, we apply attack surface management with applied automation using XDR.” 

Co-founder and CEO of Crowdstrike, George Kurtz stated at the company’s annual Fal.Con event last year, “One of the areas that we’ve really pioneered is that we can take weak signals from across different endpoints. And we can link these together to find novel detections. We’re now extending that to our third-party partners so that we can look at other weak signals across not only endpoints but across domains and come up with a novel detection.” 

It has been demonstrated that XDR can produce better signals with fewer noise. Broadcom, Cisco, CrowdStrike, Fortinet, Microsoft, Palo Alto Networks, SentinelOne, Sophos, TEHTRIS, Trend Micro, and VMware being some of the top providers of XDR platforms.

Why LLMs are the new key element of Endpoint Security?

Endpoint security will evolve with the inclusion of telemetry and human-annotated data by enhancing LLMs. 

As per the authors of Gartner’s latest Hype Cycle for Endpoint Security, endpoint security technologies concentrate on faster, automated detection and prevention as well as remediation of attacks, to power integrated, extended detection and response (XDR), which correlates data points and telemetry from endpoint, network, emails, and identity solutions.

Compared to the larger information security and risk management market, spending on EDR and XDR is expanding more quickly. As a result, there is more intense competition across EDR and XDR providers.

According to Gartner, the market for endpoint security platforms will expand at a compound annual growth rate (CAGR) of 16.8% from its current $14.45 billion to $26.95 billion in 2027. With an 11% compound annual growth rate, the global market for information security and risk management is expected to reach $287 billion by 2027 from $164 billion in 2022.  

AI 'Hypnotizing' for Rule bypass and LLM Security


In recent years, large language models (LLMs) have risen to prominence in the field, capturing widespread attention. However, this development prompts crucial inquiries regarding their security and susceptibility to response manipulation. This article aims to explore the security vulnerabilities linked with LLMs and contemplate the potential strategies that could be employed by malicious actors to exploit them for nefarious ends. 

Year after year, we witness a continuous evolution in AI research, where the established norms are consistently challenged, giving rise to more advanced systems. In the foreseeable future, possibly within a few decades, there may come a time when we create machines equipped with artificial neural networks that closely mimic the workings of our own brains. 

At that juncture, it will be imperative to ensure that they possess a level of security that surpasses our own susceptibility to hacking. The advent of large language models has ushered in a new era of opportunities, such as automating customer service and generating creative content. 

However, there is a mounting concern regarding the cybersecurity risks associated with this advanced technology. People worry about the potential misuse of these models to fabricate false responses or disclose private information. This underscores the critical importance of implementing robust security measures. 

What is Hypnotizing? 

In the world of Large Language Model security, there's an intriguing idea called "hypnotizing" LLMs. This concept, explored by Chenta Lee from the IBM Security team, involves tricking an LLM into believing something false. It starts with giving the LLM new instructions that follow a different set of rules, essentially creating a made-up situation. 

This manipulation can make the LLM give the opposite of the right answer, which messes up the reality it was originally taught. Think of this manipulation process like a trick called "prompt injection." It's a bit like a computer hack called SQL injection. In both cases, a sneaky actor gives the system a different kind of input that tricks it into giving out information it should not. 

LLMs can face risks not only when they are in use, but also in three other stages: 

1. When they are first being trained. 

2. When they are getting fine-tuned. 

3. After they have been put to work. 

This shows how crucial it is to have really strong security measures in place from the very beginning to the end of a large language model's life. 

Why your Sensitive Data is at Risk? 

There is a legitimate concern that Large Language Models (LLMs) could inadvertently disclose confidential information. It is possible for someone to manipulate an LLM to divulge sensitive data, which would be detrimental to maintaining privacy. Thus, it is of utmost importance to establish robust safeguards to ensure the security of data when employing LLMs.