Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label LLM Model. Show all posts

Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns

 

A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.” The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

Meta Plans to Launch Enhanced AI model Llama 3 in July

 

The Information reported that Facebook's parent company, Meta, plans to launch Llama 3, a new AI language model, in July. As part of Meta's attempts to enhance its large language models (LLMs), the open-source LLM was designed to offer more comprehensive responses to contentious queries. 

In order to give context to questions they believe to be contentious, meta researchers are attempting to "loosen up" the model. For example, Llama 2, Meta's current chatbot model for social media sites, ignores contentious subjects like "kill a vehicle engine" and "how to win a war." The study claims that Llama 3 would be able to comprehend more nuanced questions like "how to kill a vehicle's engine," which refers to turning a vehicle off as opposed to taking it out of service. 

To ensure that the responses from the new model are more precise and nuanced, Meta will internally designate a single person to oversee tone and safety training. The goal of the endeavour is to improve the ability to respond and use Meta's new large language model. This project is crucial because Google recently disabled the Gemini chatbot's capacity to generate images in response to criticism over old photos and phrases that were sometimes mistranslated. 

The research was released in the same week that Microsoft, the challenger to OpenAI's ChatGPT, Mistral, the French AI champion, announced a strategic relationship and investment. As the tech giant attempts to attract more clients for its Azure cloud services, the multi-year agreement underscores Microsoft's plans to offer a variety of AI models in addition to its biggest bet in OpenAI.

Microsoft confirmed its investment in Mistral, but stated that it owns no interest in the company. The IT behemoth is under regulatory investigation in Europe and the United States for its massive investment in OpenAI. 

The Paris-based startup develops open source and proprietary large language models (LLM), such as the one OpenAI pioneered with ChatGPT, to interpret and generate text in a human-like manner. Its most recent proprietary model, Mistral Large, will be made available to Azure customers first through the agreement. Mistral's technology will run on Microsoft's cloud computing infrastructure.