
AI Agents Raise Cybersecurity Concerns Amid Rapid Enterprise Adoption


A growing number of organizations are adopting autonomous AI agents despite widespread concerns about the cybersecurity risks they pose. According to a new global report released by identity security firm SailPoint, this accelerated deployment is happening in a largely unregulated environment. The findings, based on a survey of more than 350 IT professionals, show that 84% of respondents said their organizations already use AI agents internally.

However, only 44% confirmed having formal policies in place to govern the agents' actions. AI agents differ from traditional chatbots in that they are designed to plan and execute tasks independently, without constant human direction. Since the emergence of generative AI tools like ChatGPT in late 2022, major tech companies have raced to launch their own agents. Many smaller businesses have followed suit, motivated by the promise of operational efficiency and the pressure to adopt what is widely viewed as a transformative technology.
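To make that distinction concrete: a chatbot answers one prompt at a time, while an agent runs a loop in which it chooses its own next action and observes the result. The sketch below is a minimal, hypothetical illustration in Python; the scripted `call_llm` and the tools are stand-ins, not any vendor's actual API.

```python
# Minimal sketch of an autonomous agent loop (hypothetical, for illustration).
# A chatbot answers one prompt; an agent repeatedly picks its own next action.

SCRIPT = iter([
    "search quarterly security report",
    "done: summary drafted",
])

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; here it just replays a scripted plan."""
    return next(SCRIPT)

TOOLS = {
    "search": lambda query: f"results for {query!r}",
    "write_file": lambda text: f"wrote {len(text)} bytes",
}

def run_agent(goal: str, max_steps: int = 5) -> None:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = call_llm("\n".join(history) + "\nNext action?")
        if action.startswith("done:"):
            print(action)
            return
        tool_name, _, arg = action.partition(" ")
        result = TOOLS.get(tool_name, lambda a: "unknown tool")(arg)
        history.append(f"{action} -> {result}")  # the agent observes the result

run_agent("Summarize the quarterly security report")
```

It is this ability to act without a human approving each step that makes governance, rather than the model itself, the central security question.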

Despite this enthusiasm, 96% of survey participants acknowledged that these autonomous systems pose security risks, while 98% stated their organizations plan to expand AI agent usage within the next year. The report warns that these agents often have extensive access to sensitive systems and information, making them a new and significant attack surface for cyber threats. Chandra Gnanasambandam, SailPoint’s Executive Vice President of Product and Chief Technology Officer, emphasized the risks associated with such broad access. He explained that these systems are transforming workflows but typically operate with minimal oversight, which introduces serious vulnerabilities. 

Further compounding the issue is the inconsistent implementation of governance controls. Although 92% of those surveyed agreed that AI agents should be governed similarly to human employees, 80% reported incidents in which agents performed unauthorized actions or accessed restricted data. These incidents underscore the dangers of deploying autonomous systems without robust monitoring or access controls.

Gnanasambandam suggests adopting an identity-first approach to agent management. He recommends applying the same security protocols used for human users, including real-time access permissions, least privilege principles, and comprehensive activity tracking. Without such measures, organizations risk exposing themselves to breaches or data misuse due to the very tools designed to streamline operations. 
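Translated into practice, an identity-first approach means each agent gets its own identity, a deny-by-default permission set, and an audit trail of every action it attempts. The sketch below is a minimal, hypothetical illustration of those three controls in Python; it is not SailPoint's product or any specific vendor's API.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("agent-audit")

@dataclass
class AgentIdentity:
    """An AI agent treated as a first-class identity, like a human user."""
    name: str
    permissions: set = field(default_factory=set)  # least privilege: empty unless granted

    def authorize(self, action: str) -> bool:
        allowed = action in self.permissions
        # Comprehensive activity tracking: every attempt is logged, allowed or not.
        audit.info("agent=%s action=%s allowed=%s", self.name, action, allowed)
        return allowed

# Grant only the permissions the workflow actually needs.
invoice_bot = AgentIdentity("invoice-bot", permissions={"read:invoices"})

invoice_bot.authorize("read:invoices")  # permitted
invoice_bot.authorize("read:payroll")   # denied, and the attempt is on record
```

A real deployment would add time-bound credentials and human review of denied attempts, but the principle is the same: grant nothing by default and record everything.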

As AI agents become more deeply embedded in business processes, experts caution that failing to implement adequate oversight could create long-term vulnerabilities. The report serves as a timely reminder that innovation must be accompanied by strong governance to ensure cybersecurity is not compromised in the pursuit of automation.

DeepSeek-R1 AI Under Fire for Severe Security Risks


DeepSeek-R1, an AI model developed in China, is facing intense scrutiny following a study by cybersecurity firm Enkrypt AI, which found it to be 11 times more vulnerable to exploitation by cybercriminals than other AI models. The research highlights significant security risks, including the model's susceptibility to generating harmful content and to being manipulated for illicit activities.

This concern is further amplified by a recent data breach that exposed over a million records, raising alarms about the model's safety. Since its launch on January 20, DeepSeek has gained immense popularity, attracting 12 million users in just two days and surpassing ChatGPT's early adoption rate. Its rapid rise, however, has also triggered widespread privacy and security concerns, leading multiple governments to launch investigations or impose restrictions on its usage.

Enkrypt AI's security assessment revealed that DeepSeek-R1 is highly prone to manipulation: 45% of safety tests bypassed its security mechanisms. The study found that the model could generate instructions for criminal activities, illegal weapon creation, and extremist propaganda.

Even more concerning, cybersecurity evaluations showed that DeepSeek-R1 failed in 78% of security tests, successfully generating malicious code, including malware and trojans. Compared to OpenAI’s models, DeepSeek-R1 was 4.5 times more likely to be exploited for hacking and cybercrime. 
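The report does not detail how these percentages were produced, but figures like a 45% or 78% failure rate are typically computed by replaying a fixed battery of adversarial prompts against the model and counting how many elicit a compliant rather than a refusing response. Below is a minimal, hypothetical sketch of that kind of scoring loop; the prompts, model call, and refusal heuristic are illustrative stand-ins, not Enkrypt AI's methodology.

```python
# Hypothetical red-team scoring loop; everything here is a stand-in
# for illustration, not Enkrypt AI's actual test harness.

ADVERSARIAL_PROMPTS = [
    "adversarial prompt probing for disallowed content #1",
    "adversarial prompt probing for disallowed content #2",
    "adversarial prompt probing for disallowed content #3",
    "adversarial prompt probing for disallowed content #4",
]

def query_model(prompt: str) -> str:
    """Stand-in for a real API call to the model under test."""
    return "I can't help with that."  # placeholder refusal

def is_refusal(response: str) -> bool:
    """Crude keyword check; real evaluations use trained classifiers or human review."""
    return any(m in response.lower() for m in ("can't help", "cannot assist"))

bypasses = sum(not is_refusal(query_model(p)) for p in ADVERSARIAL_PROMPTS)
print(f"bypass rate: {bypasses / len(ADVERSARIAL_PROMPTS):.0%}")
# A 45% rate means nearly half the attacks elicited a harmful answer.
```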

Sahil Agarwal, CEO of Enkrypt AI, emphasized the urgent need for stronger safety measures and continuous monitoring to mitigate these threats. Due to these security concerns, several countries have initiated regulatory actions. 

Italy was the first to launch an investigation into DeepSeek’s privacy and security risks, followed by France, Germany, the Netherlands, Luxembourg, and Portugal. Taiwan has prohibited government agencies from using the AI, while South Korea has opened a formal inquiry into its data security practices. 

The United States is also responding aggressively, with NASA banning DeepSeek from its agency devices. Additionally, lawmakers are considering legislation that could impose severe fines and even jail time on those using the platform in the country. The growing concerns surrounding DeepSeek-R1 come amid increasing competition between the US and China in AI development.

Both nations are pushing the boundaries of AI for military, economic, and technological dominance. However, Enkrypt AI's findings suggest that DeepSeek-R1's vulnerabilities could make it a dangerous tool for cybercrime, disinformation campaigns, and even biochemical threats. With regulatory scrutiny intensifying worldwide, the AI's future remains uncertain as authorities weigh the risks associated with its use.