Recent research has surfaced serious security vulnerabilities within ChatGPT plugins, raising concerns about potential data breaches and account takeovers. These flaws could allow attackers to gain control of organisational accounts on third-party platforms and access sensitive user data, including personally identifiable information (PII).
According to Darren Guccione, CEO and co-founder of Keeper Security, the vulnerabilities found in ChatGPT plugins pose a significant risk to organisations as employees often input sensitive data, including intellectual property and financial information, into AI tools. Unauthorised access to such data could have severe consequences for businesses.
In November 2023, OpenAI introduced a new ChatGPT feature called GPTs, which function similarly to plugins and carry similar security risks, further complicating the situation.
In a recent advisory, the Salt Security research team identified three main types of vulnerabilities within ChatGPT plugins. Firstly, vulnerabilities were found in the plugin installation process, potentially allowing attackers to install malicious plugins and intercept user messages containing proprietary information.
Secondly, flaws were discovered within PluginLab, a framework for developing ChatGPT plugins, which could lead to account takeovers on third-party platforms like GitHub.
Lastly, OAuth redirection manipulation vulnerabilities were identified in several plugins, enabling attackers to steal user credentials and execute account takeovers.
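To make the OAuth issue concrete: the standard mitigation is to accept only redirect URIs that exactly match values pre-registered by the plugin developer, so that an attacker-controlled destination can never receive the authorisation code. The sketch below illustrates the idea; the domain names and the ALLOWED_REDIRECTS table are placeholders for illustration, not details from the Salt Security advisory.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of (scheme, host, path) tuples registered at setup time.
ALLOWED_REDIRECTS = {
    ("https", "plugin.example.com", "/oauth/callback"),
}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept only exact, pre-registered redirect targets."""
    parsed = urlparse(redirect_uri)
    return (parsed.scheme, parsed.hostname, parsed.path) in ALLOWED_REDIRECTS

# An attacker-supplied URI fails the check instead of receiving the auth code.
assert is_safe_redirect("https://plugin.example.com/oauth/callback")
assert not is_safe_redirect("https://evil.example.net/oauth/callback")
```

Exact matching matters: substring or prefix checks can be bypassed with lookalike domains or crafted paths, which is precisely the class of manipulation the researchers described.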
Yaniv Balmas, vice president of research at Salt Security, emphasised the growing popularity of generative AI tools like ChatGPT and the corresponding increase in efforts by attackers to exploit these tools to gain access to sensitive data.
Following coordinated disclosure practices, Salt Labs worked with OpenAI and third-party vendors to promptly address these issues and reduce the risk of exploitation.
Sarah Jones, a cyber threat intelligence research analyst at Critical Start, outlined several measures that organisations can take to strengthen their defences against these vulnerabilities. These include:
1. Implementing permission-based installation:
This involves ensuring that only authorised users can install plugins, reducing the risk of malicious actors installing harmful plugins.
2. Introducing two-factor authentication:
By requiring users to provide two forms of identification, such as a password and a unique code sent to their phone, organisations can add an extra layer of security to their accounts (a minimal TOTP sketch follows this list).
3. Educating users on exercising caution with code and links:
It's essential to train employees to be cautious when interacting with code and links, as these can often be used as vectors for cyber attacks.
4. Monitoring plugin activity constantly:
By regularly monitoring plugin activity, organisations can detect any unusual behaviour or unauthorised access attempts promptly.
5. Subscribing to security advisories for updates:
Staying informed about security advisories and updates from ChatGPT and third-party vendors allows organisations to address vulnerabilities and apply patches promptly.
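As a concrete illustration of the second measure, time-based one-time passwords (TOTP, RFC 6238) are a common way to supply that second factor. The following is a minimal, standard-library sketch for illustration only; a production deployment would normally rely on a vetted library and an established identity provider.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1, 30 s window)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # moving factor: current time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    # Constant-time comparison avoids leaking digits via timing.
    return hmac.compare_digest(totp(secret_b32), submitted)
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is no longer enough to take over the account.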
As organisations increasingly rely on AI technologies, it becomes crucial to address and mitigate the associated security risks effectively.
OpenAI's ChatGPT is facing renewed scrutiny in Italy as the country's data protection authority, Garante, asserts that the AI chatbot may be in violation of data protection rules. This follows a previous ban imposed by Garante due to alleged breaches of European Union (EU) privacy regulations. Although the ban was lifted after OpenAI addressed concerns, Garante has persisted in its investigations and now claims to have identified elements suggesting potential data privacy violations.
Garante, known for its proactive stance on AI platform compliance with EU data privacy regulations, had initially banned ChatGPT over alleged breaches of EU privacy rules. Despite the reinstatement after OpenAI's efforts to address user consent issues, fresh concerns have prompted Garante to escalate its scrutiny. OpenAI, however, maintains that its practices are aligned with EU privacy laws, emphasising its active efforts to minimise the use of personal data in training its systems.
"We assure that our practices align with GDPR and privacy laws, emphasising our commitment to safeguarding people's data and privacy," stated the company. "Our focus is on enabling our AI to understand the world without delving into private individuals' lives. Actively minimising personal data in training systems like ChatGPT, we also decline requests for private or sensitive information about individuals."
In the past, OpenAI confirmed fulfilling numerous conditions demanded by Garante to lift the ChatGPT ban. The watchdog had imposed the ban due to exposed user messages and payment information, along with ChatGPT lacking a system to verify users' ages, potentially leading to inappropriate responses for children. Additionally, questions were raised about the legal basis for OpenAI collecting extensive data to train ChatGPT's algorithms. Concerns were voiced regarding the system potentially generating false information about individuals.
OpenAI's assertion of compliance with GDPR and privacy laws, coupled with its active steps to minimise personal data, appears to be a key element in addressing the issues that led to the initial ban. The company's efforts to meet Garante's conditions signal a commitment to resolving concerns related to user data protection and the responsible use of AI technologies. As the investigation proceeds, these assurances may play a crucial role in determining how OpenAI navigates the challenges posed by Garante's scrutiny of ChatGPT's data privacy practices.
In response to Garante's claims, OpenAI is gearing up to present its defence within a 30-day window provided by Garante. This period is crucial for OpenAI to clarify its data protection practices and demonstrate compliance with EU regulations. The backdrop to this investigation is the EU's General Data Protection Regulation (GDPR), introduced in 2018. Companies found in violation of data protection rules under the GDPR can face fines of up to 4% of their global turnover.
Garante's actions underscore the seriousness with which EU data protection authorities approach violations and their willingness to enforce penalties. This case involving ChatGPT reflects broader regulatory trends surrounding AI systems in the EU. In December, EU lawmakers and governments reached provisional terms for regulating AI systems like ChatGPT, emphasising comprehensive rules to govern AI technology with a focus on safeguarding data privacy and ensuring ethical practices.
OpenAI's cooperation and its ability to address concerns regarding personal data usage will play a pivotal role. The broader regulatory trends in the EU indicate a growing emphasis on establishing comprehensive guidelines for AI systems, addressing data protection and ethical considerations. For readers, these developments underscore the importance of compliance with data protection regulations and the ongoing efforts to establish clear guidelines for AI technologies in the EU.
Beyond providing a space for experimentation, there are growing signs that open-source LLMs will attract the same attention that closed-source LLMs command today.
The open-source nature allows organizations to understand, modify, and tailor the models to their specific requirements. The collaborative environment nurtured by open-source fosters innovation, enabling faster development cycles. Additionally, the avoidance of vendor lock-in and adherence to industry standards contribute to seamless integration. The security benefits derived from community scrutiny and ethical considerations further bolster the appeal of open-source LLMs, making them a strategic choice for enterprises navigating the evolving landscape of artificial intelligence.
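To illustrate what "tailoring to specific requirements" looks like in practice, open-weight models can be pulled down and run on an organisation's own infrastructure with a few lines of code. Below is a minimal sketch using the Hugging Face transformers library; the model identifier is just one example of an openly licensed checkpoint, and any compatible model ID would work.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example open-weight checkpoint; swap in whichever model fits your needs.
model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarise our incident-response policy in two sentences:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights run locally, the model can later be fine-tuned on proprietary data without that data ever leaving the premises, which is exactly the vendor lock-in and privacy argument made above.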
In a recent interview with CNBC at the World Economic Forum in Davos, Mustafa Suleyman, co-founder and CEO of Inflection AI, expressed his views on artificial intelligence (AI). Suleyman, who left Google in 2022, highlighted that while AI is an incredible technology, it has the potential to replace jobs in the long term.
Suleyman stressed the need to carefully consider how we integrate AI tools, as he believes they are fundamentally labour-replacing over many decades. However, he acknowledged the immediate benefits of AI, explaining that it makes existing processes much more efficient, leading to cost savings for businesses. Additionally, he pointed out that AI enables new possibilities, describing these tools as creative, empathetic, and more human-like than traditional relational databases.
Inflection AI, Suleyman's current venture, has developed an AI chatbot providing advice and support to users. The chatbot showcases AI's ability to augment human capabilities and enhance productivity in various applications.
One key concern surrounding AI, as highlighted by Stanford University professor Erik Brynjolfsson at the World Economic Forum, is the fear of job obsolescence. Some worry that AI's capabilities in tasks like writing and coding might replace human jobs. Brynjolfsson suggested that companies using AI to outright replace workers may not be making the wisest choice. He proposed a more strategic approach, where AI complements human workers, recognizing that some tasks are better suited for humans, while others can be efficiently handled by machines.
Since the launch of OpenAI's ChatGPT in November 2022, the technology has generated considerable hype. The past year has seen an increased awareness of AI and its potential impact on various industries.
As businesses integrate AI into their operations, there is a growing need to educate the workforce and the public on the nuances of this technology. AI, in simple terms, refers to computer systems that can perform tasks that typically require human intelligence. These tasks range from problem-solving and decision-making to creative endeavours.
Mustafa Suleyman's perspective on AI highlights its dual role – as a cost-saving tool in the short term and a potential job-replacing force in the long term. Balancing these aspects requires careful consideration and strategic planning.
Erik Brynjolfsson's advice to companies emphasises the importance of collaboration between humans and AI. Instead of viewing AI as a threat, companies should explore ways to leverage AI to enhance human capabilities and overall productivity.
The future of AI lies in how we integrate it into our lives and workplaces. The key is to strike a balance that maximises the benefits of efficiency and productivity while preserving the unique skills and contributions of human workers. As AI continues to evolve, staying informed and fostering collaboration will be crucial for a harmonious coexistence between humans and machines.
The intel came from a leaked audio file of an internal presentation on an early version of Microsoft's Security Copilot, a ChatGPT-like artificial intelligence platform that Microsoft created to assist cybersecurity professionals.
The audio reportedly features a Microsoft researcher discussing the results of "threat hunter" testing, in which the AI examined a Windows security log for any indications of potentially malicious behaviour.
"We had to cherry-pick a little bit to get an example that looked good because it would stray and because it's a stochastic model, it would give us different answers when we asked it the same questions," said Lloyd Greenwald, a Microsoft Security Partner giving the presentation, as quoted by BI.
"It wasn't that easy to get good answers," he added.
Security Copilot, like any chatbot, allows users to type a query into a chat window and receive a response, much as they would in a customer service chat. Security Copilot is largely built on OpenAI's GPT-4 large language model (LLM), which also powers Microsoft's other generative AI forays, such as the Bing Search assistant. Greenwald claims that these demonstrations were "initial explorations" of the possibilities of GPT-4 and that Microsoft was given early access to the technology.
Much like Bing AI in its early days, whose responses were so ludicrous that it had to be "lobotomized," early versions of Security Copilot often "hallucinated" wrong answers, the researchers claimed, an issue that appeared to be inherent to the technology. "Hallucination is a big problem with LLMs and there's a lot we do at Microsoft to try to eliminate hallucinations and part of that is grounding it with real data," Greenwald said in the audio, "but this is just taking the model without grounding it with any data."
GPT-4, the LLM on which Microsoft built Security Copilot, was not, however, trained on cybersecurity-specific data. Rather, it was used directly out of the box, relying solely on its massive generic dataset, as is standard.
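For readers unfamiliar with the term, "grounding" means supplying the model with real evidence at query time rather than letting it answer from its generic training data. The sketch below shows the basic pattern; the log lines are invented, and query_model is a hypothetical stand-in for whatever LLM endpoint is actually called.

```python
def build_grounded_prompt(question: str, log_lines: list[str]) -> str:
    """Prepend real log evidence so the model answers from data, not memory."""
    evidence = "\n".join(log_lines)
    return (
        "Answer using ONLY the log lines below. "
        "If the logs do not contain the answer, say so.\n\n"
        f"--- LOGS ---\n{evidence}\n--- END LOGS ---\n\n"
        f"Question: {question}"
    )

# Invented Windows security-log entries, for illustration only.
logs = [
    "4625 An account failed to log on. Account: admin Source: 203.0.113.7",
    "4625 An account failed to log on. Account: admin Source: 203.0.113.7",
    "4624 An account was successfully logged on. Account: admin Source: 203.0.113.7",
]
prompt = build_grounded_prompt("Is there evidence of a brute-force attempt?", logs)
# answer = query_model(prompt)  # hypothetical call to the deployed LLM
```

Constraining the model to the supplied evidence is one of the techniques Greenwald alludes to for reducing hallucination, and it is exactly what the demoed version lacked.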
Discussing other security-related queries, Greenwald revealed that "this is just what we demoed to the government."
However, it is unclear whether Microsoft used these "cherry-picked" examples in its pitches to the government and other potential customers, or whether its researchers were upfront about how the examples were selected.
A spokeswoman for Microsoft told BI that "the technology discussed at the meeting was exploratory work that predated Security Copilot and was tested on simulations created from public data sets for the model evaluations," stating that "no customer data was used."