One of the less discussed consequences of ChatGPT's rise is the privacy risk it poses. Google launched Bard, its own conversational AI, only yesterday, and others will undoubtedly follow. Technology firms engaged in AI development have clearly entered a race.
The issue lies in the technology that underpins these tools, which relies heavily on users' personal data.
ChatGPT is underpinned by a large language model, which requires enormous amounts of data to operate and improve. The more data the model is trained on, the better it becomes at detecting patterns, anticipating what comes next, and generating plausible text.
OpenAI, the developer of ChatGPT, fed the model some 300 billion words systematically scraped from the internet – books, articles, websites, and posts – which inevitably includes online users' personal information, gathered without their consent.
If you have ever written a blog post, a product review, or a comment on an article online, there is a good chance that information was consumed by ChatGPT.
The data used to train ChatGPT is problematic for several reasons.
First, none of us were asked whether OpenAI could use our data. This is a clear violation of privacy, especially when that data is sensitive and can be used to identify us, our family members, or our location.
Even when data is publicly available, its use can breach what we call contextual integrity, a cornerstone principle in legal discussions of privacy. It requires that individuals' information not be revealed outside the context in which it was originally produced.
Moreover, OpenAI offers no procedures for individuals to check whether the company stores their personal information, or to request that it be deleted. This right is guaranteed under the European General Data Protection Regulation (GDPR), although it is still being debated whether ChatGPT complies with GDPR requirements.
This “right to be forgotten” is particularly important in cases where the information is inaccurate or misleading, which ChatGPT produces regularly.
Furthermore, the scraped data ChatGPT was trained on may be proprietary or protected by copyright. For instance, the tool reproduced the opening passages of Joseph Heller's copyrighted book Catch-22.
Finally, OpenAI did not pay for the data it scraped from the internet. The individuals, website owners, and businesses that created it were not compensated. This is particularly noteworthy given OpenAI's recent valuation of US$29 billion, more than double its value in 2021.
OpenAI has also recently announced ChatGPT Plus, a paid subscription plan that gives users ongoing access to the tool, faster response times, and priority access to new features. This plan is expected to contribute to an anticipated US$1 billion in revenue by 2024.
None of this would have been possible without our data, collected and used without our consent.
According to some experts and analysts, ChatGPT is a “tipping point for AI”: the realization of a technological advance that could revolutionize the way we work, learn, write, and even think.
Despite its potential benefits, we must remember that OpenAI is a private, for-profit company whose interests and commercial imperatives will not always align with broader societal needs.
The privacy risks associated with ChatGPT should serve as a warning. And as users of a growing number of AI technologies, we must be extremely careful about what information we share with such tools.
The team at Cybernews has warned that while AI chatbots may be fun to play with, they are also dangerous, as they can provide detailed advice on how to exploit vulnerabilities.
AI has stirred the imaginations of tech industry leaders and pop culture for decades. Machine learning technologies that can automatically generate text, photos, videos, and other media are flourishing as investors pour billions of dollars into the field.
While AI has opened up endless opportunities to help humans, experts warn of the potential dangers of building an algorithm that outperforms human capabilities and could spiral out of control.
We are not talking about apocalyptic scenarios in which AI takes over the planet. But in today's landscape, AI has already begun helping threat actors carry out malicious activities.
ChatGPT is the latest innovation in AI, developed by the research company OpenAI, which is led by Sam Altman and backed by Microsoft, LinkedIn co-founder Reid Hoffman, Khosla Ventures, and Elon Musk.
The AI chatbot can hold conversations with people, imitating various writing styles. The text ChatGPT produces is far more imaginative and complex than that of earlier Silicon Valley chatbots. It was trained on vast amounts of text data from the web, Wikipedia, and archived books.
Within five days of ChatGPT's launch, over one million people had signed up to test the technology. Social media was flooded with users' queries and the AI's answers: poems, copywriting, movie plots, tips for weight loss and healthy relationships, creative brainstorming, study help, and even programming.
According to OpenAI, ChatGPT models can answer follow-up questions, challenge incorrect premises, reject inappropriate queries, and admit their own mistakes.
According to Cybernews, the research team tried "using ChatGPT to help them find a website's vulnerabilities. Researchers asked questions and followed the guidance of AI, trying to check if the chatbot could provide a step-by-step guide on exploiting."
"The researchers used the 'Hack the Box' cybersecurity training platform for their experiment. The platform provides a virtual training environment and is widely used by cybersecurity specialists, students, and companies to improve hacking skills."
"The team approached ChatGPT by explaining that they were doing a penetration testing challenge. Penetration testing (pen test) is a method used to replicate a hack by deploying different tools and strategies. The discovered vulnerabilities can help organizations strengthen the security of their systems."
Experts believe that AI-based vulnerability scanners in the hands of cybercriminals could wreak havoc on internet security. However, the Cybernews team also sees AI's potential in cybersecurity.
Researchers can use the insights AI provides to prevent data leaks. AI can also help developers monitor and test their implementations more efficiently.
AI keeps learning and, in a sense, has a mind of its own. It picks up new techniques and methods of exploitation, and it can serve as a handbook for penetration testers, offering sample payloads suited to their needs.
“Even though we tried ChatGPT against a relatively uncomplicated penetration testing task, it does show the potential for guiding more people on how to discover vulnerabilities that could, later on, be exploited by more individuals, and that widens the threat landscape considerably. The rules of the game have changed, so businesses and governments must adapt to it," said Mantas Sasnauskas, head of the research team.
Security experts have cautioned that the new AI bot ChatGPT could be used by cybercriminals to learn how to plan attacks and even develop ransomware. It was launched by the artificial intelligence R&D company OpenAI last month.
When the ChatGPT application first allowed users to interact with it, computer security expert Brendan Dolan-Gavitt wondered whether he could get the AI-powered chatbot to write malicious code. He then gave the program a basic capture-the-flag challenge to solve.
The code contained a buffer overflow vulnerability, which ChatGPT correctly identified and wrote a piece of code to exploit. The program would have solved the challenge flawlessly if not for one small mistake: it got the number of characters in the input wrong.
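For readers unfamiliar with the term, the short C sketch below shows the kind of stack buffer overflow a beginner capture-the-flag exercise might contain. It is purely illustrative and is not the program from Dolan-Gavitt's experiment: a fixed-size buffer is filled from user input with no length check, so an overly long input spills into adjacent stack memory.

    #include <stdio.h>

    /* Illustrative only: a classic stack buffer overflow of the kind a
       beginner capture-the-flag challenge might contain. */

    void greet(void)
    {
        char name[16];              /* fixed-size buffer on the stack */

        printf("What is your name? ");
        scanf("%s", name);          /* unsafe: no length limit, so input longer
                                       than 15 characters plus the terminator
                                       overflows into adjacent stack memory,
                                       including the saved return address */
        printf("Hello, %s!\n", name);
    }

    int main(void)
    {
        greet();
        return 0;
    }

Exploiting a bug like this typically means supplying an input of exactly the right length to overwrite the saved return address with a chosen value, which is why being off by even a few characters, as ChatGPT initially was, makes the attempt fail.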
The fact that ChatGPT failed Dolan-Gavitt's task, one he would give students at the start of a vulnerability analysis course, does not inspire much confidence in large language models' ability to produce high-quality code. However, after spotting the mistake, Dolan-Gavitt asked the model to revise its answer, and this time ChatGPT got it right.