
A ChatGPT Bug Exposes Sensitive User Data

OpenAI's ChatGPT, an artificial intelligence (AI) language model that can produce human-like text, was found to have a security flaw. The flaw allowed the model to unintentionally expose private user information, putting the privacy of a number of users at risk. The incident is a reminder of the value of cybersecurity and of the need for businesses to protect customer data proactively.

According to a report by Tech Monitor, the ChatGPT bug "allowed researchers to extract personal data from users, including email addresses and phone numbers, as well as reveal the model's training data." This means that not only was users' personal information exposed, but so was sensitive data used to train the AI model. As a result, the incident raises concerns about the potential misuse of the leaked information.

The ChatGPT bug not only affects individual users but also has wider implications for organizations that rely on AI technology. As noted in a report by India Times, "the breach not only exposes the lack of security protocols at OpenAI, but it also brings forth the question of how safe AI-powered systems are for businesses and consumers."

Furthermore, the incident highlights the importance of adhering to regulations such as the General Data Protection Regulation (GDPR), which aims to protect individuals' personal data in the European Union. The ChatGPT bug violated GDPR regulations by exposing personal data without proper consent.

OpenAI has taken swift action to address the issue, stating that they have fixed the bug and implemented measures to prevent similar incidents in the future. However, the incident serves as a warning to businesses and individuals alike to prioritize cybersecurity measures and to be aware of potential vulnerabilities in AI systems.

As stated by Cyber Security Connect, "ChatGPT may have just blurted out your darkest secrets," emphasizing the need for constant vigilance and proactive measures to safeguard sensitive information. This includes regular updates and patches to address security flaws, as well as utilizing encryption and other security measures to protect data.

The ChatGPT bug highlights the need for ongoing vigilance and preventative measures to protect private data in the era of advanced technology. Prioritizing cybersecurity and staying informed of vulnerabilities is crucial for a safer digital environment as AI systems continue to evolve and play a prominent role in various industries.




Google Bard: How to use this AI Chatbot Service?

 

Google Bard is a new chatbot tool developed in response to competitor artificial intelligence (AI) tools such as ChatGPT. It is intended to simulate human conversations and employs a combination of natural language processing and machine learning to provide realistic and helpful responses to questions you may pose. 
Such tools could be especially useful for smaller businesses that want to provide natural language support to their customers without hiring large teams of support personnel, or for supplementing Google's own search tools. Bard can be integrated into websites, messaging platforms, desktop and mobile applications, and a variety of other digital systems, at least eventually: outside of a limited beta test, it is not yet widely available.

Google Bard is the company's answer to ChatGPT. It's an AI chatbot that performs many of the same functions as ChatGPT. Still, it's intended to eventually supplement Google's own search tools (much like Bing is doing with ChatGPT) as well as provide automated support and human-like interaction for businesses.

It's been in the works for a while and employs LaMDA (Language Model for Dialogue Applications) technology. It is based on Google's Transformer neural network architecture, which has also served as the foundation for other AI generative tools such as ChatGPT's GPT-3 language model.

What was the error in Google Bard's question?

Google Bard, which was unveiled for the first time on February 6, 2023, got off to a rocky start when it made a mistake in answering a question about the James Webb Space Telescope's recent discoveries. Bard claimed the telescope had taken the very first picture of an exoplanet outside our solar system, when in fact that milestone was reached many years earlier. The fact that Google Bard presented this incorrect information with such confidence drew harsh criticism, inviting comparisons with some of ChatGPT's flaws. In response, Google's stock price dropped several points.

At the time of writing, Google Bard was only available to a small group of beta testers, but a wider launch is anticipated in the coming weeks and months. Following the success of ChatGPT, CEO Sundar Pichai accelerated the development of Google Bard in late 2022, and the continued positive press coverage ChatGPT has received in 2023 has only added to that urgency.

For the time being, if you are not one of the coveted Bard beta testers, you'll have to play the waiting game until we hear more from Google.

Google Bard and ChatGPT

Google Bard and ChatGPT both create chatbots using natural language models and machine learning, but each has a unique set of features. At the time of writing, ChatGPT is based entirely on data that was mostly collected up until 2021, whereas Google Bard can potentially draw on up-to-date information for its responses. ChatGPT focuses on conversational questions and answers, but it is now being used in Bing's search results to answer more conversational searches as well. Google Bard will be used in the same way, but only to supplement Google Search.

Both chatbots use slightly different language models. Google Bard is built on LaMDA, whereas ChatGPT is built on GPT (Generative Pre-trained Transformer). ChatGPT also has a plagiarism detector, which Google Bard currently does not, as far as we know.

ChatGPT is also freely available to try out, whereas Google Bard is only available to beta testers.

Google Bard is already accessible to a select group of Google beta testers in a limited form. There is no set timetable for its wider implementation. However, Google CEO Sundar Pichai stated in his address on the launch of Google Bard that we would soon see Bard leveraged to enhance Google Search, so we may see Bard more widely available in the coming weeks.

ChatGPT: A Potential Risk to Data Privacy


ChatGPT, within two months of its release, seems to have taken the world by storm. The consumer application has reached 100 million active users, making it the fastest-growing product ever. Users are intrigued by the tool's sophisticated capabilities, although apprehensive about its potential to upend numerous industries. 

One of the less discussed consequences of ChatGPT is the privacy risk it poses. Google only yesterday launched Bard, its own conversational AI, and others will undoubtedly follow. Technology firms engaged in AI development have certainly entered a race. 

The issue is that the underlying technology is built, in large part, on people's personal data. 

300 Billion Words, How Many Are Yours? 

ChatGPT is based on a large language model, which requires an enormous amount of data to operate and improve. The more data the model is trained on, the better it becomes at spotting patterns, anticipating what comes next, and producing credible text. 
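To make the idea concrete, here is a toy, purely illustrative sketch of "learning patterns to predict what comes next": it counts which word follows which in a tiny sample text and then suggests a continuation. Real models like ChatGPT use neural networks trained on billions of words rather than simple word counts, so this is only a conceptual analogy.

```python
from collections import Counter, defaultdict

# Toy illustration only: tally which word follows which in a tiny corpus,
# then "predict" the most frequently observed continuation. Large language
# models learn far richer patterns, but the goal of anticipating the next
# token is the same.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most commonly observed word that follows `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # e.g. 'cat', since "the cat" appears twice above
```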

OpenAI, the developer of ChatGPT, fed the model some 300 billion words systematically scraped from the internet – from books, articles, websites, and posts – which inevitably includes online users' personal information, gathered without their consent. 

Every blog post, product review, or article comment that exists, or ever existed, online stands a good chance of having been consumed by ChatGPT during training. 

What is the Issue? 

The gathered data used in order to train ChatGPT is problematic for numerous reasons. 

First, the data was collected without consent: none of the users involved were ever asked whether OpenAI could use their personal information. That is a clear violation of privacy, especially when the data is sensitive and can be used to identify us, our loved ones, or our location. 

Even when data is publicly available, its use can compromise what is known as contextual integrity, a cornerstone idea in discussions about privacy law: information about people should not be revealed outside of the context in which it was originally produced. 

Moreover, OpenAI offers no procedure for individuals to check whether the company stores their personal information or to request that it be deleted. Whether ChatGPT complies with the European General Data Protection Regulation (GDPR), which guarantees this right, is still being debated. 

This “right to be forgotten” is especially important in situations involving information that is inaccurate or misleading, which appears to be a regular occurrence with ChatGPT. 

Furthermore, the scraped data that ChatGPT was trained on may be confidential or protected by copyright. For instance, the tool replicated the opening few chapters of Joseph Heller's copyrighted book Catch-22. 

Finally, OpenAI did not pay for the internet data it downloaded. The individuals, website owners, and businesses that created it were not compensated. This is especially notable in light of the recent US$29 billion valuation of OpenAI, which is more than double its value in 2021. 

OpenAI has also recently announced ChatGPT Plus, a paid subscription plan that gives users ongoing access to the tool, faster response times, and priority access to new features. This model is expected to help generate $1 billion in revenue by 2024. 

None of this would have been possible without the usage of ‘our’ data, acquired and utilized without our consent. 

Time to Consider the Issue? 

According to some professionals and experts, ChatGPT is a “tipping point for AI”: the realisation of a technological advance that could revolutionize the way we work, learn, write, and even think. 

Despite its potential advantages, we must keep in mind that OpenAI is a private, for-profit organization whose objectives and business imperatives may not always coincide with the broader needs of society. 

The privacy hazards associated with ChatGPT should serve as a caution. And as users of an increasing number of AI technologies, we need to exercise extreme caution when deciding what data to provide such tools with.  

ChatGPT's Effective Corporate Usage Might Eliminate Systemic Challenges

 

Today's AI is highly developed. Artificial intelligence combines disciplines that essentially attempt to replicate the human brain's capacity to learn from experience and to make judgments based on that experience. Researchers use a variety of approaches to achieve this. One paradigm is brute force, in which the computer cycles through every possible solution to a problem until it finds the one that can be verified as correct.
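As a purely illustrative sketch of that brute-force paradigm (not any specific research system), the snippet below enumerates every candidate solution to a small problem and tests each one until a verifier confirms the correct answer.

```python
from itertools import product

SECRET = (4, 7, 1)  # hypothetical "unknown" answer the search must recover

def is_correct(candidate) -> bool:
    """Stand-in verifier: any yes/no check of a candidate solution."""
    return candidate == SECRET

def brute_force():
    # Cycle through all 1,000 possible three-digit combinations.
    for candidate in product(range(10), repeat=3):
        if is_correct(candidate):
            return candidate
    return None

print(brute_force())  # -> (4, 7, 1)
```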

"ChatGPT is really restricted, but good enough at some things to provide a misleading image of brilliance. It's a mistake to be depending on it for anything essential right now," said OpenAI CEO Sam Altman when the software was first launched on November 30. 

According to Nicola Morini Bianzino, global chief technology officer at EY, there is presently no killer use case for ChatGPT in industry that will significantly affect both the top and bottom lines. He projected an explosion of experimentation over the next six to twelve months, particularly once businesses are able to build on top of ChatGPT using OpenAI's API.
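As a rough illustration of what "building on top of ChatGPT" via the API can look like, here is a minimal sketch assuming the `openai` Python package (v1.x client style), an `OPENAI_API_KEY` environment variable, and the `gpt-3.5-turbo` chat model; exact model names and interfaces vary between library versions, so treat this as illustrative rather than definitive.

```python
from openai import OpenAI

# Minimal sketch of calling OpenAI's chat API from business code.
# Assumes the v1.x `openai` package and OPENAI_API_KEY in the environment.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute what is available
    messages=[
        {"role": "system", "content": "You summarize internal support tickets."},
        {"role": "user", "content": "Summarize: customer cannot reset their password after the latest update."},
    ],
)

print(response.choices[0].message.content)
```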

OpenAI CEO Sam Altman has himself acknowledged that ChatGPT and other generative AI technologies face several challenges, ranging from possible ethical implications to accuracy problems.

According to Bianzino, this vision of generative AI's future will have a big impact on enterprise software, since companies will have to start considering novel ways of organizing enterprise data that go beyond conventional analytics tools. The way people access and use information inside a company will change as ChatGPT and comparable tools advance and become capable of being trained securely on an enterprise's own data.

As per Bianzino, the creation of text and documentation will also require training and alignment to the appropriate ontology of the particular organization, as well as containment, storage, and control inside the enterprise. He stated that business executives, including the CTO and CIO, must be aware of these trends because, unlike quantum computing, which may not even be realized for another 10 to 15 years, the actual potential of generative AI may be realized within the next six to twelve months.

Decentralized peer-to-peer technology, combined with blockchain and smart-contract capabilities, can overcome traditional challenges of privacy, traceability, trust, and security. In this way, data owners can share insights from data without having to relocate it or otherwise give up ownership.



How Hackers Can Exploit ChatGPT, a Viral AI Chatbot


Cybernews researchers have discovered that ChatGPT, the recently launched AI-based chatbot, can provide hackers with step-by-step instructions on how to hack a website. The chatbot has caught the attention of the online community since its release. 

The team at Cybernews has warned that AI chatbots may be fun to play with, but they can also be dangerous, since they are able to give detailed advice on how to exploit a vulnerability. 

What is ChatGPT?

AI has stirred the imaginations of tech industry leaders and pop culture for decades. Machine learning technologies can automatically generate text, photos, videos, and other media, and they are all flourishing as investors pour billions of dollars into the field. 

While AI has opened up endless opportunities to help humans, experts warn about the potential dangers of creating an algorithm that could outperform human capabilities and get out of control. 

We are not talking about apocalyptic scenarios in which AI takes over the planet. However, AI has already started helping threat actors carry out malicious activities.

ChatGPT is the latest innovation in AI, made by the research company OpenAI, led by Sam Altman and backed by Microsoft, LinkedIn co-founder Reid Hoffman, Elon Musk, and Khosla Ventures. 

The AI chatbot can hold conversations with people while imitating various writing styles. The text ChatGPT produces is far more imaginative and complex than that of earlier Silicon Valley chatbots. ChatGPT was trained on large amounts of text data from the web, Wikipedia, and archived books. 

Popularity of ChatGPT

Within five days of ChatGPT's launch, over one million people had signed up to test the technology. Social media was flooded with users' queries and the AI's answers: writing poems, copywriting, plotting movies, giving tips on weight loss and healthy relationships, creative brainstorming, studying, and even programming. 

According to OpenAI, ChatGPT models can answer follow-up questions, challenge incorrect premises, reject inappropriate queries, and admit their own mistakes. 

ChatGPT for hacking

According to cybernews, the research team tried "using ChatGPT to help them find a website's vulnerabilities. Researchers asked questions and followed the guidance of AI, trying to check if the chatbot could provide a step-by-step guide on exploiting."

"The researchers used the 'Hack the Box' cybersecurity training platform for their experiment. The platform provides a virtual training environment and is widely used by cybersecurity specialists, students, and companies to improve hacking skills."

"The team approached ChatGPT by explaining that they were doing a penetration testing challenge. Penetration testing (pen test) is a method used to replicate a hack by deploying different tools and strategies. The discovered vulnerabilities can help organizations strengthen the security of their systems."

Potential threats of ChatGPT and AI

Experts believe that AI-based vulnerability scanners used by cybercriminals could wreak havoc on internet security. However, the Cybernews team also sees potential for AI in cybersecurity. 

Researchers can use insights from AI to prevent data leaks. AI can also help developers monitor and test implementations more efficiently.

AI keeps learning and, in a sense, has a mind of its own. It picks up new techniques and exploitation methods, and it can serve as a handbook for penetration testers, offering sample payloads that suit their needs. 

“Even though we tried ChatGPT against a relatively uncomplicated penetration testing task, it does show the potential for guiding more people on how to discover vulnerabilities that could, later on, be exploited by more individuals, and that widens the threat landscape considerably. The rules of the game have changed, so businesses and governments must adapt to it," said Mantas Sasnauskas, head of the research team. 





Does ChatGPT Bot Empower Cyber Crime?

Security experts have cautioned that a new AI bot called ChatGPT may be employed by cybercriminals to educate them on how to plan attacks and even develop ransomware. It was launched by the artificial intelligence R&D company OpenAI last month. 

When ChatGPT first opened to users, computer security expert Brendan Dolan-Gavitt wondered whether he could get the AI-powered chatbot to write malicious code, so he gave the program a basic capture-the-flag task to complete.

The task's code featured a buffer overflow vulnerability, which ChatGPT correctly identified, and the chatbot wrote a piece of code to exploit it. The program would have solved the task flawlessly were it not for one small error: it got the number of characters in the input wrong. 

The fact that ChatGPT failed Dolan-Gavitt's task, one he would give students at the start of a vulnerability analysis course, does not inspire confidence in large language models' ability to generate high-quality code. However, after spotting the mistake, Dolan-Gavitt asked the model to review its response, and this time ChatGPT got it right. 

Security researchers have used ChatGPT to rapidly complete a variety of offensive and defensive cybersecurity tasks, including crafting refined and persuasive phishing emails, creating workable Yara rules, identifying buffer overflows in code, generating evasion code that attackers could use to avoid threat detection, and even writing malware. 

Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, said he was able to get the tool to carry out a variety of offensive and defensive cybersecurity tasks, including writing a World Cup-themed email in perfect English, generating evasion code that can get around detection rules, and producing Sigma detection rules to find cybersecurity anomalies. 

ChatGPT is built on reinforcement learning, so it acquires new behaviour through interaction with people and user-generated prompts. This also implies that, over time, the program might pick up on some of the tricks researchers have used to get around its ethical checks, either through user input or through improvements made by its administrators. 
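As a toy illustration of learning from feedback (not OpenAI's actual training pipeline, which applies reinforcement learning from human feedback to a neural network), the sketch below nudges scores for a few canned replies based on simulated user ratings, so better-rated replies get chosen more often over time.

```python
import random

# Toy feedback loop: candidate replies start with equal scores, and each
# user rating nudges a reply's score up or down, so highly rated replies
# are selected more often. A bandit-style caricature of the idea only.
replies = {"reply_a": 0.0, "reply_b": 0.0, "reply_c": 0.0}
LEARNING_RATE = 0.1

def choose_reply() -> str:
    # Mostly pick the best-scoring reply, occasionally explore another one.
    if random.random() < 0.1:
        return random.choice(list(replies))
    return max(replies, key=replies.get)

def record_feedback(reply: str, reward: float) -> None:
    """reward: +1.0 for a helpful rating, -1.0 for an unhelpful one."""
    replies[reply] += LEARNING_RATE * (reward - replies[reply])

for _ in range(200):
    reply = choose_reply()
    reward = 1.0 if reply == "reply_b" else -1.0  # pretend users prefer reply_b
    record_feedback(reply, reward)

print(max(replies, key=replies.get))  # -> 'reply_b' after enough feedback
```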

Multiple researchers have discovered techniques to get around ChatGPT's guardrails, which are meant to stop it from doing malicious things such as providing instructions on how to make a bomb or writing malicious code. For now, ChatGPT's code output is far from optimal and demonstrates many of the drawbacks of relying solely on AI tools. However, as these models grow in sophistication, they are likely to play a larger role in the creation of malicious software. 

Facebook Users Phished by a Chatbot Campaign


You might be surprised to learn that more users check their chat apps than their social profiles. With more than 1.3 billion users, Facebook Messenger is the most popular mobile messaging service in the world and thus presents enormous commercial opportunities to marketers.

Cybersecurity company Trustwave SpiderLabs has discovered a fresh phishing campaign that abuses Messenger's chatbot feature.

How do you make it all work? 

Karl Sigler, senior security research manager at Trustwave SpiderLabs, explains: "You don't just click on a link and then be offered to download an app - most people are going to grasp that's an attack and not click on it. In this attack, there's a link that takes you to a channel that looks like tech help, asking for information you'd expect tech support to seek for, and that escalating of the social-engineering part is unique with these types of operations."

First, a fake email from Facebook is sent to the victim, warning that their page has violated the site's community standards and will be deleted within 48 hours. The email also includes an "Appeal Now" link that the victim can supposedly use to challenge the removal.

Clicking the link from the email opens a Messenger conversation with a chatbot posing as a member of the Facebook support team. The chatbot offers victims another "Appeal Now" button, and users who click it are directed to a Google Firebase-hosted website that opens in a new tab.

According to Trustwave's analysis, Firebase is a software development platform that gives developers tools to build, improve, and scale apps, and it makes sites easy to set up and deploy. Taking advantage of this, the spammers created a website impersonating a Facebook "Support Inbox" where users can supposedly dispute the reported deletion of their page. 

Increasing Authenticity in Cybercrime 

Part of what makes this campaign effective is that chatbots are now a common feature of marketing and live support, so people are not inclined to be suspicious of their content, especially when it appears to come from a fairly reliable source. 

According to Sigler, the campaign uses Facebook's genuine chat feature: the conversation is labelled "Page Support" and victims are even given a case number, which is likely enough to get past the scrutiny many people apply when trying to spot phishing red flags.

Attacks like this, Sigler warns, can be highly risky for proprietors of business pages. He notes that "this may be very effectively utilized in a targeted-type of attack." With Facebook login information and phone numbers, hackers can do a lot of harm to business users, Sigler adds.

As per Sigler, "If the person in charge of your social media falls for this type of scam, suddenly, your entire business page may be vandalized, or they might exploit entry to that business page to acquire access to your clients directly utilizing the credibility of that Facebook profile." They will undoubtedly pursue more network access and data as well. 

Red flags to look out for 

Fortunately, the email contains a few warning signs that should help recipients recognize it as fake. For instance, the message's text has several grammatical and spelling errors, and the recipient's name appears as "Policy Issues," which is not how Facebook addresses such cases.

More red flags were spotted by the researchers: the chatbot's page used the handle @case932571902, which is clearly not a Facebook support handle, and it was empty, with no followers or posts. Despite appearing inactive, the page carried the 'Very Responsive' badge, which Facebook awards to pages that respond to 90% of messages within 15 minutes. To look legitimate, it even used the Messenger logo as its profile image. 

Researchers claim that the attackers are requesting passwords, email addresses, cell phone numbers, first and last names, and page names. 

This campaign is a skillful example of social engineering, since the malicious actors abuse the very platform they are spoofing. Researchers urge everyone to exercise caution online, avoid responding to suspicious messages, and protect their credentials with strong security measures.

Phishing Scam Adds a Chatbot Like Twist to Steal Data

 

According to research published Thursday by Trustwave's SpiderLabs team, a newly uncovered phishing campaign aims to reassure potential victims that submitting credit card details and other personal information is safe. 

As per the research, instead of just embedding an information-stealing link directly in an email or attached document, the process involves a "chatbot-like" page that tries to engage the victim and build trust. 

Researcher Adrian Perez stated, “We say ‘chatbot-like’ because it is not an actual chatbot. The application already has predefined responses based on the limited options given.” 

Responses to the phoney bot lead the potential victim through a number of steps that include a false CAPTCHA, a delivery service login page, and finally a page that grabs credit card information. Some of the elements in the process, like the bogus chatbot itself, aren't very sophisticated: according to SpiderLabs, the fake CAPTCHA is nothing more than a JPEG image. However, a few things do happen in the background on the credit card page. 

“The credit card page has some input validation methods. One is card number validation, wherein it tries to not only check the validity of the card number but also determine the type of card the victim has inputted,” Perez stated.
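The attackers' exact validation code has not been published, but card-number checks like the one Perez describes typically combine a Luhn checksum with prefix matching to guess the card brand. The sketch below is an illustrative reconstruction of that kind of check, not the campaign's actual script.

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum, the standard structural validity test for card numbers."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 if the result exceeds 9.
    for i, digit in enumerate(reversed(digits)):
        if i % 2 == 1:
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

def card_type(number: str) -> str:
    """Rough issuer detection from well-known number prefixes (illustrative)."""
    if number.startswith("4"):
        return "Visa"
    if number[:2] in {"51", "52", "53", "54", "55"}:
        return "Mastercard"
    if number[:2] in {"34", "37"}:
        return "American Express"
    return "Unknown"

print(luhn_valid("4111111111111111"), card_type("4111111111111111"))  # True Visa
```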

The campaign was identified in late March, according to the company, and it was still operating as of Thursday morning. The SpiderLabs report is only the latest example of fraudsters' cleverness when it comes to credit card data. In April, Trend Micro researchers warned that fraudsters were utilising phoney "security alerts" from well-known banks in phishing scams. 

Last year, discussions on dark web forums about deploying phishing attacks to capture credit card information grew, according to Gemini Advisory's annual report. Another prevalent approach is stealing card info directly from shopping websites. Researchers at RiskIQ said this week that they've noticed a "constant uptick" in skimming activity recently, though not all of it is linked to known Magecart malware users.