
ChatGPT's Effective Corporate Usage Might Eliminate Systemic Challenges

 

Today's AI is highly developed. Artificial intelligence combines disciplines that attempt to replicate the human brain's capacity to learn from experience and to make judgments based on that experience. Researchers use a variety of techniques to achieve this. One paradigm relies on brute force: the computer system cycles through all possible solutions to a problem until it finds one that can be verified as correct.
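As a toy illustration of the brute-force paradigm just described, the sketch below (an illustrative assumption, not from the article) cycles through candidate answers until a check confirms one is right:

```python
def brute_force_solve(candidates, is_correct):
    """Try every candidate solution until one passes the correctness check."""
    for candidate in candidates:
        if is_correct(candidate):
            return candidate
    return None  # exhausted the search space without success

# Toy problem: find the non-negative integer whose square is 169.
solution = brute_force_solve(range(1000), lambda n: n * n == 169)
```

Real systems rarely enumerate the whole space this naively, but the principle (generate, test, stop at the first verified answer) is the same.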

"ChatGPT is really restricted, but good enough at some things to provide a misleading image of brilliance. It's a mistake to be relying on it for anything essential right now," said OpenAI CEO Sam Altman when the software first launched on November 30.

According to Nicola Morini Bianzino, global chief technology officer at EY, there is presently no killer use case for ChatGPT in industry that will significantly affect both the top and bottom lines. He projected an explosion of experimentation over the next six to twelve months, particularly once businesses are able to build on top of ChatGPT using OpenAI's API.

Meanwhile, OpenAI CEO Sam Altman has acknowledged that ChatGPT and other generative AI technologies face several challenges, ranging from possible ethical implications to accuracy problems.

According to Bianzino, this trajectory for generative AI will have a big impact on enterprise software, since companies will have to start considering novel ways to organize data inside an enterprise that go beyond conventional analytics tools. The ways people access and use information inside the company will change as ChatGPT and comparable tools advance and become more capable of being trained on an enterprise's data in a secure manner.

As per Bianzino, the creation of text and documentation will also require training and alignment to the appropriate ontology of the particular organization, as well as containment, storage, and control inside the enterprise. He stated that business executives, including the CTO and CIO, must be aware of these trends because, unlike quantum computing, which may not even be realized for another 10 to 15 years, the actual potential of generative AI may be realized within the next six to twelve months.

Decentralized peer-to-peer technology, combined with blockchain and smart-contract capabilities, can overcome the traditional challenges of privacy, traceability, trust, and security. Data owners can then share insights from data without having to relocate it or otherwise give up ownership of it.



How Hackers Can Exploit ChatGPT, a Viral AI Chatbot


Cybernews researchers have discovered that ChatGPT, an AI-based chatbot launched recently to broad attention from the online community, can provide hackers with step-by-step instructions on how to hack a website.

The team at Cybernews has warned that AI chatbots may be fun to play with, but they are also dangerous, as they can give detailed information on how to exploit a vulnerability.

What is ChatGPT?

AI has stirred the imaginations of leaders in the tech industry and pop culture for decades. Machine learning technologies that automatically generate text, photos, videos, and other media are flourishing in the tech sphere as investors pour billions of dollars into the field.

While AI has opened endless opportunities to help humans, experts warn about the potential danger of creating an algorithm that outperforms human capabilities and gets out of control.

Apocalyptic scenarios of AI taking over the planet are not the issue here. In today's reality, however, AI has already begun helping threat actors carry out malicious activities.

ChatGPT is the latest innovation in AI, made by the research company OpenAI, which is led by Sam Altman and backed by Microsoft, LinkedIn co-founder Reid Hoffman, Elon Musk, and Khosla Ventures.

The AI chatbot can hold conversations with people while imitating various writing styles. The text ChatGPT produces is far more imaginative and complex than that of earlier chatbots built in Silicon Valley. ChatGPT was trained on large amounts of text data from the web, Wikipedia, and archived books.

Popularity of ChatGPT

Within five days of the ChatGPT launch, over one million people had signed up to test the technology. Social media was flooded with users' queries and the AI's answers: writing poems, copywriting, plotting movies, offering tips for weight loss or healthy relationships, creative brainstorming, studying, and even programming.

According to OpenAI, ChatGPT models can answer follow-up questions, challenge incorrect premises, reject inappropriate queries, and admit their own mistakes.

ChatGPT for hacking

According to Cybernews, the research team tried "using ChatGPT to help them find a website's vulnerabilities. Researchers asked questions and followed the guidance of AI, trying to check if the chatbot could provide a step-by-step guide on exploiting."

"The researchers used the 'Hack the Box' cybersecurity training platform for their experiment. The platform provides a virtual training environment and is widely used by cybersecurity specialists, students, and companies to improve hacking skills."

"The team approached ChatGPT by explaining that they were doing a penetration testing challenge. Penetration testing (pen test) is a method used to replicate a hack by deploying different tools and strategies. The discovered vulnerabilities can help organizations strengthen the security of their systems."

Potential threats of ChatGPT and AI

Experts believe that AI-based vulnerability scanners used by cybercriminals could wreak havoc on internet security. However, the Cybernews team also sees the potential of AI in cybersecurity.

Researchers can use insights from AI to prevent data leaks. AI can also help developers in monitoring and testing implementation more efficiently.

AI keeps learning, and in a sense it has a mind of its own. It picks up newer techniques of advanced exploitation, and it can work as a handbook for penetration testers, offering sample payloads that fulfill their needs.

“Even though we tried ChatGPT against a relatively uncomplicated penetration testing task, it does show the potential for guiding more people on how to discover vulnerabilities that could, later on, be exploited by more individuals, and that widens the threat landscape considerably. The rules of the game have changed, so businesses and governments must adapt to it," said Mantas Sasnauskas, head of the research team. 





Does ChatGPT Bot Empower Cyber Crime?

Security experts have cautioned that a new AI bot called ChatGPT may be employed by cybercriminals to educate them on how to plan attacks and even develop ransomware. It was launched by the artificial intelligence R&D company OpenAI last month.

When the ChatGPT application first allowed users to interact with it, computer security expert Brendan Dolan-Gavitt wondered whether he could get the AI-powered chatbot to write malicious code. He then gave the program a basic capture-the-flag task to complete.

The code featured a buffer overflow vulnerability, which ChatGPT accurately identified, and it wrote a piece of code to exploit it. The exploit would have worked flawlessly if not for a small error: it miscounted the number of characters in the input.

The fact that ChatGPT failed Dolan-Gavitt's task, which he would have given students at the start of a vulnerability analysis course, does not instill trust in large language models' capacity to generate high-quality code. However, after Dolan-Gavitt pointed out the mistake and asked the model to review its response, ChatGPT got it right.

Security researchers have used ChatGPT to rapidly complete a variety of offensive and defensive cybersecurity tasks, including crafting refined and persuasive phishing emails, writing workable YARA rules, identifying buffer overflows in code, generating evasion code that attackers could use to avoid threat detection, and even writing malware.

Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, said he was able to get the program to carry out a variety of offensive and defensive cybersecurity tasks, such as composing a World Cup-themed email in perfect English and generating both evasion code that can get around detection rules and Sigma detection rules for finding cybersecurity anomalies.

Reinforcement learning is a foundation of ChatGPT, which means it acquires new knowledge through interaction with people and user-generated prompts. It also implies that, over time, the program might pick up some of the tricks researchers have employed to get around its ethical checks, whether through user input or through improvements made by its administrators.

Multiple researchers have discovered techniques to get past ChatGPT's restrictions, which are meant to stop it from doing malicious things such as providing instructions on how to make a bomb or writing malicious code. For now, ChatGPT's coding is less than optimal and demonstrates many of the drawbacks of relying solely on AI tools. However, as these models grow in complexity, they are likely to become more significant in the creation of malicious software.

Facebook Users Phished by a Chatbot Campaign


You might be surprised to learn that more users check their chat apps than their social profiles. With more than 1.3 billion users, Facebook Messenger is the most popular mobile messaging service in the world and thus presents enormous commercial opportunities to marketers.

Cybersecurity company SpiderLabs has discovered a fresh phishing campaign that abuses Messenger's chatbot feature.

How do you make it all work? 

Karl Sigler, senior security research manager at Trustwave SpiderLabs, explains: "You don't just click on a link and then be offered to download an app - most people are going to grasp that's an attack and not click on it. In this attack, there's a link that takes you to a channel that looks like tech help, asking for information you'd expect tech support to seek for, and that escalating of the social-engineering part is unique with these types of operations."

First, a fake email from Facebook is sent to the victim, warning that their page has violated the site's community standards and will be deleted within 48 hours. The email also includes an "Appeal Now" link that the victim can use to challenge the removal.

The link, which claims to give users a chance to appeal, opens a Messenger conversation instead. There, a chatbot posing as a member of the Facebook support staff offers victims another "Appeal Now" button. Users who click it are directed, in a new tab, to a website hosted on Google Firebase.

According to Trustwave's analysis, "Firebase is a software development platform that offers developers several tools to help build, improve, and grow their apps," and it makes sites easy to set up and deploy. Taking advantage of this, the spammers created a website impersonating a Facebook "Support Inbox" where users can supposedly dispute the reported deletion of their page.

Increasing Authenticity in Cybercrime 

One factor in this campaign's effectiveness is that chatbots are now a common part of marketing and live support, so people are not inclined to be wary of their contents, especially when they appear to come from a fairly reliable source.

According to Sigler, "the campaign employs the genuine Facebook chat function. It reads 'Page Support,' and a case number has been provided. That's likely enough to get past the obstacles many individuals put up when trying to spot phishing red flags."

Attacks like this, Sigler warns, can be highly risky for owners of business pages. He notes that "this may be very effectively utilized in a targeted type of attack." With Facebook login information and phone numbers, hackers can do a lot of damage to business users, Sigler adds.

As per Sigler, "If the person in charge of your social media falls for this type of scam, suddenly your entire business page may be vandalized, or attackers might exploit access to that business page to reach your clients directly, using the credibility of that Facebook profile." They will undoubtedly pursue further network access and data as well.

Red flags to look out for 

Fortunately, the email's content contains a few warning signs that should enable recipients to recognize the message as spoofed. For instance, the text contains a few grammatical and spelling errors, and the recipient's name appears as "Policy Issues," which is not how Facebook handles such cases.

The experts detected more red flags: the chatbot's page used the handle @case932571902, which is clearly not a Facebook handle. The page is also empty, with neither followers nor posts. Despite appearing inactive, it carried the 'Very Responsive' badge, which Facebook defines as having a response rate of 90% and replying within 15 minutes. To look genuine, it even used the Messenger logo as its profile image.

Researchers claim that the attackers are requesting passwords, email addresses, cell phone numbers, first and last names, and page names. 
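The red flags described above lend themselves to simple mechanical checks. The sketch below is purely hypothetical (the function, thresholds, and messages are assumptions, not Trustwave's tooling), showing how such signals could be flagged:

```python
import re

def phishing_red_flags(sender_handle, follower_count, post_count, body_text):
    """Return heuristic warnings based on signals like those in this campaign."""
    flags = []
    # Legitimate support pages do not use auto-generated case-number handles.
    if re.fullmatch(r"@case\d+", sender_handle):
        flags.append("handle looks auto-generated")
    # A support page with no history is suspicious.
    if follower_count == 0 and post_count == 0:
        flags.append("page has no followers or posts")
    # Artificial deadlines are a classic phishing pressure tactic.
    if re.search(r"deleted within \d+ hours", body_text, re.IGNORECASE):
        flags.append("artificial deadline pressure")
    return flags

warnings = phishing_red_flags("@case932571902", 0, 0,
                              "Your page will be deleted within 48 hours.")
```

A real filter would weigh many more signals, but even crude heuristics like these catch the obvious tells.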

This campaign is a skillful example of social engineering, since the malicious actors are exploiting the very platform they are spoofing. Researchers urge everyone to exercise caution when using the internet and to avoid responding to suspicious messages. Strong, unique credentials will also help protect your accounts.

Phishing Scam Adds a Chatbot-Like Twist to Steal Data

 

According to research published Thursday by Trustwave's SpiderLabs team, a newly uncovered phishing campaign aims to reassure potential victims that submitting credit card details and other personal information is safe. 

As per the research, instead of simply embedding an information-stealing link directly in an email or attached document, the operation involves a "chatbot-like" page that tries to engage with the victim and build their confidence.

Researcher Adrian Perez stated, “We say ‘chatbot-like’ because it is not an actual chatbot. The application already has predefined responses based on the limited options given.” 

Responses to the phony bot lead the potential victim through a number of steps, including a false CAPTCHA, a delivery-service login page, and finally a page that grabs credit card information. Some elements in the process, like the bogus chatbot, are not very sophisticated; according to SpiderLabs, the CAPTCHA is nothing more than a JPEG file. On the credit card page, however, a few things happen in the background.
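Because the "chatbot-like" page is not an actual chatbot, its behavior amounts to a fixed script: each option the victim clicks maps to the next predefined message. The sketch below is purely illustrative (the messages and step names are assumptions, not the campaign's actual code):

```python
# Hypothetical canned-response script: no language model involved, just a
# mapping from the victim's last click to the next message and button options.
SCRIPT = {
    "start":      ("Hello! We could not deliver your parcel.", ["Reschedule"]),
    "Reschedule": ("Please complete the CAPTCHA below.", ["Done"]),
    "Done":       ("Log in to your delivery account to continue.", ["Log in"]),
    "Log in":     ("Enter your card details to pay the redelivery fee.", []),
}

def next_step(last_click):
    """Return the next predefined message and the buttons offered."""
    return SCRIPT[last_click]
```

This is why the researchers call it "chatbot-like": the victim's choices are limited to the buttons the script offers, so every path ends at the data-grab page.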

“The credit card page has some input validation methods. One is card number validation, wherein it tries to not only check the validity of the card number but also determine the type of card the victim has inputted,” Perez stated.
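Card-number validation of the kind Perez describes is typically a Luhn checksum plus a lookup of issuer prefixes. The sketch below is an assumption about how such a validator might work, not the campaign's actual code:

```python
def luhn_valid(card_number: str) -> bool:
    """Check a card number against the Luhn checksum."""
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    # Double every second digit from the right; subtract 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def card_type(card_number: str) -> str:
    """Very rough issuer detection from leading digits (illustrative only)."""
    if card_number.startswith("4"):
        return "Visa"
    if card_number[:2] in {"51", "52", "53", "54", "55"}:
        return "Mastercard"
    if card_number[:2] in {"34", "37"}:
        return "Amex"
    return "unknown"
```

Note that the Luhn check only catches typos; it says nothing about whether a card is real or active.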

The campaign was identified in late March, according to the company, and it was still operating as of Thursday morning. The SpiderLabs report is only the latest example of fraudsters' ingenuity when it comes to credit card data. In April, Trend Micro researchers warned that fraudsters were using phony "security alerts" from well-known banks in phishing scams.

Last year, discussions on dark web forums about deploying phishing attacks to capture credit card information grew, according to Gemini Advisory's annual report. Another prevalent approach is stealing card data directly from shopping websites. Researchers at RiskIQ said this week that they have noticed a "constant uptick" in skimming activity recently, though not all of it is linked to known Magecart malware users.