Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label Documents. Show all posts

ChatGPT Vulnerability Exposes Users to Long-Term Data Theft— Researcher Proves It

 



Independent security researcher Johann Rehberger found a flaw in the memory feature of ChatGPT. Hackers can manipulate the stored information that gets extracted to steal user data by exploiting the long-term memory setting of ChatGPT. This is actually an "issue related to safety, rather than security" as OpenAI termed the problem, showing how this feature allows storing of false information and captures user data over time.

Rehberger had initially reported the incident to OpenAI. The point was that the attackers could fill the AI's memory settings with false information and malicious commands. OpenAI's memory feature, in fact, allows the user's information from previous conversations to be put in that memory so during a future conversation, the AI can recall the age, preferences, or any other relevant details of that particular user without having been fed the same data repeatedly.

But what Rehberger had highlighted was the vulnerability that hackers capitalised on to permanently store false memories through a technique known as prompt injection. Essentially, it occurs when an attacker manipulates the AI by malicious content attached to emails, documents, or images. For example, he demonstrated how he could get ChatGPT to believe he was 102 and living in a virtual reality of sorts. Once these false memories were implanted, they could haunt and influence all subsequent interaction with the AI.


How Hackers Can Use ChatGPT's Memory to Steal Data

In proof of concept, Rehberger demonstrated how this vulnerability can be exploited in real-time for the theft of user inputs. In chat, hackers can send a link or even open an image that hooks ChatGPT into a malicious link and redirects all conversations along with the user data to a server owned by the hacker. Such attacks would not have to be stopped because the memory of the AI holds the instructions planted even after starting a new conversation.

Although OpenAI has issued partial fixes to prevent memory feature exploitation, the underlying mechanism of prompt injection remains. Attackers can still compromise ChatGPT's memory by embedding knowledge in their long-term memory that may have been seeded through unauthorised channels.


What Users Can Do

There are also concerns for users who care about what ChatGPT is going to remember about them in terms of data. Users need to monitor the chat session for any unsolicited shift in memory updates and screen regularly what is saved into and deleted from the memory of ChatGPT. OpenAI has put out guidance on how to manage the memory feature of the tool and how users may intervene in determining what is kept or deleted.

Though OpenAI did its best to address the issue, such an incident brings out a fact that continues to show how vulnerable AI systems remain when it comes to safety issues concerning user data and memory. Regarding AI development, safety regarding the protected sensitive information will always continue to raise concerns from developers to the users themselves.

Therefore, the weakness revealed by Rehberger shows how risky the introduction of AI memory features might be. The users need to be always alert about what information is stored and avoid any contacts with any content they do not trust. OpenAI is certainly able to work out security problems as part of its user safety commitment, but in this case, it also turns out that even the best solutions without active management on the side of a user will lead to breaches of data.




Google's Chatbot Bard Aims for the Top, Targeting YouTube and Search Domains

 


There has been a lot of excitement surrounding Google's AI chatbot Bard - a competitor to OpenAI's ChatGPT, which is set to become "more widely available to the public in the coming weeks." However, at least one expert has pointed out that in its demo, Bard made a factual error. 

As a result of the AI competition between Google and OpenAI, the Microsoft-backed company that created ChatGPT and provides artificial intelligence services for its products, Google has now integrated its chatbot Bard into apps like YouTube, Gmail and Drive, according to a company announcement, published Tuesday. 

In New York, a Google executive said on Thursday at the Reuters NEXT conference that the company's experimental chatbot Bard represents a path to the development of another product that will reach two billion users. In an interview with TechCrunch, Google's Product Lead Jack Krawczyk commented that Bard has laid the foundation for Google to attract even more customers with the help of its artificial intelligence feature, which enables consumers to brainstorm and get information using new artificial intelligence. 

It is possible, for instance, to ask Bard to plan a trip for an upcoming date, complete with flight options and your choice of airline. Users could also ask the tool to summarize meeting notes that have been made in Google Drive documents that they have recently uploaded. Several improvements will be made to Bard this coming Tuesday, including connections to Google's other services. 

The chatbot is also capable of communicating with users in various languages, with the ability to perform a variety of fact-checking functions as well as a broader upgrade to the larger language model that is the foundation of the tool. Google's Bard has been improving its features for nearly six months after it was first introduced to the public, marking the biggest update to the program in that period. 

Among the tech giants, Google, Microsoft, and ChatGPT creator OpenAI are racing against one another, as they roll out increasingly sophisticated consumer-facing artificial intelligence technologies, and they hope to convince users of their value as more than just a gimmick to them.

It is now believed that Google, which recently issued an internal code red when OpenAI beat it to the release of its artificial intelligence chatbot, is using its other widely used software programs to make Bard more useful as a result of the code red. It’s relatively unknown that Bard has received the same amount of attention as ChatGPT. 

According to data from Similarweb, a company that analyzes data for companies, ChatGPT had nearly 1.5 billion desktop and mobile visits in August, substantially more than Google’s A.I. tool and other competitors, which had just 50 million visits. Bard recorded just under 200 million desktop and mobile internet visits throughout August, while ChatGPT by OpenAI also registered 200 million visits during the same period. 

In an interview, Jack Krawczyk, Google's product lead for Bard, stated that Google was aware of the limitations that had caused the chatbot to not appeal to as many people as it should have. Users had told Mr. Krawczyk that the product was neat and novel, but that it did not integrate very well with their personal lives. 

Earlier this month, Google released what it called Bard Extensions, which is an extension of the ChatGPT plug-in that OpenAI announced in March, which allows ChatGPT to work with updated information provided by third-party companies such as Expedia, Instacart, and OpenTable, their own web services and voice apps. 

As a result of the new updates, Google is going to be trying to replicate some of the search engine's capabilities by including Flights, Hotels, and Maps, so users can conduct travel and transportation research using Google's search engine. In addition, Bard may be closer to becoming a more personalized assistant for its users, where they can ask which emails they missed and which points in a document are of most importance to them. 

With the help of Google's large language model, an artificial intelligence algorithm trained on vast amounts of data, Bard had been able to assist students with writing drafts of essays or planning their friend's baby showers. 

As a result of these new extensions, Bard will now draw information from a host of Google services, as well. Now, Bard will be able to retrieve information from YouTube, Google Maps, Flights, and Hotels as well. According to Google, Bard can be accessed through several other services and ask the service for things like "Show me how to write a good man speech and show me YouTube videos about it for inspiration," or suggestions for travel, complete with driving directions, etc.

The Bard extensions can be disabled by the user at any moment by choosing to do so. It is also possible for users to link their Gmail, Docs and Google Drive accounts with Bard to the tool so that it will be able to help them analyze and manage their data. 

For instance, the tool might be able to help with queries such as: "Find the most recent lease agreement in my Drive and calculate how much my security deposit was," Google said in a statement. In a statement, the firm listed that the company will not use users' personal Google Workspace information to train Bard or to serve targeted advertising to users and that users can withdraw their permission at any time in case they do not want it to access their personal information. 

By giving Bard access to a wealth of personal information as well as popular services such as Gmail, Google Maps, and YouTube, Bard is, in theory, making itself even more helpful for its users and gaining their confidence as a result. Using Bard, Google posits that a person planning a group trip to the Grand Canyon may be able to get the dates that suit everyone, get flight and hotel options, provide directions based on Maps, and also take advantage of videos with a variety of useful information available on YouTube.

Researchers Discover Kimusky Infra Targeting South Korean Politicians and Diplomats

 

Kimusky, a North Korean nation-state group, has been linked to a new wave of nefarious activities targeting political and diplomatic entities in its southern counterpart in early 2022. 

The cluster was codenamed GoldDragon by Russian cybersecurity firm Kaspersky, with infection chains resulting to the implementation of Windows malware designed to file lists, user keystrokes, and stored web browser login credentials. South Korean university professors, think tank researchers, and government officials are among the potential victims. 

Kimsuky, also known as Black Banshee, Thallium, and Velvet Chollima, is a prolific North Korean advanced persistent threat (APT) group that targets entities globally, but with a primary focus on South Korea, to gather intelligence on various topics of interest to the regime.

The group, which has been active since 2012, has a history of using social engineering tactics, spear-phishing, and watering hole attacks to obtain sensitive information from victims.

Late last month, cybersecurity firm Volexity linked the actor to an intelligence-gathering mission aimed at siphon email content from Gmail and AOL using Sharpext, a malicious Chrome browser extension.

The latest campaign employs a similar tactic, with the attack sequence initiated by spear-phishing messages containing macro-embedded Microsoft Word documents supposedly comprising content related to geopolitical issues in the region. Alternative initial access routes are also said to use HTML Application (HTA) and Compiled HTML Help (CHM) files as decoys in order to compromise the system.

Whatever method is used, the initial access is followed by a remote server dropping a Visual Basic Script that is orchestrated to fingerprint the machine and retrieve additional payloads, including an executable capable of exfiltrating sensitive information.

The attack is unique in that it sends the victim's email address to the command-and-control (C2) server if the recipient clicks on a link in the email to download additional documents. If the request does not include the expected email address, a harmless document is returned.

To complicate matters even further, the first-stage C2 server forwards the victim's IP address to another VBS server, which compares it to an incoming request generated after the target opens the bait document. The two C2 servers' "victim verification methodology" ensures that the VBScript is distributed only when the IP address checks are successful, indicating a highly targeted approach.

"The Kimsuky group continuously evolves its malware infection schemes and adopts novel techniques to hinder analysis. The main difficulty in tracking this group is that it's tough to acquire a full-infection chain," Kaspersky researcher Seongsu Park concluded.