Users' Private Info Accidentally Made Public by ChatGPT Bug

OpenAI has revealed additional information, including the possibility that some users' financial information may have been compromised.

 

After taking ChatGPT offline on Monday, OpenAI has revealed additional information, including the possibility that some users' financial information may have been compromised. 

A bug in redis-py, which led to a caching problem, meant that certain active users could potentially see the last four digits and expiration date of another user's credit card, along with their first and last name, email address, and payment address, the company says in a post. Users might also have seen snippets of other people's conversation histories. 

It's not the first time cache problems have let users see each other's data; in one famous instance, on Christmas Day in 2015, Steam users were served pages containing data from other users' accounts. There is some irony in the fact that OpenAI devotes so much attention and research to the potential security and safety repercussions of its AI, yet it was caught off guard by a fairly well-known class of security flaw. 

The company said that about 1.2 percent of ChatGPT Plus subscribers who used the service between 4AM and 1PM ET on March 20 may have been affected by the payment information leak. 

According to OpenAI, there were two situations in which payment information might have been exposed to an unauthorised user. During that window, if a user visited the My account > Manage subscription page, they might have seen details belonging to another ChatGPT Plus subscriber who was actively using the service. The company also says that some subscription confirmation emails sent during the incident went to the wrong recipient and contained the last four digits of another user's credit card number. 

The company says it has no evidence that either of these things happened before March 20th, though it is plausible that both did. OpenAI has contacted users whose payment information may have been exposed. 

It appears that caching was at the heart of how this whole thing came about. The short version is that the company uses a programme called Redis to cache user information. In some cases, cancelling a Redis request could result in corrupted data being returned for a subsequent, unrelated request, which wasn't supposed to happen. Normally, the software would receive the data, recognise that it wasn't what it had asked for, and raise an error.
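
For readers unfamiliar with the setup, here is a minimal sketch of what caching account data in Redis with the redis-py client mentioned above typically looks like, assuming a local Redis server on the default port. The key names and fields are purely illustrative, not OpenAI's actual schema.

import json
from typing import Optional

import redis  # redis-py, the client library mentioned above

# Assumes a local Redis server on the default port.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cache_account(user_id: str, record: dict, ttl_seconds: int = 300) -> None:
    # Store the serialised record under a per-user key with an expiry.
    r.setex(f"account:{user_id}", ttl_seconds, json.dumps(record))

def get_cached_account(user_id: str) -> Optional[dict]:
    raw = r.get(f"account:{user_id}")
    return json.loads(raw) if raw else None

cache_account("alice", {"email": "alice@example.com", "card_last4": "1234"})
print(get_cached_account("alice"))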

If the other user happened to be requesting the same type of data, however (say, trying to view their account page when the stray data was someone else's account information), the software decided everything was fine and showed it to them. 

In other words, users could see other people's payment information and conversation history because they were being served cached material originally destined for someone else, material that never arrived because of a cancelled request. That is also why it only affected people who were actively using the service: the software doesn't cache any data for users who aren't active. 
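
To make that failure mode concrete, below is a deliberately simplified asyncio sketch, not redis-py's actual code, of how a shared connection that matches replies to requests purely by arrival order can hand one user's data to another once a request is cancelled mid-flight. All names here are made up for illustration.

import asyncio

class ToyPipelinedConnection:
    """Toy stand-in for a shared async connection: replies are matched to
    requests purely by the order they arrive in. Not redis-py code; it
    only models the failure mode."""

    def __init__(self) -> None:
        self._replies: asyncio.Queue = asyncio.Queue()

    async def request(self, command: str, simulated_reply: dict) -> dict:
        # "Send" the command; the server answers a little later.
        loop = asyncio.get_running_loop()
        loop.call_later(0.01, self._replies.put_nowait, simulated_reply)
        # Read the next reply off the connection and assume it is ours.
        return await self._replies.get()

async def main() -> None:
    conn = ToyPipelinedConnection()

    # User A's request is cancelled after the command went out but before
    # its reply was read, so the reply stays queued on the connection.
    task_a = asyncio.create_task(
        conn.request("GET account:A", {"owner": "A", "card_last4": "1234"})
    )
    await asyncio.sleep(0.001)  # let the command be sent
    task_a.cancel()
    try:
        await task_a
    except asyncio.CancelledError:
        pass

    # User B's request now reads the first reply waiting on the
    # connection, which is A's data, not B's.
    reply_for_b = await conn.request(
        "GET account:B", {"owner": "B", "card_last4": "9999"}
    )
    print("User B was served:", reply_for_b)  # owner is "A"

asyncio.run(main())

Running it prints user A's record as the answer to user B's request, and the connection is left one reply out of step, since B's own reply now sits unread.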

What made matters worse was that, on the morning of March 20, OpenAI made a server-side change that unintentionally increased the number of Redis requests being cancelled, which raised the odds that someone would be handed a cached reply meant for another user.

According to OpenAI, the fault, which only affected a very specific version of the redis-py client, has been fixed, and the library's maintainers have been "great collaborators." The company also says it is changing its own software and procedures to ensure something similar doesn't happen again. The changes include adding "redundant checks" to confirm that the data being served actually belongs to the user requesting it, and reducing the likelihood that its Redis cluster will throw errors under heavy load.
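
OpenAI hasn't described exactly what those redundant checks look like, but the basic idea can be sketched roughly as follows: stamp every cached payload with the ID of the user it belongs to, and treat any mismatch as a cache miss rather than serving it. The class and field names below are hypothetical.

from typing import Optional

class OwnershipCheckedCache:
    """Hypothetical cache facade: every entry records which user it
    belongs to, and a mismatch is treated as a miss, never served."""

    def __init__(self) -> None:
        self._store: dict = {}  # stand-in for Redis

    def put(self, key: str, owner_id: str, payload: dict) -> None:
        self._store[key] = {"owner_id": owner_id, "payload": payload}

    def get(self, key: str, requesting_user_id: str) -> Optional[dict]:
        entry = self._store.get(key)
        if entry is None:
            return None
        # Redundant check: even if the cache hands back the wrong entry,
        # refuse to serve data stamped with a different user's ID.
        if entry["owner_id"] != requesting_user_id:
            return None  # treat as a miss and re-fetch from the source of truth
        return entry["payload"]

cache = OwnershipCheckedCache()
cache.put("subscription:alice", "alice", {"card_last4": "1234"})
print(cache.get("subscription:alice", "alice"))  # served normally
print(cache.get("subscription:alice", "bob"))    # None: ownership check fails

With this kind of check, a confused cache simply falls through to a fresh fetch from the source of truth instead of surfacing another user's record.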

Labels: AI Tool, Data Breach, Data Leak, Data Safety, Security Bug, User Privacy, User Safety