
OpenAI Admits ChatGPT Leaked Some Payment Data, Blames Open-Source Bug (openai.com) 22

OpenAI took ChatGPT offline earlier this week "due to a bug in an open-source library which allowed some users to see titles from another active user's chat history," according to an OpenAI blog post. "It's also possible that the first message of a newly-created conversation was visible in someone else's chat history if both users were active around the same time....

"Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window." In the hours before we took ChatGPT offline on Monday, it was possible for some users to see another active user's first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time.

We believe the number of users whose data was actually revealed to someone else is extremely low. To access this information, a ChatGPT Plus subscriber would have needed to do one of the following:

- Open a subscription confirmation email sent on Monday, March 20, between 1 a.m. and 10 a.m. Pacific time. Due to the bug, some subscription confirmation emails generated during that window were sent to the wrong users. These emails contained the last four digits of another user's credit card number, but full credit card numbers did not appear. It's possible that a small number of subscription confirmation emails might have been incorrectly addressed prior to March 20, although we have not confirmed any instances of this.

- In ChatGPT, click on "My account," then "Manage my subscription" between 1 a.m. and 10 a.m. Pacific time on Monday, March 20. During this window, another active ChatGPT Plus user's first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date might have been visible. It's possible that this also could have occurred prior to March 20, although we have not confirmed any instances of this.


We have reached out to notify affected users that their payment information may have been exposed. We are confident that there is no ongoing risk to users' data. Everyone at OpenAI is committed to protecting our users' privacy and keeping their data safe. It's a responsibility we take incredibly seriously. Unfortunately, this week we fell short of that commitment, and of our users' expectations. We apologize again to our users and to the entire ChatGPT community and will work diligently to rebuild trust.

The bug was discovered in the Redis client open-source library, redis-py. As soon as we identified the bug, we reached out to the Redis maintainers with a patch to resolve the issue.

"The bug is now patched. We were able to restore both the ChatGPT service and, later, its chat history feature, with the exception of a few hours of history."
This discussion has been archived. No new comments can be posted.

  • I'm starting a betting pool. How long before Microsoft Tays ChatGPT?
    • by shanen ( 462549 ) on Saturday March 25, 2023 @12:30PM (#63398615) Homepage Journal

      And never a liability to pay for!

      Or hasn't anyone explained to you how the EULA works? (I score that as one of Microsoft's two earned points. The other was marketing upstream from the victims AKA users.)

      Then again, I might be even more hopelessly confused than usual. I don't understand "tays" in your context. Please clarify?

      I'm finding it increasingly difficult to believe technology can solve more problems than it creates. Take the example of network-propagated misinformation. (Kind of a dual of the main problem of this story, which is about the malicious propagation of valid information.) I used to think that disinformation could best be addressed, possibly even solved, by "knowing your sources". My tag for that idea was MEPR, standing for Multidimensional Earned Public Reputation. Short summary is that a low credibility source would have a low MEPR and low visibility. Only sources that earned high MEPRs would potentially have high visibility, though people could still consider the various dimensions and no identity would be able to create the equivalent of a Christmas Tree Packet...

      As far as I know, there are no websites that implement strong MEPR systems. The probably-Funny joke here is that Slashdot's moderation system might be the closest real-world approximation of MEPR. (For example, almost no ACs.) At least I've been unable to find a better one, and I've been searching for years. But maybe you have a URL to hurl my way?

      But now? I think ChatGPT (with or without bugs) might be the perfect tool to automate MEPR inflation for sock puppets. That includes linking to real persons to prevent isolating the networks of circle-jerking sock puppets.

      Y'all have a real nice day. Y'ear?

      (And I do think this story has high potential for Funny. Too bad I can't write funny, eh?)

    • The sooner the better; that is all I know.
  • by gTsiros ( 205624 ) on Saturday March 25, 2023 @12:06PM (#63398591)

    ... victim.

    Your *program* leaked the information. Not a third-party. Your program. Yours.

    "But but it could also be intel sidechannel embedded arm processor undocumented zero day not my fault"

    yeah. Tough shit. Welcome to the club. You tried making money off of something demonstrably uncontrollable and downright atrocious.

    Computers suck. Deal with it or take up knitting.

  • You pushed user queries and responses through a shared Redis list and never bothered to check that the response matched the query. This is not a third-party bug; it was your own design choice.

    • They're also using queues to make requests to Redis. There's nothing theoretically wrong with that, but most developers are far more practiced at HTTP-style, one-request-one-response transactions, so you open yourself up to a lot of bugs when you use queues for message passing without it being absolutely necessary. In this case they are using Redis for performance, so they don't need a queue at all.

      In this case, they suddenly started cancelling a lot of their Redis requests (who knows why, that's
      • No matter how you slice it, parallel programming is hard and will lead to bugs. It doesn't matter if it is multi-threading, interprocess or multiple machines. The transport mechanism is less important than the fact that it is parallel computing.

        In looking over the link from another poster (https://github.com/redis/redis-py/issues/2579), it looks like the bug happens if a request is cancelled before being processed. This happens in the library code and not the application. The calling code could be 100% "cor

        • I don't think your point about HTTP-style requests is quite right. HTTP-style requests probably have fewer bugs like this because those bugs have already been found and fixed in the libraries and not because it is inherently a better way to program. (Obviously if you are used to one style of programming, you'll be less likely to make mistakes using that style - experience matters.)

          I can't see where you think I'm wrong. I didn't say HTTP-style requests are inherently a better way to program. I said

          I did say based on probability (the number of programmers who know how to handle queues correctly) and based on the present evidence, ChatGPT programmers have no clue how to handle queues. (For example, if you are using queues to get subsecond latency which is why they claim to be using Redis, then you shouldn't be needing to cancel many requests. Something wrong is going on there, and th

        • No matter how you slice it, parallel programming is hard and will lead to bugs

          Btw, you should always be asking yourself, "Will this code lead to deadlocks? Will it lead to race conditions?" With HTTP-style requests in an environment with no stored state except a relational DB, those answers are solved (although if you don't know how to handle databases, you might have some of both).
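The race described in this sub-thread can be sketched in a few lines. This is a toy model with invented names, not redis-py's actual internals; the point is only that a pipelined connection hands back replies in FIFO order, so a reply left unread by a cancelled request goes to whoever uses the connection next.

```python
from collections import deque

class FakeConnection:
    """Toy model of a pipelined connection: replies come back in FIFO order."""
    def __init__(self):
        self._pending = deque()  # replies the "server" has queued for us

    def send_command(self, cmd):
        # Model the server's eventual answer as an immediate echo.
        self._pending.append(f"reply-to:{cmd}")

    def read_reply(self):
        # Returns the OLDEST unread reply, whichever command it belonged to.
        return self._pending.popleft()

conn = FakeConnection()

# User A's request is sent, but the task is cancelled before read_reply().
conn.send_command("GET user-A-session")
# ... CancelledError here: A never calls read_reply() ...

# User B reuses the pooled connection and issues their own command.
conn.send_command("GET user-B-session")
print(conn.read_reply())  # → 'reply-to:GET user-A-session'  (B sees A's data)
```

Note that B's code did nothing wrong; the connection's reply stream is simply out of sync from the moment A's cancellation left a reply unread.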

  • You can bet that OpenAI is open to supply-chain attacks not so different from this 'bug'.

  • The bug (Score:3, Informative)

    by AlexanderPatrakov ( 6166150 ) on Saturday March 25, 2023 @02:42PM (#63398831)
    The bug in redis-py was reported independently by two users: https://github.com/redis/redis... [github.com] and https://github.com/redis/redis... [github.com] The fix is in redis-py 4.5.3: https://github.com/redis/redis... [github.com] I have not checked exactly which versions are affected, or whether the fix is backportable. There was no discussion on the oss-security list, and AFAIK no CVE ID.
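A common mitigation for this class of bug in pooled clients is to poison any connection whose command was interrupted between send and receive, so a stale reply can never reach the next borrower. The sketch below is hypothetical (all class and method names invented), not redis-py's actual code:

```python
import asyncio

class SafeConnection:
    """Hypothetical wrapper: never reuse a connection whose command was
    cancelled between send and receive, because its reply stream may now
    be out of sync with whoever borrows it next."""
    def __init__(self, raw):
        self.raw = raw
        self.healthy = True

    async def execute(self, cmd):
        try:
            await self.raw.send(cmd)
            return await self.raw.recv()
        except asyncio.CancelledError:
            # A reply may still be in flight: mark the connection unusable
            # so the pool discards it instead of handing it to another user.
            self.healthy = False
            await self.raw.close()
            raise

class DummyRaw:
    """Stand-in transport whose reply never arrives in time."""
    async def send(self, cmd):
        pass
    async def recv(self):
        await asyncio.sleep(10)
    async def close(self):
        self.closed = True

async def demo():
    conn = SafeConnection(DummyRaw())
    task = asyncio.create_task(conn.execute("GET x"))
    await asyncio.sleep(0)      # let the task send its command and block on recv
    task.cancel()               # caller gives up mid-flight
    try:
        await task
    except asyncio.CancelledError:
        pass
    return conn.healthy

print(asyncio.run(demo()))  # → False: the pool must not reuse this connection
```

The trade-off is an extra reconnect per cancelled request, which is cheap next to serving one user another user's data.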
  • If they cannot get something like this correct, why should we expect them to get safe, controlled AI correct either?
  • Apparently, these people are the same type of semi-competents that mess up security everywhere else. That pretty much precludes a lot of professional use of their tool.

  • While I wouldn't normally ambulance-chase, I think it's instructive to show others the kinds of bugs that can hurt a service or system. Here is the related bug report for the redis-py project: https://github.com/redis/redis... [github.com] Not blaming either party; mistakes happen. It's just a reminder of the interconnected world we live in and the risks involved.
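At the application level, the end-to-end check an earlier commenter asked for (verifying that the response actually matches the query) can be as simple as stamping each queued request with a unique ID and rejecting any reply that doesn't echo it. A minimal single-threaded sketch with invented names:

```python
import queue
import uuid

request_q, reply_q = queue.Queue(), queue.Queue()

def worker():
    """Echoes each request back with its ID, like a well-behaved backend."""
    req_id, payload = request_q.get()
    reply_q.put((req_id, f"answer:{payload}"))

def ask(payload):
    req_id = uuid.uuid4().hex
    request_q.put((req_id, payload))
    worker()  # in real life this runs in another process; inline for brevity
    got_id, answer = reply_q.get()
    if got_id != req_id:
        # A mismatch means the reply belongs to someone else's request:
        # fail loudly instead of showing them another user's data.
        raise RuntimeError("reply/request ID mismatch; discarding reply")
    return answer

print(ask("GET my-session"))  # → 'answer:GET my-session'
```

This doesn't fix a desynchronized transport, but it turns silent cross-user leakage into a visible error, which is the difference between a bug report and a breach notification.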
  • Anyone else here think it's fairly obvious they don't have any idea how this thing actually works?

  • can nearly always be read as "when someone pointed out that this almost certainly happened, we told them to immediately stop looking into the logs / code".
