OpenAI's ChatGPT Under Investigation by FTC (wsj.com) 32

The Federal Trade Commission is investigating whether OpenAI's ChatGPT artificial-intelligence system has harmed individuals by publishing false information about them, according to a letter the agency sent to the company. WSJ: The letter, reported earlier by The Washington Post and confirmed by a person familiar with the matter, also asked detailed questions about the company's data-security practices, citing a 2020 incident in which the company disclosed a bug that allowed users to see information about other users' chats and some payment-related information.
  • by omnichad ( 1198475 ) on Thursday July 13, 2023 @09:20AM (#63682529) Homepage

    It's a simple answer. ChatGPT doesn't publish anything. Might as well accuse a cat falling on a keyboard of treason.

    • They're going to ask where the information came from and how it ended up on a user's screen. What is OpenAI going to blame, the training data?

      The liability is somewhere between OpenAI and the website owner.

      • by omnichad ( 1198475 ) on Thursday July 13, 2023 @09:46AM (#63682607) Homepage

        Generating text behind a private login isn't the same as publishing it. The liability is on the person who uses that data publicly without considering the source. If you're using it privately, you can't call that published. That's like saying a private conversation is published.

        It's a predictive text generator, not a knowledge base. I sure hope a court is easy to convince. OpenAI isn't advertising the product as the same thing the news says the product is. That isn't the fault of the company but of shoddy clickbait journalism.

        • Nonsense, there are plenty of newspapers behind a paywall. That's a private login for subscribers. Nobody in their right mind would say that a newspaper didn't publish a story because you have to log in to read it. Moreover, say a NYT journalist writes a slanderous story that's published by the NYT. The journalist can get sued. If ChatGPT writes a slanderous story for the NYT, then I don't see why ChatGPT shouldn't get sued either. Of course the software is just a proxy for the scientists at OpenAI.
      • They should not be liable because they use disclaimers.
        • > ChatGPT may produce inaccurate information about people, places, or facts
          • by HBI ( 10338492 )

            By that argument I may place a sign in front of my door "Entering these premises may result in disembowelment" and evade liability for chopping people up when they enter.

            No go. Doesn't work. It's simply a speed bump on the way to being held liable. Disclaimers are an *attempt* to skirt liability, not actual protection.

            • Chopping up people is illegal, generating bullshit isn't.

            • Doesn't sound much different than a "Trespassers will be shot" sign. I don't know where you live, but most states have a castle doctrine. Even if your state makes it effectively impossible for you to own a firearm, a knife isn't illegal, and defending your home against unwanted invaders with it isn't going to land you in any trouble unless you subdued the invader and killed them after the fact. Chopping up the body afterward, even if you just charged them upon entry, is probably going to get a charge.
    • Might as well accuse a cat falling on a keyboard of treason.

      The difference is that the cat usually produces total gibberish (unless you're a Perl coder. Then the cat might have accidentally debugged your 1-liner. Or done it on purpose).

      Whereas the whole purpose of an LLM (and, on a much smaller scale, your phone's autocomplete) is to produce things that look like a coherent sentence.

      And herein lies the problem: because LLM output looks coherent, a fraction of people will start believing it is the truth spoken by something very intelligent.

      • I think the reverse question at that point is: if there is nothing ChatGPT can say to dissuade people from treating it as a knowledge base, then OpenAI can no longer be held liable for what the users do. At that point, the person simply isn't listening.

        So ChatGPT could be used to produce some convincingly sounding disinformation.

        It sure could. That text could make part of a nice movie script. It's what you do with it that matters, so the blame should be on the person who takes that text and runs with it if they do the wrong thing.
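      [Editor's note] The "predictive text generator, not a knowledge base" framing above can be illustrated with a toy sketch. Everything here is hypothetical (a made-up corpus and a trivial bigram table, nothing resembling OpenAI's actual architecture); the point is only that a next-token generator picks words by learned frequency, with no notion of whether the result is true:

```python
# Toy next-token generator: a bigram table "trained" on a tiny corpus.
# It strings together statistically plausible words; truth never enters the picture.
import random

random.seed(0)

corpus = "the senator was seen at the gala the senator was indicted".split()

# Hypothetical "model": for each word, the words that followed it in training.
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))  # sampled by frequency, not by fact
    return " ".join(out)

print(generate("the"))
```

      Every output is a grammatical-looking chain of words the model has seen, yet nothing constrains it to describe anything that actually happened; that is the sense in which coherent-sounding output is not the same thing as knowledge.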

  • This is a real issue (Score:4, Interesting)

    by Dan East ( 318230 ) on Thursday July 13, 2023 @09:47AM (#63682611) Journal

    This seems like it could be a very serious legal issue. With a search engine, everything is merely referenced / redirected to some 3rd party source. So search engines like Google aren't responsible for the fact that they merely indexed that content. With LLMs, they will flat-out produce libelous, inaccurate and totally fictitious output. I really don't know how those legally responsible for ChatGPT and others can get around that. I don't know that freedom of speech covers output generated by an AI, or output that is supposed to in some way represent fact-finding or search engine output.

    • by omnichad ( 1198475 ) on Thursday July 13, 2023 @09:50AM (#63682621) Homepage

      output that is supposed to in some way represent fact-finding or search engine output

      I think the real question is whether OpenAI is making those claims. As far as I know, they are not but the media and everyone else is. OpenAI is just a DJ remixing text content that may or may not have been true in the source material.

  • by MTEK ( 2826397 ) on Thursday July 13, 2023 @09:50AM (#63682625)

    It will be interesting how lawmakers/judges deal with this. As of now, this is what you agree to when you use their services:

    “Section 7. Indemnification; Disclaimer of Warranties; Limitations on Liability: (a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law.”

    IOWs, imagine the hourly rate of OpenAI's legal team. Now imagine you, i.e., the end-user of ChatGPT, publish something that gets OpenAI sued. Guess who gets the legal bill?

    • That should be illegal and in sane countries it is. It's one thing to manufacture a story, it's another thing entirely to knowingly and repeatedly aid a criminal in committing their crimes.

      In ChatGPT's case, they already block some queries. Which means they can't claim inability to act in court. If they keep getting their output used by assholes trying to incite violence or defame others with ChatGPT generated content, it's only a matter of time before the public will begin to consider that use one of Cha
  • Funny how if you do anything in this world of consequence, you attract lawyers and greedy government agencies like flies on shit. They stick their noses out and smell potential money to leech.

  • by Tablizer ( 95088 ) on Thursday July 13, 2023 @11:16AM (#63682829) Journal

    It's generally not illegal to lie. Fox et al. already ended up testing this in court. Lying is only illegal if you defame somebody who is not a celebrity/politician (like the Sandy Hook parents), or financially hurt another company (such as Dominion). Investors may also be reimbursed if a company lied to them to get them to invest.

    If companies or individuals fall under these categories, they can sue ChatGPT's co. directly. I don't see why FTC is involved, since they ignore everyone else's alleged lying.

  • It was trained on the internet... it repeats someone else's word tokens in a statistical manner; it does not understand the word tokens in a meaningful way. This is basic information science that a 10-year-old should understand at this point... wait, we are talking about lawyers.
