OpenAI's ChatGPT Under Investigation by FTC (wsj.com)
The Federal Trade Commission is investigating whether OpenAI's ChatGPT artificial-intelligence system has harmed individuals by publishing false information about them, according to a letter the agency sent to the company. WSJ: The letter, reported earlier by The Washington Post and confirmed by a person familiar with the matter, also asked detailed questions about the company's data-security practices, citing a 2020 incident in which the company disclosed a bug that allowed users to see information about other users' chats and some payment-related information.
Re: (Score:2, Insightful)
Right, because Elon is known for having a great working relationship with federal regulators, such that they'd be inclined to do his bidding.
Try again hater.
Re: (Score:1)
Dude is getting mad government contracts; he must have some connections and numbers he can call. He has $300 billion, he could buy his way into anything.
Re:Elon done it (Score:4, Informative)
Getting government contracts is generally a matter of having an army of specialists who know how to respond to RFPs with the correct keywords, while also dotting all the i's, crossing all the t's, and setting up shell companies in Delaware with 'minority' owners faster than the other guy can. It's not magic, and it's not really friends in high places; it's good old-fashioned barriers to entry, created by lobbying to protect the interests of big business while being disguised as good for the little guy.
Re: (Score:2)
Competition that he co-founded and funded. Crazy guy.
Re: (Score:2)
Elon is crazy. The same guy who thinks OpenAI is taking predictive text too far, too fast wants cars to drive themselves right now.
Re: (Score:2)
It reminds me a lot of Steve Jobs, who would downplay all kinds of technologies (video, native apps, etc.) only for them to show up in next year's iDevice.
How hard is this? (Score:5, Insightful)
It's a simple answer. ChatGPT doesn't publish anything. Might as well accuse a cat falling on a keyboard of treason.
I don't think your argument would convince a court (Score:3)
They're going to ask where the information came from and how it ended up on a user's screen. What is OpenAI going to blame, the training data?
The liability is somewhere between OpenAI and the website owner.
Re:I don't think your argument would convince a court (Score:4, Insightful)
Generating text behind a private login isn't the same as publishing it. The liability is on the person who uses that data publicly without considering the source. If you're using it privately, you can't call that published. That's like saying a private conversation is published.
It's a predictive text generator, not a knowledge base. I sure hope a court is easy to convince. OpenAI isn't advertising the product as the same thing the news says the product is. That isn't the fault of the company but of shoddy clickbait journalism.
Re: (Score:3)
By that argument I may place a sign in front of my door "Entering these premises may result in disembowelment" and evade liability for chopping people up when they enter.
No go. Doesn't work. It's simply a speed bump on the way to being held liable. Disclaimers are *attempts* to skirt liability, not actual protections.
Re: (Score:3)
Chopping up people is illegal, generating bullshit isn't.
Cat versus text predictive model (Score:2)
Might as well accuse a cat falling on a keyboard of treason.
The difference is that the cat usually produces total gibberish (unless you're a Perl coder. Then the cat might have accidentally debugged your 1-liner. Or done it on purpose).
Whereas the whole purpose of an LLM (and, on a much smaller scale, your phone's autocomplete) is to produce output that looks like coherent sentences.
And herein lies the problem: because an LLM's output looks coherent, suddenly a fraction of people will start believing that it is the truth spoken by a very intelligent machine.
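To make the "predictive text" point concrete, here's a minimal sketch using a toy bigram model (a hypothetical stand-in; real LLMs use neural networks over far larger contexts). The only question the model ever asks is "what tends to follow the words so far?"; whether the result is true never enters the computation.

```python
import random
from collections import defaultdict

# Toy training corpus (hypothetical; stands in for web-scale text).
corpus = ("the senator denied the report and the report said "
          "the senator was cleared").split()

# Bigram table: each word -> the words observed to follow it.
next_words = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

def generate(start, length=8):
    """Emit a fluent-looking word chain with no notion of truth."""
    out = [start]
    for _ in range(length):
        followers = next_words.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # pick a plausible next word
    return " ".join(out)

print(generate("the"))  # e.g. "the report said the senator denied the report and"
```

The output reads like a news fragment precisely because it recombines plausible continuations, which is the whole trick, and the whole trap.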
Re: (Score:2)
I think the reverse question at that point is: if there is nothing ChatGPT can say to dissuade people from believing it's a knowledge base, then OpenAI can no longer be held liable for what the users do. At that point, the person simply isn't listening.
So ChatGPT could be used to produce some convincing-sounding disinformation.
It sure could. That text could be part of a nice movie script. It's what you do with it that matters, so the blame should be on the person who takes that text and runs with it if they do the wrong thing.
This is a real issue (Score:4, Interesting)
This seems like it could be a very serious legal issue. With a search engine, everything is merely referenced and redirected to some third-party source, so search engines like Google aren't responsible for content they merely indexed. With LLMs, the system will flat-out produce libelous, inaccurate, and totally fictitious output. I really don't know how those legally responsible for ChatGPT and others can get around that. I don't know that freedom of speech covers output generated by an AI, or output that is supposed to in some way represent fact-finding or search engine output.
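A hedged sketch of the structural difference described above (names and data are made up): a search engine's answer is a set of pointers to documents someone else published, while a language model's answer is freshly synthesized text with no underlying document to attach the liability to.

```python
# Hypothetical mini "search engine": results are references to third parties.
index = {
    "acme lawsuit": [
        "https://example.com/court-filing",
        "https://example.org/news-story",
    ],
}

def search(query: str) -> list:
    # The engine never authors anything; it only points elsewhere.
    return index.get(query.lower(), [])

# Hypothetical stand-in for an LLM: the answer is synthesized on the spot.
def llm_answer(query: str) -> str:
    # No source document exists for this sentence; it is generated text.
    return f"Sources say the {query} was settled for an undisclosed sum."

print(search("acme lawsuit"))      # pointers to what others published
print(llm_answer("acme lawsuit"))  # brand-new prose, possibly fictitious
```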
Re:This is a real issue (Score:4, Interesting)
output that is supposed to in some way represent fact-finding or search engine output
I think the real question is whether OpenAI is making those claims. As far as I know, they are not but the media and everyone else is. OpenAI is just a DJ remixing text content that may or may not have been true in the source material.
OpenAI/ChatGPT has an indemnification clause (Score:5, Informative)
It will be interesting how lawmakers/judges deal with this. As of now, this is what you agree to when you use their services:
“Section 7. Indemnification; Disclaimer of Warranties; Limitations on Liability: (a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law.”
IOW, imagine the hourly rate of OpenAI's legal team. Now imagine that you, the end user of ChatGPT, publish something that gets OpenAI sued. Guess who gets the legal bill?
Re: (Score:2)
In ChatGPT's case, they already block some queries, which means they can't claim in court an inability to act. If their output keeps getting used by assholes trying to incite violence or defame others with ChatGPT-generated content, it's only a matter of time before the public begins to consider that use one of ChatGPT's main purposes.
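For illustration, a minimal sketch of the kind of query blocking referred to here (an assumed mechanism with a made-up keyword list; real systems use trained classifiers, not keyword matching). The point is simply that the existence of any such filter shows the operator can refuse requests.

```python
# Hypothetical blocklist; real moderation uses classifiers, not keywords.
BLOCKED_PHRASES = {"incite violence", "defame"}

def model_generate(prompt: str) -> str:
    # Stand-in for the underlying text generator.
    return f"[generated text for: {prompt!r}]"

def handle(prompt: str) -> str:
    # If any blocked phrase appears, refuse instead of generating.
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "This request violates the usage policies."
    return model_generate(prompt)

print(handle("defame my neighbor in a press release"))  # refused
print(handle("write a limerick about cats"))            # generated
```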
leeches (Score:2)
Funny how, if you do anything of consequence in this world, you attract lawyers and greedy government agencies like flies on shit. They stick their noses out and smell potential money to leech.
Not illegal to lie in the USA. (Score:3)
It's generally not illegal to lie. Fox et al. already ended up testing this in court. Lying is only illegal if you defame somebody who is not a celebrity/politician (like the Sandy Hook parents), or financially hurt another company (such as Dominion). Investors may also be reimbursed if a company lied to them to get them to invest.
If companies or individuals fall under these categories, they can sue ChatGPT's maker directly. I don't see why the FTC is involved, since it ignores everyone else's alleged lying.
What did the lawyers not understand (Score:2)