
YouTube Case at US Supreme Court Could Shape Protections for ChatGPT and AI (reuters.com)

When the U.S. Supreme Court decides in the coming months whether to weaken a powerful shield protecting internet companies, the ruling also could have implications for rapidly developing technologies like artificial intelligence chatbot ChatGPT. From a report: The justices are due to rule by the end of June whether Alphabet's YouTube can be sued over its video recommendations to users. That case tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.

What the court decides about those issues is relevant beyond social media platforms. Its ruling could influence the emerging debate over whether companies that develop generative AI chatbots like ChatGPT from OpenAI, a company in which Microsoft is a major investor, or Bard from Alphabet's Google should be protected from legal claims like defamation or privacy violations, according to technology and legal experts. That is because algorithms that power generative AI tools like ChatGPT and its successor GPT-4 operate in a somewhat similar way to those that suggest videos to YouTube users, the experts added.
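
The analogy the experts draw can be made concrete. Roughly speaking, both systems are learned functions from a user's context to an algorithmically chosen output; the difference is that a recommender selects among existing third-party items while a generative model emits new text. The sketch below is purely illustrative (all names are hypothetical, and greedy selection stands in for the real learned models; it is not either company's actual system):

    # Illustrative only: hypothetical names, not YouTube's or OpenAI's real code.
    from typing import Callable, List

    def recommend(user_history: List[str],
                  candidates: List[str],
                  score: Callable[[List[str], str], float]) -> str:
        # A recommender *selects* existing third-party content: it ranks
        # candidate videos by a learned relevance score and returns the top one.
        return max(candidates, key=lambda video: score(user_history, video))

    def generate(prompt: List[str],
                 vocabulary: List[str],
                 next_token_score: Callable[[List[str], str], float],
                 length: int = 20) -> List[str]:
        # A generative model *produces new* content: it repeatedly appends the
        # highest-scoring next token given everything emitted so far.
        output = list(prompt)
        for _ in range(length):
            output.append(max(vocabulary,
                              key=lambda tok: next_token_score(output, tok)))
        return output

Structurally the two loops are close cousins, which is why a ruling on liability for the first could bear on the second.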



  • TFA only mentions what the lawsuit is actually over right at the end:

    The case being decided by the Supreme Court involves an appeal by the family of Nohemi Gonzalez, a 23-year-old college student from California who was fatally shot in a 2015 rampage by Islamist militants in Paris, of a lower court's dismissal of her family's lawsuit against YouTube.

    The lawsuit accused Google of providing "material support" for terrorism and claimed that YouTube, through the video-sharing platform's algorithms, unlawfully recommended videos by the Islamic State militant group to certain users.

    • This seems to be overreach on the part of the authors of the article if they're trying to tie broader implications for AI and ChatGPT to this case. The Supreme Court usually attempts to keep their rulings as narrow as they possibly can within the confines of existing law so as not to step on the toes of the legislature who should be writing the policy.
      • by Junta ( 36770 )

        I think it is reasonable to connect "YouTube is liable for the output of their algorithms" to "OpenAI is liable for the output of their algorithms". Of course, that is but one of the steps that would have to happen to rule against YouTube.

        If the court rules against YouTube, then OpenAI should be worried.

        If the court rules in favor of YouTube, that doesn't necessarily mean anything at all for OpenAI, depending on the reasoning.

  • If it takes a Supreme Court case to determine whether B equates to A, why would the author automatically assume C equates to A?

  • For failing to teach critical thinking, logic, and the scientific method.
    I've seen a college math major drawn down the YouTube rabbit hole of conspiracy theories, so I think the key one is the scientific method, which explains how scientists form hypotheses, conduct careful experiments, and rigorously study the results in order to draw conclusions about the truth of the hypothesis.
    • You can lead a horse to water, but you cannot make him drink.

      It is not the lack of ability to apply critical thinking that caused issues for those math majors, but the desire to feel smart & special by understanding the secrets others do not. The only notable thing here is that math majors have the chops to earn the right to feel smart & special by real merit.

  • If I watch a video and take a violent and illegal action afterwards, is that the video's doing? Or should I be held accountable for my own actions?

    Should we regulate businesses so that they don't spread disinformation or violent extremist content? Sure. But that's not the same as holding them accountable for the acts of a third party. A reasonable best effort is the standard we should hold businesses to in filtering this kind of stuff. Holding them accountable for individual videos doesn't make sense.

    • Maybe everyone should be held accountable for their own actions: both the person who commits morally wrong actions and those who encourage them.

      For example, under the Texas Law of Parties, YouTube could be viewed as a participant in the crime. At the federal level, conspiracy is already illegal. And I suspect that if you and your friends had posted a video encouraging people to riot, you could also be held accountable for that as well. If you convinced others to commit atrocities by lying to them, the same logic would apply.

    • Holding them accountable for individual videos doesn't make sense

      The discussion is not about holding them accountable for the content of individual videos. The discussion is about holding them accountable for their specific promotion of individual videos.

    • If I watch a video and take a violent and illegal action afterwards, is that the video's doing? Or should I be held accountable for my own actions?

      Well, if you want an analogy, look at how people are trying to sue gun manufacturers, saying their advertising drove people to commit murders...

      Rather than blame the human... they blame the tool, or the company that makes the tool or provides the information.

      Same type of thing.

      I dunno when we stopped blaming the asshole criminal that does the actual crime.

    • Indeed, this is what the legal area of 'causation' is all about. In newer areas of the law, the Supreme Court almost always keeps a lookout for the perfect case in which to rule whatever it is already leaning towards ruling. I'm 99% confident their ruling will be 'Hell no!' and that this particular case was picked for being at the outer limits of ridiculousness.

  • From a web description, "Section 230 is a provision of federal law that protects website hosts, including platforms like Google and Facebook, and their users from legal liability for online information provided by third parties."

    But AI is "generating" information, not just relaying links or directly presenting information provided by third parties.
    So is it protected? Or could this lawsuit succeed? https://yro.slashdot.org/story... [slashdot.org]

  • Hosting and curating are two different things. If you promote (up-rank) bad content that's on your hosting/messaging platform, you are acting as a curator, not just a content host. Section 230 only protects hosts, not curators.

    • by Erioll ( 229536 )

      I agree that hosting and curating are different, and should have different guidelines and protections. Where it gets difficult IMO is at what level of "curation" and "hosting" the platforms should be liable.

      For example, on YouTube the algorithm is nearly opaque, and it ranks content based on both what you and others are doing. That probably warrants a pretty high level of liability.

      The opposite end: this Slashdot forum. Except for ratings above a threshold (user-driven), things are laid out in exactly the order replies were posted.

    • Does that mean that if someone scores a 5 on a comment, Slashdot should be liable for promoting the post? According to your definition, Slashdot is curating the post - displaying it more prominently than other posts.

      Section 230 protects just about anyone hosting a website with user-contributed content (including Slashdot, StackOverflow, GitHub, ...). If that protection goes away, the costs for anyone hosting 3rd-party content will rise dramatically, probably leading to the closing of many sites. We'll see what the Supreme Court decides.

      • by Seahawk ( 70898 )

        One could argue that the ranking on Slashdot is user-generated content as well, whereas the YouTube ranking is part user-generated (if we assume thumbs-up etc. feed the algorithm) and part secret sauce they aren't telling anyone about. I think there is a good argument for holding people responsible for the secret sauces of the internet.
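
        To make the hosting-vs-curating distinction in this subthread concrete, here is a deliberately simplified sketch (all names hypothetical; not any platform's real code). The first function only filters and orders by user-driven signals; the second promotes content using the platform's own scoring, which is the behavior at issue in the case:

            # Hypothetical sketch of the hosting-vs-curating distinction.
            from dataclasses import dataclass
            from typing import List

            @dataclass
            class Post:
                body: str
                posted_at: float        # Unix timestamp
                user_score: int = 0     # community moderation, e.g. Slashdot's -1..5
                engagement: float = 0.0 # platform-measured clicks, watch time, etc.

            def host_chronologically(posts: List[Post]) -> List[Post]:
                # "Hosting": show third-party content in arrival order, with at
                # most a user-driven threshold filter (roughly the Slashdot model).
                return sorted((p for p in posts if p.user_score >= 0),
                              key=lambda p: p.posted_at)

            def curate_by_engagement(posts: List[Post],
                                     weight: float = 2.0) -> List[Post]:
                # "Curating": the platform's own (possibly secret) scoring decides
                # what gets promoted (roughly the recommendation model at issue).
                return sorted(posts,
                              key=lambda p: p.engagement * weight + p.user_score,
                              reverse=True)

        In the first function, every ranking signal is user-generated; in the second, the platform's choice of weights is exactly the "secret sauce" the comment above describes.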

  • That case tests whether a U.S. law that protects technology platforms from legal responsibility for content posted online by their users also applies when companies use algorithms to target users with recommendations.

    These are two completely different legal issues, and the text of S230 provides no obvious protection for editorializing and content curation. That's by design, because the greater CDA was about cleaning up the Internet. S230 was intended to immunize companies for taking actions to remove offensive content.

    • by Junta ( 36770 )

      If the ruling is in favor of YouTube, you are right: the resemblance is too low to conclude much.

      If the ruling is against YouTube, then at least one facet that must be established is that companies are liable for their algorithmic output. In that case, there's a big consequence for generative AI. If YouTube is liable because it recommended certain content through an algorithm, it would be reasonable to think that OpenAI could be found liable if, say, ChatGPT cyber-bullied someone into suicide.

  • Google has been using their secret "citizen dossiers" (that the end user citizen never gets to see) to feed a recommendation algorithm (that the end user never gets to understand) that has performed malicious psychological experiments on end users.

    By now, Google has had ample opportunity to cover up any such abuses targeted at specific end users, as well as to cover up any coordinated intent. cc: The Justice Department
