
Bipartisan Bill Denies Section 230 Protection for AI (axios.com)

Sens. Josh Hawley and Richard Blumenthal want to clarify that the internet's bedrock liability law does not apply to generative AI, per a new bill introduced Wednesday. From a report: Legal experts and lawmakers have questioned whether AI-created works would qualify for legal immunity under Section 230 of the Communications Decency Act, the law that largely shields platforms from lawsuits over third-party content. It's a newly urgent issue thanks to the explosive growth of generative AI. The new bipartisan bill bolsters the argument that Section 230 doesn't cover AI-generated work. It also gives lawmakers an opening to go after Section 230 after vowing to amend it, without much success, for years.

Section 230 is often credited as the law that allowed the internet to flourish and social media to take off, along with websites hosting travel listings and restaurant reviews. To its detractors, it goes too far and is not fit for today's web, allowing social media companies to leave too much harmful content online. Hawley and Blumenthal's "No Section 230 Immunity for AI Act" would amend Section 230 "by adding a clause that strips immunity from AI companies in civil claims or criminal prosecutions involving the use or provision of generative AI," per a description of the bill from Hawley's office.


Comments Filter:
  • Huh? (Score:2, Informative)

    by timeOday ( 582209 )
    This is idiotic. Why would something AI-generated be held to a different standard than a photograph somebody staged or a drawing they made? Keeping in mind that all the pre-existing categories of illegal content already do apply to AI-generated works.
    • by Luthair ( 847766 )
      Maybe I'm reading this wrong, but my assumption is that they're saying if Google's automated systems create "original" content for Google, then Google itself should be liable. We're holding the company responsible for what it produces, section 230 is about not holding the company responsible for content that it did not produce but hosts.
      • I think it is saying that Google's AI is not a third-party, that Google is responsible for its own software, that AI is not a separate entity to be treated as a non-Google contributor for the purposes of Section 230.

        • by Luthair ( 847766 )
          Which is exactly what I said :)
        • by Budenny ( 888916 )

          Yes, agreed that this is what it seems to be saying.

          This particular case is pretty clear: if Google (or anyone) uses AI to generate content which it publishes, that is and ought to be outside the protection of 230. The fact that the content is AI-generated is surely irrelevant. The key thing is that Google has created and published it. If it paid a bunch of people somewhere to generate content which it published, the same thing would apply: it would be a publisher and would lose 230 protection for that.

          • I would add another situation where section 230 probably shouldn't apply - when the site curates content. E.g. places like Facebook, YouTube, etc. where they only display to users a tiny trickle of the available content selected to be as engaging as possible - at best creating echo chambers where conflicting viewpoints are rarely if ever seen, and at worst intentionally promoting false narratives.

    • by ranton ( 36917 )

      I am one of the first to say that AI shouldn't be treated differently, but I'm not convinced this represents AI being treated differently. Section 230 protects websites from third party content on their platforms. It all depends on how this is worded, but it makes sense to clarify when AI generated content is and is not protected by Section 230 based on whether the content is third party generated.

      If the platform is generating first party content using AI, it should not be protected by Section 230. An AI sh

      • by rsilvergun ( 571051 ) on Wednesday June 14, 2023 @04:55PM (#63603316)
        the article and the proposed law both make it sound like the protections of Section 230 apply to the content creator. They do not. They apply to the owner of the software application.

        The content creator is still responsible for their content. Whether it was made by an "AI" (aka a computer program) is completely irrelevant.

        This is a sly and sneaky way to chip away at Section 230 protections. You're being tricked; your gut is telling you that, and rightly so. Don't be fooled.
        • Wrong on both counts - section 230 isn't really about software at all, it's about content.

          No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

          Which could be reasonably read to say that if I request an AI to give me a story about how Biden and Trump are secretly gay lovers who eat babies to prolong their life - that the AI owner is to be treated as the "publisher or speaker" of the resulting story, as they are the primary information content provider.

          I do think that needs to be clarified... but I'm not entirely sure how. I mean, common sense would initially

    • by vivian ( 156520 )

      Social media companies may not be directly accountable for content that they publish, but the people who generate and post that content are.

      If you photoshop a picture of someone to make it look like they are committing a crime, you are accountable for that if you then post or publish it and it is believable enough to cause reputational damage.

      If the operators of AI systems could claim immunity for responsibility for the output of the AI system, it would be possible to cre

    • by Zangief ( 461457 )

      Well, the summary says this is an opening to relitigate Section 230, so maybe the whole thing is an excuse.

    • Section 230 does not protect the content creator. It protects the platform that hosts their content against repercussions for the content that was posted. And only if they make a good-faith effort to take down violating content when found.

      Makes sense to me on this one. If the AI system is both the hosting platform and the content creator, then it certainly shouldn't be able to hide behind 230 as if "we just host the content, we didn't create it," which is the point of Section 230 in the first place.
  • It's a trap (Score:5, Insightful)

    by rsilvergun ( 571051 ) on Wednesday June 14, 2023 @04:31PM (#63603240)
    Section 230 doesn't protect content; it protects people.

    If AI content posted to your site makes you vulnerable to lawsuits, all I have to do to silence you is pay a bot farm to post now-illegal content to your site and wait for the lawsuits to shut you down.

    This is a way for the top to take back control of the internet from us plebs. To turn it into cable TV.
    • by dfghjk ( 711126 )

      "If AI content posted to your site makes you vulnerable to lawsuits..."

      Excluding generative AI from section 230 protection does not make it illegal nor would an exclusion extend to sites that have section 230 protection. Talk about gaslighting.

    • TFS makes it sound reasonable, but the devil is always hidden somewhere in the details. A platform hosting 3rd party AI-generated content (for example, some ChatGPT output posted on Reddit) should have S230 protection. The entity running the AI software itself, should not have S230 protection.

      Now, what if Reddit ran their own AI software to generate their own posts, and put them up alongside the user-submitted content? In that case, Reddit should lose S230 protection for those specific posts, because the

      • Section 230 protects the platform not the poster. Whoever makes the content is still liable for the content.

        Reddit posting AI-generated posts doesn't "lose" Section 230 protection; it never had it in the first place, because in that context Reddit is a poster, not a platform.

        Reddit as a platform is protected. Reddit as a poster is not.

        The people writing this bill are hoping we don't realize the distinction and that they can sneak this through as a backdoor way to make Reddit as a platform liable for AI gene
      • Now, what if Reddit ran their own AI software to generate their own posts, and put them up alongside the user-submitted content? In that case, Reddit should lose S230 protection for those specific posts, because they've switched roles from platform to author, and authors are held legally responsible for any "illegal" speech they may produce.

        This seems like a huge loophole in S230 that I'm surprised hasn't been abused (maybe it has?). Forget about generative AI, a site can avoid legal responsibility for the "illegal" speech they create by just claiming it came from some anonymous user.

    • Comment removed based on user account deletion
      • I'm not going after MicroGoogleAI as a platform, I'm going after them as a content creator. AI is just a fancy way to create content.

        AI is *not* the platform, it's the content. The fact that the content is generated on the fly is irrelevant. The Platform is whatever website I view that content on.

        So under existing law I can sue MicroGoogleAI because they do *not* have S230 protections (as a content creator). I can't sue MicroGoogleWebHosting because they *do* have S230 protections.

        The trick with
      • by nasch ( 598556 )

        The problem is, what if I ask BardGPT to write a story depicting rsilvergun as a troll pedophile who licks butts? Is it MicroGoog's fault? Mine? Both? Neither? I don't think this is as clear cut as the Republican politicians want to make it sound (shocker).

    • If AI content posted to your site makes you vulnerable to lawsuits

      That doesn't seem to be what the summary is talking about. If all we're actually talking about is companies being liable for the content that their own bots generate, I'm fine with that. Those bots aren't users, after all; they're tools being used by the company to generate content, so the companies should already be liable for that content. If Section 230, as currently worded, is ambiguous on that point, I'm fine with it being updated to make that liability clear.

    • by AmiMoJo ( 196126 )

      The site wouldn't be liable, the AI developer would be.

      It's to stop people making AIs that can do things like remove clothing or insert people into porn without consent. Or produce a deepfake video of a politician saying something they never said.

    • by stikves ( 127823 )

      It is almost as if they hate the fact that they could not lock down the Internet back in the day.

      The Internet has been a "happy accident." If it were designed today, do you think they would allow anyone to receive a public IP address, register their own DNS, and host an email server? Nope, you'd have to apply for permits and install only government-approved software.

      (They almost missed the open PC platform, but they are now fixing it, thanks to having almost zero ARM desktops that are free of EFI/secureboot/management chips.)

      Of

  • by locater16 ( 2326718 ) on Wednesday June 14, 2023 @04:45PM (#63603280)
    This is just stripping Section 230 all over again. Recommendation algorithms are "AI" already, they're the biggest ML workloads companies run right now. That recent Supreme Court ruling, or rather confirmation of a lower court ruling upholding that Google can't be sued for its recommendations, is exactly what this bill would kill. Thus making it impossible for these algorithms to run, thus destroying the internet as we know it and half the US economy.

    Lobbyists might be evil, but they can be an evil that counteracts the evil of dumbass politicians. We just have to let them fight and hope things don't get too bad.
    • by dfghjk ( 711126 )

      If recommendations are "content" then they are not protected by 230 already. If they are not content and are protected, ML based recommendations are not "generative AI". What you said is nonsense. Also, you could literally eliminate the entire internet and not destroy half the US economy.

      Education is your enemy apparently.

    • AI-generated content is what would normally be defined as copyrightable material, had it been produced by a human. That's quite distinct from recommendation algorithms, which are essentially just computer-curated links to existing human-created content.

    • Recommendations are not content. They are an index of content.

      AI-generated text and images are content. They are produced by the company. They should not be protected under Sec230 because they are both produced and hosted by the company running the AI. You can't say "This is the violating content our company created, but an AI generated it and we just host it so we're protected under Sec230".

      No, AI is YOUR product as a company. That makes YOU as the company responsible for the authored media. That mea
  • Anything from Hawley has to be looked at with much skepticism. He wouldn't support anything unless it served the interests of furthering his ridiculous ideology.
  • I think it's too damned early to be passing transformative bills like this. We don't have any idea whether it's necessary, and frankly I just don't see it. Yet.

  • Would this bill even matter? The AI companies don't host user-generated content, let alone make it accessible to others. They generate content for individual users, at the request of those users. If the AI-generated content gets posted anywhere it's on other services and it's posted by the user who asked for it to be generated. Section 230 might protect those other services, but the AI company wouldn't need it to get the claims dismissed.
