
Supreme Court Could Be About To Decide the Legal Fate of AI Search (theverge.com)

The Supreme Court is about to reconsider Section 230, a law that's been foundational to the internet for decades. But whatever the court decides might end up changing the rules for a technology that's just getting started: artificial intelligence-powered search engines like Google Bard and Microsoft's new Bing. From a report: Next week, the Supreme Court will hear arguments in Gonzalez v. Google, one of two complementary legal complaints. Gonzalez is nominally about whether YouTube can be sued for hosting accounts from foreign terrorists. But its much bigger underlying question is whether algorithmic recommendations should receive the full legal protections of Section 230 since YouTube recommended those accounts to others. While everyone from tech giants to Wikipedia editors has warned of potential fallout if the court cuts back these protections, it poses particularly interesting questions for AI search, a field with almost no direct legal precedent to draw from.

Companies are pitching large language models like OpenAI's ChatGPT as the future of search, arguing they can replace increasingly cluttered conventional search engines. (I'm ambivalent about calling them "artificial intelligence" -- they're basically very sophisticated autopredict tools -- but the term has stuck.) They typically replace a list of links with a footnote-laden summary of text from across the web, producing conversational answers to questions. These summaries often equivocate or point out that they're relying on other people's viewpoints. But they can still introduce inaccuracies.

  • by DeplorableCodeMonkey ( 4828467 ) on Thursday February 16, 2023 @11:06AM (#63298407)

    The algorithm is a user. The algorithm-user is fully owned by the platform. When it speaks, it speaks as the platform. Therefore the platform has no Section 230 protection any more than it would if the CEO made a personal account and started shitposting on it. Section 230 does not protect the company if a corporate officer does that, because you cannot separate the platform from its workforce publicly using it for unethical or unlawful purposes.

    • by narcc ( 412956 )

      That's absurd.

    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday February 16, 2023 @11:28AM (#63298451) Homepage Journal

      The algorithm is a user.

      Who told you that, and why did you repeat it unironically when it's obviously completely false? There is literally no basis for believing that. It's not logical, and it's also not the law, which holds humans and software to be fundamentally different (which is why an AI can't be listed as an inventor on a patent application — specific to US law, but then, we're talking about US law.)

      Therefore the platform has no Section 230 protection any more than it would if the CEO makes a personal account and starts shitposting on it.

      If the algorithm wrote comments, then the site would be responsible for those comments. If the CEO writes comments, then the CEO and the site are both responsible. If the algorithm removes comments, that's protected by Section 230. If the CEO removes comments, that's also protected by Section 230. This really isn't as complicated as you're making it out to be.

      • I am curious why Section 230 provides both (1) protection of the company from being liable for the content posted by its users and (2) protection when they moderate and remove stuff posted by the users? It looks like these two points kind of contradict each other, and in the end provide full protection, no matter what.
        If the company can be held responsible for posted stuff, I can imagine they have to moderate and remove stuff. OK. But if they are protected, then they do not have to do that at all, they
        • by garett_spencley ( 193892 ) on Thursday February 16, 2023 @03:36PM (#63299449) Journal

          I don't see what one has to do with the other.

          Yes, I know there is this "If you moderate, therefore you are a 'publisher'" argument, but that stems from the misconception that Section 230 had some sort of implicit requirement of neutrality, which is a complete myth and fabrication that has no basis in reality whatsoever.

          https://www.eff.org/deeplinks/2018/04/no-section-230-does-not-require-platforms-be-neutral [eff.org]

          The whole point was that people were starting to create online forums and communities, and these "internet services" needed a) to be able to moderate those discussions, even if just to remove spam and keep things on-topic, and b) in order to encourage people to create these types of forums and discussions in the first place, the law needed to recognize that it would be incredibly expensive and impractical for most to police literally every single thing that users post. So the services are not liable for the user-generated content; the user who generated said content is.

          Conflating the two is nonsensical. If you make services that moderate into the "publishers" of user-generated content, then suddenly Sally's Food Blog becomes liable for ALL comments because Sally's mods removed a spam bot or a troll. And the "neutrality" thing would mean that topical discussion forums become legally impractical, because if a comment gets removed for being off topic, or a user gets banned for repeatedly spamming the forum with off-topic nonsense, then suddenly they are liable for ALL comments. It is a ridiculous proposition.

        • I am curious why Section 230 provides both (1) protection of the company from being liable for the content posted by its users and (2) protection when they moderate and remove stuff posted by the users?

          Because sites can't function without both of those rights.

          It looks that these two points kind of contradict to each other,

          What? Why? You have to explain that.

          If the company can be held responsible for posted stuff, I can imagine they have to moderate and remove stuff. OK. But if they are protected, then they do not have to do that at all, they are protected.

          I see you've never run a site which allows public comments, nor did you bother searching for any explanation from anyone who has. There are plenty of comments that aren't illegal but which still interfere with the desired function of your fora.

        • I am curious why Section 230 provides both (1) protection of the company from being liable for the content posted by its users and (2) protection when they moderate and remove stuff posted by the users? It looks that these two points kind of contradict to each other, and in the end provide full protection, no matter what.

          The original title of the law was "the Communications Decency Act".

          The intent of the law when it was written was to encourage platforms to try to censor objectionable content.

          Therefore, it granted platforms freedom to censor anything they found objectionable and shielded them from liability for anything that they missed.

          • Yes, thank you for pointing this out. It makes sense now. I also checked it myself. I totally see the intent now. However, it does look like it was designed for much smaller forums, where you have topics and off-topics and sometimes crazy people post crazy things. Like Slashdot.
            My point is that when a forum becomes as large and as general as Facebook or YouTube, this works poorly. Because they are more like a mass media, there is no "topic" that is maintained and there is no competition to fight for. In
    • If it goes to the Supreme Court, put a fork in it, it's done. Remember, the current Supreme Court is a Trump-packed group of conservatives that made abortion illegal just a few months ago. Some day, I'm sure we can turn the Supreme Court back into a respected arm of government, but that day is not today.

      • by ShanghaiBill ( 739463 ) on Thursday February 16, 2023 @11:44AM (#63298503)

        made abortion illegal just a few months ago.

        The Supreme Court did not "make abortion illegal."

        What they did was say the issue should be decided through contested state elections rather than contested federal judicial appointments.

        It is the state legislatures that are making abortion illegal. If you don't like that, then get out and vote. That's the way democracy is supposed to work.

        • Re: (Score:2, Troll)

          by Ksevio ( 865461 )

          They opened the door for it to be made illegal both at the state level and at the federal level. It's the first time the court has taken away rights it previously granted. Human rights are something we should be protecting, not leaving up to states where the politicians have corrupted the election system so much that they're now selecting their voters (through gerrymandering) rather than the voters selecting their leaders (as democracy is supposed to work).

          We fought a war over states' rights to restri

          • They opened the door for it to be made illegal both at the state level

            If you don't like your state's policies, then get out and vote. Many candidates are finding that rabid pro-life positions are electoral poison even in red states.

            and at the federal level.

            There is no chance of that happening. None.

            It's the first time the court has taken away rights it previously granted.

            "Granting rights" is not what courts are for.

            We fought a war over states' rights to restrict human rights (slavery)

            We also added two amendments to the Constitution (13 & 14) to make that happen. We didn't just pretend the Constitution says something it doesn't say.

            • Re: (Score:2, Insightful)

              by drinkypoo ( 153816 )

              We fought a war over states' rights to restrict human rights (slavery)

              We also added two amendments to the Constitution (13 & 14) to make that happen. We didn't just pretend the Constitution says something it doesn't say.

              One thing it doesn't say is that slavery is abolished. What the thirteenth amendment says is that you can only enslave convicted criminals. That's not abolition of slavery. Stop pretending that America is not still a slave state.

              Also, on the subject of The Catholic Court [slashdot.org], what they said is one thing, but what they did is another; three of them willfully misled Congress (and in one case even deliberately and directly lied to them) about their stance on whether Roe should be repealed. Is that how democracy is

            • by Ksevio ( 865461 )

              If you don't like your state's policies, then get out and vote. Many candidates are finding that rabid pro-life positions are electoral poison even in red states.

              That works to an extent, but when the majority of a state believes something, the government shouldn't be contradicting it.

              There is no chance of that happening. None.

              Likely so, despite the Republican bills

              "Granting rights" is not what courts are for.

              My mistake, I should have said the court acknowledged the right granted by the Constitution.

          • Courts don't grant rights. The Congress and states do that. Talk to your Congressman.

            • by Zak3056 ( 69287 )

              Courts don't grant rights. The Congress and states do that. Talk to your Congressman.

              If we're splitting hairs, governmental bodies do not grant rights--they recognize that they exist, and (may) prohibit their infringement. "Congress shall make no law..." is a recognition and protection of your pre-existing right to speak or worship freely (or not), to assemble peacefully, and to publish your words. It is not a grant, and the Constitution explicitly acknowledges that.

        • by jacks smirking reven ( 909048 ) on Thursday February 16, 2023 @12:19PM (#63298659)

          In general you would be right, we vote for the President who appoints justices.

          However, we did re-elect Obama fair and square in 2012, and when it was time for him to nominate a justice the Republicans basically took the ball and went home, upending that entire precedent in a way that was in no way, shape, or form congruent with the ruling document. Kinda soured the whole "democracy" angle on that process a bit.

          Combine that with the lifetime appointments and no term limits, and you have to at least sympathize a little with how the SC is being viewed as a somewhat undemocratic institution, when this is probably a very clear-cut example of "legislating from the bench".

        • They knew perfectly well what the end result would be. The right wing Judges have also been actively campaigning on the side to criminalize abortion by showing up at meetings and events with a variety of groups that exist to advocate for criminalization.

          It's naïve to think they want anything other than to ban abortion. This is an "If it looks like a duck..." scenario. We need to stop pretending the Supreme Court is interested in what the Constitution says rather than its obvious political agendas.
      • The Supreme Court never made abortion illegal. They invalidated Roe v. Wade and returned the responsibility for legislating abortion control or rights to the state level, which is where it was decided before RvW.

    • The bigger question not being asked is: Why should we care if Youtube 'recommends' something? They aren't forcing your eyeballs open to watch this content as if it's Clockwork Orange.
      • No, that's a dumb question. The important question is why should companies be able to show you illegal content and be immune from punishment. If you promote porn to kids, incite violence, or spread terrorist speech, you are breaking the law.

        • I'm going to sue the phone company because a terrorist called me asking to join their jihad.
        • Doing something knowingly is a key requirement for the vast majority of laws. If you have sex in front of children, you're breaking the law. But if a kid walks in on you and you immediately stop and cover up, or if a kid installs a spycam you don't know about, you're *not* breaking the law. You're suggesting a principle that would hold you should be.
    • The algorithm is a user. The algorithm-user is fully owned by the platform. When it speaks, it speaks as the platform. Therefore the platform has no Section 230 protection anymore than it would if the CEO makes a personal account and starts shitposting on it. Section S230 does not protect the company if a corporate officers does that because you cannot separate the platform from its workforce publicly using it for unethical or unlawful purposes.

      230 protection is the right to censor content and not be held liable for what isn't censored. The idea was that if you didn't censor you'd already have a good argument, but if you did censor content you might be liable for what slipped through.

      It makes absolutely no sense they would lose that protection because "algorithm".
      Censoring pornography algorithmically, for example, is 100% the spirit of the law, that's what it was meant to encourage when it was written, and it's very clear you're not liable for other people's

    • The algorithm is a service. A program. If the algorithm is a user, then so is the Start Menu in windows and the Sort Function in excel.
    • I think you're doing it on purpose. Whether it's an algorithm or not, S230 protects YouTube from legal liability for anything their users post (except copyright; the DMCA covers that).

      This is a good thing, because without it the Internet becomes cable Television. Instead of the free speech paradise you're pining for you'll get DMCA style takedowns for every account. I can literally send an email to slashdot's editors and say DeplorableCodeMonkey hurt my fee-fees and I want you gone, and they'll have to eithe
      • Section 230 is an abomination. We don't need social media. We need real content creators who take responsibility for their actions. If you want to put some crazy shit up on the Internet, go buy iamunstable.com and share your thoughts there.

        • We don't need social media. We need real content creators who take responsibility for their actions.

          Taking responsibility for your actions implies having the legal muscle to defend your site. That brings us back to what rsilvergun said, you'll be turning the web into cable television, because letting users post content would be too much of a liability risk.

          If you think the web would be better off where the only people who have speech are the corporations, you have a right to that opinion. I doubt you'll enjoy it as much as you think though, since you're right now participating on a site which would be a

          • Who cares? Why do you need moronic drivel at the bottom of every article?

            Slashdot doesn't censor (at least not often). It lets users vote to rank things and lets users filter by votes. It also removes spam. None of that triggers liability under common carrier provisions.

            No one loses their voice when they can't post on a YT video. If you want your voice, get a web server and a domain and spew all the drivel you want, up to and including hate speech.

            • You seem to have forgotten about slashdot's "bitchslap" function, which the editors have admitted they use not just to censor content that they don't like, but to simultaneously censor anyone who replies to said comment or mods it up. They've also been known to remove content critical of the cult of Scientology.

      • I think you're doing it on purpose. Whether it's an algorithm or not S230 protects YouTube from legal liability for anything their users post (except copyright, DMCA covers that).

        You're being a moron here, not me. The danger from AI search is that it does a whole lot more than just return a list of weighted links. It includes commentary. That's the whole reason for the excitement behind integrating ChatGPT with Bing.

        In most of the world, that risks opening up Bing to unbelievable civil and criminal liability

    • The algorithm is a user.

      No. An algorithm is literally nothing more than a set of steps or a process to follow. Every recipe in a cookbook is an algorithm. The troubleshooting section in the back of the paper manual for your latest electronic gizmo is a set of algorithms for addressing common problems. The width of an element on a web page adjusting based on the width of your browser window is done via an algorithm. Everywhere you look, it's algorithms, but not just the sort you're talking about.

      A "raw", "unsorted" feed of posts, v
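For what it's worth, the point can be made concrete in a few lines. In this sketch (all data hypothetical), a "raw" reverse-chronological feed and an engagement-ranked "recommendation" feed are both just sorting algorithms:

```python
# Hypothetical posts: (id, timestamp, predicted_engagement)
posts = [
    ("a", 100, 0.9),
    ("b", 300, 0.2),
    ("c", 200, 0.5),
]

# A "raw" reverse-chronological feed is an algorithm...
raw_feed = sorted(posts, key=lambda p: p[1], reverse=True)

# ...and so is an engagement-ranked "recommendation" feed.
ranked_feed = sorted(posts, key=lambda p: p[2], reverse=True)

print([p[0] for p in raw_feed])     # ['b', 'c', 'a'] -- newest first
print([p[0] for p in ranked_feed])  # ['a', 'c', 'b'] -- highest predicted engagement first
```

The legal question in Gonzalez is, in effect, whether the second sort deserves different Section 230 treatment than the first.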

    • The algorithm is a user.

      Nope. Though as false equivalencies go I don't think it really helps your misleading argument much.

      Whether by user or algorithm, platforms need to make content-related decisions, decisions that will inevitably hit edge cases or simply involve mistakes.

      Now with AI text generation, again, it's the same as some company-employed moderator writing a post. If they're expected to do so en masse, then mistakes will happen; you need to allow for that.

      Platforms need some reasonable protection from liability for those mist

    • I like this

  • Do not mention either section 230 or artificial intelligence. Henceforth they will be outlawed.

  • what's in the Constitution. This became evident when Alito cited a 16th century witch hunter when he overturned Roe (no, I'm not exaggerating, look it up, he cited a "judge" from before the constitution existed to rule on constitutional law...)

    That said, they're all bought and paid for at this point, so it's just a question of who's got more money, the folks on the right wing that want to shut down Section 230 so they can shut down sites like /. and funnel us into their media empires or the bigger social
    • by ctilsie242 ( 4841247 ) on Thursday February 16, 2023 @11:24AM (#63298443)

      When S230 goes, so go all the open forums. Expect everything... and I mean everything... to be at best moderated, at worst just plain locked to comments. Social media sites will be ratcheting up their AI-based autoban mechanisms as well, to the point where many people will get booted off.

      Maybe another country might have places to host stuff like Slashdot, but we are back to how things were in the mid-1990s when the CDA was being hammered out. Had that come into law, it would have pretty much killed all websites and forums, as well as NNTP and IRC, stone cold dead.

      • The party of small limited government, wielding the power of the government, to prop up their beliefs that are unpopular with a majority of the country. I'm searching for a term for that type of individual. Ah yes, a snowflake.

      • Expect a DMCA style take down system where any rando can instantly get your account banned unless you're rich and famous and get special treatment.

        That's what the right wing is after. The ability to silence their critics. They don't like that the internet is as egalitarian as it is. That anyone can speak on it. That makes sense. The core belief in the right wing is that everyone and everything has a place in a natural hierarchy and they should stay in that place. If you strip away all their rhetoric and
        • by DarkOx ( 621550 )

          No, it's a hellscape where the most aggressive stupidity that appeals to the most base instincts of the mob gets rewarded.

          It's not a meritocracy at all; it's a place where society's failures can gather, collectively declare themselves victims, and whinge about some imaginary right-wing boogeyman that is oppressing them.

          While being too thick to see that they actually are the angry mob and they actually are very much in charge now. Things are bad because their ideas are bad. They will never figure that out because they shou

      • Destroying S230 would also destroy things like GitHub, because it is protected by the same laws. The same is also true of Stack Overflow. Once companies realize that, they will force Congress to fix it regardless of what the court does.

      • You don't understand what Section 230 does. It would have zero effect on forums. If you don't censor your users' content, you are free and clear.

        • In which case, spam bots get to completely control your forums. Since you can't ban them. Or stop them from posting.
      • by DarkOx ( 621550 )

        Maybe another country might have places to host stuff like Slashdot, but we are back to how things were in the mid-1990s when the CDA was being hammered out. Had that come into law, it would have pretty much killed all websites and forums, as well as NNTP and IRC, stone cold dead.

        and it is damned unfortunate that did not come to pass. The worst ideas of the uninformed mob have been allowed to flourish in the shit-hole that is the world wide web. Our culture has been in free fall pretty much from 1992 on. On the information super-highway, common decency and moral character have become road-kill.

    • Comment removed based on user account deletion
    • Anyway read this comment while you can, the mods don't like it when you mention politics as they are (heavily partisan and favoring the right wing establishment). You're supposed to dance around the issue so as to not offend people who get upset over a candy's choice of shoe...

      Better yet, edit your settings to browse at -1. Slashdot's moderation is basically fucked these days by people who learned how to moderate on Reddit ("I disagree, but I'm too lazy to respond, so I'ma smash that down arrow!"), and there's not nearly as much swastika/penisbird/goatse/copypasta shitposting here as in the old days. It's not trolling or flamebait to express a sincere opinion which runs afoul of the groupthink, but no one ever reads the moderation guidelines.

  • People are not asking Google to be responsible for the content others put on YouTube, etc. Instead, they claim that the algorithm itself counts as a kind of content.

    Google's algorithm finds content that is similar (as defined by people that like x content also like y content, so we define x and y as similar). They apply this to everything, even banned content, which they want to stop. So if X is banned, then you cannot use an algorithm to recommend Y to people that like X . Because you know that X is
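The "people who like X also like Y" similarity described above is roughly item-based collaborative filtering. A toy sketch (user and video names are hypothetical; this is not how YouTube actually computes similarity):

```python
from collections import Counter
from itertools import combinations

# Hypothetical watch histories: user -> set of videos watched
histories = {
    "u1": {"X", "Y"},
    "u2": {"X", "Y", "Z"},
    "u3": {"X", "Z"},
    "u4": {"X", "Y"},
}

# Count how often two videos appear in the same history;
# frequent co-occurrence is treated as "similarity".
co_occurrence = Counter()
for videos in histories.values():
    for a, b in combinations(sorted(videos), 2):
        co_occurrence[(a, b)] += 1

def recommend(video):
    """Return videos most often watched alongside `video`, best first."""
    scores = Counter()
    for (a, b), n in co_occurrence.items():
        if a == video:
            scores[b] += n
        elif b == video:
            scores[a] += n
    return [v for v, _ in scores.most_common()]

print(recommend("X"))  # ['Y', 'Z'] -- Y co-occurs with X most often
```

The parent's question then becomes: if video X is banned, is it a separate, actionable act to keep serving Y to the users who watched X?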

    • In the end, whether it's AI or ad hoc, the legal system won't have time to consider every case, and they'll be able to slip in whatever they want a million times faster than it can be litigated.

      I think the real way to deal with big tech is antitrust.

      We're lacking a few things:
      1) Libel should not be actionable. Let the penalty for lying be loss of credibility - then we all have section 230 protections
      2) Use antitrust to prevent banks and credit card companies and ESG entities from using monopoly power to caj

  • Repealing Section 230 does not mean that suddenly every nutcase will be free to say whatever they want.

    It will probably result in all comment sections being removed altogether since sites will suddenly be liable for what is posted there by others. Almost certainly this will result in fewer forums for communication since all websites will be liable for their content.

    Especially in America where litigation is king.

    So you'll have your free speech, and nowhere to voice it.

  • If TV can have news-looking people who spew intentionally misleading information, and get away with it by calling it "entertainment", then surely AI can also be labeled "entertainment"?
  • by groobly ( 6155920 ) on Thursday February 16, 2023 @12:53PM (#63298835)

    Section 230 is very simple: if you don't control content, you can't be sued. The platforms need to be designated as telecoms, not as whatever other thing AOL once was, which created the current problem. The platforms claim they don't control content, but they have the right to, and do, control content however they please. Also, if the platform makes its own posts, it is responsible for its own posts. Obviously.

    • by catprog ( 849688 )

      So if you remove spam or off-topic posts, you should not get the protection of Section 230?

  • by mesterha ( 110796 )

    I'm ambivalent about calling them "artificial intelligence" -- they're basically very sophisticated autopredict tools -- but the term has stuck.

    I see this again and again; people complain about the term artificial intelligence. AI is the historical name of a field of research. It's not a claim that any system developed by that research is intelligent. (Intelligence isn't even well defined.) It is uncontroversial that ChatGPT was created by artificial intelligence researchers. They were probably peo

  • Company shouldn't be liable for my statement.
    Company should be liable for their statements.

    It's a misapplication.

    Company not liable for hosting subjective video.
    Company intentionally promoting content they have access to: liable.

    To put it another way, a restaurant isn't liable for my poisoning others' dishes.
    Restaurant is liable for poisoning dishes they serve.

    Restaurant not liable for supplied poison.
    Restaurant is liable for using supplied poison.

    If Section 230 removes all liability, then it should be modifi

    • Company intentionally promoting content they have access to: liable.

      All of the little nuances in that statement are at the crux of this case, and the chilling effect that the ambiguity would have had on the development of the internet is exactly why congress passed S230.

      For example: I saw your post because Slashdot's algorithm for assigning karma decided that yours is good enough that your post starts at "Score: 2", which set it apart from all of the ACs and people with poorer karma. Is that sufficient to say that Slashdot "intentionally" "promoted" your post, making the

  • This could fix the modern Internet in one giant sweep. Fingers crossed the Supremes have the balls to do it.

  • ...with terrorism & terrorist organisations. They support, fund, train, & provide intelligence to them.

    Oh, did I say terrorists? Sorry, I meant "freedom fighters."

    It's no secret either: https://en.wikipedia.org/wiki/... [wikipedia.org]
    • by DVK9 ( 9481479 )
      Relying on Wiki is like relying on gossip. Some of it is true and factual; then there are the parts that get altered by "editors" to insert bias, left or right.
      • It's pretty much common knowledge that the US govt. sponsors terrorism. You can search around for some respected investigative journalism to verify it if you like but at this point no reasonable person is denying it.
