Google Warns Internet Will Be 'A Horror Show' If It Loses Landmark Supreme Court Case

The U.S. Supreme Court, hearing a case that could reshape the internet, considered on Tuesday whether Google bears liability for user-generated content when its algorithms recommend videos to users. From a news writeup: In the case, Gonzalez v. Google, the family of a terrorist attack victim contends that YouTube violated the federal Anti-Terrorism Act because its algorithm recommended ISIS videos to users, helping to spread the group's message. Nohemi Gonzalez was an American student killed in a 2015 ISIS attack in Paris, and her family's lawsuit challenges the broad legal immunity that tech platforms enjoy for third-party content posted on their sites. Section 230 of the Communications Decency Act, passed in 1996, protects platforms from legal action over user-generated content, and it also protects them if they choose to remove content. Section 230 has withstood court challenges for more than a quarter century, even as the internet exploded.

The attorney for Gonzalez's family claimed that YouTube's recommendations fall outside the scope of Section 230, since it is the algorithms, not the third party, that actively pick and choose where and how to present content. In this case, the attorney said, that enhanced the ISIS message. "Third parties that post on YouTube don't direct their videos to specific users," said the family's attorney, Eric Schnapper. Instead, he said, those are choices made by the platform. Justice Neil Gorsuch said he was "not sure any algorithm is neutral. Most these days are designed to maximize profit." [...] Internet firms swear that removing or limiting Section 230 protections would destroy the medium. Would it? Chief Justice John Roberts asked Google's attorney, Lisa Blatt: "Would Google collapse and the internet be destroyed if Google was prevented from posting what it knows is defamatory?" She said, "Not Google," but other, smaller websites, yes. She said that if the plaintiffs were victorious, the internet would become a zone of extremes -- either The Truman Show, where things are moderated into nothing, or "a horror show," where nothing is.

  • by AmazingRuss ( 555076 ) on Wednesday February 22, 2023 @11:43AM (#63314725)
    ... before Google started pushing crap. It'll be fine after. The kooks can go stand on the street corner and bellow like they did before.
    • by Sloppy ( 14984 ) on Wednesday February 22, 2023 @11:57AM (#63314791) Homepage Journal

      The Internet was fine after Section 230 was enacted. So, yeah, the internet was fine, later, when Google came along. Why was it fine? Because Section 230 allowed Google (and every other website or service which users interact with) to do their things without being held responsible for what other people decide to do.

      Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.

      • Re: (Score:3, Insightful)

        by nagora ( 177841 )

        The Internet was fine after Section 230 was enacted. So, yeah, the internet was fine, later, when Google came along. Why was it fine? Because Section 230 allowed Google (and every other website or service which users interact with) to do their things without being held responsible for what other people decide to do.

        Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.

        That's a chance I'll take.

        • Re: (Score:3, Insightful)

          by skam240 ( 789197 )

          Chance? Try certainty.

          If websites can be held liable for what their users post, that is the end of conversing on the internet in open environments. Even if a website isn't directly shut down by lawsuits, they'll choose to eliminate public discourse from their site, as it will be too much of a liability for them.

      • by Impy the Impiuos Imp ( 442658 ) on Wednesday February 22, 2023 @12:04PM (#63314843) Journal

        This also allowed US firms to dominate the Internet, while similar ones in other countries all had to labor under publisher style liability.

        You don't have to filter what your users do. But if you do, and goof, section 230 stops people from suing over that.

        • by Geoffrey.landis ( 926948 ) on Wednesday February 22, 2023 @12:31PM (#63315011) Homepage

          ...You don't have to filter what your users do. But if you do, and goof, section 230 stops people from suing over that.

          This particular case doesn't seem to be about filtering. From the summary, it is about recommending.

          Would the internet be destroyed if algorithms stopped recommending content, and instead content got ignored or boosted based on other factors?

          I don't know.

          • The simplest algorithm a website can use to make itself more palatable to its users is chronological sorting, i.e. recommending the latest content at the top (a minimal sketch follows this comment).

            Would the internet be usable without any kind of algorithm that makes it easier for users to find content? No is my answer; it would be a fricking disaster.

            Although, if SCOTUS comes down on algorithms and recommendations, I'd expect it'll gut the US tech sector, and companies will start looking for more agreeable jurisdictions to run their services in.
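
            For what it's worth, here is a minimal sketch of the chronological ordering described in this comment. The Post type and sample data are hypothetical and purely illustrative: no profile of the viewer is consulted, the newest content simply goes to the top.

```python
# Hypothetical sketch: "recommend" by recency only -- no personalization,
# no engagement signals, just newest-first.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    title: str
    posted_at: datetime

def latest_first(posts, limit=10):
    """Return up to `limit` posts, newest at the top."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)[:limit]

now = datetime(2023, 2, 22, 12, 0)
feed = [
    Post("Three-day-old video", now - timedelta(days=3)),
    Post("Two-hour-old video", now - timedelta(hours=2)),
    Post("Five-minute-old video", now - timedelta(minutes=5)),
]
for p in latest_first(feed):
    print(p.posted_at, p.title)
```

            Whether even that counts as a "recommendation" in the legal sense is exactly the kind of line-drawing the case turns on.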

          • allow me. No.

            I'm fairly sure we had websites and "the internet" before recommendation algorithms.

            The recommendation algorithms are designed to trap you in your very own customized "filter bubble". While they may not be intelligent, the algorithms are still quite effective at fooling you with your own image.
          • Re: (Score:3, Informative)

            by Frobnicator ( 565869 )

            This particular case doesn't seem to be about filtering. From the summary, it is about recommending.

            That's how I see it too. There is an extremely fine distinction being made.

            Google's statement is taking a "sky is falling, they're going to revoke the entire Section 230" approach. That's over-broad hyperbole that makes for great headlines but isn't fair or accurate.

            The Gonzalez family's lawyers have also taken a broad approach, that all related content is the company's responsibility, which also makes for great headlines but isn't fair or accurate.

            The actual issue before the court has several nuances...

      • by Comboman ( 895500 ) on Wednesday February 22, 2023 @12:20PM (#63314933)

        Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.

        This case isn't about repealing Section 230 (though there are certainly some who would like to do that). This case is about establishing whether recommendation algorithms are protected by 230. Since those algorithms are not created by third parties and are entirely within Google's control, I would tend to say they aren't protected, but IANAL. I do not think it would be the end of the internet if Google/Facebook/etc could be held liable for promoting (not hosting) violent/hateful content. If they put as much effort into removing hateful content as they do removing copyright violations, that would be an excellent start.

        • While you're right, I suspect it is even more complicated than whether or not the algorithm is protected under 230.

          A bit of historical background: the family of someone who was murdered in a terrorist attack is suing Google for not only hosting ISIS propaganda videos on YouTube, but recommending them via the algorithm. The claim is that the recommendation contributed to the radicalization of the murderer.

          Google's lawyers have basically argued this, paraphrasing:

          "Imagine you go into a bookstore, and you ask

          • by nzkbuk ( 773506 )
            Doesn't the whole WWII thing fall down as soon as you add that ISIS was on a number of terrorist lists, and that most of Google, Facebook et al's algorithms drive users into an echo chamber toward extreme views?

            It's a lot closer to a store which sells medical cannabis and also has meth. When someone comes in asking about the cannabis, the clerk says "Hey, we also have some meth, why don't you try some of that." Perhaps it was only the clerk that was dealing in the meth, but elsewhere isn't the store's owner / managers also held liable...
        • Just curious - what is the legal definition of "hateful content"? If everyone gets a veto on any website that they visit, then effectively we'll have a web that contains nothing of any interest to anyone, because, outside the totalitarian regimes, you can't get as much as 75% approval on ANYTHING...
        • In litigation just about any content could be deemed harmful. Maybe someone watches a car repair video, screws up his car and kills someone because of a bad repair. There are a ton of videos of people doing things that are dangerous... sue them for influencing kids?
      • by kick6 ( 1081615 )

        The Internet was fine after Section 230 was enacted. So, yeah, the internet was fine, later, when Google came along. Why was it fine? Because Section 230 allowed Google (and every other website or service which users interact with) to do their things without being held responsible for what other people decide to do.

        Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.

        ...and then Google realized they could use their position to put a thumb on the scale...

      • Re: (Score:2, Insightful)

        by NoMoreACs ( 6161580 )

        The Internet was fine after Section 230 was enacted. So, yeah, the internet was fine, later, when Google came along. Why was it fine? Because Section 230 allowed Google (and every other website or service which users interact with) to do their things without being held responsible for what other people decide to do.

        Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.

        Considering how "triggered" some people seem to get over the slightest slight, the internet (or rather, the world wide web) will quickly collapse under the weight of a million lawsuits, temporary injunctions, appeals and "takedown orders". Soon, no one will dare publish anything; lest someone take offense.

        The bottom line is: Either we have a First Amendment, or we don't. There are already many remedies for "aggrieved parties" to argue when something crosses the "Shouting 'Fire!' in a crowded theatre" threshold; we definitely don't need every Karen and Kevin being the arbiter of what gets to stay on the internet.

        • by Geoffrey.landis ( 926948 ) on Wednesday February 22, 2023 @12:44PM (#63315071) Homepage

          The bottom line is: Either we have a First Amendment, or we don't.

          False dichotomy. What free speech allows and doesn't allow is subject to interpretation. It already has limits; the question here is what the limits are, and what they should be.

          There are already many remedies for "aggrieved parties" to argue when something crosses the "Shouting 'Fire!' in a crowded theatre" threshold;

          Exactly. And that's what this case is about: where is that line.

          we definitely don't need every Karen and Kevin being the arbiter of what gets to stay on the internet.

          The current interpretation of the law has abridged Karen and Kevin's rights to sue, and limited who Karen and Kevin are allowed to sue. This helps free internet conduits from the consequences of free speech. But it's already well established that the right to free speech does not mean the right to be free of the consequences of free speech. The question is, if free speech damages people, who can be held responsible?

          I don't know the answer to that question. But it is not cut and dried, as you suggest: "no consequences, that's the first amendment!"

      • My favorite website is Slashdot. Yes, I am 97 years old.
        • by dgatwood ( 11270 )

          Slashdot is from 1997. The CDA was passed in 1996. So arguably Slashdot exists because of Section 230.

      • Not even close. Most of my favourite websites wouldn't be affected because most of my favourite websites are offering curated and/or original content, not spewing forth third-party content and comments.
      • It would be perfectly fine if YouTube would show me exactly what I search for instead of recommending whatever bullshit its algorithm wants to.

    • by saloomy ( 2817221 ) on Wednesday February 22, 2023 @12:02PM (#63314829)
      And it will be fine after. The platforms should have immunity from liability for hosting user-generated content, though. Even recommending videos. Watching a video is not going to make people act out that video. Halloween didn't make a bunch of dudes slash like Michael Myers when they left the theatre, after watching an advertisement to go see the movie Halloween.

      Post hoc, ergo propter hoc. After, therefore, because of. Except it isn't true; the phrase names a fallacy. Just because someone watched a video and then murdered someone does not mean that video caused the killing. Even if it did, the liability would maybe fall on the creator, not the service that does not understand the content of the videos, but just recommends to users what one user also watched when another user watched similar videos.

      The only reason they are going after Google is that that's where the money is, and that has to stop.
      • by rogoshen1 ( 2922505 ) on Wednesday February 22, 2023 @12:23PM (#63314959)

        I hate Google as much as any rational person should, but this is the correct response. Freedom of speech and freedom of expression are worth preserving even if they cause a serious case of feelbads, or worse, if an impressionable person sees or hears wrongthink and decides to act on it. Where do you draw the line on that kind of thing: video games? Music? Movies? All of these can contain wrongthink/feelbads; does mom need to step in and curate everything for us? Are we incapable of rationality and self-determination?

        It's insulting to the sense of agency and intelligence of the population at large that mommy gov't must step in and protect us from bad things that we might see, hear, or think. It's a damned slippery slope to actual thoughtcrime (I think the UK is about 15 years ahead of the rest of us, and is seemingly using Orwell as a blueprint rather than a cautionary tale).

        It's just more regulatory nonsense from politicians and pundits who have no idea when it comes to the technical challenges or the ramifications of this line of thinking.

        • I wonder about this. Usually I'd be inclined to side with Google in this case. However, I wonder if we take away "the Internet" part of the discussion if their position makes sense.

          We don't let theatres show terrorist propaganda to anyone, let alone children. Heck, we don't let them show adult movies to children. Theatres play a role similar to the one YouTube plays - showing third-party content to a visiting audience.

          The only difference seems to be scale. YouTube deals with much, much more material than a theatre...
          • > We don't let theatres show terrorist propaganda

            We, the general public, do not allow theaters to show terrorist propaganda because in general we find it distasteful and vote with our wallets. But as far as the legalities of it go, so long as the terrorists are not raping children on-camera, an actual government ban of propaganda of any kind would be highly unlikely to pass first amendment muster.

            And "on the internet" or not, content about... or even promoting... illegal activities has traditionally been...

          • by dgatwood ( 11270 )

            We don't let theatres show terrorist propaganda to anyone, let alone children. Heck, we don't let them show adult movies to children. Theatres play a role similar to the one YouTube plays - showing third-party content to a visiting audience.

            That's not really a good analogy. Websites that take user-generated content are more like a swap meet than a movie theater:

            • They don't choose specific pieces of top-tier content to make available; they allow content creators to make available anything that doesn't violate some set of rules, to the best of their ability to enforce those rules.
            • They don't limit the number of offerings to a few items at any given time, and show those few items to all of their customers, but rather allow their customers to choose...
        • Freedom of speech and freedom of expression are worth preserving even if they cause a serious case of feelbads, or worse, if an impressionable person sees or hears wrongthink and decides to act on it.

          Here is where the problem begins. Now the government is forcing Google (or anyone else) to publish speech that Google does not agree with. That effectively violates Google's right to free speech.

      • Post hoc, ergo propter hoc. After, therefore, because of. Except it isn't true; the phrase names a fallacy. Just because someone watched a video and then murdered someone does not mean that video caused the killing. Even if it did, the liability would maybe fall on the creator, not the service that does not understand the content of the videos, but just recommends to users what one user also watched when another user watched similar videos.

        The only reason they are going after Google is that that's where the money is, and that has to stop.

        Exactly.

        Apple and Google. America's Designated Defendants.

      • by bws111 ( 1216812 )

        Watching a video is not going to make people act out that video.

        Viral 'challenges' say otherwise.

    • by rahmrh ( 939610 )

      I would think the "recommended" algorithm pushing something dangerous/questionable would put the company owning/using said algorithm at risk.

      They should be safe under section 230 for the content itself, but the act of pushing/recommending questionable content goes well beyond the editing that section 230 seems to have originally expected.

      If anything brings down companies like Google, it will be their AI's inability to be anything but a correlation engine dragging people down a misinformation/questionable content...

      • If they have the expense of hosting it, they should have the freedom to promote it to keep eyeballs around for more ads (as compensation for hosting it). Suggesting material based on what like-minded viewers have similarly liked is not grounds for liability. Sorry.
        • You say it is okay for them to push known lies because this makes them money? On the contrary, these are aggravating circumstances.

        • by rahmrh ( 939610 )

          Rarely are they hosting the actual content. They are making money on their recommendations (and if one views the recommendations as Google creating the troublesome content, i.e. the recommendation itself that is pushing the dangerous content, then the recommendation by itself is a problem). And just because they are making money does not mean anything goes. Providers certainly cannot figure out you like child porn and recommend more child porn.

          Suggesting content like the blackout challenge that has killed...

    • I think Section 230 should stay, but do not think it should cover their algorithms.

      You are a neutral purveyor of content? Then there is no algorithm feeding... the person has to search.

      You want to have an algorithm suggesting content? Then you aren't being a neutral purveyor; you're pushing content.

      Imagine how much that would break things we see as normal today... Can you imagine if "Q" content hadn't been pushed by the algorithm on Facebook / YouTube and others? That's just one example out of many.
    • Section 230 paved the way for the commercial internet. Before Section 230, the web was small and libertarian enough that nobody thought of suing anyone. You could insult anybody and not get sued for slander. Most of it was done in good fun. You could probably try to scam anyone too, but most people weren't susceptible to scams. The first "scam" attempt I remember, called MAKE.MONEY.FAST, was pretty hilarious and I don't think anyone fell for it. Oh yeah, and then there was the infamous "green card" spam by some...

  • Usenet (Score:5, Insightful)

    by CubicleZombie ( 2590497 ) on Wednesday February 22, 2023 @11:55AM (#63314783)

    Way back when I worked for a dialup ISP, employees were forbidden to use the company's Usenet servers even if we had a personal account. They knew the service was chock full of CP and god knows what else, but the act of an employee putting eyes on it would mean they'd have to do something about it.

    I'm confident that however this case turns out, there will be yet another warning popup to click on for every site on the internet.

  • by Ritz_Just_Ritz ( 883997 ) on Wednesday February 22, 2023 @11:56AM (#63314787)

    I'm all for free speech and what-not, but Google can't have their cake and eat it too. Their "heaven show" is when they make money from advertisers by shepherding lemmings to user generated content. So they are representing to advertisers that they have the ability to connect the dots between consumers of video content and the advertising dollars being extracted from their clients. They also clearly spend quite a bit of energy moderating content based on whatever criteria people are screaming about this day of the week. So they claim to be able to match makers and takers, but being responsible for any of that is somehow a "horror show" for them. I suspect it will be a lot more expensive to operate and less free for content creators, but not at all impossible.

    The only "horror show" they really see is lower margins.

    Best,

    • by Tablizer ( 95088 )

      I agree it would be a mess if the content hoster got sued for all the bad stuff customers/users post, but I do believe they should be held responsible if somebody complains about dangerous content and the complaint is ignored.

      They should also do a reasonable amount of monitoring, such that popular content is inspected by their bouncers. For example, if a given content item has more than 10k visitors, it would be required to be reviewed and logged by the hoster's inspectors.

      The courts are not supposed to create...

    • I'm all for free speech and what-not, but Google can't have their cake and eat it too.

      It's not just Google, it's much of the Internet, /. included.

      Their "heaven show" is when they make money from advertisers by shepherding lemmings to user generated content. So they are representing to advertisers that they have the ability to connect the dots between consumers of video content and the advertising dollars being extracted from their clients. They also clearly spend quite a bit of energy moderating content based on whatever criteria people are screaming about this day of the week. So they claim to be able to match makers and takers, but being responsible for any of that is somehow a "horror show" for them. I suspect it will be a lot more expensive to operate and less free for content creators, but not at all impossible.

      The only "horror show" they really see is lower margins.

      Content moderation is a bit like making a self-driving car using only video cameras. It's easy to do great 90% of the time. But the long tail will eat you alive.

      If websites become legally responsible for user generated content then there are no more websites with user generated content.

    • This isn't just about Google, it's about ALL content that ANYONE might post to ANY website. If someone cuts their leg off after watching a dumb guy cut down a tree, whose fault is that? After all, the algorithm recommended the video to you because you were trying to learn how to properly cut down a tree. If Google loses, there will be no more recommendations.

      • I'm fine with an end to recommendations as long as I can easily + and - search words or phrases, kinda like the "good old days" of AltaVista. Sites shouldn't be liable for user-generated content within reason, but recommendations, regardless of whether they are made by a human or an algorithm, beg for responsibility. The really tough question is what to do about moderation. Civilization requires regulation, and figuring out the balance between that and freedom is usually a complicated, interactive journey of debate...

  • by taustin ( 171655 ) on Wednesday February 22, 2023 @11:57AM (#63314793) Homepage Journal

    The heart of this case isn't that YouTube hosted ISIS recruiting videos, but rather that it recommended them. While any outcome is possible, the most likely one - if Google loses - is that they will be held responsible for what they recommend.

    Which would be a horror show of civilization altering proportions.

    Imagine, for a moment, an internet in which algorithms that recommend to you what you just bought, or recommend videos on oily, he-man professional wrestlers when you've never watched a single sports video of any kind, or recommend Minecraft videos over and over and over and over and over and over and over, despite you flagging every single Minecraft video - and every other computer game video - as "Not Interested," imagine all those algorithms, and their pointless recommendations, disappearing!!! (And imagine Google, and Amazon, and Facebook, not being able to charge advertisers for preferential placement in the recommendations!)

    A HORROR SHOW INDEED (But then, horror is the second most popular genre, after romance. So maybe we're really onto something.)

    • So could slashdot get sued for recommending a score 5 post?

  • And I don't just mean the consequences for Google's revenues.

    The idea of establishing legal principles based on the consequence/outcome instead of the principle can be dangerous. This is the same logic that justifies, say, rescuing an insolvent business with tax dollars because it's "too big to fail", i.e. the harm to the public would be unpleasant. We didn't prosecute the bond-rating companies that abrogated their responsibility (not just to their customers, but enshrined in LAW) to correctly rate junk bonds...

    • The principle is already established.

      Recommended videos are simply advertisements, and any platform is responsible for the advertisements it displays. The fact that the ads are for your own content is irrelevant.

      A recommended link is just like a TV station saying "stay tuned after the news for Jeopardy!".

      Platforms are already liable for the content of the advertisements they show. A recommended link that doesn't come from an active search is simply an advertisement.

      But my algorithm is the same thing! There is no...
  • Propaganda (Score:5, Insightful)

    by jenningsthecat ( 1525947 ) on Wednesday February 22, 2023 @12:04PM (#63314847)

    I can see some of the holes in my own argument here, but it occurs to me that algorithms are generating propaganda.

    At the slightest hint of interest in a topic (and, in my experience, sometimes no hint at all), subsequent YouTube suggestions and recommendations are heavily weighted in favour of that topic. Although the algorithm has no specific goal in 'mind' other than keeping viewers on the site as long as possible, the echo-chamber nature of the whole transaction results in something that looks a lot like propaganda.

    I think Google has accepted a kind of selective responsibility here. Any time I view certain videos that have anything to do with Covid-19 vaccination, I get an annoying little blue box beneath that says "COVID-19 vaccine. Get the latest information from Health Canada". If Google is trying to 'protect' me from 'misinformation' here by pushing a counterpoint, haven't they implicitly taken on the responsibility of doing so in ALL potentially controversial cases?

    I'm not choosing a horse here or pitching an approach. I would say I'm playing Devil's Advocate, except for the fact that there seem to be multiple devils here.

  • The issue for them is money, not free speech. They use free speech as an excuse to avoid responsibility for their failure to moderate content that violates YouTube policies, calling the consequences of any successful action against them a "horror show". The real horror show is the terrorist attacks and other unfortunate events that result from the spread of this propaganda.

  • These idiots (Score:2, Flamebait)

    by MitchDev ( 2526834 )

    They won't be happy until the internet is reduced to the level of 3-year-olds with learning disabilities.....

  • by King_TJ ( 85913 ) on Wednesday February 22, 2023 @12:15PM (#63314903) Journal

    It's an interesting case, IMO. On one hand, I would have traditionally sided with Google on this, on principle. If you're simply performing the service or function of making content available and redistributing it? That doesn't mean YOU are liable for the contents of it, any more than your phone carrier is liable if you have a conversation with someone that starts discussing illegal activities.

    But what's happened is that, with all this monetizing of content based on "views", it created an incentive to keep pushing any content that seems similar enough to something else a person recently watched. There's willful intent to selectively present what people try to share on the platform, in other words. And when you screw that up by pushing pro-terrorist content or anything along those lines? YOU as the content distributor suddenly bear some responsibility.

    • But what's happened is that, with all this monetizing of content based on "views", it created an incentive to keep pushing any content that seems similar enough to something else a person recently watched. There's willful intent to selectively present what people try to share on the platform, in other words. And when you screw that up by pushing pro-terrorist content or anything along those lines? YOU as the content distributor suddenly bear some responsibility.

      So I think that's where the actual argument (and perhaps solution) lies.

      If the website is deliberately promoting or driving people towards content it suspects to be illegal, or perhaps if they're simply being negligent in not stopping it, then they might start bearing some liability.

      Note, this would have made pretty short work of most of the torrent & file sharing websites of the past few decades.

    • by Omega Hacker ( 6676 ) <omega AT omegacs DOT net> on Wednesday February 22, 2023 @12:44PM (#63315073)
      I agree Google should lose this one, as the lawsuit makes a particular distinction that I don't see other people in the comments actually grasping:

      - If *I* post a video with illegal content to YouTube, *I* should be held liable for it, but YouTube itself should be held immune *unless* they fail to remove such content when notified.
      - OTOH, if YouTube is "recommending" a video, Google/YouTube itself is the originator of that "content" (the link to the illegal video) and therefore *they* must be held liable for it.

      That changes the dynamic from "must take it down when informed" to "never should have created it in the first place", at which point Google should be held liable for the "recommendation content" that directs users to illegal videos. As long as changes to 230 make that distinction clear - namely that Google's recommendations are in fact Google-originated content and that they own liability for it - I see no "end of the internet" at all.
  • I'm also happy with anything that de-commercializes the internet. It's becoming unusable because it's so hyper-commercialized. I can't search for the things I want 20-30% of the time, Amazon search sucks, and ads are becoming absolutely ridiculous. Tracking is horrible; you can't even get up in the morning and take a piss without some web-tracker picking up the data and serving up ads on it.

    Anything that Google doesn't like means they are earning less money; Google needs to become more like a utility, not a suc...
  • by gestalt_n_pepper ( 991155 ) on Wednesday February 22, 2023 @12:24PM (#63314963)

    1) Recommendations will have to be turned off. It's the cheapest and easiest thing to do. You'll still be able to literally search for anything, but there won't be any bias in what, say, Google sends you. It'll just send everything, nothing jimmied up to serve a certain audience. No filtering.

    2) It will have the exact opposite effect of what the people who brought the original case hoped for.

  • I wonder if all this will do is force hosting companies out of the US and host in other countries. Canada, Mexico, Cayman .... Good time for a betting man.
  • ... is there even a causal link established between "YouTube recommends ISIS video" and "wannabe terrorist sees video and commits terrorist act"?

    That's very far-fetched. If those parents can sue Google for recommending ISIS videos, then they can also sue the entire US administration for creating ISIS in the first place; they have directly created orders of magnitude more terrorists than Google, Twitter and Facebook combined could ever hope to "influence".

    OTOH, I do think these algorithms should be fully disclosed...

  • by Surak_Prime ( 160061 ) on Wednesday February 22, 2023 @12:55PM (#63315127)

    For real, because I have to be missing something. Everyone here seems to be acting like this is an issue that Section 230 hinges on, when all I can see is that this is about whether or not using personalized algorithms to push specific stuff to specific users violates Section 230. Well, I'm in favor of the intent of Section 230, and I hope Google gets the book thrown at them! As I see it, it would be a GOOD thing if a regular search engine gave me unfiltered results rather than trying to base the results I get on recent searches, what it calculates I might be looking for based on other things, or what it wants to advertise at me. It would be a good thing if ads had to go back to trying to do blanket coverage and hoping, rather than targeting individuals based on creepy data mining. Let ME look for my stuff.

  • If all we are ruling on is algorithmic recommendations, then I think the hysteria is overblown. If we stray into user-generated content, then we turn the internet into TV.

  • They should first of all make lying about people in defamatory ways legal and immune from lawsuits. Soon nobody would care what anybody who hadn't earned credibility said about anyone else.

    Then they could structure things so users could create their own moderation schemes that would access the full dataset and apply spam filters etc.

    These schemes would have to allow users to share the effort spent creating them and subscribe to them or not. But the raw data should be available to users (a rough sketch follows this comment).

    For instance, I have...
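
    As a purely illustrative sketch of that idea (the Comment and Scheme types, and the example filters, are all hypothetical): the site exposes the raw, unfiltered feed, and each user applies whichever shared filter schemes they choose to subscribe to.

```python
# Hypothetical sketch: user-side moderation "schemes" as shareable filters
# applied over the full, unfiltered dataset.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Comment:
    author: str
    body: str
    score: int = 0

@dataclass
class Scheme:
    # A scheme is just a named predicate: keep the comment if it returns True.
    name: str
    keep: Callable[[Comment], bool]

# Two example schemes a user might subscribe to (or publish for others to reuse).
no_spam = Scheme("no-spam", lambda c: "MAKE.MONEY.FAST" not in c.body)
min_score = Scheme("score>=1", lambda c: c.score >= 1)

def apply_schemes(raw_feed: List[Comment], subscribed: List[Scheme]) -> List[Comment]:
    """Filter the raw feed through every scheme the user has subscribed to."""
    return [c for c in raw_feed if all(s.keep(c) for s in subscribed)]

raw_feed = [
    Comment("alice", "Section 230 is doing a lot of work here.", score=3),
    Comment("spammer", "MAKE.MONEY.FAST", score=-1),
]
print([c.author for c in apply_schemes(raw_feed, [no_spam, min_score])])  # ['alice']
```

    Nothing in a setup like this requires the site itself to rank or promote anything; it just hands users the raw data and the filtering tools.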

  • by Eunomion ( 8640039 ) on Wednesday February 22, 2023 @01:05PM (#63315173)
    The issue is they recommended it. They specifically deployed software that encouraged some content over others, and the idea that they're not responsible for that is ludicrous.
    • Open it up and let everyone, including users, create such software on their platform for their own use. Make their software optional and make alternatives buildable.

    • Exactly. By creating a recommendation, content was created. That content is the responsibility of the creator, Google/YouTube in this case. I believe the Internet would be a better place if content recommendation algorithms disappeared entirely. They get in the way and, well, mostly suck at what they do.
