Google Warns Internet Will Be 'A Horror Show' If It Loses Landmark Supreme Court Case
The U.S. Supreme Court, hearing a case that could reshape the internet, considered on Tuesday whether Google bears liability for user-generated content when its algorithms recommend videos to users. From a news writeup: In the case, Gonzalez v. Google, the family of a terrorist attack victim contends that YouTube violated the federal Anti-Terrorism Act because its algorithm recommended ISIS videos to users, helping to spread their message. Nohemi Gonzalez was an American student killed in a 2015 ISIS attack in Paris, and her family's lawsuit challenges the broad legal immunity that tech platforms enjoy for third-party content posted on their sites. Section 230 of the Communications Decency Act, passed in 1996, protects platforms from legal action over user-generated content, and it also protects them if they choose to remove content. Section 230 has withstood court challenges for nearly three decades even as the internet exploded.
The attorney for Gonzalez's family claimed that YouTube's recommendations fall outside the scope of Section 230, as it is the algorithms, not the third party, that actively pick and choose where and how to present content. In this case, the attorney said, it enhanced the ISIS message. "Third parties that post on YouTube don't direct their videos to specific users," said the Gonzalez family's attorney Eric Schnapper. Instead, he said, those are choices made by the platform. Justice Neil Gorsuch said he was "not sure any algorithm is neutral. Most these days are designed to maximize profit." [...] Internet firms swear that removing or limiting 230 protections would destroy the medium. Would it? Chief Justice John Roberts asked Google's attorney Lisa Blatt: "Would Google collapse and the internet be destroyed if Google was prevented from posting what it knows is defamatory?" She said, "Not Google," but other, smaller websites, yes. She said if the plaintiffs were victorious, the internet would become a zone of extremes -- either The Truman Show, where things are moderated into nothing, or "a horror show," where nothing is.
The internet was fine... (Score:4, Insightful)
Re:The internet was fine... (Score:5, Insightful)
The Internet was fine after Section 230 was enacted. So, yeah, the internet was fine, later, when Google came along. Why was it fine? Because Section 230 allowed Google (and every other website or service which users interact with) to do their things without being held responsible for what other people decide to do.
Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.
Re: (Score:3, Insightful)
The Internet was fine after Section 230 was enacted. So, yeah, the internet was fine, later, when Google came along. Why was it fine? Because Section 230 allowed Google (and every other website or service which users interact with) to do their things without being held responsible for what other people decide to do.
Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.
That's a chance I'll take.
Re: (Score:3, Insightful)
Chance? Try certainty.
If websites can be held liable for what their users post, that is the end of conversing on the internet in open environments. Even if a website isn't directly shut down due to lawsuits, they'll choose to eliminate public discourse from their site as it will be too much of a liability for them.
Comment removed (Score:5, Interesting)
Re:The internet was fine... (Score:5, Interesting)
This also allowed US firms to dominate the Internet, while similar ones in other countries all had to labor under publisher style liability.
You don't have to filter what your users do. But if you do, and goof, section 230 stops people from suing over that.
Re:The internet was fine... (Score:5, Interesting)
...You don't have to filter what your users do. But if you do, and goof, section 230 stops people from suing over that.
This particular case doesn't seem to be about filtering. From the summary, it is about recommending.
Would the internet be destroyed if algorithms stopped recommending content, and instead content got ignored or boosted based on other factors?
I don't know.
Re: (Score:2)
The simplest algorithm a website can use to make it more palatable to its users is chronological sorting, i.e. recommending the latest content at the top.
Would the internet be usable without any kind of algorithm that makes it easier for users to find content? No is my answer; it would be a fricking disaster.
Although, if the SCOTUS comes down on algorithms and recommendations, I'd expect it'll gut the US tech sector, and companies will start looking for more agreeable jurisdictions to run their services in.
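To make the "simplest algorithm" mentioned above concrete, here is a minimal sketch of a purely chronological feed; the Post type and sample data are invented for illustration, not taken from any real site:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    title: str
    posted_at: datetime

def latest_first(posts: list[Post]) -> list[Post]:
    # "Recommends" by recency alone: the same ordering for every user,
    # with no profiling and no knowledge of what the posts contain.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

feed = latest_first([
    Post("Old news", datetime(2023, 2, 1)),
    Post("Breaking story", datetime(2023, 2, 21)),
])
print([p.title for p in feed])  # ['Breaking story', 'Old news']
```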
Re: (Score:3)
I'm fairly sure we had websites and an "internet" before recommendation algorithms.
The recommendation algorithms are designed to trap you in your very own customized "filter bubble". While they may not be intelligent, the algorithms are still quite effective at fooling you with your own image.
Comment removed (Score:4, Insightful)
Re: (Score:3, Informative)
This particular case doesn't seem to be about filtering. From the summary, it is about recommending.
That's how I see it too. There is an extremely fine distinction being made.
Google's statement is taking a "sky is falling, they're going to revoke the entire Section 230" approach. That's over-broad hyperbole that makes for great headlines but isn't fair or accurate.
The Gonzalez family's lawyers have also taken a broad approach, that all related content is the company's responsibility, which can also make for great headlines but also isn't fair or accurate.
The actual issue before the court has several nuances...
Re: The internet was fine... (Score:3, Insightful)
No. The internet **is not** dead if section 230 falls. Just like the U.S. Flag *was not dead* when people started to burn the flag. The flag burners got their jollies and went home.
The problem with section 230 is that it was twisted by big tech to suit their own agenda rather than used to create a usable public space.
If section 230 goes away then literally nothing happens, except that Big Tech no longer has the excuse to act like a nanny on behalf of the FBI.
Re: The internet was fine... (Score:4, Insightful)
Re:The internet was fine... (Score:5, Informative)
This case isn't about repealing section 230 (though there are certainly some who would like to do that). This case is about establishing whether recommendation algorithms are protected by 230. Since those algorithms are not created by third parties and are entirely within Google's control, I would tend to say they aren't protected, but IANAL. I do not think it would be the end of the internet if Google/Facebook/etc could be held liable for promoting (not hosting) violent/hateful content. If they put as much effort into removing hateful content as they do removing copyright violations, that would be an excellent start.
Re: (Score:2)
While you're right, I suspect it is even more complicated than whether or not the algorithm is protected under 230.
A bit of historical background: the family of someone who was murdered in a terrorist attack is suing Google for not only hosting ISIS propaganda videos on YouTube, but recommending them via the algorithm. The claim is that the recommendation contributed to the radicalization of the murderer.
Google's lawyers have basically argued this, paraphrasing:
"Imagine you go into a bookstore, and you ask
Re: (Score:2)
It's a lot closer to a store which sells medical cannabis and also has meth. When someone comes in asking about the cannabis, the clerk says "Hey, we also have some meth, why don't you try some of that?" Perhaps it was only the clerk that was dealing in the meth, but even so, isn't the store's owner / managers also h
Re: (Score:2)
Re: (Score:2)
Re:The internet was fine... (Score:5, Interesting)
The recommendation engine on Slashdot is primarily influenced by end users and community moderation, not by Slashdot itself.
I can see user-based moderation and recommendations based on user input being protected, particularly if the algorithm parameters are made public.
But something primarily in the company's hands, with no insight into weights, and primarily designed to drive profit -- I can see that being out of scope of 230.
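A rough sketch of the user-driven ranking being described, assuming Slashdot-like mechanics; the clamp range is the familiar one, but the names and data are illustrative, not Slashdot's actual code:

```python
def comment_score(base: int, moderations: list[int]) -> int:
    # Moderations are +1/-1 votes cast by readers holding mod points;
    # the site just sums them and clamps to the familiar -1..5 range.
    return max(-1, min(5, base + sum(moderations)))

def visible(comments: dict, threshold: int) -> dict:
    # The threshold is chosen by each reader, not by the site.
    return {title: comment_score(base, mods)
            for title, (base, mods) in comments.items()
            if comment_score(base, mods) >= threshold}

comments = {"Insightful take": (1, [1, 1, 1]), "Spam": (0, [-1, -1])}
print(visible(comments, threshold=1))  # {'Insightful take': 4}
```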
Re: (Score:2)
The Internet was fine after Section 230 was enacted. So, yeah, the internet was fine, later, when Google came along. Why was it fine? Because Section 230 allowed Google (and every other website or service which users interact with) to do their things without being held responsible for what other people decide to do.
Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.
...and then google realized they could use their position to put a thumb on the scale...
Re: (Score:2)
But how do you define promotion of content?
We all know the insane Nazi messages that sometimes appear on Slashdot. Sometimes Slashdot hides them (which is the same as promoting all the other messages, since those will now be shown before/instead of the Nazi messages).
Other times, based on message score, Slashdot chooses to show and highlight the Nazi messages, aka promoting them. So if this lawsuit wins, Slashdot and all other websites that show user content in anything other than submission order would be toast.
Not to
Re: (Score:2)
If the score is primarily user driven, it should be covered? Slashdot doesn't seem to have any say in the moderation of comments -- it's pretty passive, based entirely on user input, and its workings are obvious.
As opposed to Google, which uses a secret, unknown algorithm to prioritize profit and screen time, using metrics that are not necessarily user created, but derived from user analysis and profitability.
Re: (Score:2, Insightful)
The Internet was fine after Section 230 was enacted. So, yeah, the internet was fine, later, when Google came along. Why was it fine? Because Section 230 allowed Google (and every other website or service which users interact with) to do their things without being held responsible for what other people decide to do.
Name your favorite website. It probably exists because of Section 230, and will probably go away if you repeal it.
Considering how "triggered" some people seem to get over the slightest slight, the internet (or rather, the world wide web) will quickly collapse under the weight of a million lawsuits, temporary injunctions, appeals and "takedown orders". Soon, no one will dare publish anything; lest someone take offense.
The bottom line is: Either we have a First Amendment, or we don't. There are already many remedies for "aggrieved parties" to argue when something crosses the "Shouting 'Fire!' in a crowded theatre" threshold; we definitely don't need every Karen and Kevin being the arbiter of what gets to stay on the internet.
False dichotomy [Re:The internet was fine...] (Score:5, Insightful)
The bottom line is: Either we have a First Amendment, or we don't.
False dichotomy. What free speech allows and doesn't allow is subject to interpretation. It already has limits; the question here is what the limits are, and what they should be.
There are already many remedies for "aggrieved parties" to argue when something crosses the "Shouting 'Fire!' in a crowded theatre" threshold;
Exactly. And that's what this case is about: where is that line.
we definitely don't need every Karen and Kevin being the arbiter of what gets to stay on the internet.
The current interpretation of the law has abridged Karen and Kevin's rights to sue, and limited who Karen and Kevin are allowed to sue. This helps free internet conduits from the consequences of free speech. But it's already well established that the right to free speech does not mean the right to be free of the consequences of free speech. The question is, if free speech damages people, who can be held responsible?
I don't know the answer to that question. But it is not cut and dried, as you suggest: "no consequences, that's the first amendment!"
Re: The internet was fine... (Score:2)
Re: (Score:3)
Slashdot is from 1997. The CDA was passed in 1996. So arguably Slashdot exists because of Section 230.
Re: (Score:2)
Re: The internet was fine... (Score:2)
It would be perfectly fine if YouTube would show me exactly what I search for instead of recommending whatever bullshit its algorithm wants to.
Re: (Score:2)
Section 230 came about for the very reasons you speak of. CompuServe (remember them?) was put in the impossible position of being sued for removing content and also sued for NOT removing content.
Re:The internet was fine... (Score:4, Insightful)
Post hoc, ergo propter hoc: after, therefore because of. Except it isn't true; the phrase names a fallacy. Just because someone watched a video and then murdered someone does not mean that video caused the killing. Even if it did, the liability would maybe fall on the creator, not on the service, which does not understand the content of the videos and just recommends to one user what other users who watched similar videos also watched.
The only reason they are going after Google is that that's where the money is, and that has to stop.
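The kind of recommendation described above, "users who watched this also watched that," is essentially co-occurrence counting. A toy sketch with invented watch histories; note the code never inspects what any video contains:

```python
from collections import Counter
from itertools import combinations

# Invented watch histories: sets of videos each (anonymous) user watched.
histories = [
    {"cats", "cooking"},
    {"cats", "cooking", "woodworking"},
    {"cooking", "woodworking"},
]

co_watched = Counter()
for h in histories:
    for a, b in combinations(sorted(h), 2):
        co_watched[(a, b)] += 1  # count pairs watched by the same user

def recommend(seen: str) -> list[str]:
    # Rank other videos purely by how often they co-occur with `seen`.
    scores = Counter()
    for (a, b), n in co_watched.items():
        if a == seen:
            scores[b] += n
        elif b == seen:
            scores[a] += n
    return [video for video, _ in scores.most_common()]

print(recommend("cats"))  # ['cooking', 'woodworking']
```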
Re:The internet was fine... (Score:5, Insightful)
hate google as much as any rational person should, but this is the correct response. Freedom of speech and freedom of expression are worth preserving even if it causes a serious case of feelbads, or worse if an impressionable person sees or hears wrongthink and decides to act on it. Where do you draw the line on that kind of thing, video games? music? movies? all of these can contain wrongthink/feelbads, does mom need to step in and curate everything for us? are we incapable of rationality and self determination?
it's insulting to the sense of agency and intelligence of the population at large that mommy gov't must step in and protect us from bad things that we might see, hear, or think. it's a damned slippery slope to actual thoughtcrime (i think the UK is about 15 years ahead of the rest of us, and are seemingly using Orwell as a blueprint, rather than a cautionary tale.)
it's just more regulatory nonsense from politicians and pundits who have no idea when it comes to the technical challenges or the ramifications of this line of thinking.
Re: (Score:3)
We don't let theatres show terrorist propaganda to anyone, let alone children. Heck, we don't let them show adult movies to children. Theatres have a similar role YouTube plays - showing third-party content to a visiting audience.
The only difference seems to be scale. YouTube deals with much much more material than a theatre
Re: (Score:3)
> We don't let theatres show terrorist propaganda
We, the general public, do not allow theaters to show terrorist propaganda because in general we find it distasteful and vote with our wallets. But as far as the legalities of it go, so long as the terrorists are not raping children on-camera, an actual government ban of propaganda of any kind would be highly unlikely to pass first amendment muster.
And "on the internet" or not, content about... or even promoting... illegal activities has traditionally bee
Re: (Score:3)
We don't let theatres show terrorist propaganda to anyone, let alone children. Heck, we don't let them show adult movies to children. Theatres have a similar role YouTube plays - showing third-party content to a visiting audience.
That's not really a good analogy. Websites that take user-generated content are more like a swap meet than a movie theater:
Re: (Score:2)
Freedom of speech and freedom of expression are worth preserving even if it causes a serious case of feelbads, or worse if an impressionable person sees or hears wrongthink and decides to act on it.
Here is where the problem begins. Now the government is forcing google (or anyone else) to publish the speech that google does not agree with. That effectively violates google's right to free speech.
Re: (Score:3)
Post hoc, ergo propter hoc: after, therefore because of. Except it isn't true; the phrase names a fallacy. Just because someone watched a video and then murdered someone does not mean that video caused the killing. Even if it did, the liability would maybe fall on the creator, not on the service, which does not understand the content of the videos and just recommends to one user what other users who watched similar videos also watched.
The only reason they are going after Google is that that's where the money is, and that has to stop.
Exactly.
Apple and Google. America's Designated Defendants.
Re: (Score:2)
Watching a video is not going to make people act out that video.
Viral 'challenges' say otherwise.
Re: (Score:2)
I would think the "recommended" algorithm pushing something dangerous/questionable would put the company owning/using said algorithm at risk.
They should be safe under section 230 for the content itself, but the act of pushing/recommending questionable content goes well beyond the editing that section 230 seems to have originally expected.
If anything brings down companies like Google, it will be their AI's inability to be anything but a correlation engine dragging people down a misinformation/questionable c
Re: (Score:2)
Re: The internet was fine... (Score:3)
You say it is okay for them to push known lies because this makes them money? On the contrary, that is an aggravating circumstance.
Re: (Score:2)
Rarely are they hosting the actual content. They are making money on their recommendations (and if one views a recommendation as Google creating the troublesome content, i.e. the recommendation itself is pushing the dangerous content, then the recommendation by itself is a problem). And just because they are making money does not mean anything goes. Providers certainly cannot figure out you like child porn and recommend more child porn.
Suggesting content like the blackout challenge that has killed
Re: (Score:2)
You are a neutral purveyor of content? Then there is no algorithm feeding... the person has to search.
You want to have an algorithm suggesting content? Then you aren't being a neutral purveyor you're pushing content.
Imagine how much that would break things we see as normal today... can you imagine if "Q" content hadn't been pushed by the algorithm on Facebook / YouTube and others? That's just one example out of many.
Re: (Score:2)
Section 230 paved the way for the commercial internet. Before section 230 the web was small and libertarian enough that nobody thought of suing anyone. You could insult anybody and not get sued for slander. Most of it was done in good fun. You could probably try to scam anyone too, but most people weren't susceptible to scams. The first "scam" attempt I remember, called MAKE.MONEY.FAST was pretty hilarious and I don't think anyone fell for it. Oh yeah and then there was the infamous "green card" spam by som
Usenet (Score:5, Insightful)
Way back when I worked for a dialup ISP, employees were forbidden to use the company's Usenet servers even if we had a personal account. They knew the service was chock full of CP and god knows what else, but the act of an employee putting eyes on it would mean they'd have to do something about it.
I'm confident that however this case turns out, there will be yet another warning popup to click on for every site on the internet.
Profit without any responsibility (Score:5, Insightful)
I'm all for free speech and what-not, but Google can't have their cake and eat it too. Their "heaven show" is when they make money from advertisers by shepherding lemmings to user generated content. So they are representing to advertisers that they have the ability to connect the dots between consumers of video content and the advertising dollars being extracted from their clients. They also clearly spend quite a bit of energy moderating content based on whatever criteria people are screaming about this day of the week. So they claim to be able to match makers and takers, but being responsible for any of that is somehow a "horror show" for them. I suspect it will be a lot more expensive to operate and less free for content creators, but not at all impossible.
The only "horror show" they really see is lower margins.
Best,
Re: (Score:2)
I agree it would be a mess if the content hoster got sued for all the bad stuff customers/users post, but I do believe they should be held responsible if somebody complains about dangerous content and the complaint is ignored.
They should also do a reasonable amount of monitoring, such that popular content is inspected by their bouncers. For example, if a given content item has more than 10k visitors, it would be required to be reviewed and logged by the hoster's inspectors.
The courts are not supposed to create
Re: (Score:2)
I'm all for free speech and what-not, but Google can't have their cake and eat it too.
It's not just Google, it's much of the Internet, /. included.
Their "heaven show" is when they make money from advertisers by shepherding lemmings to user generated content. So they are representing to advertisers that they have the ability to connect the dots between consumers of video content and the advertising dollars being extracted from their clients. They also clearly spend quite a bit of energy moderating content based on whatever criteria people are screaming about this day of the week. So they claim to be able to match makers and takers, but being responsible for any of that is somehow a "horror show" for them. I suspect it will be a lot more expensive to operate and less free for content creators, but not at all impossible.
The only "horror show" they really see is lower margins.
Content moderation is a bit like making a self-driving car using only video cameras. It's easy to do great 90% of the time. But the long tail will eat you alive.
If websites become legally responsible for user generated content then there are no more websites with user generated content.
Re: (Score:2)
This isn't just about Google, it's about ALL content that ANYONE might post to ANY website. If someone cuts their leg off after watching a dumb guy cut down a tree, whose fault is that? After all, the algorithm recommended the video to you because you were trying to learn how to properly cut down a tree. If Google loses, there will be no more recommendations.
Re: (Score:2)
I'm fine with an end to recommendations as long as I can easily + and - search words or phrases, kinda like the "good old days" of AltaVista. Sites shouldn't be liable for user-generated content within reason, but recommendations, regardless of whether they are by human or algorithm, beg for responsibility. The really tough question is what to do about moderation. Civilization requires regulation, and figuring out the balance between that and freedom is usually a complicated, interactive journey of debate
A horror show indeed (Score:5, Insightful)
The heart of this case isn't that YouTube hosted ISIS recruiting videos, but rather that it recommended them. While any outcome is possible, the most likely one - if Google loses - is that they will be held responsible for what they recommend.
Which would be a horror show of civilization altering proportions.
Imagine, for a moment, an internet in which algorithms that recommend what you just bought for you, or recommend videos on oily, he-man professional wrestlers when you've never watched a single sports video of any kind, or recommend Minecraft videos over and over and over and over and over and over and over, despite you flagging every single Minecraft video - and every other computer game video - as "Not Interested," imagine all those algorithms, and their pointless recommendations, disappearing!!! (And imagine Google, and Amazon, and Facebook, not being able to charge advertisers for preferential placement in the recommendations!)
A HORROR SHOW INDEED (But then, horror is the second most popular genre, after romance. So maybe we're really onto something.)
Re: (Score:2)
So could slashdot get sued for recommending a score 5 post?
Re: (Score:2)
Search engines can't control the actual content, which is on external sites. If you both host and recommend the content... then you should be held responsible. If you only do one and not the other, then fine... no responsibility.
Re:A horror show indeed (Score:5, Funny)
Search engines work by algorithmically recommending content.
And that's the problem with them. In the beginning, search engines were literally just that - they searched the Web to find sites containing words and phrases entered by the searcher. Then they devolved into a mashup of a crystal ball gazer, a hotel concierge, a used car salesman, and a ticket scalper. Along the way, they almost entirely abandoned the ability to do simple, unadorned, literal word and phrase searches. BTW, that's a bug, not a feature.
Re: (Score:2)
In the beginning, search engines were literally just that - they searched the Web to find sites containing words and phrases entered by the searcher.
Yes, then people started gaming the system, and it became impossible to find what you were looking for on the internet without the search engine becoming more discerning. PageRank(tm) was a major step forwards in making search engines usable again.
Then they devolved into a mashup of a crystal ball gazer, a hotel concierge, a used car salesman, and a ticket scalper.
What you're doing is pretending history didn't happen. There is an arms race between search engines and those who would make them present you with the information they want you to see, as opposed to what you're searching for. You've simplified that situation into
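For those who don't remember how PageRank (mentioned above) works, its core fits in a few lines of power iteration. The 0.85 damping factor is the widely cited one from the original paper; the three-page link graph is invented for illustration:

```python
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # page -> outbound links
damping = 0.85
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):  # power iteration; converges fast on a graph this small
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:  # each page splits its current rank among its outlinks
            new[q] += damping * rank[p] / len(outs)
    rank = new

print({p: round(rank[p], 3) for p in pages})
```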
Re: (Score:2)
They have always had algorithms in place to rank search results.
the consequence is irrelevant (Score:2)
And I don't just mean the consequence for google's revenues.
The idea of establishing legal principles based on the consequence/outcome instead of the principle can be dangerous. This is the same logic that justifies, say, rescuing an insolvent business with tax dollars because they're "too big to fail," i.e. the harm to the public would be unpleasant. We didn't prosecute the bond-rating companies that abrogated their responsibility (not just to their customers, but enshrined in LAW) to correctly rate junk-bonds
Re: (Score:2)
Recommended videos are simply advertisements, and any platform is responsible for the advertisements it displays. The fact that the ads are for your own content is irrelevant.
A recommended link is just like a TV station saying "stay tuned after the news for Jeopardy!".
Platforms are already liable for the content of the advertisements they show. A non-active search related recommended link is simply an advertisement.
But my algorithm is the same thing! there is n
Propaganda (Score:5, Insightful)
I can see some of the holes in my own argument here, but it occurs to me that algorithms are generating propaganda.
At the slightest hint of interest in a topic (and, in my experience, sometimes no hint at all), subsequent YouTube suggestions and recommendations are heavily weighted in favour of that topic. Although the algorithm has no specific goal in 'mind' other than keeping viewers on the site as long as possible, the echo-chamber nature of the whole transaction results in something that looks a lot like propaganda.
I think Google has accepted a kind of selective responsibility here. Any time I view certain videos that have anything to do with Covid-19 vaccination, I get an annoying little blue box beneath that says "COVID-19 vaccine. Get the latest information from Health Canada". If Google is trying to 'protect' me from 'misinformation' here by pushing a counterpoint, haven't they implicitly taken on the responsibility of doing so in ALL potentially controversial cases?
I'm not choosing a horse here or pitching an approach. I would say I'm playing Devil's Advocate, except for the fact that there seem to be multiple devils here.
It's all about money, not free speech (Score:2)
The issue for them is money, not free speech. They use free speech as an excuse to avoid responsibility for their failure to moderate content that violates YouTube policies, calling the consequences of any successful action against them a "horror show". The real horror show is the terrorist attacks and other unfortunate events that result from the spread of this propaganda.
These idiots (Score:2, Flamebait)
They won't be happy until the internet is reduced to the level of 3-year-olds with learning disabilities.....
Hope Google loses this one! (Score:5, Insightful)
It's an interesting case, IMO. On one hand, I would have traditionally sided with Google on this, on principle. If you're simply performing the service or function of making content available and redistributing it? That doesn't mean YOU are liable for the contents of it, any more than your phone carrier is liable if you have a conversation with someone that starts discussing illegal activities.
But what's happened is with all this monetizing of content based on "views", it created incentive to keep pushing any content that seems to be similar enough to something else a person recently watched. There's willful intent to selectively present what people try to share on the platform, in other words. And when you screw that up by pushing pro-terrorist content or anything along those lines? YOU as the content distributor suddenly bear some responsibility.
Re: (Score:2)
But what's happened is with all this monetizing of content based on "views", it created incentive to keep pushing any content that seems to be similar enough to something else a person recently watched. There's willful intent to selectively present what people try to share on the platform, in other words. And when you screw that up by pushing pro-terrorist content or anything along those lines? YOU as the content distributor suddenly bear some responsibility.
So I think that's where the actual argument (and perhaps solution) lies.
If the website is deliberately promoting or driving people towards content it suspects to be illegal, or perhaps if they're simply being negligent in not stopping it, then they might start bearing some liability.
Note, this would have made pretty short work of most of the torrent & file sharing websites of the past few decades.
Re:Hope Google loses this one! (Score:4, Insightful)
- If *I* post a video with illegal content to YouTube, *I* should be held liable for it, but YouTube itself should be held immune *unless* they fail to remove such content when notified.
- OTOH, if YouTube is "recommending" a video, Google/YouTube itself is the originator of that "content" (the link to the illegal video) and therefore *they* must be held liable for it.
That changes the dynamic from "must take it down when informed" to "never should have created it in the first place", at which point Google should be held liable for the "recommendation content" that directs users to illegal videos. As long as changes to 230 make that distinction clear - namely that Google's recommendations are in fact Google-originated content and that they own liability for it - I see no "end of the internet" at all.
I'm happy with anything that google doesn't like (Score:2)
Anything that google doesn't like means they are earning less money, google needs to become more like a utility, not a suc
The most likely outcome.. (Score:3)
1) Recommendations will have to be turned off. It's the cheapest and easiest thing to do. You'll still be able to literally search for anything, but there won't be any bias in what, say, Google sends you. It'll just send everything, nothing jimmied up to serve a certain audience. No filtering.
2) It will have the exact opposite effect of what the people who brought the original case hoped for.
Off-shore (Score:2)
haven't followed the case but ... (Score:2)
... is there even a causal link established between "youtube recommends isis video" and "wannabe terrorist sees video and commits terrorist act"?
that's very far fetched. if those parents can sue google for recommending isis videos then they can also sue the entire US administration for creating isis in the first place; they have directly created orders of magnitude more terrorists than google, twitter and facebook combined could ever hope to "influence".
otoh, i do think these algorithms should be fully disclosed
Tell me why I'm wrong. (Score:3)
For real, because I have to be missing something. Everyone here seems to be acting like this is an issue that Section 230 hinges on, when all I can see is that this is about whether or not using personalized algorithms to push specific stuff to specific users violates Section 230 or not. Well, I'm in favor of the intent of Section 230, and I hope Google gets the book thrown at them! As I see it, it would be a GOOD thing if a regular search engine gave me unfiltered results rather than trying to base the results I get on recent searches, what it calculates I might be looking for based on other things, or what it wants to advertise at me. It would be a good thing if ads had to go back to trying to do blanket coverage and hoping, rather than targeting individuals based on creepy data mining. Let ME look for my stuff.
If this stays about recommendations (Score:2)
If all we are ruling on is algorithmic recommendations then I think the hysteria is overblown. If we stray in to user generated content then we turn the internet into TV.
Need to create sharable, subscribable user moderation (Score:2)
They should first of all make lying about people in defamatory ways legal and immune from lawsuits. Soon nobody would care what anybody who hadn't earned credibility said about anyone else.
Then they could structure things so users could create their own moderation schemes that would access the full dataset and apply spam filters etc.
These schemes would have to allow users to share the effort spent creating them and subscribe to them or not. But the raw data should be available to users.
For instance I have
Liability shield is fine for hosting content, but (Score:3)
Re: (Score:2)
Open it up and let everyone such as users create such software on their platform for their own use. Make their software optional and make alternatives buildable
Re: (Score:3)
Re: (Score:3)
One horror show, please.
Zero internet sites permitting public content in 3, 2, 1...
Re: (Score:2)
Despite the fact that I'm commenting here, I don't know if that'd be a bad thing.
Re: (Score:2, Troll)
Despite the fact that I'm commenting here, I don't know if that'd be a bad thing.
It would be a bad thing for free speech and expression. In your case, maybe it would be for the best. In the general case, it will do harm to the exchange of ideas.
Re:A horror show where nothing is moderated? (Score:4, Informative)
Not really. Anyone can set up a virtual machine for around $3.50/month. They can create a web site and publish to their heart's content. We'll just go back to the days when people had to say something of interest to get an audience, rather than the days of large platforms amplifying the loudest and most extreme voices.
If someone objects to paying $3.50/month, then that's too bad. They're effectively paying much more than that on existing platforms in terms of giving up personal data.
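And for what it's worth, publishing from that $3.50/month machine can start about this small; a bare-bones sketch only (no TLS, no hardening, and it assumes a directory of static HTML in the working directory):

```python
# Serve a directory of static HTML on port 8080; same effect as the one-liner
#   python3 -m http.server 8080
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    # Binds all interfaces; a real deployment would add TLS and a front proxy.
    HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler).serve_forever()
```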
Re: (Score:2)
Re: (Score:2)
I agree: The hosting vs. promoting distinction is critical.
Re: (Score:2)
Anyone can set up a virtual machine for around $3.50/month.
Tell us you don't understand the argument etc etc.
Getting ideas disseminated depends on people seeing them. HTH
Re: (Score:3)
Again: If you are not willing to put in the work to get people to see your ideas, that's not society's problem.
In pre-Internet days, people would stand on corners and speak, or wave signs, or whatever. They didn't automatically get to write editorials in newspapers, go on TV, or host a radio show.
People who want to say something need to put in the time and effort to get it out there. Nobody is owed a platform.
Re: (Score:2)
Again: If you are not willing to put in the work to get people to see your ideas, that's not society's problem.
It literally is.
Nobody is owed a platform.
That's not even close to what this is about.
Re: (Score:2)
How so? It has never been a problem in all of human history before, including in liberal democracies prior to the Internet.
Re: (Score:3)
How so? It has never been a problem in all of human history before, including in liberal democracies prior to the Internet.
It has literally always been a problem prior to the internet for people in minority groups to be able to make their voices heard.
Re:A horror show where nothing is moderated? (Score:5, Insightful)
We need to stop lawsuits where the only thing that matters is who has money worth suing. The content creator could arguably be liable, if the videos were that bad, but I bet you are going to have a really REALLY hard time getting the killer to point to that video and say "That video made me do it!", and then finding an unimpeachable psychologist who will say that any sane person watching that video would commit murder.
We need to place responsibility where it belongs. On the killer. The killers should be brought to justice if possible. They are the ones with blood on their hands. Google is not in this case.
Re:A horror show where nothing is moderated? (Score:4, Insightful)
$3.50 is the price now, while Section 230 keeps the VPS provider from needing liability insurance to cover every customer.
Take away Section 230, and those VPS providers look like some nice, juicy, deep pockets to sue for their customers' words.
Re: (Score:2)
Not really. Anyone can set up a virtual machine for around $3.50/month. They can create a web site and publish to their heart's content. We'll just go back to the days when people had to say something of interest to get an audience, rather than the days of large platforms amplifying the loudest and most extreme voices.
If someone objects to paying $3.50/month, then that's too bad. They're effectively paying much more than that on existing platforms in terms of giving up personal data.
Oh, boy! BBSes!!!
Didn't we invent something better than that?
Re: (Score:2)
It would be a bad thing for free speech and expression. In your case, maybe it would be for the best. In the general case, it will do harm to the exchange of ideas.
That's a reasonable opinion and one possible outcome. Another possible outcome is that we'd return "public content" back to its roots, e.g. usenet style message boards, personal websites, a million little topical phpbbs, Slashdot circa late 90s-early 2000s, etc. as opposed to the Facebook and Twitter way of doing things.
No, that's literally not a possible outcome. None of those things would be viable under the CDA without Section 230, which by the way nobody is talking about repealing.
Even batshit crazy people should be free to speak.
Well, they won't be, and neither will the sane people. Nobody will be able to afford to give them a voice.
Re: (Score:2)
Usenet? You understand that, while distributed, companies were still hosting the newsgroups, right?
Re: (Score:2)
One horror show, please.
Zero internet sites permitting public content in 3, 2, 1...
Precisely!
Re: (Score:2)
Is it a bad thing? I am sure that the Internet will survive without 230. It may become different, but it will still be.
It will be less useful to the average person. When Al Gore stepped up in Congress to turn the ARPAnet into the Internet, the goal was to make it accessible to all people, not just to promote corporate profits. Your average person is not going to run their own server, and the structure of the "modern" (i.e. still stuck on IPv4 and NAT) internet runs contrary to their doing so.
Internet is not Google
Pretending this would only affect Google is daft.
Re: (Score:2)
Completely uninhibited free speech on the internet is a bit like having an extremely complex circuit where a bunch of the amplifiers have gain values set at infinity. Like, literally, infinity. The free-speech purist will insist that the circuit operate without any constraints or limits.
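The amplifier analogy has a textbook formula behind it: closed-loop gain is A / (1 + A*B), so even an enormous amplifier stays bounded with a little negative feedback (here standing in for moderation) and blows up without it. A quick numeric sketch with made-up values:

```python
def closed_loop_gain(a: float, beta: float) -> float:
    # Standard negative-feedback formula: G = A / (1 + A*B).
    return a / (1 + a * beta)

A = 1e6  # enormous open-loop gain, standing in for platform amplification
for beta in (0.1, 0.01, 0.001, 0.0):
    print(f"feedback {beta}: effective gain {closed_loop_gain(A, beta):,.1f}")
# With any feedback at all, gain settles near 1/beta; with none, the full 1e6.
```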
Re: (Score:2)
Well on the bright side, every person will have to become their own email provider, since no one will ever dare relay email (much less: filter spam) for someone else. I hope your grandparents enjoy configuring postfix and spamassassin on their new VPS.
Oh wait, the VPS companies' lawyers won't let the companies offer VPS service anymore. Sorry, I meant to say: I hope your grandparents enjoy co-locating their personal email server hardware.
Re: (Score:2)
Nonsense. Email is not (supposed to be) public. It's not typically considered "published content". So none of this would apply to email.
Re: (Score:2)
Are we reading the same Section 230 [cornell.edu]? The one I linked to, very clearly applies to everything from any multi-user application, down to the very network infrastructure. And the client software for accessing it all, too.
Perhaps you meant that nobody would bother suing these entities, if they were to lose the liability shield that Section 230 currently gives them? I suppose I could buy into an argument that humanity has become nicer and less litigious over the last quarter century.
Re: (Score:3, Interesting)
As is common, you're not including the unintended consequences.
Let's say CDA Section 230 is repealed. Everyone rejoices for freedom of expression on the internet without private company censorship. Yay!
Then one company gets sued into oblivion because of out-of-control bullying of a teenager who harms themselves, and we get a flood of "think of the children" lawsuits against any and every social network and forum out there. Independently, every web site that doesn't have an 8-figure legal defense fund shuts
Re: (Score:2)
Freedom has never been free.
It's been tried before (Score:2)
A bunch of conservatives got mad at the liberal crowd of reddit and decided to start their own unmoderated site, voat.co. They aren't around any longer, but you'll see why.
https://web.archive.org/web/20... [archive.org]
No advertiser would go near it with headlines talking about the n*****s and f**s screwing up the country.
Maybe that first archive was a fluke. Let's try another one. https://web.archive.org/web/20... [archive.org] Uh oh. Well maybe once more to be sure. https://web.archive.org/web/20... [archive.org] Well at least they moved on from t
Re: (Score:2)
In the BBS days the total size of the market was a few hundred thousand. Now it's 300 million. And the clientele is ah... different.
I'm sure some forums would see more traffic, but where the bulk of those people go would look radically different from the 90s.
Re: (Score:3)
It'll be Discord, which I really don't like. Call me old, but I miss actual forums instead of realtime chats posing as them.