
Supreme Court Poised To Reconsider Key Tenets of Online Speech (nytimes.com)

The cases could significantly affect the power and responsibilities of social media platforms. From a report: For years, giant social networks like Facebook, Twitter and Instagram have operated under two crucial tenets. The first is that the platforms have the power to decide what content to keep online and what to take down, free from government oversight. The second is that the websites cannot be held legally responsible for most of what their users post online, shielding the companies from lawsuits over libelous speech, extremist content and real-world harm linked to their platforms. Now the Supreme Court is poised to reconsider those rules, potentially leading to the most significant reset of the doctrines governing online speech since U.S. officials and courts decided to apply few regulations to the web in the 1990s.

On Friday, the Supreme Court is expected to discuss whether to hear two cases that challenge laws in Texas and Florida barring online platforms from taking down certain political content. Next month, the court is scheduled to hear a case that questions Section 230, a 1996 statute that protects the platforms from liability for the content posted by their users. The cases could eventually alter the hands-off legal position that the United States has largely taken toward online speech, potentially upending the businesses of TikTok, Twitter, Snap and Meta, which owns Facebook and Instagram. "It's a moment when everything might change," said Daphne Keller, a former lawyer for Google who directs a program at Stanford University's Cyber Policy Center.

  • In the 1990s... (Score:4, Interesting)

    by VeryFluffyBunny ( 5037285 ) on Thursday January 19, 2023 @02:17PM (#63222442)
    ...Silicon Valley types & Ayn Rand devotees believed in self-organising systems. This was problematic because they ignored what ecologists had learned about self-organising systems out in the wild. Social media platforms were built on the same misguided assumptions. Adam Curtis made a 2011 documentary series, "All Watched Over by Machines of Loving Grace," that offers a biting critique of these assumptions. Well worth a watch if you can find it. https://en.wikipedia.org/wiki/... [wikipedia.org]

    Yes, the unusual protections that the US & other judicial systems have afforded these corporations are in desperate need of review & reform.
    • Re:In the 1990s... (Score:5, Insightful)

      by leptons ( 891340 ) on Thursday January 19, 2023 @02:32PM (#63222484)
      Unfortunately I do not trust this right-wing "court" to change anything for the better. They'll use their "conservative ideas" to bring about the worst possible change, for reasons that only they could possibly justify, while gaslighting and making up their own interpretation of precedent to fit whatever awful agenda they've been told to foist upon us.
      • by lsllll ( 830002 )
        Since you appear to be a leftie (so am I), entertain us. What's a bad outcome in this scenario?
      • Re: (Score:2, Interesting)

        by DesScorp ( 410532 )

        ... and making up their own interpretation of precedent to fit whatever awful agenda they've been told to foist upon us.

        You mean like magically pulling a "right to privacy" out of their ass, and then through "emanations and penumbras" finding a right to abortion in that? Like applying the Commerce Clause to everything in order to ignore the limits on Federal power imposed in the Bill of Rights? Like finding that freedom of religion really means suppressing religious expression in public spaces? Like pretty much ignoring the 9th and 10th Amendments completely because Fuck You, That's Why? THAT kind of gaslighting and making shit up?

      • Conservative courts (Score:5, Interesting)

        by Okian Warrior ( 537106 ) on Thursday January 19, 2023 @03:39PM (#63222738) Homepage Journal

        Unfortunately I do not trust this right-wing "court" to change anything for the better. They'll use their "conservative ideas" to bring about the worst possible change, for reasons that only they could possibly justify, while gaslighting and making up their own interpretation of precedent to fit whatever awful agenda they've been told to foist upon us.

        I think you're misunderstanding the intersection between court and conservative: the Supreme Court now, and conservative courts in general, rule based on the written text of the law.

        Looking at Roe v. Wade as an example, the opinion was that in order to be considered a right protected by the Constitution, a right has to be either a) mentioned in the Constitution, or b) commonly held in legal decisions. Abortion isn't mentioned in the Constitution, and the court searched legal decisions back to the 1700s looking for counterexamples (abortion was always held to be murder)(*).

        You can believe that abortion should be a right, and that's a fine political opinion(**), but the text of the Constitution doesn't allow it to be regulated at the federal level.

        The liberal justices usually rule on what the situation *should* be, and that's also a fine political position, but for the stability of society you really want your laws to always be interpreted by a direct reading of their text. Judges *have* to rule based on the text of the law, and not on their political beliefs - otherwise, it's impossible to know what's legal and what's not.

        We've seen some of the most egregious examples of rights violations and citizen disempowerment in the past two decades, simply because judges will make rulings based on their personal beliefs and not what is written down in the law books. Lots and lots of cases get overturned on appeal, and this puts enormous friction on the application of justice.

        We need to get back to the rule of law, and conservative justices are more focused on that than they are on their political beliefs.

        You can even put this on a psychological basis. Conservatives have five dimensions of morality [ted.com], while liberals have only two. One of the extra dimensions, "Authority," is seen by conservatives as a moral imperative, which means they hold adherence to the law as a basis for morality in a psychological sense. (On average, liberals do not hold this rule as important; rather, they base their morality on harm and fairness alone. See the link for details.)

        Overall, you want your judges to lean conservative: they will rule based on the text of the law, and then everyone will know ahead of time what is legal and what is not.

        (*) Note that women voting was the same way: not in the Constitution, and not generally held as a right. The solution was to add it to the Constitution so that there would be no debate. Similarly for Black citizenship, the abolition of slavery, and equal voting rights for Black citizens.

        (**) The 9th and 10th Amendments imply that there are other rights not mentioned in the Constitution, and abortion may be one of those rights.

        • by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Thursday January 19, 2023 @04:52PM (#63223026) Homepage

          I think you're misunderstanding the intersection between court and conservative: the Supreme Court now, and conservative courts in general, rule based on the written text of the law.

          I'm going to ignore the entire abortion red herring and try to get us back on topic - the topic is 47 USC 230(c) (the Communications Decency Act).

          The written text of the law in question is:

          (c) Protection for “Good Samaritan” blocking and screening of offensive material

          (1) Treatment of publisher or speaker
          No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

          (2) Civil liability
          No provider or user of an interactive computer service shall be held liable on account of—

          (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

          (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

          On Friday, the Supreme Court is expected to discuss whether to hear two cases that challenge laws in Texas and Florida barring online platforms from taking down certain political content.

          Based on the written text of the law, the services are clearly authorized to block political content.

          This is a federal law and (according to Article VI, Clause 2 of the U.S. Constitution) takes precedence over any state laws which would seek to contradict it.

          • It's not that clear. For example, this part:

            "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

            In this section, if Google/Facebook/Twitter start promoting certain tweets or posts, then they are taking an action to show you that information. They are no longer just a provider. Under that legal theory, they are not protected by section 230, because section 230 protects moderation, but it does not protect editorial actions. Twitter intentionally tried to inject their voice into the conversation about Hunter's laptop, and thus became a participant, not a provider. That is one legal theory, and the court may decide.

            • Those are separate legal issues from the one specified in the summary above. (We are getting off topic here.)

              In this section, if Google/Facebook/Twitter start promoting certain tweets or posts, then they are taking an action to show you that information. They are no longer just a provider. Under that legal theory, they are not protected by section 230, because section 230 protects moderation, but it does not protect editorial actions. Twitter intentionally tried to inject their voice into the conversation about Hunter's laptop, and thus became a participant, not a provider. That is one legal theory, and the court may decide.

              I am partially in agreement with you. The law does not say anything about promoting content.

              Promoting content is not specifically protected by section 230. It can be argued that it is covered under 47 USC 230(c)(1), "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider," as the service is not the information content provider...

      • Lol the scare quotes. Much FUD
    • by spads ( 1095039 )
      works for me (US)...

      https://watchdocumentaries.com... [watchdocumentaries.com]

      Always dug his stuff. Haven't seen this one I don't think.
    • Not for an American anyway. We're kind of big on freedom of speech, and with that comes freedom of association. Section 230 of the CDA grants you freedom of association and with it freedom of speech. It gives you the right to control who gets to speak on your platform and therefore who you're associating with.

      Take that away and you don't get a free-speech paradise; instead, billionaires can use their money to say whatever they want, and you get silenced unless you happen to agree with those billionaires.
      • by KlomDark ( 6370 )

        Shout them down, don't ban/cancel them. Like here, if you say something that most don't agree with, your post gets dropped to -1, but the banhammer doesn't come out. I hate that shit, quit squelching speech and instead just vote against it.

      • You do know that billionaires own those social media sites & are free to promote & censor anything in any way they like (usually with algorithm-y jigs), including allowing seditious plotting because it sells more advertising, right? There ain't no "free speech" on the interwebs pipes. Well, technically there is, but it's the billionaires' & their paying customers' paid-for free speech.
    • they ignored what ecologists had learned about self-organising systems out in the wild

      What did they learn?

  • by King_TJ ( 85913 ) on Thursday January 19, 2023 @02:17PM (#63222448) Journal

    I don't get how the social media networks can claim on one hand that they're "not responsible for any content the users post", but on the other decide to selectively censor it and claim ownership of it (even reselling your info and data-mined details about the content you shared)?

    Back when I ran a computer BBS in the late '80s and early '90s, this came up and it was generally believed you gave up the ability to claim immunity from what your users posted when you started censoring it. (Once you started actively trying to control the content, that made you responsible for doing it. It was understandable to argue you wished to act more like a "common carrier" who simply provided the service allowing people to communicate back and forth, by contrast.)

    • by nealric ( 3647765 ) on Thursday January 19, 2023 @02:28PM (#63222480)

      It's pretty simple: you are "on their property", and they can censor what they want. If you go to a bar and start ranting in the corner, the owner can kick you out if they don't like what you have to say. The owner kicking you out and telling you not to come back doesn't mean the owner should be responsible for anything anybody in the bar says.

      Perfect moderation simply isn't possible if you are running a forum with billions of users. But it doesn't follow that, because perfect moderation isn't possible, any moderation at all makes you responsible for anything and everything posted on your server.

      That's why social media isn't a newspaper. In a newspaper, every word was deliberately put there by management of the paper. A newspaper is liable for its content. On social media, none of the content was put there by the management. They may have rules about what can be posted, but they didn't put it there.

      So, for example, I could start a social media platform for Republicans and ban anything that's pro-Democrat. Nothing wrong with that if that's your jam - fair game. But do you really believe that my banning Democrats makes me liable for a bomb threat or child porn posted on my platform that my moderators didn't catch immediately?

      • It is not a problem if you do that explicitly - if, say, you are going to run a Republican website and ban all pro-Democrat comments. The problem is when people expect an open and fair conversation and have no idea that moderation to cause bias is occurring. Take that same website and just call it "Politics," make no explicit reference to being a Republican-run website, and still delete or hide every pro-Democrat comment or user; then you are tricking people. It's the same as expecting reviews on a...
        • There is also, it seems to me, a difference between "We're going to filter what can be posted on our service; here's everything that passed our filtering" and "This is the content posted on our service that we want you to see" -- the former is acceptable under Section 230, while the latter is the service deciding what to present to you, the editorial action of a publisher. Once the service makes decisions for you about what you should see, so that you have to actively work to get around their choice of feed...
      • The issue here is not the censorship. The issue is liability for the content that remains. If a social media company actively reviews and censors all content, then they are liable for everything they let through. If all content is reviewed, curated, and prioritized on your feed, this could be viewed as editorializing, and Section 230 protection will not apply.

        Let's say I sent an offensive joke to all my contacts in Gmail. Google would not be liable for what was sent - I would. If I sent an offensive joke to Google...
      • Re: (Score:3, Interesting)

        by lsllll ( 830002 )

        It's pretty simple: you are "on their property", and they can censor what they want. If you go to a bar and start ranting in the corner, the owner can kick you out if they don't like what you have to say.

        The owner can kick you out if you start behaving in a manner that's disturbing other patrons' enjoyment, but not because of the content of your conversation. Mind you, I'm a leftie myself, but is this [bbc.com] the world you're advocating for?

        The owner kicking you out and telling you not to come back doesn't mean the owner should be responsible for anything anybody in the bar says.

        With the power to kick people out comes the responsibility to moderate everyone. If you don't moderate everyone, such as not kicking out Brad Pitt or your mother-in-law for doing the exact same thing you kicked someone else out for last week, then you're absolutely liable...

        • Re: (Score:3, Informative)

          by zendarva ( 8340223 )
          The owner can kick you out for any reason they're not specifically prohibited from kicking you out over. What you say is not a protected class. Political belief is not a protected class.
        • It's pretty simple: you are "on their property", and they can censor what they want. If you go to a bar and start ranting in the corner, the owner can kick you out if they don't like what you have to say.

          The owner can kick you out if you start behaving in a manner that's disturbing other patrons' enjoyment, but not because of the content of your conversation.

          That's a very strong claim, one that requires evidence. Do you have a citation for that?

          AFAIK, the owner can kick you out for any reason or no reason at all, as long as it's not because you are a member of a protected class. Mind you, the owner can kick members of protected classes out; they just have to have some reason other than the person's membership in that class.

          See https://en.wikipedia.org/wiki/... [wikipedia.org].

      • To extend this analogy, and relevant to one of the cases: if the owner gives the person ranting a microphone and allows them to continue ranting, it is reasonable and just to hold them jointly responsible for the contents of the speech. By giving a microphone and failing to take it back, they are endorsing the contents of the rant. This is particularly true when the rant isn't happening live but is prerecorded and the owner is able to review its contents at their leisure before choosing whether or not to play it over the bar sound system.
        • by dgatwood ( 11270 )

          To extend this analogy, and relevant to one of the cases: if the owner gives the person ranting a microphone and allows them to continue ranting, it is reasonable and just to hold them jointly responsible for the contents of the speech. By giving a microphone and failing to take it back, they are endorsing the contents of the rant. This is particularly true when the rant isn't happening live but is prerecorded and the owner is able to review its contents at their leisure before choosing whether or not to play it over the bar sound system.

          Except that analogy falls apart completely when the owner gives out one billion microphones. Liability cannot realistically exist unless the owner is aware or reasonably should have been aware that someone was using a platform to cause harm. The larger the number of people on the platform, the more infeasible it becomes for the owner to be aware of abuse.

    • by Graymalkin ( 13732 ) * on Thursday January 19, 2023 @02:40PM (#63222502)

      Back when I ran a computer BBS in the late '80s and early '90s, this came up and it was generally believed you gave up the ability to claim immunity from what your users posted when you started censoring it.

      This is exactly what Section 230 covers. It specifically allows a platform to moderate content without automatically becoming that content's publisher and thus being liable for it. Without this protection, no online platform could allow user content to be published. There's too much liability for platforms if they are suddenly considered publishers of content posted by trolls and assholes on their sites.

      • by lsllll ( 830002 )
        Not if 230 is changed to allow one of the two behaviors the GP posted: don't moderate and you're not liable, but if you start moderating, then you're liable. The ONLY moderation for giant social media sites should be to remove (and report to police) illegal content, such as threats, defamation, and child porn.
        • The ONLY moderation for giant social media sites should be to remove (and report to police) illegal content, such as threats, defamation, and child porn.

          I disagree with that. If I owned a brick-and-mortar business on the corner of 1st and Main, tacked up a giant whiteboard, and made markers available for anyone strolling down the street to write anything they wanted on the whiteboard, as the owner of said whiteboard I would expect to be able to erase anything I disagreed with (in addition to erasing an...

          • See... I'd argue your whiteboard example there is a different situation.

            For starters, there's no linking of what's drawn on your "public whiteboard" to specific individuals. You're not making money from your brick and mortar business selling the information people are scribbling up there. "Oh, look... someone just wrote that they support Trump. Let me get his full name and cellphone number and add him to my database of known Republicans so I can sell it!" (Yeah, not happening.)

            All you really did there was...

        • Not if 230 is changed to allow one of the two behaviors the GP posted: don't moderate and you're not liable, but if you start moderating, then you're liable.

          What you're proposing is that site operators are not held liable if they let trolls destroy discourse. It's obvious why that's not a workable plan.

        • In one instance you'll have a service full of trolls, assholes, and other dregs of society shitting up all conversations, one that no normal person wants to be associated with, and in the other you have a service that will remove any speech that anyone would find in the slightest offensive, for liability reasons.

          So tell me how that is much better than today? Either using something like 8kun or 4chan *with no moderation at all*, or a service that'll have in its TOS that any user must bear the cost of any litigation the service suffers for speech it failed to moderate.

          • by lsllll ( 830002 )

            In one instance you'll have a service full of trolls, assholes, and other dregs of society shitting up all conversations, one that no normal person wants to be associated with, and in the other you have a service that will remove any speech that anyone would find in the slightest offensive, for liability reasons.

            When you walk down the street, you see the homeless, you see puke on the street, and you see the beautiful blonde in red lipstick walk past you (Matrix). You pay attention to the blonde only. It's no different from any web site with user content. But read on...

            So tell me how that is much better than today? Either using something like 8kun or 4chan *with no moderation at all*, or a service that'll have in its TOS that any user must bear the cost of any litigation the service suffers for speech it failed to moderate.

            Slashdot owners don't really moderate the content. I believe I've only seen a handful of instances where a post was actually removed. The moderation is done by us, yet Slashdot hasn't degenerated. Again, the garbage I'm talking about here is stuff like...

            • When you walk down the street, you see the homeless, you see puke on the street, and you see the beautiful blonde in red lipstick walk past you (Matrix). You pay attention to the blonde only. It's no different from any web site with user content. But read on...

              Funny how you think that everyone works the same. Now ask someone with OCD what he pays attention to, or the person being bullied, or the guy who is about to lose his home because he lost his job. Context matters; saying that people only pay attention to what looks interesting dismisses the fact that sometimes reality forces things upon you, and there are a lot of shitty people who think it's just fine for them to force things into your face at every chance they get.

              Slashdot owners don't really moderate the content. I believe I've only seen a handful of instances where a post was actually removed. The moderation is done by us, yet Slashdot hasn't degenerated.

              Doesn't matter, it's still moderation even if it's done by the users...

              • by lsllll ( 830002 )

                Funny how you think that everyone works the same. Now ask someone with OCD what he pays attention to, or the person being bullied, or the guy who is about to lose his home because he lost his job. Context matters; saying that people only pay attention to what looks interesting dismisses the fact that sometimes reality forces things upon you, and there are a lot of shitty people who think it's just fine for them to force things into your face at every chance they get.

                Thanks for making my point for me. By giving large corporations the right to moderate the content any way they see fit, you're essentially letting them choose what you see. To quote yourself again: "there are a lot of shitty people who think it's just fine for them to force things into your face at every chance they get."

                Doesn't matter, it's still moderation even if it's done by the users. Slashdot isn't what it once was; in my opinion it has degenerated quite a bit. People are fighting over the tiniest things these days. The days of "news for nerds" are long gone; instead we have long threads about fucking politics and political e-peen waving, where people mod other people down for posting factual information because it doesn't fit the political narrative they have bought into. That Slashdot hasn't been overrun entirely by trolls and assholes is because the moderation still kind of functions.

                I completely agree, but one reason people flock here is its pretty good moderation system. True that SD used to be "News for Nerds" and it's not anymore. They should have two...

    • by suutar ( 1860506 ) on Thursday January 19, 2023 @02:44PM (#63222512)

      And that belief is why Congress enacted Section 230, so you could in fact get rid of egregiously nasty content without implicitly becoming liable for anything you overlooked.

      • by lsllll ( 830002 )
        The problem is that while you call it "nasty", someone else will call it a view you don't agree with.
        • by suutar ( 1860506 )

          Fair enough. However, there had been court cases where sites were sued because they didn't moderate, leaving them in a damned-if-you-do, damned-if-you-don't situation: they couldn't just leave everything up, but if they did moderate, they were liable for what they left up. Not very tenable.

          • by lsllll ( 830002 )
            You're right. I'm not advocating abolishing Section 230. I'm rooting for changing it to give (large) web sites two choices: moderate and be liable, or don't moderate and don't be liable.
    • So if someone at a party in your house slanders someone else, you are liable because you kicked out people who start yelling racist shit? That is fucking ridiculous.

    • by backslashdot ( 95548 ) on Thursday January 19, 2023 @02:54PM (#63222550)

      So if you have a cat website and someone starts posting about their dog on there, you can't kick them out lest you become liable for any slanderous statements people on that website make (such as posting a bad review of a cat groomer).

      • by lsllll ( 830002 )
        You said this in another story last week and I responded, but in short, size and options matter. If your cat web site has over a certain number of regular users (a number to be determined by Congress), then no, you can't take down posts about dogs. If your user base is below that number, then your site is not considered the public square and you can moderate what you want. The whole idea is to protect uncomfortable and unwanted speech. Yes, the government can't suppress speech, but if most social media sites...
        • What do you mean, folks who are trying to make a change will not have a place to voice their opinions? There are plenty of right-wing websites, sites like Rumble, and the entirety of talk radio, from which leftists are essentially banned. Note: in the 2000s, you conservatives/Republicans used to be against net neutrality. It's only when you realized that corporations can actually be against you that the tune changed.

    • by lsllll ( 830002 )
      Been saying this here on /. for the past few years. I really hope the SCOTUS decisions lead to what you laid out above.
    • Censorship is something the government does. What they're doing is deciding who to associate with. Part of the First Amendment is freedom of association. The problem you have is that you want to take away freedom of association. You basically are opposed to one half of the First Amendment. I don't think you're doing that on purpose; I just don't think you fully understand the ramifications, because they're not being explained to you by your preferred media outlets.

      You're not going to get a free speech paradise...
    • I don't get how the social media networks can claim on one hand that they're "not responsible for any content the users post", but on the other decide to selectively censor it and claim ownership of it (even reselling your info and data-mined details about the content you shared)?

      Not sure why those two situations are in conflict. Users are generating content for advertisers to advertise against; moderation happens to keep the paychecks coming in. When a group of people do something like read a bunch of stupid memes and then storm the Capitol, the social media companies will make changes to keep that from happening again... to keep the ad checks coming in.

      Part of this problem has to do with using an ad-based system, and part of it has to do with online speech ending in people actually getting hurt...

    • I don't get how the social media networks can claim on one hand that they're "not responsible for any content the users post", but on the other decide to selectively censor it and claim ownership of it

      Because there's a law called Section 230 which specifically says they can, written by people who realise the world isn't black and white.

      Yeah, they can have their cake and eat it too. The law specifically says they can. I'm not sure why the Supreme Court is needed to interpret this; you can ask the person who wrote it.

    • In 1996 a federal law was passed that specifically addressed these issues.

      47 USC Section 230 (The Communications Decency Act)

      (c) Protection for “Good Samaritan” blocking and screening of offensive material

      (1) Treatment of publisher or speaker
      No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

      (2) Civil liability
      No provider or user of an interactive computer service shall be held liable on account of—

      (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

      (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

    • by Zangief ( 461457 )

      because your BBS had a handful of people posting

      That doesn't scale. If you had left it unmoderated with a few million users, it would have become unusable.

      Like, come on, you are on the internet. You should know how trolls work.

    • I don't get how the social media networks can claim on one hand that they're "not responsible for any content the users post", but on the other decide to selectively censor it

      Briefly, because Section 230 is written exactly to allow them to do that.

  • Companies should decide what they want to post/host/whatever and also not be liable for user posts - the individuals should be liable. However, they should be required to have transparency about what is, was, and will be banned. Not just some 100-page terms that no one reads and that are applied differently to every account. This is necessary to show exactly what biases they have. I think of politics and controversial topics like a product: if a product gets 200 one-star reviews and 200 five-star reviews, then the user...
  • I hate censorship (Score:5, Insightful)

    by gillbates ( 106458 ) on Thursday January 19, 2023 @02:57PM (#63222556) Homepage Journal

    As much as I hate censorship, there has never been a right to use someone else's time, money, and property to communicate your point.

    As much as I sympathize with what the Republicans are trying to achieve, they're in the wrong here. Yes, censorship is bad, but because of the openness of the Internet, it has always been possible to work around it by starting your own platform to attract like-minded people. If this is upheld, trolls will effectively be able to ruin public discourse on any social media site. Granted, I don't like what Facebook and Twitter have done, but neither am I required to use their platform; if I wanted to argue with children, I'd tell my progeny to clean their rooms.

    • Re: (Score:2, Interesting)

      by lsllll ( 830002 )
      I agree with you, but here's the issue [pcmag.com]. The social media giants (FB/Twitter) are so large that pretty much everybody uses them. Until this model breaks, they need to be considered the public square [breitbart.com], which means they should not regulate legal speech, even if they're private companies.
      • Morally speaking, you are correct that, because they are a de facto public square, they should not regulate speech, even if they are private companies. However, the social media giants did not set out to create a public square, but rather to sell advertising to a sub-group of people who can be easily emotionally manipulated.

        How would you feel if you had curated a private forest and, because so many hunters began staying at your lodge, the US Forest Service exercised eminent domain and told you that...

        • by lsllll ( 830002 )

          I don't think the forest analogy works here. The moment you open up your private forest to hunters staying at your lodge, you become a semi-public place (albeit privately owned) and will have to follow government regulations, e.g. exits, maximum occupancy, and so on. But it was you who decided the benefits outweighed the pain of losing your farm portion to trees. That's not eminent domain, however, as much as it is regulation. Eminent domain would be the government coming and taking your land away from you...

  • Appears that a big shitstorm is coming.

    At least a Category 3, I'd say.

    Well, it was fun while it lasted.

  • by ljw1004 ( 764174 ) on Thursday January 19, 2023 @03:19PM (#63222636)

    [from the article] But Halimah DeLaine Prado, Google’s general counsel, said in an interview that “any negative ruling in this case, narrow or otherwise, is going to fundamentally change how the internet works,” since it could result in the removal of recommendation algorithms that are “integral” to the web.

    Are recommendation algorithms "integral" to the web? And if so, do they have to be like the current recommendation algorithms? Here is my worked alternative. My dream is an internet where all recommendation algorithms (including search, filtering, newsfeed promotion, suggestions) are TRANSPARENT. Some people say that this is impossible, so here's my initial draft of something I think could be tweaked to work.

    (1) We're interested in all cases where third-party items are presented filtered and/or in ranked order. (2) I would decree that this must be accomplished by a formula which assigns a real number to each item and presents them in order of this number. (3) The inputs to the formula must be solely user-provided search/query/filter terms, plus data or metadata about each individual item (including tags, keywords, weights); they must be universal, i.e. not changing from one user to another, and they must be published and easily viewable. (4) The formula too must be published, easily viewable, and reproducible. (5) The inputs and formula have no copyright or other IP protection: everyone is free to retrieve and reproduce the calculations into a rank. (6) The data/metadata for each item must be derived from the submission itself by a similarly non-protected algorithm, or be calculated by a similarly-defined "transparent" algorithm.

    My proposal is: if a recommendation system is "transparent" in this way, then its results should be protected by Section 230; companies are free to use non-transparent recommendation systems, but if they do, then their results are not protected. Also, I'd require that if anyone offers a recommendation system, transparent or not, others must be allowed to scrape, record, and analyze it, and publish their findings.
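
    To make this concrete, here is a minimal sketch of such a "transparent" ranker in Python. The Item fields, the weights, and the function names are illustrative inventions, not part of the draft above; the point is only that every input is either the user's own query terms or public per-item metadata, and the formula is fixed and published:

        from dataclasses import dataclass, field

        @dataclass
        class Item:
            text: str
            tags: set[str] = field(default_factory=set)
            score: int = 0          # public moderation total (e.g. Slashdot-style votes)
            timestamp: float = 0.0  # public publication time

        def rank_key(item: Item, query_terms: set[str]) -> float:
            """Published formula: the same inputs give the same number for every user."""
            term_matches = len(query_terms & set(item.text.lower().split()))
            tag_matches = len(query_terms & item.tags)
            # Fixed, published weights: anyone can reproduce the ordering.
            return 2.0 * tag_matches + 1.0 * term_matches + 0.5 * item.score

        def rank(items: list[Item], query_terms: set[str], min_score: int = 0) -> list[Item]:
            """Filter (e.g. a slidey-bar score threshold), then sort by the formula."""
            visible = [i for i in items if i.score >= min_score]
            return sorted(visible, key=lambda i: rank_key(i, query_terms), reverse=True)

    Because nothing in rank_key() depends on who is asking, two users issuing the same query over the same items must see the same order, which is what makes the system auditable.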

    Example 1: on Slashdot, data about each item includes its date and the list of votes it has received. The default Slashdot view would be transparent because it would filter based on the user's slidey-bar "full/abbreviated/hidden" setting.

    Example 2: an AltaVista search has you type in search terms, and it only uses data in the form of matching those search terms to words in the web page; the system would have to expose to you the word list it uses for each web page.

    Example 3: a Google search is heavily driven by PageRank and other signals we know nothing about. Google could continue using PageRank (as a "weight" metadata value associated with each item), but it would have to expose a lot more about it. If Google's search results start promoting Google's own things, e.g. preferring AMP pages, or preferring Google Flights links over Expedia, then it would be manifest in the publicly reproducible formula they use.

    Example 4: if Twitter displays a chronological feed of the accounts you follow, that'd be transparent as defined here -- because follower lists and publication data will be public information. Shadow-banning will be impossible (in the sense that if someone is banned, it will be transparent that they're banned; see the sketch after Example 6 below).

    Example 5: any website which offers the user some controls to filter, match keywords, etc., will be happily transparent as defined here.

    Example 6: Amazon's search results -- do they count as displaying third-party data? I don't know.
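
    On Example 4, here is a sketch of how that transparency makes shadow-bans detectable (again Python; the Post fields and function names are my own illustrative inventions, not an existing API). With a published chronological-feed rule and public follower and post data, anyone can recompute the feed they should have been served and diff it against the feed they actually got:

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Post:
            author: str
            post_id: int
            timestamp: float

        def expected_feed(follows: set[str], public_posts: list[Post]) -> list[Post]:
            """The published rule: every post from followed accounts, newest first."""
            return sorted((p for p in public_posts if p.author in follows),
                          key=lambda p: p.timestamp, reverse=True)

        def suppressed_authors(follows: set[str], public_posts: list[Post],
                               served: list[Post]) -> set[str]:
            """Authors present in the recomputed feed but missing from the served one."""
            missing = set(expected_feed(follows, public_posts)) - set(served)
            return {p.author for p in missing}

    If suppressed_authors() is persistently non-empty for some account, the suppression is visible to everyone, not merely suspected.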

    One effect of my draft proposal is that anyone can set up a company (say, DuckDuckGo) which forwards queries on to Google/Bing to get the results, then presents them itself. Or which does a huge number of PageRank queries to scrape Google's PageRank database. I currently view that as an unavoidable consequence of being open enough that journalists and academics and government ombudsmen can also freely and openly...

    • e.g. PageRank might not have been invented. I don't know if that's a downside or an upside to this proposal, or if there are ways around it.

      Do you remember what search engines and curated lists were like before Google and PageRank? If not, or if you don't remember it clearly, consider that there must have been a very good reason why Google became almost instantly dominant, with zero marketing or advertising budget. Google's spider wasn't better than Lycos's or AltaVista's. In fact, early on it wasn't as good, because Page and Brin didn't have big corporate resources (though they did have Stanford's fat pipe). PageRank was just orders of magnitude...

  • Everyone should read this comment [slashdot.org] from user Knightman on a previous article on the same issue.

    Basically, if there is no Section 230, any online provider can be sued for moderating, and for NOT moderating, based on two prior court cases. That is why that section allowed web site owners (think even small blogs and forums) to put some effort into moderating, yet not be responsible for comments left by their visitors.

    Fast forward to the era of big social media, and things changed a bit: mega corps have implemented...

  • Twitter, Facebook, etc. go broke, and they make it sound like a bad thing?

  • A direct conflict between these two things?

    Conservative ruling #1: Trump can claim it’s all a fake and encourage his supporters to hang the vice president, and Facebook MUST allow the post.

    Conservative ruling #2: Facebook is held liable for anything posted on its network, including the posts mentioned previously.

    This oughta be hilarious to watch.
  • When you start holding platforms liable for any speech that users post to their platform, you are suddenly imposing a heavy operating cost on that platform.

    The websites that will be affected are *not* the ones run by big companies, like Facebook or Twitter. Facebook can afford to spend tens of millions of dollars a year on human moderators, fancy AI technology, experts to deploy and troubleshoot the AI technology, compliance officers, lawyers, etc. They're doing this already.

    But let's suppose you are a small...
