
Most Americans Can't Tell the Difference Between a Social Media Bot and a Human, Study Finds (theverge.com)

A new study from Pew Research Center found that most Americans can't tell social media bots from real humans, and most are convinced bots are bad. "Only 47 percent of Americans are somewhat confident they can identify social media bots from real humans," reports The Verge. "In contrast, most Americans surveyed in a study about fake news were confident they could identify false stories." From the report: The Pew study is an uncommon look at what the average person thinks about these automated accounts that plague social media platforms. After surveying over 4,500 adults in the U.S., Pew found that most people actually don't know much about bots. Two-thirds of Americans have at least heard of social media bots, but only 16 percent say they've heard a lot about them, while 34 percent say they've never heard of them at all. The knowledgeable tend to be younger, and men are more likely than women (by 22 percentage points) to say they've heard of bots. Since the survey results are self-reported, there's a chance people are overstating or understating their knowledge of bots. Of those who have heard of bots, 80 percent say the accounts are used for bad purposes.

Regardless of whether a person is a Republican or Democrat or young or old, most think that bots are bad. And the more that a person knows about social media bots, the less supportive they are of bots being used for various purposes, like activists drawing attention to topics or a political party using bots to promote candidates.

This discussion has been archived. No new comments can be posted.

  • Begged question... (Score:5, Insightful)

    by argStyopa ( 232550 ) on Monday October 15, 2018 @06:45PM (#57483224) Journal

    ...this assumes there IS a meaningful difference.

    • by Shadow of Eternity ( 795165 ) on Monday October 15, 2018 @07:13PM (#57483418)

      Maybe people can't tell the difference because so many real people have been smeared as "Russian Bots"; the label has even been used to delegitimize anyone who didn't like The Last Jedi. They can't tell the difference because there isn't one... not because humans and bots behave the same, but because the term has been applied to humans so often that it's been diluted into meaninglessness.

      Ironically the NPC meme provoked immediate outrage and accusations of literal fascism and genocide rhetoric.

      • by gman003 ( 1693318 ) on Monday October 15, 2018 @08:40PM (#57483796)

        I disagree. The problem is not that people are being falsely accused of being bots for holding contrary opinions - the problem is that, on social media, everyone looks and talks like a bot.

        The format of most social media (Facebook and Twitter especially) pushes people towards bot-like behavior. The brevity pushes you to skip any supporting information and just blindly assert your position as correct. It tends to erode nuance - you don't say "evidence suggests that X probably causes Y", you just say "they proved X causes Y", if not simply "X therefore Y". The rapid-fire structure of comment threads, and the general lack of a good bio, make it hard to look at someone's account and tell what they're really all about. And both tend to expose you to a massive wall of people shouting their opinions into the void, rather than any sort of community, especially when you choose to look at "trending" or "what's happening now".

        And, on top of that, there are so many obvious bots that it's only rational to consider whether every single person you're talking to is one. Every single Elon Musk tweet, for something like a year now, gets swarmed by bots pretending to be Elon giving away some cryptocurrency, if only you download this sketchy app - and this is after Twitter put special protections in place to prevent just anyone from setting their name to "Elon Musk". The people running social media clearly don't care about keeping bots-impersonating-humans out, so it falls on each user to worry about it instead. We're not given enough information to decide accurately, so it's inevitable that some people end up falsely accused.

        And then, yes, there are the organized campaigns, of which the Russians are merely the most prolific. The "Internet Research Agency" (the one hit with Mueller indictments) didn't merely try to manipulate election news; they tried to stir up chaos. They'd organize two protests for opposing sides at the same place and time, hoping it would turn violent. They spread anti-vax stuff, just to erode trust in authorities. They've spread disaster hoaxes and fake hate crimes, just to get people to panic. And, as I mentioned, Russia isn't the only one. Remember the Chinese "50 Cent Party"? Or America's own "Operation: Earnest Voice"?

        As always, there's an XKCD for it. [xkcd.com] You don't even need bots, per se - just a decent budget and enough people who will work for cheap, and anyone can manufacture not just a consensus, but a culture. Shout enough opinions into a crowd, and some of them will take root, and now you've got another person shouting alongside you. It's not like it's a new phenomenon - how often, back in the day, did you or I accuse someone of being a Microsoft shill, or of astroturfing for some corporation or other? A lot of those accusations may have been true, but I'm sure many of them were not.

        But what does it matter, whether someone is a machine or a human, when the opinion they shout is not their own? That is the real problem - social media has allowed too much pollution of the discourse. It's no longer about debate, building a logical case to support your position and poking flaws in your opponent's reasoning, but about who can state their position loudest and longest. It's about endurance, not smarts - forcing your opponents to waste so much time responding to a never-ending stream of lies and bullshit that they eventually give up.

        I don't know if there's a solution. I used to think Slashdot's system protected it, but after seeing a spree of explicitly anti-democratic posts here in the past few weeks, I'm starting to think it just delayed it by a few years. Perhaps something that gave users the information they need to more accurately spot the bots and troll farms, coupled with a strict moderation team to purge them when spotted and confirmed. Or maybe it's just inevitable that any sufficiently-large social network site has a collapse of trust, and the solution is to fracture into smaller communities. I still use a bunch of old web forums, and they don't seem to have fallen victim, yet.

        • The format of most social media (Facebook and Twitter especially) pushes people towards bot-like behavior.

          Disqus, in particular, does this. My comments often get flagged as spam because they are long, comprehensive thoughts. Apparently their system thinks that's probably spam: spam in dense paragraphs, with mostly correct grammar and spelling, fairly high-level vocabulary, and no external links - or, if there are any, links to the usual places for looking up factual information.

          When website designers write algorithms that push people away from real discussion, it absolutely devolves into slapd

          • >The more clicks, the more views, the more ads that get views, the more revenue you get. Someone spending 15 minutes reading a comment and writing a response generates one ad view. Someone spending 10 seconds reading a post and 10 seconds replying over those 15 minutes hits two orders of magnitude more ads.

            Do you feel like your attention span has been impacted by this too? I used to find it rather effortless to spend time having long-form discussion on websites like /. but in recent years it feels like s

        • I enjoyed your post very much and I'd like to add an element that extends your argument:

          In brief: Social media's business model (indeed, the entire Internet's) is to make money in three ways:

          - Advertisements
          - Data prostitution
          - Subscriptions

          The product is "eyeballs," just as it always has been with newspapers (the Enquirer), radio (Howard Stern), and TV (Jerry Springer).

          The customer is advertisers, even with subscriber-subsidized content, like newspapers, magazines, and TV.

          So the task is pretty simple: Gather eyeba

      • Most of the bots post text that was written by a human. Even if a bot has some kind of text generation capability, it's usually heavily pre-programmed with knowledge by a human.
      • Comment removed based on user account deletion
      • by mentil ( 1748130 )

        I hadn't heard of it, so for those interested, here's a decent writeup [kotaku.co.uk] on the 'NPC meme'. I'm surprised the article didn't make the comparison to philosophical zombies, as it's a similar concept.

        • Kotaku is pathologically incapable of an honest or decent writeup of anything. They're a living example of some of the most dishonest, unethical, and misleading agitprop on the internet today. They're right up there with infowars, dailykos, and AJ+.

          And just as I called it in my original post, they immediately went straight to raising a hue and cry over dehumanization and fascism, when that's exactly what they themselves have been doing for years: dehumanizing everyone who disagrees with them and spouting pro-violence p

      • by AmiMoJo ( 196126 )

        It's because so many people love to jump on bandwagons and repeat the same talking points with slight variations.

        Check Amazon reviews on popular products: most of them are very similar to each other and written by people who have obviously only had the item for about 5 seconds, or who haven't even bought it at all but just wanted to get in on the action.

        Go read the user reviews on IMDB for movies like Black Panther and The Last Jedi, for example, and a very large number of them are just repeating standard generi

        • It's because so many people love to jump on bandwagons and repeat the same talking points.....People have started to act more like bots. Maybe deliberately in some cases, maybe unconsciously in others.....

          The internet is magical because it allows the village idiot a microphone the same size as the smartest people in the village. And often the smart folks realize that there's no reason to get into a braying match with an ass.

          That means you have a whole lot of people who aren't smart enough to contribute meaningfully, but who want to gain some of those sweet, sweet internet points. How do they do that? They make up for the lack of content with volume. How do they get that volume with a deficit in knowledge and

          • by AmiMoJo ( 196126 )

            My point is, if there was an idiot on the radio or TV there would usually be someone sensible to contrast with them. Thus their idiocy was apparent.

            On the internet that often doesn't happen. Systems designed to make it happen are often easily gamed. Facebook tried to force it by putting Snopes links next to fake news, so the fake news peddlers started attacking Snopes for being fake/biased.

            I don't know how to solve it.

            • >Facebook tried to force it by putting Snopes links next to fake news, so the fake news peddlers started attacking Snopes for being fake/biased.

              And what else would you call it when they allow their partisan prejudice to affect their judgements to such an obscene degree that they will watch a live video proving something happened and still say it's "false" or "inconclusive" because of some batshit moon-logic twisted interpretation of the precise wording of the claim made?

              • by AmiMoJo ( 196126 )

                I'd call it fake news. I'm not dumb, I know the reason you have been non-specific and not provided any links or search terms is because it's bullshit.

                • Yes because Blaire White doesn't exist, isn't a real person, and wasn't attacked repeatedly in a single night just for wearing a MAGA hat.

                  • by AmiMoJo ( 196126 )

                    Okay, you are talking about this article on Snopes [snopes.com]. It rates the claim made by White on Twitter that she was attacked merely for wearing a MAGA hat in Hollywood as "mixture".

                    That seems entirely reasonable. It's true that she did get into an altercation, but only after going to an anti-Trump rally and, with her boyfriend, crossing an LAPD line meant to keep opposing sides apart to prevent violence.

                    Clearly the way she frames it as having been attacked merely for wearing a MAGA hat in Hollywood is omitting key

                    • And thanks for that live demonstration of both the phenomenal hypocrisy of the social justice movement and the absurd, dishonest moon logic I was just talking about. The only provocation is wearing a MAGA hat around the alt-left. The only deception is Snopes attempting to lie and claim that someone who was attacked was actually the perpetrator, because reasons. It's remarkable that no amount of evidence is enough to defend a police officer when BLM decides to throw their weight behind a violent criminal, b

        • Reminds me of the early days when the likes of CompuServe and Prodigy were joined by noobie AOL users.

          Hell, every person wants to see their comment on the brand new Internet!

          AOL was the first "me too" generation.

        • If I hadn't seen a string of articles from the usual suspects (polytaku et al) pushing the same narrative you are here, that one might have actually appeared superficially reasonable, until the last line where you let your slip show. All you're doing, though, is parroting the latest addition to the party line: that allowing mere peasants to have a voice and post reviews is wrong and bad, and only our glorious social justice leaders should be allowed to decide what is or isn't good for us.

          • by AmiMoJo ( 196126 )

            Your obsession with the imaginary spectre of social justice both clouds your thinking and undermines your arguments.

    • It is the reverse Turing test. If you can't tell a stupid machine from a human, then the human must be stupid.

  • stoopid hoomans!
  • by careysub ( 976506 ) on Monday October 15, 2018 @06:47PM (#57483248)

    It was an opinion poll about whether people were confident they could identify social bots. No study was done to see if they really could or not.

  • by RhettLivingston ( 544140 ) on Monday October 15, 2018 @06:52PM (#57483268) Journal

    When it comes to the simplest statements, there just isn't enough content to tell whether it is a person or a bot. Many people pick up what bots say and amplify them and vice versa.

    But bots aren't just spreading simple messages. The better ones are spreading messages handcrafted for effect by people. The lobbyist, or whatever we want to call the person using the bot to manipulate, writes the initial messages; they aren't from a bot. Then bots are used to amplify the message and tie it in, linking both to it and to parties it will likely resonate with. The better bots may also search out similar messages and amplify them, as well as use AI to paraphrase the message in new ways and spread it in different forms.

    It is all under human control though and getting more and more difficult to recognize.

    • by AmiMoJo ( 196126 )

      The way to detect bots is to look for bot-like behaviour, such as posting the same material as other bots, only ever liking/reposting material from a small number of people, having fake profiles with stock images, the fact that they only ever post during Moscow office hours, etc.

      All of those things are relatively easy to fix, but the bot herders don't bother because they don't have to.
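
      (A minimal sketch of that kind of heuristic scoring, purely for illustration: the signals come from the comment above, while the field names, thresholds, and weights are assumptions rather than any platform's actual detection code.)

          from datetime import timezone, timedelta

          MSK = timezone(timedelta(hours=3))  # Moscow is UTC+3

          # Assumed lookup sets, e.g. built from already-flagged accounts.
          posts_seen_elsewhere = set()   # hashes of posts pushed by other suspect accounts
          known_stock_images = set()     # hashes of known stock-photo avatars

          def bot_score(account):
              """Crude heuristic: a higher score means more bot-like behaviour.

              `account` is an assumed dict with keys: post_hashes, reposted_from,
              profile_image_hash, and post_times (timezone-aware datetimes).
              """
              score = 0
              # 1. Posts the same material as other suspected bots.
              if any(h in posts_seen_elsewhere for h in account["post_hashes"]):
                  score += 2
              # 2. Only ever amplifies a small circle of source accounts.
              if len(set(account["reposted_from"])) <= 3:
                  score += 1
              # 3. Profile picture is a known stock image.
              if account["profile_image_hash"] in known_stock_images:
                  score += 1
              # 4. Activity confined to Moscow office hours (assumed 09:00-18:00).
              hours = [t.astimezone(MSK).hour for t in account["post_times"]]
              if hours and all(9 <= h < 18 for h in hours):
                  score += 2
              return score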

      • Don't want to. More content == more views, more views == more revenue. Doesn't matter if it's made by a bot or viewed by a bot. As long as it's driving clicks and eyeballs, it's worth money.

        Internet outrage is the current way we fuel the internet.

  • Maybe they're simply acknowledging that the NPC humans on social media provide no more intellectual content than a software bot. Kind of like the bogus clickbait headlines on Slashdot.
  • It formed random "sentences" based on Markov chains 3 words deep, constructed from a word-appearance database built from about 4 months of IRC logs that I had personally collected. Each "sentence" it created was based on a single word randomly selected from the channel chatter since the last comment, and it made one comment every two minutes. It was nothing more sinister than that.

    Much of what it said seemed like a non sequitur, and I think it was widely assumed to be trolling, although I had not coded it specifically to do so.

    There wasn't a single channel I took it to where it didn't end up banned. In retrospect, it was an interesting social experiment, although I hadn't intended it to be such.
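
    (A minimal sketch of a chatter bot along the lines described above: the 3-word chain depth, the seed word drawn from recent channel chatter, and the two-minute interval come from the comment, while the function names, file handling, and missing IRC plumbing are assumptions, not the poster's actual code.)

        import random
        import time
        from collections import defaultdict

        def build_chain(corpus_lines, depth=3):
            # Map each 3-word prefix to the words observed to follow it.
            chain = defaultdict(list)
            for line in corpus_lines:
                words = line.split()
                for i in range(len(words) - depth):
                    chain[tuple(words[i:i + depth])].append(words[i + depth])
            return chain

        def make_sentence(chain, seed_word, max_words=25):
            # Pick a prefix containing the seed word, then walk the chain.
            candidates = [p for p in chain if seed_word in p] or list(chain)
            words = list(random.choice(candidates))
            while len(words) < max_words:
                followers = chain.get(tuple(words[-3:]))
                if not followers:
                    break
                words.append(random.choice(followers))
            return " ".join(words)

        # Hypothetical usage: a corpus of IRC logs, one comment every two
        # minutes, seeded from the chatter seen since the last comment.
        if __name__ == "__main__":
            with open("irc_logs.txt") as f:   # assumed log file, one message per line
                chain = build_chain(f)
            recent_chatter = "what is everyone playing tonight"  # stand-in for live channel text
            while True:
                seed = random.choice(recent_chatter.split())
                print(make_sentence(chain, seed))
                time.sleep(120)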

    • I remember you. You were on EFNET on the #c channel.
    • by Mal-2 ( 675116 )

      I've been in a channel where there were multiple bots, but they would only speak when spoken to. This was a reasonably acceptable compromise, as people who thought it was funny to troll the bots got themselves kicked, not the bots. Of course, once we realized they were all using slightly different implementations of Markov chains, we started repeating certain phrases that included a word that simply didn't see much use in our channel. That way, when someone did use such a word when querying the bot, they go

  • Most Americans on "social media" are NPCs anyway.

  • read at the 6th grade level so it's not surprising
  • This goes back to at least the Eliza [wikipedia.org] program, which drew in a surprising number of folks and was hardly AI [fullerton.edu].
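
    (For reference, ELIZA worked by simple keyword spotting and canned transformation rules, with nothing resembling understanding; a toy sketch of the idea follows. The patterns here are illustrative, not Weizenbaum's actual DOCTOR script.)

        import random
        import re

        # A few toy rules in the spirit of the original DOCTOR script.
        RULES = [
            (re.compile(r"\bi need (.+)", re.I),
             ["Why do you need {0}?", "Would it really help you to get {0}?"]),
            (re.compile(r"\bi am (.+)", re.I),
             ["How long have you been {0}?", "Why do you think you are {0}?"]),
            (re.compile(r"\bmy (mother|father|family)\b", re.I),
             ["Tell me more about your {0}."]),
        ]
        FALLBACKS = ["Please go on.", "How does that make you feel?"]

        def respond(user_input):
            # Return the first matching canned response, else a generic prompt.
            for pattern, replies in RULES:
                match = pattern.search(user_input)
                if match:
                    return random.choice(replies).format(*match.groups())
            return random.choice(FALLBACKS)

        # respond("I am tired of social media")
        # -> e.g. "Why do you think you are tired of social media?"
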
  • So APK is a bot? Or it's a fake news spreader?
  • by a human and the picture linked is on topic and politically funny?
    Can a bot be more on topic and useful than the human responding in all caps that it makes fun of their side of politics?
    That the animated meme of a frail politician provided fun and joy to millions.

    A human made the art.
    The human who found the image, artwork, animation is still doing the creative work.
    The bot just allows a person to use their time in a better way. To find more art and share powerful political memes. The bot doing th
  • Most Americans can't tell the difference between people on social media and bots
  • It's obvious.

    The bots reply ;-)

  • Does the Turing Test count if people change?
    • The Turing test is for two-sided conversation: "Are there imaginable digital computers which would do well in the imitation game?"

      What is described in TFS is a one-sided conversation, which is a different matter.

  • by nehumanuscrede ( 624750 ) on Monday October 15, 2018 @10:12PM (#57484126)

    Stay off of social media and the problem mentioned in the headline is irrelevant.

    Maybe the bots will get into and / or instigate arguments between themselves.

    ...if a bot makes a statement and no one is there to read it, would anyone care?

  • by OrangeTide ( 124937 ) on Monday October 15, 2018 @11:26PM (#57484320) Homepage Journal

    People need to be certified before using a computer, because this has gotten more dangerous than driving cars. And luckily there is no digital equivalent of the 2nd amendment, so better to nip this one in the bud right now.

  • Of course, the ultimate proof is that they elected one as president.
  • I predicted years ago that humans would fail the Turing test before AI passes it. Most on-line posting is of such a low quality I'd hope it's mostly bots. Otherwise I fear for humanities future.
    • Well, maybe if we put more emphasis on humanities we could counter this, but then again, who'd want to educate the idiots?

  • Most Americans can't tell an obviously bogus bullshit story from reality either; that's why fake news is such a problem.

  • Weird headline ... as though most Elbonians, on the other hand, would spot the bot in a New York minute, lol

    Anyway, social media is such an artificial environment that the whole thing is silly. It's like saying that nobody can spot Robby the Robot - as long as we're all required to wear flex hose on our arms and giant fishbowls on our heads.

  • Not surprising. Back in the '90s I ran a BBS and had a small program called lisa installed. It acted like a real person. I would sit there and watch conversations people would have with it. One time a preacher was on my board and was talking to lisa, then he started witnessing to it. I had to break the connection, since I knew lisa was not real and was embarrassed for the preacher.

    If people were fooled back then, it would not be hard to imagine them being fooled today.
