AI Facebook Censorship Communications Network Networking Social Networks Software The Internet News Technology Your Rights Online

Facebook Spares Humans By Fighting Offensive Photos With AI (techcrunch.com) 127

An anonymous reader writes from a report via TechCrunch: Facebook tells TechCrunch that its artificial intelligence systems now report more offensive photos than humans do. Typically when users upload content that is deemed offensive, it has to be seen and flagged by at least one human worker or user. Such posts that violate terms of service can include content that is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence. The content that workers have to dig through is obviously not great, and may lead to various psychological illnesses such as post-traumatic stress disorder. AI is helping to eliminate such a terrible job as it can scan images that are uploaded before anyone ever sees them. Facebook's AI already "helps rank News Feed stories, read aloud the content of photos to the vision impaired and automatically write closed captions for video ads that increase view time by 12 percent," writes TechCrunch. Facebook's Director of Engineering for Applied Machine Learning Joaquin Candela tells TechCrunch, "One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human." One risk of such an automated system is that it could censor art and free expression that may be productive or beautiful, yet controversial. The other more obvious risk is that such a system could take jobs away from those in need.
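
Neither Facebook nor TechCrunch describes the actual pipeline, but the basic shape of such a pre-screening system is an image classifier that scores every upload before it reaches either the public or the human review queue. A minimal sketch of that idea in Python (the classifier interface, function names and threshold values below are hypothetical, not Facebook's):

    # Hypothetical pre-screening step: score each upload with an image
    # classifier and only send uncertain cases to the human review queue.
    # The classifier object and its method are assumed, not a real API.
    from dataclasses import dataclass

    AUTO_FLAG_THRESHOLD = 0.95   # confident enough to act without a human
    REVIEW_THRESHOLD = 0.60      # uncertain: route to the human queue

    @dataclass
    class ModerationResult:
        action: str    # "auto_flag", "human_review" or "allow"
        score: float   # estimated probability the image violates the rules

    def prescreen(image_bytes: bytes, classifier) -> ModerationResult:
        """Runs before the photo is shown to anyone, human reviewers included."""
        score = classifier.predict_offensive_probability(image_bytes)
        if score >= AUTO_FLAG_THRESHOLD:
            return ModerationResult("auto_flag", score)     # no human ever sees it
        if score >= REVIEW_THRESHOLD:
            return ModerationResult("human_review", score)  # smaller, nastier queue
        return ModerationResult("allow", score)

Pushing the auto-flag share toward 100 percent, in Candela's phrasing, amounts to raising the fraction of uploads resolved by the first branch before any human is involved.
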
This discussion has been archived. No new comments can be posted.


  • by Bruce66423 ( 1678196 ) on Wednesday June 01, 2016 @02:07AM (#52223597)

    Skynet gets to control what I'm posting. That will end well.

    • by Anonymous Coward on Wednesday June 01, 2016 @02:19AM (#52223641)

      I admin a Facebook group and the "automoderation" is dumb. FB reports the tamest pictures, and I can only imagine it's because the women have big boobs or something. We're not even talking about nudity.

      Every "approve" I had to do was utterly a waste of time. The post doesn't even appear until I do. I imagine a lot of admins who don't want to run afoul of this start becoming much more conservative of their approvals than the group would normally be just so they don't run afoul of FB big brother.

      • by rainmouse ( 1784278 ) on Wednesday June 01, 2016 @06:56AM (#52224389)

        I admin a Facebook group and the "automoderation" is dumb. FB reports the tamest pictures, and I can only imagine it's because the women have big boobs or something. We're not even talking about nudity.

        Warning - following comment mentions disturbing violent content you may not wish to read about.

        I used to work the night shift for a huge MMO played mostly by kids and young teenagers. One of my work queues was investigating suspicious weblinks the players posted, typically to fake clan websites. I would spend a few hours a night hitting tiny URLs for poop porn sites, scam sites, lolshock sites and sometimes paedophile grooming hangouts, suicide blogs etc. that required notification to the authorities. It was pretty gross, and mostly just the same lemonparties and guys eating poop over and over and over; it would be really easy to automate that... I always turned the darker stuff off immediately, but then one night near the end of a very long shift, while really exhausted, I saw video footage of 4 young teenagers beating a child to death with hammers.

        Can't say why I watched it. I really, really wish I hadn't. If anything, the sound was actually worse than the extremely graphic footage.
        My point is that I only ever saw that one once; that's probably the kind of rare stuff that would slip past the AIs and hit the real people anyway. Although I can understand why they would create an AI to pre-detect this, I guess some employees are still going to be hit by things that will forever change them.

        All in all, I would rather have human moderators, perhaps with an AI to warn them about extreme content, than have these incidents used as an excuse for automated, draconian and potentially politically motivated censorship.
        I guess it's a long way off yet until we get our first AI whistle-blower.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          I honestly pray you reported what you saw.

          I used to be an investigator for the largest ISP in the US. I saw so much pure evil, I left the security side of IT after 4 years. I've got a strong stomach, I saw blood in the military, I'm a "real man" as it were. But some content, especially involving minors is just beyond the pale.

          Anyone who would harm a child in a malicious way needs to be executed. Full stop. I have children, and let me tell you that when you are aware of things like the above, you love your k

          • by martas ( 1439879 )

            the left just coddles people that should be removed from society

            Could you be more specific? What does that coddling look like?

              How about: paedophiles who rape little children, or people who beat babies to death with indifference, just receive a couple of years in prison and then can go along their merry way to reoffend. When I see how low those sentences sometimes are, it makes me angry.

              Imo. whoever rapes or kills a kid should go to prison for the rest of their natural life without parole. Or if they get out, be actively monitored for life. But apparently it is much more important to consider their feelings and give them secon

        • All in all, I would rather have human moderators, perhaps with an AI to warn them about extreme content, than have these incidents used as an excuse for automated, draconian and potentially politically motivated censorship.

          But then again, we're not talking about strong AI here. It can't detect the actual content of a message, only its form. And that means it will successfully censor all-caps rants and image macros but not arguments delivered in a calm and polite way - because those look like nor

        • by Opportunist ( 166417 ) on Wednesday June 01, 2016 @11:01AM (#52225683)

          Warning - following comment mentions disturbing violent content you may not wish to read about.

          When did /. become a politically correct playground for the sensitive children that never grew up?

          Besides, trigger warnings trigger me, you insensitive clod!

          • I kinda see the trigger warning as similar to a condom. Better to have and not need than to need and not have.
      • You might be missing some of the context of why someone tagged it as offensive.

        I'm not offended by the sight of cleavage. However, I am awfully tired of the multitudes of "click-baity whorification of women" teaser ads, and reporting them as "offensive" seems like the best option out of the few I am given. Yeah, I'd like to see fewer of those posts, and yeah, I find them offensive, especially here.

        Some people probably also tag all ads as "offensive" because the existence of ads or the amount of them

    • hahahaha
    • Considering the crap that is on Facebook, it can only get better...

      So I for one welcome... you know the rest.

  • People In Need (Score:5, Interesting)

    by nick_davison ( 217681 ) on Wednesday June 01, 2016 @02:18AM (#52223635)

    "The other more obvious risk is that such a system could take jobs away from those in need."

    Social Media Nipple Checkers Local 857, like my father and his father before him.

    It's hard work on the Internet nippleface but we're a proud people.

    Some people might say it's false drama, lamenting the decline of an industry that only goes back a dozen years, but we original "ought-fourer families", as we like to call ourselves, have never known any other way.

    I have friends who were Internet Radio DJs for the four hours that was a thing, until smart playlists replaced them. Many of them have never found employment since.

    • by advocate_one ( 662832 ) on Wednesday June 01, 2016 @04:07AM (#52223931)
      How good will it be at differentiating between artistic nipples, male nipples, offensive female nipples, accidental nipples? Until it can 100% spot and block only the offensive female nipples there's always a job for you...
      • by Anonymous Coward

        How good will it be at differentiating between artistic nipples, male nipples, offensive female nipples, accidental nipples? Until it can 100% spot and block only the offensive female nipples there's always a job for you...

        It will be "good enough", which is all it will ever need to be in order to validate the savings by not hiring humans. This bullshit about 100% accuracy is pure marketing hype.

      • Comment removed (Score:4, Insightful)

        by account_deleted ( 4530225 ) on Wednesday June 01, 2016 @04:53AM (#52224055)
        Comment removed based on user account deletion
      • by Anonymous Coward
        There is no such thing as an offensive nipple. In fact, there is no such thing as an offensive body part. People are nature and there is nothing shameful or offensive there. There can't be. By definition. You might make a claim that something displaying a fetish graphically may be offensive - and could absolutely be correct. But nudity in and of itself cannot be offensive. If someone claims that a nipple, boob, butt, penis, or vulva is offensive, either censor that weird person or tell them not to look at t
  • Problem is, 99.7% of those 'offensive' photos are moms breastfeeding their kids.

  • Can't wait for that day when FB finally reveals its true goals - to become the world's gatekeeper to all knowledge.
    Remember when newly inaugurated Pres. Obama thought it'd be a good idea for FB login to be your official ID?
    http://www.cbsnews.com/news/ob... [cbsnews.com]

    And you thought Google's mission to simply index the world's knowledge was scary?
    Let me quote Mark Zuckerberg for you: dumbfucks [businessinsider.com]

    • Can't wait for that day when FB finally reveals its true goals - to become the world's gatekeeper to all knowledge...

      If humans ALLOW a damn social media network to be or become the world's gatekeeper to all knowledge, then we get what we deserve.

      That's like hiring the National Enquirer to help teach world history. It would be impossible to discern fact from bullshit. Ever.

    • by AmiMoJo ( 196126 ) on Wednesday June 01, 2016 @07:10AM (#52224427) Homepage Journal

      Remember when newly inaugurated Pres. Obama thought it'd be a good idea for FB login to be your official ID?

      Nope, and the link you provided doesn't say he did. In fact what he suggested is something that many nerds have been asking for.

      Imagine you could create pseudo-anonymous identities. You could have them signed by trusted government agencies to say that they have confirmed your real identity, without the need to necessarily share it with other organizations. If one gets compromised, you could mark it as dead and set up a new one. Make it distributed, maybe blockchain-based (a rough sketch follows below).

      He didn't suggest Facebook at all. You made that up entirely.
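
      A toy version of the idea above, using Ed25519 signatures from the Python cryptography package; the "agency" key, the revocation set and every name here are made up for illustration, and a real scheme would need far more (certificates, and blind signatures for actual unlinkability, for a start):

        # Toy sketch: an agency endorses a pseudonymous public key without
        # publishing the link to the real identity; compromised pseudonyms
        # are revoked and replaced. Illustrative only.
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric import ed25519
        from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

        agency_key = ed25519.Ed25519PrivateKey.generate()      # trusted authority
        pseudonym_key = ed25519.Ed25519PrivateKey.generate()   # your pseudonym
        pseudonym_pub = pseudonym_key.public_key().public_bytes(
            Encoding.Raw, PublicFormat.Raw)

        # The agency checks your real ID offline, then signs only the pseudonym.
        endorsement = agency_key.sign(pseudonym_pub)

        revoked = set()  # mark a compromised pseudonym as dead, mint a new one

        def is_valid(pub_bytes: bytes, sig: bytes) -> bool:
            # Verifiers would hold only the agency's public key in practice.
            if pub_bytes in revoked:
                return False
            try:
                agency_key.public_key().verify(sig, pub_bytes)
                return True
            except InvalidSignature:
                return False

        print(is_valid(pseudonym_pub, endorsement))  # True
        revoked.add(pseudonym_pub)
        print(is_valid(pseudonym_pub, endorsement))  # False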

  • Religious AI (Score:1, Insightful)

    by Anonymous Coward

    They taught AI religion?

    Well, I guess we now know *why* Skynet will attempt to destroy us.

    • by Z80a ( 971949 )

      You can't think of this AI as a "digital person in charge of removing pictures", but rather as "a creature that has a basic need to remove offensive pictures", as if it HAD to do it by instinct, the way you have to breathe.

      Of course nothing stops it from searching for shortcuts and easier ways to accomplish it.

  • by Anonymous Coward

    So, what is the penalty when this improperly flags images and who exactly is held accountable?

  • One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people

    Isn't that called a false positive?
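
    Or at least it could be: "reports more" by itself mixes volume with accuracy. To separate the two you would want precision and recall, numbers neither Facebook nor TechCrunch gives. A toy calculation with entirely made-up figures, just to show how "more reports" and "more false positives" can both be true while the AI still catches more:

      # Made-up counts; neither Facebook nor TechCrunch publishes these.
      ai_reports, ai_correct = 1000, 800       # AI flags, of which truly offensive
      human_reports, human_correct = 600, 540  # human flags, of which truly offensive
      total_offensive = 1200                   # all rule-breaking photos in the period

      def precision(correct, reported):
          return correct / reported            # share of flags that were right

      def recall(correct, total):
          return correct / total               # share of bad photos that got caught

      print(f"AI:    precision={precision(ai_correct, ai_reports):.2f}  "
            f"recall={recall(ai_correct, total_offensive):.2f}")
      print(f"Human: precision={precision(human_correct, human_reports):.2f}  "
            f"recall={recall(human_correct, total_offensive):.2f}")
      # Here the AI reports more photos and also makes more false positives
      # (200 vs 60), yet still catches a larger share of the truly offensive ones.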

    • more offensive photos being reported by AI algorithms than by people

      So it seems the AI algo does better than humans at identifying "offensive" photos.

  • Why not just hire those people that graphic images don't affect?
    Their lack of empathy might not let them get many jobs outside of the TSA, but they'll follow the rules precisely.

    Personally, I wouldn't want to do it, though not because of the graphic nature of the images, but because it's low paid and I'd find it really boring.
    Probably want to set up an office image bingo card: "come on, nipple, nipple, dick pic, beheading... Yes! BINGO!"

  • by Britz ( 170620 ) on Wednesday June 01, 2016 @03:54AM (#52223881)

    What is missing in the Slashdot summary is the misery of the human "digital sanitation workers" who usually have to sort that crap out. There has been some recent reporting on these unfortunate people, and I believe that reporting is why Facebook has come forward to show its efforts, to counter the possible negative impact if this hits US media outlets.

    The German political foundation "Heinrich Böll Stiftung" did a workshop on this phenomenon. Unfortunately there is little English-language reporting that I have found so far. Here is a link to the original source (the workshop):

    https://calendar.boell.de/de/e... [boell.de]

    But one of the presentations is in English and available on Youtube:

    https://www.youtube.com/watch?... [youtube.com]

    A couple facts:

    - the service is called Commercial Content Moderation

    - 150,000 people work in this industry in the Philippines alone
    - the Philippines is the major site for this work because, while labor there is cheap, the workers being Christian means they are supposed to have a good sense of what is considered appropriate content in the USA and Europe

    - a lot of the workers report "issues" because of the extreme content they have to endure, including relationship problems and substance abuse
    - they are not allowed to work longer than 24 months in this job, supposedly because of the issues mentioned above

    • by Anonymous Coward

      the workers being Christian means they are supposed to have a good sense of what is considered appropriate content in the USA and Europe

      How is that, seeing that what is offensive in the USA is the opposite of what's offensive in Europe?

      It's like the difference between topless and open carry.

    • I found another presentation from the same speaker. This video is 28 minutes long:

      https://re-publica.de/16/sessi... [re-publica.de]

    • by Threni ( 635302 )

      > The German political foundation "Heinrich Böll Stiftung" did a workshop on this phenomenon. Unfortunately
      > there is little English-language reporting that I have found

      Possibly because that sounds both like a made-up joke name and obscene, so you'll probably have to disable any sort of "safe surf" moderation on your search results.

  • by Anonymous Coward

    I find Donald Trump offensive. Your move, Facebook.

  • Dey duuk arr duuuur!

  • Still more validation of my intuitive avoidance of social networking sites. I'm eternally grateful that there are still some people left who actually meet and talk in person. We may be a dying breed, but at least we'll die as human beings.

  • So was it humans performing their job well or AI performing its job well when Tess Monster got her fetid flabulence deleted last week? If the machines did it, at least it shows that skynet has decent aesthetic taste.

  • If my Googling concerning "The Right to Refuse Service" laws in the US is correct, it is not legal to refuse service outside of the law (anti-discrimination laws on multiple levels) or arbitrarily or inconsistently. Focusing on the latter two, this means that any refusal of service must be "classifiable", or in other words there must be a set of lawful "refusal rules" that CAN be adhered to BEFORE requesting the service. As far as I understand neural networks and deep learning, that requirement isn't met b
    • I have to agree. Statistical guesswork might be the underlying mechanism behind perception, so as uncomfortable as that can be maybe that's all our brains are doing. I think when it comes to judgement, it gets very distasteful quickly to use the same methods. So far we're still at chatbot levels of rational decision making, so while it's amazing and a huge accomplishment to get a computer to describe a photograph mostly correctly much of the time, it's hard to see how any of the predictions being made in th

    • If my Googling concerning "The Right to Refuse Service" laws in the US is correct, it is not legal to refuse service outside of the law (anti-discrimination laws on multiple levels) or arbitrarily or inconsistently.

      It's the other way around: you can refuse service except as specifically prohibited by law. Laws restricting the right to refuse service need to be justified based on a compelling government interest. There is certainly no compelling interest in forcing Facebook to post pictures that its users mi

  • by tekrat ( 242117 ) on Wednesday June 01, 2016 @08:46AM (#52224889) Homepage Journal

    "its artificial intelligence systems now report more offensive photos than humans do."

    Then either the AI is more easily offended than humans, or there are simply fewer humans working. Maybe if they hadn't fired the whole department last month (except for one guy).

    That's what we do here: we talk about how a particular unit has been less productive so we can cut more heads, knowing of course that the unit is less productive because we've already reduced it to a skeleton crew. And that's how MBAs get their bonuses while other people get pink slips.

  • How are they going to find enough pornographic images to properly train the system?
  • If it's posted on Facebook, it almost always falls into one of two categories: inane or offensive. So, a very simple algorithm for avoiding photos you don't want to see is... to quit Facebook.
