Facebook Spares Humans By Fighting Offensive Photos With AI (techcrunch.com) 127
An anonymous reader writes from a report via TechCrunch: Facebook tells TechCrunch that its artificial intelligence systems now report more offensive photos than humans do. Typically when users upload content that is deemed offensive, it has to be seen and flagged by at least one human worker or user. Such posts that violate terms of service can include content that is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence. The content that workers have to dig through is obviously not great, and may lead to various psychological illnesses such as post-traumatic stress disorder. AI is helping to eliminate such a terrible job as it can scan images that are uploaded before anyone ever sees them. Facebook's AI already "helps rank News Feed stories, read aloud the content of photos to the vision impaired and automatically write closed captions for video ads that increase view time by 12 percent," writes TechCrunch. Facebook's Director of Engineering for Applied Machine Learning Joaquin Candela tells TechCrunch, "One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human." One risk of such an automated system is that it could censor art and free expression that may be productive or beautiful, yet controversial. The other more obvious risk is that such a system could take jobs away from those in need.
What could possibly go wrong? (Score:5, Funny)
Skynet gets to control what I'm posting. That will end well.
Re:What could possibly go wrong? (Score:4, Interesting)
I admin a Facebook group and the "automoderation" is dumb. FB reports the tamest pictures, and I can only imagine it's because the women have big boobs or something. We're not even talking about nudity.
Every "approve" I had to do was an utter waste of time. The post doesn't even appear until I do. I imagine a lot of admins, not wanting to run afoul of FB's big brother, become much more conservative in their approvals than the group would normally be.
Re:What could possibly go wrong? (Score:5, Interesting)
I admin a Facebook group and the "automoderation" is dumb. FB reports the tamest pictures, and I can only imagine it's because the women have big boobs or something. We're not even talking about nudity.
Warning - following comment mentions disturbing violent content you may not wish to read about.
I used to work the night shift for a huge MMO played mostly by kids and young teenagers. One of my work queues was investigating suspicious weblinks the players posted, typically to fake clan websites. I would spend a few hours of a night hitting tiny urls for poop porn sites, scam sites, lolshock sites and sometimes paedophile grooming hangouts, suicide blogs etc. that required notification to the authorities. It was pretty gross, but mostly just the same lemonparties and guys eating poop over and over and over; it would be really easy to automate that. I always turned the darker stuff off immediately, but then one night near the end of a very long shift, while really exhausted, I saw video footage of 4 young teenagers beating a child to death with hammers.
Can't say why I watched it. I really, really wish I hadn't; if anything, the sound was actually worse than the extremely graphic footage.
My point is that I only ever saw that once; that's probably the kind of rare stuff that would slip past the AIs and hit the real people anyway. Although I can understand why they would create an AI to pre-detect this, I guess some employees are still going to be hit by things that will forever change them.
All in, I would rather have human moderators, perhaps with an AI to warn them about extreme content, than see these incidents used as an excuse for automated, draconian and potentially politically motivated censorship.
I guess it's a long way off yet until we get our first AI whistle-blower.
Re: (Score:2, Insightful)
I honestly pray you reported what you saw.
I used to be an investigator for the largest ISP in the US. I saw so much pure evil, I left the security side of IT after 4 years. I've got a strong stomach, I saw blood in the military, I'm a "real man" as it were. But some content, especially involving minors is just beyond the pale.
Anyone who would harm a child in a malicious way needs to be executed. Full stop. I have children, and let me tell you that when you are aware of things like the above, you love your k
Re: (Score:2)
the left just coddles people that should be removed from society
Could you be more specific? What does that coddling look like?
Re: (Score:2)
How about: paedophiles who rape little children, or people who beat babies to death with indifference, receive just a couple of years in prison and can then go along their merry way and reoffend. When I see how low those sentences sometimes are, it makes me angry.
Imo, whoever rapes or kills a kid should go to prison for the rest of their natural life without parole. Or, if they get out, be actively monitored for life. But apparently it is much more important to consider their feelings and give them secon
Re: (Score:2)
Prepare for many dead kids if you get your wish.
If there is no reason to not eliminate a witness, the witness croaks.
Re: (Score:2)
Which short sentences have you read about? I don't remember reading about any.
Re: (Score:2)
I used to track down pedo posts on an ISP as well, to hand over to the Ontario Provincial Police.
It was difficult having to phone some small-town, computer-illiterate sheriff at 5 a.m. his time and try to explain what an IP address is, and how he needed to phone the ISP with the IP and a timestamp to get the physical location of the computer, to track down someone who was, for example, threatening suicide.
Re: (Score:2)
But then again, we're not talking about strong AI here. It can't detect the actual content of a message, only its form. And that means it will successfully censor all-caps rants and image macros but not arguments delivered in a calm and polite way - because those look like nor
Re:What could possibly go wrong? (Score:4, Insightful)
Warning - following comment mentions disturbing violent content you may not wish to read about.
When did /. become a politically correct playground for the sensitive children that never grew up?
Besides, trigger warnings trigger me, you insensitive clod!
Re: (Score:2)
Re: (Score:2)
Censoring this type of thing and pretending that it doesn't exist will never change the fact that it does in fact happen, a lot. It also makes it that much harder to drag out into the light of day.
Is a trigger warning censorship? Or an opportunity to self censor?
Re: (Score:2)
You might be missing some of the context of why someone tagged it as offensive.
I'm not offended by the sight of cleavage. However, I am awfully tired of the multitudes of "click-baitty whorification of women" teaser ads, and reporting them as "offensive" seems like the best option out of the few I am given. Yeah, I'd like to see fewer of those posts, and yeah, I find them offensive, especially here.
Some people probably also tag all ads as "offensive" because the existence of ads or the amount of them
Re: (Score:1)
Re: (Score:2)
Considering the crap that is on Facebook, it can only get better...
So I for one welcome... you know the rest.
People In Need (Score:5, Interesting)
"The other more obvious risk is that such a system could take jobs away from those in need."
Social Media Nipple Checkers Local 857, like my father and his father before him.
It's hard work on the Internet nippleface but we're a proud people.
Some people might say it's false drama, lamenting the decline of an industry that only goes back a dozen years but we original "ought fourer families" as we like to call ourselves have never known any other way.
I have friends who were Internet Radio DJs for the four hours that was a thing, until smart playlists replaced them. Many of them have never found employment since.
Re:People In Need (Score:4, Funny)
Re: (Score:1)
how good will it be at differentiating artistic nipples, male nipples, offensive female nipples, and accidental nipples? Until it can spot and block only the offensive female nipples with 100% accuracy, there's always a job for you...
It will be "good enough", which is all it will ever need to be in order to validate the savings by not hiring humans. This bullshit about 100% accuracy is pure marketing hype.
Comment removed (Score:4, Insightful)
Re: (Score:1)
Re: (Score:3)
You can milk anything with nipples.
Re: (Score:1)
Re: People In Need (Score:2)
Offensive female nipples?
I once hooked up with this chick and the way I found out she had hairy nipples was with my tongue... :/
Re: (Score:2)
Mmmm (Score:2)
Problem is, 99.7% of those 'offensive' photos are moms breastfeeding their kids.
When offensive becomes politically inconvenient (Score:1)
Can't wait for the day when FB finally reveals its true goal - to become the world's gatekeeper to all knowledge.
Remember when newly inaugurated Pres. Obama thought it'd be a good idea for FB login to be your official ID?
http://www.cbsnews.com/news/ob... [cbsnews.com]
And you thought Google's mission to simply index the world's knowledge to be scary?
Let me quote Mark Zuckerberg for you: dumbfucks [businessinsider.com]
Re: (Score:2)
Can't wait for the day when FB finally reveals its true goal - to become the world's gatekeeper to all knowledge...
If humans ALLOW a damn social media network to become the world's gatekeeper to all knowledge, then we get what we deserve.
That's like hiring the National Enquirer to help teach world history. It would be impossible to discern fact from bullshit. Ever.
Re:When offensive becomes politically inconvenient (Score:4, Informative)
Remember when newly inaugurated Pres. Obama thought it'd be a good idea for FB login to be your official ID?
Nope, and the link you provided doesn't say he did. In fact what he suggested is something that many nerds have been asking for.
Imagine you could create pseudo-anonymous identities. You could have them signed by trusted government agencies to say that they have confirmed your real identity, without necessarily sharing it with other organizations. If one gets compromised, I can mark it as dead and set up a new one. Make it distributed, maybe blockchain-based.
He didn't suggest Facebook at all. You made that up entirely.
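The scheme described above can be sketched in a few lines. This is only a toy illustration under assumed names (`issue_pseudonym`, `verify`, an HMAC standing in for a real digital signature): the agency vouches for a random handle after checking your real identity out of band, relying parties check the agency's tag without ever learning who you are, and revocation is just marking a handle dead.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the agency's secret key never leaves the agency,
# and the issued token carries no real-world identity at all.
AGENCY_KEY = secrets.token_bytes(32)
revoked = set()  # handles the holder has marked as dead

def issue_pseudonym():
    """Agency confirms your real identity out of band, then signs a fresh handle."""
    handle = secrets.token_hex(16)
    tag = hmac.new(AGENCY_KEY, handle.encode(), hashlib.sha256).hexdigest()
    return handle, tag

def verify(handle, tag):
    """A site checks the agency's signature; it never learns the real identity."""
    expected = hmac.new(AGENCY_KEY, handle.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag) and handle not in revoked

handle, tag = issue_pseudonym()
assert verify(handle, tag)      # accepted while live
revoked.add(handle)             # "mark it as dead and set up a new one"
assert not verify(handle, tag)  # rejected after revocation
```

A real system would use public-key signatures (so sites can verify without the agency's secret) and a distributed revocation list, which is where the blockchain suggestion comes in.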
Re: (Score:1)
"trusted government agencies"
oxymoron
Re: (Score:2)
More like after seeing beheadings, extreme animal abuse, snuff etc.
Re:"Post traumatic stress disorder" (Score:4, Funny)
I've been more traumatized by not seeing boobs when I hoped I would. :(
I was just about to see them, but then something happened and I didn't. I was depressed for weeks.
Seeing boobs would have uplifted me and buoyed my spirits.
Religious AI (Score:1, Insightful)
They taught AI religion?
Well, I guess we now know *why* Skynet will attempt to destroy us.
Re: (Score:2)
You can't think of this AI as a "digital person in charge of removing pictures", but as a creature that has a basic need to remove offensive pictures, as if it HAD to do it by instinct, like you have to breathe.
Of course, nothing stops it from searching for shortcuts and easier ways to accomplish it.
Penalty..?? (Score:1)
So, what is the penalty when this improperly flags images and who exactly is held accountable?
Bad, AI, bad! (Score:1)
One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people
Isn't that called a false positive?
Re: (Score:2)
more offensive photos being reported by AI algorithms than by people
So it seems the AI algo does better than humans at identifying "offensive" photos.
Human robots need jobs too (Score:2)
Why not just hire those people that graphic images don't affect?
Their lack of empathy might not let them get many jobs outside of the TSA, but they'll follow the rules precisely.
Personally, I wouldn't want to do it, but not because of the graphicness of the images, but because it's low paid and I'd find it really boring.
Probably want to set up an office image bingo card: "come on, nipple, nipple, dick pic, beheading... Yes! BINGO!"
Missing in the summary: The human misery (Score:5, Interesting)
What is missing from the Slashdot summary is the misery of the human "digital sanitation workers" who usually have to sort that crap out. There has been some recent reporting on these unfortunate people. I believe this reporting is why Facebook has come forward to show its effort, to counter the possible negative impact if this hits US media outlets.
The German political foundation "Heinrich Böll Stiftung" did a workshop on this phenomenon. Unfortunately there is little English-language reporting that I have found so far. Here is a link to the original source (the workshop):
https://calendar.boell.de/de/e... [boell.de]
But one of the presentations is in English and available on Youtube:
https://www.youtube.com/watch?... [youtube.com]
A couple of facts:
- the service is called Commercial Content Moderation
- 150,000 people work in this industry in the Philippines alone
- the Philippines is the major site for this job: besides being cheap, the workforce is largely Christian and is therefore supposed to have a good sense of what is considered appropriate content in the USA and Europe
- a lot of the workers report "issues" because of the extreme content they have to endure, including relationship problems and substance abuse
- they are not allowed to work longer than 24 months in this job, supposedly because of the issues mentioned above
Re: (Score:1)
How is that, seeing that what is offensive in the USA is the opposite of what's offensive in Europe?
It's like the difference between topless and open carry.
Another, shorter Speech (Score:2)
I found another presentation from the same speaker. This video is 28 minutes long:
https://re-publica.de/16/sessi... [re-publica.de]
Re: (Score:1)
> The German political foundation "Heinrich Böll Stiftung" did a workshop on this phenomenon. Unfortunately
> there is little English language reporting I found
Possibly because that sounds both like a made-up joke name and obscene, so you'll probably have to disable any sort of "safe surf" moderation on your search results.
What is the lowest common denominator? (Score:1)
I find Donald Trump offensive. Your move, Facebook.
Our jobs? (Score:2)
Dey duuk arr duuuur!
Bias Confirmation (Score:2)
Still more validation of my intuitive avoidance of social networking sites. I'm eternally grateful that there are still some people left who actually meet and talk in person. We may be a dying breed, but at least we'll die as human beings.
So was it (Score:1)
So was it humans performing their job well or AI performing its job well when Tess Monster got her fetid flabulence deleted last week? If the machines did it, at least it shows that skynet has decent aesthetic taste.
Shouldn't be legal (Score:1)
Re: (Score:1)
A refusal of service need not be classifiable down to the exact wording of the rules. If it did, then a person could wear a scarf and no shirt, and claim the scarf was a short shirt, and demand service in a 'no shirt no service' restaurant. The restaurant would presumably have to come up with an exact minimum length of shirt which qualifies, and that's just plain silly.
Classifiable was meant as being able to name a requirement without digressing into a discussion about its inherent properties. E.g. a "no shirt no service" restaurant actually says "SHIRT" and not "some garments we'll arbitrarily classify as not befitting our restaurant". In case of dispute, I guess ultimately a third party or a court can judge what constitutes a shirt.
The rules need to be intelligible and consistent, yes. Facebook's rules are. They refuse "content that is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence." There is nothing unintelligible here.
Facebook's STATED rules are indeed intelligible and consistent however the crux of my post was that a NN or DL AI system has no idea what these
Re: (Score:2)
I have to agree. Statistical guesswork might be the underlying mechanism behind perception, so as uncomfortable as that can be maybe that's all our brains are doing. I think when it comes to judgement, it gets very distasteful quickly to use the same methods. So far we're still at chatbot levels of rational decision making, so while it's amazing and a huge accomplishment to get a computer to describe a photograph mostly correctly much of the time, it's hard to see how any of the predictions being made in th
Re: (Score:2)
It's the other way around: you can refuse service except as specifically prohibited by law. Laws restricting the right to refuse service need to be justified based on a compelling government interest. There is certainly no compelling interest in forcing Facebook to post pictures that its users mi
here's your problem! (Score:3)
"its artificial intelligence systems now report more offensive photos than humans do."
Then either the AI is more easily offended than humans, or there are simply fewer humans working. Maybe if they hadn't fired the whole department last month (except for one guy).
That's what we do here: we talk about how a particular unit has been less productive so we can cut more heads, knowing, of course, that the unit is less productive because we've already reduced it to a skeleton crew. And that's how MBAs get their bonuses while other people get pink slips.
Hard to train (Score:1)
Re: (Score:2)
/b/
even simpler algorithm (Score:2)
If it's posted on Facebook, it almost always falls into one of two categories: inane or offensive. So a very simple algorithm for avoiding photos you don't want to see is... to quit Facebook.
Re: (Score:3)
I've seen absolutely horrible things on the internet and I don't have PTSD.
People are different, and one of the great achievements of modern society is that we have developed a culture that shows some level of consideration to even the weakest members of society. And to be fair - there is always a possibility that it isn't the more sensitive that are too sensitive, but the less sensitive that are simply too callous. And in practical terms, if you really enjoy watching graphical portrayals of cruelty, then you will be able to find it, even if it isn't readily available, whereas if
Re: (Score:2)
PTSD from looking at images on the screen of a computer? Seriously??
Yes, seriously. PTSD does not arise from the graveness of the danger you were in, but from the feeling of complete loss of control and the prolonged state of emotional stress you are subjected to, so it is quite credible that you can get PTSD from something that most people would not be affected by. Are people too sensitive if they are affected that much? Perhaps - but what would you do? Lock them up just so you don't feel that too much consideration is given to them? It doesn't cost most people a lot to s
Re: (Score:2)
one of the great achievements of modern society is that we have developed a culture that shows some level of consideration to even the weakest members of society.
I'd say our biggest problem is that we have shown too much consideration to those who choose to be offended and upset by even the most mundane of things. Reality hardens an individual. While we may not all be ready to be soldiers on the front lines, I would think it reasonable to expect a human being to be able to maintain some degree of sanity after being exposed to some of the darker truths of this world.
Re: (Score:2)
one of the great achievements of modern society is that we have developed a culture that shows some level of consideration to even the weakest members of society.
I'd say our biggest problem is that we have shown too much consideration to those who choose to be offended and upset by even the most mundane of things. Reality hardens an individual. While we may not all be ready to be soldiers on the front lines, I would think it reasonable to expect a human being to be able to maintain some degree of sanity after being exposed to some of the darker truths of this world.
Nanny state needs people to be raised soft. Otherwise they wouldn't need nanny state. As it is, in these Western countries where grown adults are as naive and pathetic as toddlers, nanny state has actually come to be necessary; without nanny state they'd go 'Lord of the Flies' in a week and be worshiping pigs' heads on sticks.
Re: (Score:2)
That's... an interesting opinion. I think there's likely a considerable amount of truth in it. I'd mod you up for it if I had points today.
Re: (Score:2)
I've heard the current sentiment described as "tyranny of the minority".
Re:See you at -1! (Score:4, Interesting)
Re: (Score:3)
Re: (Score:2)
Or perhaps we would all benefit from having more people too wussy to make the choices which make reality terrible. Because I've witnessed an awful lot of terrible things - such as poverty - being blamed on forces outside human control despite being the direct consequences of choices people make. It's you who should man up and stop being part of