
Anti-Porn Facebook Page is Deleted, Then Restored

Slashdot regular contributor Bennett Haselton writes: "An anti-porn organization's Facebook page is disabled by Facebook, and then resurrected. Was the page the victim of a 'complaint mob,' and could the previously-discussed 'voting algorithm' have saved the page from being shut down?"

Speaking of Facebook pages being unjustly shut down, on Monday the anti-porn Facebook page http://www.facebook.com/PornHarms/, run by the non-profit Morality in Media, was abruptly disabled by Facebook. The page had 35,000 "likes" at the time the plug was pulled. Morality in Media CEO Patrick Trueman, who also ran the Facebook page, says he never received any warning from Facebook before the page was removed.

Sometime on Wednesday, the page was restored. I had emailed a contact at Facebook to ask why the page was shut down, and he replied later to say that it had been disabled in error and had since been restored. (He didn't say whether the page was on track to be restored anyway, or whether it would have remained down indefinitely if I hadn't pinged him.)

Facebook did not respond to inquiries about why the page was removed, but as Evgeny Morozov has pointed out regarding political pages (and as many users know from anecdotal experience of having pages pulled without explanation), it's common for pages that were almost certainly not violating the sites' Terms of Service to be removed from Facebook and YouTube. If enough users file "abuse complaints" simultaneously against a piece of content on Facebook or YouTube, there is a good chance the content will be removed, whether the complaints are legitimate or simply part of an organized campaign of false reports.

Meanwhile, I correspond with dozens of people every week on Facebook (usually people who use my proxy sites to get on Facebook at school or work), and about once a week I get an automated message from Facebook that says, "You have been sending harassing messages to other users," and goes on to sternly list the types of messages that violate Facebook's TOS. (Only twice has this resulted in my account actually getting locked, and it was unlocked after I bugged my friend at Facebook about it.)

I figure that these are either the result of users clicking "Report this message" accidentally, or of parents hacking into their kids' accounts, reading their messages, and then trying to shut down the account of the person who was talking to their kid about proxy sites. In either case, I assume it's not the result of an "organized campaign," but perhaps your account gets locked if you're unlucky enough that two or three people file complaints within the same short time frame.

So I have no reason to doubt Mr. Trueman's claim that the PornHarms Facebook page never contained any content that violated Facebook's TOS. He says the page mostly contained links to academic research supposedly demonstrating the harmful effects of pornography, and that while the target audience was adult academics, there was nothing in the content that most parents would consider inappropriate for underage viewers. There was certainly no actual pornography on the page, not even in censored form with the fun parts blurred out (although I didn't check every single academic paper linked from the site to see if any of them might have used pixellated/censored porn for illustrative purposes). Trueman also says that they prevented third-party users from posting on the PornHarms page directly, and regularly monitored the page's content to remove any "inappropriate" comments that users had written in response to the officially authorized posts. (Of course, even if the page admins hadn't done this, inappropriate comments should be the basis for penalizing the user who posted them, not the Facebook page that they were posted on, but it was a moot point in this case.)

Because of the word "Pornography" in the title of the page, it's also possible that a human at Facebook actually did review the complaints but assumed the word meant the page was a porn-trading hub, without looking too closely at it. (It's also possible that the word triggered an automated filter at Facebook. Obviously, there is no filter pre-emptively preventing pages with words like "pornography" in the title from being created, since otherwise the page could never have existed in the first place. But it's possible that an automated algorithm does something like the following: if a page receives X complaints within time period Y, and the page contains certain keywords in the title or the content, then shut down the page automatically.)
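To make that speculation concrete, here's a minimal Python sketch of what such an automated rule might look like. To be clear, this is my guess at the general shape of such a heuristic, not anything Facebook has disclosed; the keyword list, the threshold X, and the window Y are all invented for illustration.

```python
# Hypothetical sketch of a keyword-plus-complaint-threshold takedown rule.
# Nothing here is Facebook's actual code; the keywords, threshold, and
# window are made-up illustrative values.
import time
from collections import deque

FLAGGED_KEYWORDS = {"porn", "pornography"}  # assumed keyword list
COMPLAINT_THRESHOLD = 10                    # X: complaints needed
WINDOW_SECONDS = 24 * 60 * 60               # Y: sliding window (24 hours)

class Page:
    def __init__(self, title, content):
        self.title = title
        self.content = content
        self.complaint_times = deque()  # timestamps of recent complaints
        self.disabled = False

    def contains_flagged_keyword(self):
        text = (self.title + " " + self.content).lower()
        return any(word in text for word in FLAGGED_KEYWORDS)

    def record_complaint(self, now=None):
        now = time.time() if now is None else now
        self.complaint_times.append(now)
        # Discard complaints that have aged out of the window.
        while self.complaint_times and now - self.complaint_times[0] > WINDOW_SECONDS:
            self.complaint_times.popleft()
        # X complaints within Y seconds, plus a flagged keyword,
        # triggers an automatic takedown with no human review.
        if (len(self.complaint_times) >= COMPLAINT_THRESHOLD
                and self.contains_flagged_keyword()):
            self.disabled = True
```

Under a rule like this, a page titled "Pornography Harms" trips the keyword check no matter how innocuous its content is, so a burst of coordinated complaints alone would be enough to disable it, without anyone ever looking at the page.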

Previously I'd suggested an algorithm that Facebook could use to stop users from coordinating phony complaints in order to shut a page down. The gist was: if a page receives a sufficient number of complaints, have it reviewed by a random sample (chosen by Facebook) of users who have signed up to review abuse cases in situations like these. If enough of those users vote that the page violates the TOS, the page gets shut down; if not, it stays up. What makes this algorithm difficult to abuse is that in order for a "coordinated mob" to swing the jury's vote, it would have to comprise a majority (or a significant minority) of the entire set of users that the randomly-selected jury could have been drawn from -- a difficult task if thousands of people have signed up as content reviewers. I offered a $100 prize to be split between readers who submitted the best suggested improvements or criticisms of the idea; their ideas were summarized in a follow-up article. A couple of readers commented that there was no point in debating the idea since I don't work for Facebook and have no influence there; they have a point, but the idea has to start somewhere. If engineers at Facebook are looking for a way to fix the problem, one thing that can be said for this suggestion is that it was posted to a large audience of smart people, and while several readers suggested very clever improvements, nobody found any obviously fatal flaws in it.
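Since the scheme is easy to state precisely, here's a minimal Python sketch of it, along with a small simulation of the anti-mob property. The jury size, the two-thirds conviction threshold, and the pool numbers are all values I invented for illustration; this is a sketch of the proposal, not production code.

```python
# Sketch of the proposed jury-review algorithm, with a simulation showing
# why a complaint mob has trouble gaming it. All parameters are assumed.
import random

JURY_SIZE = 25
CONVICT_FRACTION = 2 / 3  # share of jurors needed to uphold a takedown

def review_page(reviewer_pool, juror_votes_violation):
    """Draw a random jury from the volunteer pool and tally its votes."""
    jury = random.sample(reviewer_pool, JURY_SIZE)
    guilty_votes = sum(1 for juror in jury if juror_votes_violation(juror))
    return guilty_votes >= JURY_SIZE * CONVICT_FRACTION  # True => page comes down

# Suppose 10,000 users have volunteered as reviewers, and 500 of them (5%)
# are mob members who always vote "violation" against a compliant page,
# while everyone else votes honestly that the page is fine.
pool = ["mob"] * 500 + ["honest"] * 9_500

trials = 10_000
wrongful_takedowns = sum(
    review_page(pool, lambda juror: juror == "mob") for _ in range(trials)
)
print(f"Page wrongly removed in {wrongful_takedowns} of {trials} trials")
# A 25-person jury drawn from this pool contains about 1.25 mob members on
# average, far short of the 17 guilty votes needed, so the attack fails
# essentially every time.
```

The design choice doing the work is the random draw from a large pool: the mob controls how many complaints it files, but not who ends up on the jury, so its expected share of any jury is just its share of the whole reviewer pool.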

It seems pretty likely that a review process like that would have saved the PornHarms page from being yanked from Facebook. Anybody who seriously reviewed the page's contents for more than twenty seconds would have understood its real purpose and seen that it was not actually distributing pornography or otherwise violating the Facebook TOS. In my experience posting surveys on sites like Mechanical Turk, where you can pay users a penny apiece for filling out surveys or performing other tasks, I've gotten the impression that people will take such tasks seriously, even for zero (or virtually zero) pay, if they find them interesting. And the Facebook "jurors" voting on whether a page violated the TOS would be users who voluntarily signed up to be jurors, after all -- not underpaid workers grinding through as many tasks as they can squeeze into their working hours.

Finally, it would be easy to point out the irony of a pro-censorship group being censored (and some people did, on the mailing lists where I saw this news announced), but I don't think that's really fair to Morality in Media, since even MIM doesn't oppose people's right to express their opinions in favor of pornography. Likewise, MIM presumably supports the use of Internet blocking programs in schools, even though their Facebook page (as well as the companion website PornHarms.com) would probably be blocked by default by most Internet blockers because of the word "porn" in the URL -- but even that is not as richly ironic as it would seem. Neither Morality in Media, nor almost anyone else, is in favor of political sites about pornography being blocked because of the word "porn" in the address; presumably they'd just want the error corrected by the blocking company, and if a left-wing site on the opposite side of the debate happened to be blocked because of the word "porn" in the URL, I have no reason to think that Morality in Media would be opposed to correcting that error and unblocking that site as well. So this really isn't a case of them being given "a taste of their own medicine."

No, the real irony in this particular case -- at least, if I did have a role in getting their Facebook page restored -- is that not only would I support their right to express their view (duh), I would support students' right to bypass their school's Internet blocker to view the page from school if they had to, and I would even support the right of under-18-year-olds to view the page even if their parents were specifically trying to block them from it. I highly doubt that anyone at Morality in Media would go that far.

Comments Filter:
  • by thijsh ( 910751 ) on Friday May 27, 2011 @11:42AM (#36263706) Journal

    It's Slashdot. It's always been this way. People submit worthless articles, counterproductive articles, pointless articles, slashvertising articles, articles which perceptibly reduce the collective IQ of the universe.... which editors (don't) improve...

    ... and people complain about this... Don't forget the complaining part!!! It's an integral part of the traditions in Slashdot history... As is complaining about all the complaints when people should know better because this is Slashdot after all... ;)
