Competition Produces Vandalism Detection For Wikis
marpot writes "Recently, the 1st International Competition on Wikipedia Vandalism Detection (PDF) finished: 9 groups (5 from the USA, 1 affiliated with Google) tried their best at detecting all vandalism cases in a large-scale evaluation corpus. The winning approach (PDF) detects 20% of all vandalism cases without misclassifying regular edits; moreover, it can be adjusted to detect 95% of the vandalism edits while misclassifying only 30% of all regular edits. Thus, by applying both settings, manual double-checking would only be required on 34% of all edits. It is not yet known whether the rule-based bots on Wikipedia can compete with this machine learning-based strategy. In any case, there is still a lot of potential for improvement, since the top 2 detectors use entirely different detection paradigms: the first analyzes an edit's content, whereas the second (PDF) analyzes an edit's context using WikiTrust."
Machine learning - right (Score:5, Informative)
Wikipedia already has programs which detect most of the blatant vandalism. Page blanking and big deletions are caught immediately. Deletions that delete references generate warnings. Incoming text that duplicates other content on the Web is caught. That gets rid of most of the blatant vandalism. It's not a serious problem on Wikipedia.
The current headaches are mostly advertising, fancruft, and pushing of some political point of view. That's hard to deal with using what is, after all, a rather dumb machine learning algorithm that has no model of the content or subject matter.
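The checks described above are easy to picture in code. A toy sketch follows; the function name, thresholds, and flag strings are invented for illustration and are not what Wikipedia's actual bots use:

```python
def rule_based_flags(old_text: str, new_text: str) -> list[str]:
    """Toy versions of the rule-based checks mentioned above.

    Thresholds and rules are illustrative only; real bots such as
    ClueBot use far more elaborate heuristics.
    """
    flags = []
    if not new_text.strip():
        flags.append("page blanking")
    elif len(new_text) < 0.2 * len(old_text):
        flags.append("big deletion")
    # Deletions that remove references generate warnings.
    if old_text.count("<ref>") > new_text.count("<ref>"):
        flags.append("references deleted")
    return flags

print(rule_based_flags("Some text.<ref>source</ref>", ""))
# → ['page blanking', 'references deleted']
```

Rules like these catch the blatant cases cheaply; it's the advertising and POV-pushing that slip past them.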
Rules can only get so much (Score:3, Informative)
Re:20% with no false positives? (Score:3, Informative)
Care to show us even one article where 99% of good edits are reverted? Remember, that will mean that over 99% of all edits are reverted.
Not if there are bad edits that are not reverted.
Re:Manual double checking? (Score:1, Informative)
According to the 2nd link, the vandalism rate on Wikipedia is 2391/28468 = 0.084, not 0.60!
The second link actually says:
The corpus compiles 32452 edits on 28468 Wikipedia articles, among which 2391 vandalism edits have been identified.
So that is a vandalism rate of 2391/32452 = 0.074. When I do the math I get 33% of all edits requiring a manual check. The vast majority of them are false positives.
0.074 * (0.95-0.20) + (1-0.074) * 0.30 = 0.0555 + 0.2778 = 0.3333
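That arithmetic can be spelled out in a couple of lines, with the two operating points taken straight from the story:

```python
# Two operating points of the winning detector, from the story:
#   setting A: detects 20% of vandalism with no false positives
#   setting B: detects 95% of vandalism, misclassifies 30% of regular edits
vandalism_rate = 2391 / 32452  # from the corpus description

# Edits needing a manual look: vandalism caught by B but not already
# confirmed by A, plus the regular edits that B misclassifies.
manual_fraction = vandalism_rate * (0.95 - 0.20) + (1 - vandalism_rate) * 0.30
print(round(vandalism_rate, 3), round(manual_fraction, 3))  # 0.074 0.333
```

Because regular edits vastly outnumber vandalism, the 30% false-positive rate of setting B dominates the manual workload.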
Re:Rules can only get so much (Score:1, Informative)
It looks like the winning entry [uni-weimar.de] uses all of those attributes plus a bunch more. From pages 3-4 of the paper:
Vandals are likely to be anonymous. This feature is used in one way or another in most anti-vandalism bots, such as ClueBot and AVBOT. In the PAN-WVC-10 training set (Potthast, 2010), anonymous edits represent 29% of the regular edits and 87% of the vandalism edits.

Long comments might indicate regular editing and short or blank ones might suggest vandalism; however, this feature is quite weak, since leaving an empty comment in regular editing is common practice.

Vandals often do not follow capitalization rules, writing everything in lowercase or in uppercase.

This feature helps to spot minor edits that only change numbers, which might help to find some cases of subtle vandalism where the vandal arbitrarily changes a date or a number to introduce misinformation.

An excess of non-alphanumeric characters in short texts might indicate excessive use of exclamation marks or emoticons.

This feature helps to spot random keyboard hits and other nonsense. It should take the QWERTY keyboard layout into account in the future.

Useful to detect nonsense, repetitions of the same character or words, etc.

The value of this feature is already well established. ClueBot uses various thresholds of size increment for its heuristics, e.g., a big size decrement is considered an indicator of blanking.

Complements size increment.

In long and well-established articles, too many words that do not appear in the rest of the article indicate that the edit might be introducing nonsense or unrelated content.

Useful to detect nonsense.

Long sequences of the same character are frequent in vandalism (e.g. aaggggghhhhhhh!!!!!, soooooo huge).
Along with analyzing those basic stats, the winning entry also examines categories of words, such as common misspellings (e.g. "dosent").
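Several of the quoted features are simple text statistics. Here is a hypothetical sketch of how a few of them might be computed; the function and field names are invented, and the winning system's actual implementation differs in detail:

```python
import re

def edit_features(old_text: str, new_text: str,
                  comment: str, anonymous: bool) -> dict:
    """Illustrative versions of a few of the features quoted above."""
    letters = [c for c in new_text if c.isalpha()]
    # Longest run of one repeated character (catches "aaggggghhhhhhh!!!!!").
    longest_run = max((len(m.group())
                       for m in re.finditer(r"(.)\1*", new_text)), default=0)
    return {
        "anonymous": anonymous,          # 87% of vandalism vs 29% of regular edits
        "comment_length": len(comment),  # blank comments are a weak signal
        "upper_ratio": sum(c.isupper() for c in letters) / max(len(letters), 1),
        "digit_ratio": sum(c.isdigit() for c in new_text) / max(len(new_text), 1),
        "nonalnum_ratio": sum(not c.isalnum() and not c.isspace()
                              for c in new_text) / max(len(new_text), 1),
        "size_increment": len(new_text) - len(old_text),
        "longest_char_run": longest_run,
    }
```

Feeding feature vectors like these to an off-the-shelf classifier is, in essence, the machine-learning strategy the story contrasts with the rule-based bots.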