AI The Courts Software The Internet News Science Technology Your Rights Online

Scientists Create AI Program That Can Predict Human Rights Trials With 79 Percent Accuracy (theverge.com) 83

An anonymous reader quotes a report from The Verge: Computer scientists have created an AI program capable of predicting the outcome of human rights trials. The program was trained on data from nearly 600 cases brought before the European Court of Human Rights (ECHR) and was able to predict the court's final judgement with 79 percent accuracy. Its creators say it could be useful for identifying common patterns in court cases, but stress that they do not believe AI will be able to replace human judgement.

As described in a study published in the journal PeerJ Computer Science, the AI program worked by analyzing descriptions of court cases submitted to the ECHR. These descriptions included summaries of legal arguments, a brief case history, and an outline of the relevant legislation. The cases were grouped into three main areas of human rights law: the prohibition on torture and degrading treatment; the right to a fair trial; and the right to "respect for private and family life" (invoked in a wide range of cases, including illegal searches and surveillance). The AI program then looked for patterns in this data, correlating the court's final judgements with, for example, the type of evidence submitted and the exact part of the European Convention on Human Rights the case was alleged to violate.

Nikolaos Aletras, one of the researchers, says a number of patterns emerged. For example, cases concerning detention conditions (e.g., access to food or legal support) were more likely to end in a judgement that an individual's human rights had been violated, while cases involving sentencing issues (i.e., how long someone had been imprisoned) were more likely to end in a finding of no violation. The researchers also found that the judgements of the court were more dependent on the facts of the case itself (that is, its history and particulars) than on the legal arguments (i.e., how exactly the Convention on Human Rights had or had not been violated).
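
The task, as the paper frames it, boils down to binary text classification: given the textual description of a case, predict whether the court found a violation. The snippet below is a rough sketch of a model in that spirit (word n-gram features fed to a linear SVM, scored by cross-validated accuracy). It is a hypothetical illustration, not the authors' code; the vectorizer settings, the tf-idf weighting, and the regularization value are assumptions.

    # Sketch of an n-gram + linear-SVM case classifier (illustration only,
    # not the authors' code; the settings here are assumptions).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    def evaluate_case_classifier(case_texts, labels, folds=10):
        """case_texts: textual descriptions of ECHR cases.
        labels: 1 if the court found a violation, 0 otherwise.
        Returns mean cross-validated accuracy (the paper reports roughly 79%)."""
        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 4), max_features=2000),  # word n-grams
            LinearSVC(C=1.0),                                        # linear SVM
        )
        return cross_val_score(model, case_texts, labels,
                               cv=folds, scoring="accuracy").mean()
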
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    Here I am, only weeks away from completing my graduate degree in Predictive Human Rights Jurisprudence, and some jerkhead has built an artificial intelligence to take away my job!

    I knew I should have stayed with medicine... we could have a cure for diabetic mice by now, if I hadn't listened to my advisor.

  • by sittingnut ( 88521 ) <sittingnut.gmail@com> on Tuesday October 25, 2016 @08:20PM (#53151201) Homepage

    why do people slap the "AI" label on things unnecessarily? publicity?

    "The AI program then looked for patterns in this data, correlating the courts' final judgements with, for example, the type of evidence submitted ...a number of patterns emerged ...For example, cases concerning detention conditions ... more likely ...cases involving sentencing ...more likely"

    this is mere data analysis. or is that what so-called "AI" amounts to?

    and this,
    "judgements of the court were more dependent on the facts of the case itself "
    duh?!
    how smart of so-called "AI" to find that out?

    • by ShanghaiBill ( 739463 ) on Tuesday October 25, 2016 @09:04PM (#53151383)

      this is mere data analysis. or is that what so-called "AI" amounts to?

      "Mere" data analysis is when a human looks at the data and tries to find patterns. But it is "AI" when the algorithm is open ended, and finds it's own patterns and correlations. That is "machine learning" is certainly a branch of AI.

      • by Anonymous Coward

        5 years ago we called that data mining.

        I skimmed the paper. They converted the case documents into word n-grams, did some clustering, made some type of n-gram-by-n-gram matrix that I've never learned about, did some more clustering, then fed those clusters into an SVM.

        Data mining is a much better term, but I guess that's not fashionable anymore. In 5 more years is everything going to be about multidimensional awareness (MA)?

        *All MA rights belong to Slashdot's ACs. Fear my pending patent.

        • ... then fed those clusters into an SVM.

          SVMs [wikipedia.org] are used in machine learning, which is a branch of AI.

          Data mining is a much better term

          "Data mining" and "machine learning" are two orthogonal concepts. You can do data mining with or without machine learning. Machine learning can be used for many things besides data mining.

      • "Mere" data analysis is when a human looks at the data and tries to find patterns. But it is "AI" when the algorithm is open ended, and finds it's own patterns and correlations.

        I certainly agree that this falls under the classification of "AI" as a field. I'm guessing that part of the concern is also whether what was done here qualifies as "intelligence" rather than just a slightly more advanced algorithm for processing data.

        Most research studies these days use fairly complex statistical computations -- often, lamentably, that the researchers themselves don't fully understand (or at least don't fully understand the limitations of). So, basically by the time many researchers ar
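
As an aside, a hypothetical reconstruction of the pipeline described a few comments up (word n-grams, an n-gram-by-n-gram similarity matrix clustered into "topic" groups, and those topic features fed to an SVM) could look something like the sketch below. The cluster count, the cosine-similarity measure, and the specific scikit-learn components are assumptions for illustration, not the study's actual code.

    # Hypothetical sketch of "cluster the n-grams into topics, then classify";
    # not the study's code. Cluster count, similarity measure, and library
    # choices are assumptions.
    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from sklearn.svm import LinearSVC

    def topic_features(case_texts, n_clusters=30):
        """Group word n-grams into clusters ("topics") and represent each
        case by how strongly each topic is present in its text."""
        vec = CountVectorizer(ngram_range=(1, 4), max_features=2000)
        X = vec.fit_transform(case_texts)              # documents x n-grams
        sim = cosine_similarity(X.T)                   # n-gram x n-gram similarity
        clusters = SpectralClustering(n_clusters=n_clusters,
                                      affinity="precomputed").fit_predict(sim)
        topics = np.zeros((X.shape[0], n_clusters))
        for ngram_idx, cluster_id in enumerate(clusters):
            topics[:, cluster_id] += X[:, ngram_idx].toarray().ravel()
        return topics

    def train_violation_classifier(case_texts, labels):
        """labels: 1 = violation found, 0 = no violation."""
        return LinearSVC(C=1.0).fit(topic_features(case_texts), labels)
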

    • You misunderstand. This is an "Al" program, as in Al Gore or Al Franken, extremely liberal and so sure of its view on human rights that it is wrong 21% of the time. A.I. would never make that mistake.
  • God, not another "AI Program". We used to just call them programs.
  • Most of them never come to trial

    Terrorists like ISIS and Boko Haram will die rather than be captured

    The leaders of large countries (e.g., Putin, GWB, and the Chinese) never submit to an international tribunal

  • Seriously, what is the endgame here? Having robots adjudicate human rights? How in the world does that seem like progress to anyone?
  • I am very glad about the scientists' invention, but I want to ask something, as I don't know: is this a recent invention? I think this is old research. What do you say?
  • by Archfeld ( 6757 )

    Not to be pissy or anything, but let's face it: by the time a human rights violation case has 'come to trial' it is nearly a foregone conclusion. Were 10K people massacred? Was an entire countryside gassed? Did the members of an entire tribe suddenly disappear? Were a mushroom cloud and nerve gas involved? It doesn't take much of an AI or computer to come to a conclusion in those kinds of cases.

  • In fact we shouldn't expect these to be too predictable. This is why they go all the way to the ECtHR in the first place.
    • by AHuxley ( 892839 )
      In the US you can get that kind of predictability way up in the federal system.
      Conviction rate https://en.wikipedia.org/wiki/... [wikipedia.org]
      "For 2012, the US Department of Justice reported a 93% conviction rate."
      • So an AI that said "guilty" every time would get much better accuracy here :)

        That's not really the best comparison though. The ECtHR is more comparable to the US Supreme Court, mostly dealing with appeals based on incompatibilities of legislation with fundamental law.
        • by AHuxley ( 892839 )
          Default to guilt and then look for any strange "not guilty" outlier in every case to really seem accurate.
  • The outcome of Bush, Obama, Cheney and Yoo's trials.

    So they like facts more than legal arguments? That's great, we have lots of those. Drone strikes targeted by cell phone data killing innocent people. A whole system of assassination based on paid informants and lies. A torture program that was swept under the rug rather than exposed and prosecuted....

    Lots and lots of facts for them.

  • I can predict Egyptian and Russian human rights trials with 100% accuracy with a simple shell script.

    echo "Guilty!"

  • So a coin toss could predict it with 50% accuracy. How well can a human being perform the same task? Not sure that 29 points better than a coin toss makes your AI very intelligent. Of course, I'm sure it is just the beginning of a lot of cool work.

  • If you don't, you have no rights.

    BTW, there is no right to "standing" or for the government to reveal that you even potentially have "standing."
