Education Technology

Grading Software Fooled By Nonsense Essay Generator

Posted by samzenpus
from the it-is-good-report dept.
An anonymous reader writes "A former MIT instructor and a group of students have come up with software that can write an entire essay in less than one second; just feed it up to three keywords. The essays, though grammatically correct and structurally sound, have no coherent meaning, yet they have earned high scores from automated essay-grading software. From The Chronicle of Higher Education article: 'Critics of automated essay scoring are a small but lively band, and Mr. Perelman is perhaps the most theatrical. He has claimed to be able to guess, from across a room, the scores awarded to SAT essays, judging solely on the basis of length. (It’s a skill he happily demonstrated to a New York Times reporter in 2005.) In presentations, he likes to show how the Gettysburg Address would have scored poorly on the SAT writing test. (That test is graded by human readers, but Mr. Perelman says the rubric is so rigid, and time so short, that they may as well be robots.)'"
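
The summary doesn't describe how the generator works internally; as a loose illustration of the general idea (keyword-seeded text that is grammatical but meaningless), here is a toy, template-based sketch in Python. The templates, word lists, and function names are invented for this example and are not the actual MIT tool, which is far more sophisticated.

```python
import random

# Toy illustration of a keyword-seeded nonsense-essay generator.
# Templates and word lists are invented for this sketch; the real tool
# described in the story is much more elaborate.

FILLER_NOUNS = ["assertion", "paradigm", "analysis", "postulate", "framework"]
FILLER_ADJS = ["profound", "unprecedented", "quintessential", "egregious"]
TEMPLATES = [
    "The {adj} {kw} has always been a {noun} of modern discourse.",
    "Without {kw}, the {noun} collapses into a {adj} contradiction.",
    "Scholars agree that {kw} remains the most {adj} {noun} of our era.",
]

def generate_essay(keywords, sentences=6, seed=None):
    """Produce grammatical but meaningless prose seeded by up to three keywords."""
    rng = random.Random(seed)
    kws = (list(keywords) or ["knowledge"])[:3]
    return " ".join(
        rng.choice(TEMPLATES).format(
            kw=rng.choice(kws),
            noun=rng.choice(FILLER_NOUNS),
            adj=rng.choice(FILLER_ADJS),
        )
        for _ in range(sentences)
    )

if __name__ == "__main__":
    print(generate_essay(["privacy", "automation"], seed=42))
```

A grader that keys on length, vocabulary, and surface structure can score output like this well even though it says nothing, which is the weakness the story highlights.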

Comments:
  • Re:Irrelevant (Score:4, Interesting)

    by TheMeuge (645043) on Wednesday April 30, 2014 @09:08PM (#46885863)

    At least helicopter daddy and blackhawk mommy give a shit about the Precious. Or do you prefer the absent daddy and welfare mommy? People DO go overboard... but I feel like the pendulum is starting to swing entirely too far the other way.

  • Re:Quid pro quo (Score:4, Interesting)

    by clifyt (11768) <(moc.liamg) (ta) (rettamkinos)> on Thursday May 01, 2014 @12:15AM (#46886645) Homepage

    As someone who wrote software like this -- and who disagreed with the subject of the story a decade ago when he tried to get us with both the Gettysburg Address and Kennedy's inaugural address (both GREAT speeches with historical value, but shitty college entrance essays) -- you are looking at this entirely wrong.

    I can give you some background on how these things are generally graded. Three people get an essay, look at it for 30 to 45 seconds, and throw a score at it; if they are all within a margin of error, they move on. If not, a senior rater comes in, and if swapping the senior rater's score in for one of the others brings things within the margin of error, they move on as well. If not, the essay is workshopped for 5 minutes. (The code sketch after this comment shows the rough flow.)

    In 99% of cases, your essay gets less than 2 minutes of total viewing across those 3 people.

    Enter the computer... the raters are told they are going to be rated themselves. We can throw in a lot more pre-rated essays that have been normed by a large group of raters and use them to train each rater. Because they know they are being measured, the average rater spends two or more minutes reading through these. You actually get MORE time with eyes on your essay with a computer rater involved than you do without one. Having a computer rater doesn't remove humans -- it adds a safeguard. It means one person spends more time and is verified by something that is unbiased (within reason... it was actually able to pick up subtle racism that wouldn't have been detected with purely human raters: put a 'black' or 'hispanic' name on an essay and the scores go down, put an 'asian' name on it and the scores go up, hand back the same essays with the names switched and the humans change their ratings... the computer was actually more objective).

    I haven't been involved with this sort of thing in a decade, and I can only assume it is much better than when I left my project...but lazy isn't the right word. Underpaid and overworked? Yeah...but not lazy.
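
A minimal sketch of the human-rater agreement workflow clifyt describes above (three independent scores, a senior-rater substitution, then a workshop). The margin of error, score scale, averaging rule, and function names here are assumptions for illustration, not any real scoring vendor's process.

```python
# Toy sketch of the rater-agreement workflow described in the comment above.
# The margin, scale, and averaging rule are illustrative assumptions.

def within_margin(scores, margin=1):
    """All scores agree if their spread is within the allowed margin."""
    return max(scores) - min(scores) <= margin

def resolve_essay(scores, senior_score, margin=1):
    """Return (final_score, path_taken) for one essay given three rater scores."""
    # Step 1: three raters score independently; agreement ends the process.
    if within_margin(scores, margin):
        return round(sum(scores) / len(scores)), "initial agreement"

    # Step 2: the senior rater's score replaces one outlier; re-check agreement.
    for i in range(len(scores)):
        trial = scores[:i] + [senior_score] + scores[i + 1:]
        if within_margin(trial, margin):
            return round(sum(trial) / len(trial)), "senior rater substitution"

    # Step 3: still no agreement, so the essay goes to a group workshop.
    return None, "workshop"

if __name__ == "__main__":
    print(resolve_essay([4, 4, 5], senior_score=4))  # initial agreement
    print(resolve_essay([2, 5, 5], senior_score=5))  # senior substitution
    print(resolve_essay([1, 3, 6], senior_score=4))  # workshop
```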

"And do you think (fop that I am) that I could be the Scarlet Pumpernickel?" -- Looney Tunes, The Scarlet Pumpernickel (1950, Chuck Jones)

Working...