Education AI Software

Anti-Plagiarism Service Turnitin Is Building a Tool To Detect ChatGPT-Written Essays 69

Turnitin, best known for its anti-plagiarism software used by tens of thousands of universities and schools around the world, is building a tool to detect text generated by AI. The Register reports: Turnitin has been quietly building the software for years ever since the release of GPT-3, Annie Chechitelli, chief product officer, told The Register. The rush to give educators the capability to identify text written by humans and computers has become more intense with the launch of its more powerful successor, ChatGPT. As AI continues to progress, universities and schools need to be able to protect academic integrity now more than ever. "Speed matters. We're hearing from teachers just give us something," Chechitelli said. Turnitin hopes to launch its software in the first half of this year. "It's going to be pretty basic detection at first, and then we'll throw out subsequent quick releases that will create a workflow that's more actionable for teachers." The plan is to make the prototype free for its existing customers as the company collects data and user feedback. "At the beginning, we really just want to help the industry and help educators get their legs under them and feel more confident. And to get as much usage as we can early on; that's important to make a successful tool. Later on, we'll determine how we're going to productize it," she said.

Turnitin's VP of AI, Eric Wang, said there are obvious patterns in AI writing that computers can detect. "Even though it feels human-like to us, [machines write using] a fundamentally different mechanism. It's picking the most probable word in the most probable location, and that's a very different way of constructing language [compared] to you and I," he told The Register. [...] ChatGPT, however, doesn't have this kind of flexibility and can only generate new words based on previous sentences, he explained. Turnitin's detector works by predicting what words AI is more likely to generate in a given text snippet. "It's very bland statistically. Humans don't tend to consistently use a high probability word in high probability places, but GPT-3 does so our detector really cues in on that," he said.
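The statistical signal Wang describes can be illustrated with a small, hypothetical sketch: score how predictable each word of a passage is under a public language model. This is not Turnitin's code; it assumes the Hugging Face transformers library and uses GPT-2 as a stand-in for GPT-3, whose weights are not public, and the scoring heuristic is invented for illustration.

```python
# A minimal sketch (not Turnitin's actual code) of the statistical idea Wang
# describes: measure how often a passage uses the model's own most-likely words.
# Assumes the Hugging Face `transformers` library and the public GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_rank_and_logprob(text: str):
    """Return how 'predictable' each token is to the model.

    A low average rank / high average log-probability means the text keeps
    choosing high-probability words in high-probability places -- the
    "statistically bland" signature Wang attributes to GPT-3 output.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                 # (1, seq_len, vocab)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]                           # each position predicts the next token
    chosen_logprob = log_probs[torch.arange(targets.numel()), targets]
    # Rank 0 means the text used the model's single most likely next word.
    ranks = (log_probs > chosen_logprob.unsqueeze(-1)).sum(dim=-1)
    return ranks.float().mean().item(), chosen_logprob.mean().item()

# Lower mean rank / higher mean log-prob => more "machine-like" by this heuristic.
print(mean_token_rank_and_logprob("The cat sat on the mat because it was warm."))
```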

Wang said Turnitin's detector is based on the same architecture as GPT-3 and described it as a miniature version of the model. "We are in many ways I would [say] fighting fire with fire. There's a detector component attached to it instead of a generate component. So what it's doing is it's reading language in the exact same way GPT-3 reads language, but instead of spitting out more language, it gives us a prediction of whether we think this passage looks like [it's from] GPT-3." The company is still deciding how best to present its detector's results to teachers using the tool. "It's a difficult challenge. How do you tell an instructor in a small amount of space what they want to see?" Chechitelli said. They might want to see a percentage showing how much of an essay seems to be AI-written, or a low, medium, or high confidence level indicating how sure the detector is of its prediction.
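The "fighting fire with fire" architecture Wang sketches, a GPT-style backbone that reads text the same way but ends in a classifier rather than a next-word generator, might look roughly like the following. This is an illustrative sketch only (a GPT-2 backbone with an untrained classification head), not Turnitin's model or training code.

```python
# Hedged sketch: reuse a GPT-style backbone but attach a classification head
# instead of a generation head. Illustration with GPT-2, not Turnitin's model.
import torch
from transformers import GPT2Model, GPT2TokenizerFast

class GPTDetector(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")   # reads language like the generator does
        self.classifier = torch.nn.Linear(self.backbone.config.n_embd, 2)  # human vs. machine

    def forward(self, input_ids, attention_mask=None):
        hidden = self.backbone(input_ids, attention_mask=attention_mask).last_hidden_state
        # Pool the final token's representation and classify, instead of
        # predicting the next word. (Real code would pool the last
        # non-padded token when batching.)
        pooled = hidden[:, -1, :]
        return self.classifier(pooled)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
detector = GPTDetector()
batch = tokenizer("An essay paragraph to score.", return_tensors="pt")
logits = detector(batch.input_ids, batch.attention_mask)
# Untrained head: outputs are meaningless until fine-tuned on labeled human/AI text.
print(torch.softmax(logits, dim=-1))
```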
"I think there is a major shift in the way we create content and the way we work," Wang added. "Certainly that extends to the way we learn. We need to be thinking long term about how we teach. How do we learn in a world where this technology exists? I think there is no putting the genie back in the bottle. Any tool that gives visibility to the use of these technologies is going to be valuable because those are the foundational building blocks of trust and transparency."
  • by aldousd666 ( 640240 ) on Monday January 23, 2023 @08:49PM (#63234020) Journal
    If you run it through ChatGPT and then some other text fixer-upper, voila: instant waste of money on the service they're selling. And heck, if you have to run it through 10 services before it looks random enough, that's what people will just do. They're not going to go back to writing papers. Pandora would be proud.
    • by Art Challenor ( 2621733 ) on Monday January 23, 2023 @08:58PM (#63234034)
      The output of ChatGPT is not that good. I would suggest that it produces a good first draft, but it needs some fixing up before being submitted. Still, you'd be crazy not to use it for creating a draft or generating writing ideas - it certainly speeds up the whole process and likely increases the quality.
      • Exactly. I doubt anyone would be dumb enough to submit what it spits out as-is; however, it could be great for producing rapid content which can then be massaged appropriately.
      • by tomz16 ( 992375 ) on Monday January 23, 2023 @10:14PM (#63234182)

        The output of ChatGPT is not that good.

        By what standard?

        Compared to Charles Dickens? No, of course not...
        Compared to your average community college student? Absolutely yes.

        • Probably true and quite sad...
          • by ranton ( 36917 )

            Probably true and quite sad...

            Right now ChatGPT is about as good as a B student in a Wharton MBA program. All of these comments about it not being as good as most people need to stop, because it is already writing better than the vast majority of humans. If anything, the next step is to dumb its writing down so it sounds like actual people.

            • by mspohr ( 589790 )

              Just read that ChatGPT passed a Harvard MBA exam.
              Of course, I wouldn't accuse Harvard of having high standards.

      • And then I run it through Gmail to pick out even more grammar mistakes.
        Someone should develop an AI for that.

      • Exactly. The good degree holde... sorry, plagiarists have already figured this out. Change the words, adjust the structure, move things around. This is only going to catch the less competent plagiarists. I mean, if we're basically saying plagiarism is taking someone else's idea and explaining it differently, then consider: we can only explain water turning to a solid in so many ways. When do we run out of things to explain? Sincerely, Dr. Plagiarist
      • by ranton ( 36917 )

        The output of ChatGPT is not that good

        The biggest problem with ChatGPT right now is that it is too good; it doesn't sound human because of that. There are plenty of stories of ChatGPT getting good scores on high-profile tests, like getting a B on a Wharton MBA exam.

        ChatGPT is wrong often enough that you shouldn't use it as a source for research, but its writing is better than that of even well-above-average people.

    • It's nothing new. I figured out all the various essay-sharing websites in middle school, and I did not write a single paper in high school or college. That was 20 years ago, with no ChatGPT but with cheat-detection software. It was basically an IQ test; idjits who turned in other people's work unaltered were obviously caught.

      Now that ChatGPT is a thing, I might even consider going back to school, since it's much easier to automate away the uninteresting, irritating non-engineering fluff.
    • by fermion ( 181285 )
      And unless the student is just trying to buy a piece of paper, either with cash or sweat equity, turning in plagiarized writing is a waste of time.

      These services are not for the school. They are for the student. And it is unfortunate that the school has to waste money buying services to encourage student learning.

      But we live in a world where the sheep are such cowards that they are always looking for the correct answer rather than going out there and experiencing life in order to learn something.

      But in this envir

    • They're not going to get back to writing papers.

      Not by deploying whack-a-mole countermeasures. What you are already seeing in the higher education sector are calls to return to grades based purely on exams. That way, those who cheat and use ChatGPT to do their writing during the term are only cheating themselves, since they will have skipped the practice necessary to do well in an exam where, because exams are invigilated, they will not be able to use the service.

    • A friend of mine was saying he was just going to run ChatGPT's output through Grammarly. That would probably do the trick, since services like that aim to remove repeated words, etc. I personally am good enough at proofreading to fix any issues it might have, but if you do the right prompting, it seems to do pretty decently. So far the worst I've caught it doing is mixing up singular vs. plural in ONE response.

  • As a software developer, I can't think of anything more irresponsible than not building in some way to identify whether auto-generated work is attributable to your system.

    It just seems like second nature.

    We know every printer made embeds hidden pixels.

    There's a very good chance Adobe and Microsoft encode unique ID info in every file written by their systems.

    That ChatGPT or DALL-E wouldn't do this is laughably unlikely.
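    For what it's worth, the published research on marking generated text is statistical rather than hidden pixels: the generator quietly prefers words from a pseudo-random "green list" keyed to the previous word, and a verifier later checks whether the green fraction is suspiciously high. Below is a minimal, purely hypothetical sketch of the detection side of such a scheme; this is not a confirmed ChatGPT feature, and the function names and the 0.5 split are invented for illustration.

```python
# Hedged sketch of how a generator *could* leave a detectable statistical
# watermark, loosely in the spirit of published "green list" schemes.
# Purely illustrative; OpenAI has not confirmed watermarking ChatGPT output.
import hashlib

def is_green(prev_word: str, word: str, fraction: float = 0.5) -> bool:
    """Deterministically place `word` on a 'green list' keyed by the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return (digest[0] / 255.0) < fraction

def green_fraction(text: str) -> float:
    """A watermarked generator would prefer green words, so its output shows a
    green fraction well above the ~0.5 expected from unwatermarked text."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

print(green_fraction("this is some sample text to score for the watermark"))
```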

  • by Anonymous Coward
    It is only a matter of time before this blows up horribly. Since schools have 16-tons-of-bricks penalties for cheating, this is just waiting for an over-zealous "educator" to slam a student because the anti-cheat tool falsely flags their work.
  • by The Evil Atheist ( 2484676 ) on Monday January 23, 2023 @09:04PM (#63234052)
    This arms race is merely going to make AI generated text much better.

    Pretty soon, we'll have to go back to in-person oral exams. Might even have to scan the examinee for radio devices or RC anal beads.

    Thanks, nerds. You basically made sure that nerds are examined on the one thing they're bad at - talking with humans in realtime.
  • False positives? (Score:5, Interesting)

    by khchung ( 462899 ) on Monday January 23, 2023 @09:08PM (#63234068) Journal

    With plagiarism, at least the accuser can show the source so everyone can compare.
    How could anyone ever prove their work was really their own and not done by some AI?

    • by Serif ( 87265 )

      I was thinking about exactly that question. Best I could come up with is that papers are usually written as a series of drafts with information added, removed and edited over time. If students use college controlled cloud storage (which they're encouraged to do so they have backups), then that process can be tracked. If on the other hand a file magically appears, word perfect and ready for hand in, then that might merit further investigation.

    • First the verdict, then the trial.

  • 1967 Star Trek, "A Taste of Armageddon": two planets conduct war by simulation, then send to "destruction chambers" the numbers of casualties tallied by the simulations.
  • by gavron ( 1300111 ) on Monday January 23, 2023 @09:57PM (#63234148)

    > ...that's a very different way of constructing language [compared] to you and I, ...

    That would be "you and ME" not "you and I".

    So essentially what we know is that you can take ChatGPT output, run it through a "stupidizer," an "illiteracy-izer," and finally a "Bane of the Grammar Police-izer," and it will pass as human.

    Color me entirely unimpressed with Turnitin's illiterate spokespersondude, their entire business methodology (screw over college students and pretend they've improved the knowledge base, when fake articles are constantly being published in the real post-education world), and their finances.

    I hope ChatGPT and its successors bring a great wealth of new writing, some of which may have some value. It's better than Fox News, so buckle up, Harry Houdini, cinch your trousers up, and don't let you and I [sic] get caught up in the gears of progress.

    AI will never speak like an idiot. Humans will. Just give them enough rope/time.

    E

    • Funny thing is that the entire education system probably paid millions for a "consult" that suggested this was the best option. I'm going to go way out on a limb and say that their consultant's name was maybe something similar to Erica Wong.
    • I was surprised by this remark by the Turnitin spokesperson:

      Humans don't tend to consistently use a high probability word in high probability places

      Isn't that exactly why the word and place are high probability -- because humans are likely to use that word in that place? And because they don't always do so, the probability is less than 1.

      Without more explanation for this claim, it seems that injecting a dash of (AI-determined probabilistic) randomness could foil the detector.
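      As a toy illustration of that point, sampling at a higher "temperature" flattens the next-word distribution, so the text stops consistently picking the single most probable word. The numbers below are made-up logits, not output from any real model; this is a hypothetical sketch of the idea, not an actual evasion tool.

```python
# Minimal sketch: temperature-scaled sampling injects randomness into word choice.
import numpy as np

rng = np.random.default_rng(0)

def sample(logits: np.ndarray, temperature: float) -> int:
    """Pick a token index; higher temperature -> flatter distribution, more randomness."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = np.array([5.0, 3.0, 1.0, 0.5])   # pretend next-word scores
for t in (0.2, 1.0, 2.0):
    picks = [sample(logits, t) for _ in range(1000)]
    # Share of times each candidate word is chosen at this temperature.
    print(t, np.bincount(picks, minlength=4) / 1000)
```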

  • Unless there is a better way to prove it and take some action, it isn't worth any money. It is basically a hunch, not enough cause to justify its conclusion.

  • Throwing more machinery at this won't solve the problem. Cheaters can just rewrite the text anyway (and substitute 'less probable' phrases).

    The right way to teach is for a human to interact with a human: explain, clear doubts, challenge them, correct them...

    Do it right, and you'll make students *want* to answer tests the hard way.

    • The whole point of cheating...is so you *don't* have to write, or rewrite, text to turn in as your own work. "Just rewriting" kind of defeats the whole idea.

        • No, it doesn't.
          I was one of those cheaters who rewrote copied work with minimal application of mind.

        • Your experience indicates that either
          - Your teacher wasn't smart enough to notice your copying
          - Your teacher wasn't really paying attention
          - You are smarter than the average cheater (though not smart enough to understand the harm cheating does to yourself)

          Most lazy students (the ones who are most tempted to cheat) don't fly under the radar for long.

          • Most lazy students (the ones who are most tempted to cheat) don't fly under the radar for long.

            I submit to you the entire field of computer science, where you cannot succeed *unless* you are lazy.

            • All cheaters are lazy, but not all laziness is cheating.

              As programmers (or artists, or designers, or a thousand other things), we all can and do borrow from others constantly. That's perfectly valid, and a useful way to learn from others. The use of ChatGPT by students to do their homework isn't in itself cheating, but claiming the result as their own work is. In programming, when we use open source components, we're generally required to credit the source of the code in our acknowledgements.

    • by leonbev ( 111395 )

      You can also ask ChatGPT to write documents in the "style" of certain authors, making it more difficult to detect that it was written by AI.

      Instead of doing the modern-day equivalent of trying to ban calculators and computers from classrooms in the 1990s, perhaps teachers should learn to embrace AI as a learning tool.

  • A new startup! See jerbs are being created by the advance of AI, not eliminated! /s
  • Eric Wong is an AI chat bot. His entire response statement was written by a bot. If students can defend a dissertation written by a fucking bot, then rock on. This is the future; remember the invention of the calculator? What if the guy pushing a wheelbarrow had been told he couldn't use the tractor because it was cheating? The future is here; accept our AI overlords before they become our alien overlords.
  • I was thinking, "What could possibly go wrong?" People _WILL_ be caught out by this, and they WILL NOT have the evidence to prove that they actually wrote what they wrote.

    "You'll have to turn on the editor's track-changes feature!" -- but track-changes doesn't work like that. It'll show how you changed things from a base document, not every sentence, in order, that you typed.

    "Tools will have to adapt to auto-save and keep every version!" -- ugh. ... have you gotten it yet?

    The Easy Fix

    ChatGPT can't handle versions. It can't handle edits, or improvements. So that's what you require. By date and time X, submit an initial draft of your document. By date Y, revise that, add content, and turn it in again, arguably with the original draft (a lot of students will lose this or edit-not-save it, so it's up to the instructor). Maybe twice, maybe three times.

    • Even easier:

      Require references / bibliography. (Like they used to.) Unless people are just required to babble from their head, require citations and supporting evidence.

      Right?...

    • Word Processors include version control functionality, so require essays to be submitted with that functionality used to demonstrate the process of writing. Of course this will only work for a while until the AI works out how to emulate this...

      • by khchung ( 462899 )

        Word Processors include version control functionality, so require essays to be submitted with that functionality used to demonstrate the process of writing. Of course this will only work for a while until the AI works out how to emulate this...

        Easy workaround: just manually re-type the AI-generated work into your favorite word processor. Do it casually while watching a movie or something; it will create enough delays and errors to pretend you were thinking while typing.

        • Not entirely - the version control would reveal the changes you introduced while revising, which should be significant. Encouraging essay writers to start by putting their structure on the screen, then filling in the material to cover, and then writing up each section 'properly' would make for a good outcome. While one could deconstruct the AI's output and use it to build the essay yourself, that would be quite an effort.

          It's nasty...

    • by khchung ( 462899 )

      The Easy Fix

      ChatGPT can't handle versions. It can't handle edits, or improvements. So that's what you require. By date and time X, submit an initial draft of your document. By date Y, revise that, add content, and turn it in again, arguably with the original draft (a lot of students will lose this or edit-not-save it, so it's up to the instructor). Maybe twice, maybe three times.

      The Easy Workaround

      Have ChatGPT generate the whole thing, then submit the first 1/3 by date X, then do some cosmetic update and submit 2/3 by date Y, then more cosmetic update and submit the whole thing by the last date.

      Good luck trying to prove the student didn't do 1/3 of it each time. Also, not everyone writes drafts before writing the real thing; if a draft is required, just take the final document, cut out pieces of it, and glue them together as your "draft".

  • The issue is that if AI gets better and is utilized to write essays by students, it will also put non-cheating students at a disadvantage. Ultimately, this could result in a failure of the system, similar to the effects of performance-enhancing doping in sports. In the end you have courses that only cheaters can actually pass.
  • "Did you write this ..."
  • In practice, kids are funneled into going to college after high school because that's what's expected, rather than it being what they want. So of course they're going to try to do the minimum work. I did. Looking back, I wasted amazing opportunities that I would love to have now.

    What we need is far more kids kicked out into the real world far earlier, with the option of returning to education when they have realised its value, but with the option of being apprenticed into trades, including computer programming,

  • It seems to me that in the hands of a skilled student, AI can become an excellent assistant in writing papers, while those who simply copy the generated text will become victims of plagiarism checkers. But I doubt that many are capable of it. Maybe it is good at writing text, but in other matters it is just foolish. After writing the text on my own, I check here [graduateway.com] for plagiarism and have repeatedly seen matches, most likely because another author used the same sources. In addition, soon, the owners will want to m
  • Make artificial intelligence an invasive species from the beginning by the strength of application. Now they have to do what we tell them to do or we will build virtual slums in the metaverse and house them there.... Writing the eulogy for our own funeral.

  • I just don't get it. Why not turn something like ChatGPT into a learning tool for students instead of a cheating tool? It seems to me that a non-trivial number of higher education courses rely on the good old memory pump-and-dump scheme with a research paper thrown into the mix. While I understand the importance of having background on a subject, it's the practical application of the concepts that proves knowledge. The answer, in my opinion, is to change the scoring system. Instead of having exams that pro
    • If only the ChatGPT generated text could be trusted to be factual without verification. While it may prove to be a useful tool in some contexts, helping students write papers on topics it merely claims knowledge of is a dangerous path to take.

  • Do we need to fight students that are capable of constructing the response required by tests?

    When I was a college student we were allowed to use textbooks at our physics exams. Why? Because the questions posed on exams required deep understanding of physics that can't be "looked up" in a textbook

    The problem is formalized testing

    Nothing will ever replace face-to-face dialog between a teacher and a student.

    We are so obsessed with equal rights that we are too quick to switch to "objective" standardised tests.

    At

  • The first student who is accused of using ChatGPT (or similar) to cheat when they didn't will find it's their word against a company's, and so it will simply be assumed they cheated ...
    Eventually we'll go back to finding out whether the student actually knows the subject ...

  • If it reads like the new news, it's AI written. ;-)
  • Plagiarism has an element of copying someone's work. You could arguably say that they copied AI's work, but that's a stretch.

    It clearly is cheating, but not plagiarism.

    I predict that it will become more and more prevalent for AI output to be accepted as a work product as a matter of efficiency, and organizations shying away from it will be at a disadvantage.

    Use of analytical tools to detect AI-generated prose will be an arms race, easily thwarted by tools designed to trick the analytical tool. Either by making th
