AI Education

Are AI-Powered Tools - and Cheating-Detection Tools - Hurting College Students? (theguardian.com)

A 19-year-old wrongfully accused of using AI told the Guardian's reporter that "to be accused of it because of 'signpost phrases', such as 'in addition to' and 'in contrast', felt very demeaning." And another student "told me they had been pulled into a misconduct hearing — despite having a low score on Turnitin's AI detection tool — after a tutor was convinced the student had used ChatGPT, because some of his points had been structured in a list, which the chatbot has a tendency to do." Dr Mike Perkins, a generative AI researcher at British University Vietnam, believes there are "significant limitations" to AI detection software. "All the research says time and time again that these tools are unreliable," he told me. "And they are very easily tricked." His own investigation found that AI detectors could detect AI text with an accuracy of 39.5%. Following simple evasion techniques — such as minor manipulation of the text — the accuracy dropped to just 22.1%. As Perkins points out, those who do decide to cheat don't simply cut and paste text from ChatGPT; they edit it, or mould it into their own work. There are also AI "humanisers", such as CopyGenius and StealthGPT, the latter of which boasts that it can produce undetectable content and claims to have helped half a million students produce nearly 5m papers...
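To make the evasion point concrete, here is a minimal toy sketch of the kind of before/after measurement Perkins describes. Nothing here resembles a real detector: the "detector" is a naive check for the signpost phrases mentioned above, and the sample texts and resulting numbers are invented purely for illustration.

    # Toy illustration only: a fake "AI detector" keyed to signpost phrases,
    # and how a trivial rewording defeats it. Data and numbers are invented.
    SIGNPOSTS = ["in addition to", "in contrast", "moreover", "furthermore"]

    def toy_detector(text):
        # Flag text as "AI" if it contains any signpost phrase.
        return any(phrase in text.lower() for phrase in SIGNPOSTS)

    samples = [  # (text, was_actually_ai) -- fabricated for the demo
        ("In addition to cost, there are ethical concerns.", True),
        ("In contrast, the second study found no effect.", True),
        ("The second study found no effect at all.", False),
        ("Costs matter, and so do ethics.", False),
    ]

    def accuracy(data):
        return sum(toy_detector(t) == label for t, label in data) / len(data)

    def evade(text):
        # "Minor manipulation": swap the signpost phrases for synonyms.
        return (text.replace("In addition to", "Besides")
                    .replace("In contrast", "On the other hand"))

    print(f"accuracy before evasion: {accuracy(samples):.0%}")  # 100%
    evaded = [(evade(t), label) for t, label in samples]
    print(f"accuracy after evasion:  {accuracy(evaded):.0%}")   # 50%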

Many academics seem to believe that "you can always tell" if an assignment was written by an AI, that they can pick up on the stylistic traits associated with these tools. Evidence is mounting to suggest they may be overestimating their ability. Researchers at the University of Reading recently conducted a blind test in which ChatGPT-written answers were submitted through the university's own examination system: 94% of the AI submissions went undetected and received higher scores than those submitted by the humans...

Many universities are already adapting their approach to assessment, penning "AI-positive" policies. At Cambridge University, for example, appropriate use of generative AI includes using it for an "overview of new concepts", "as a collaborative coach", or "supporting time management". The university warns against over-reliance on these tools, which could limit a student's ability to develop critical thinking skills. Some lecturers I spoke to said they felt that this sort of approach was helpful, but others said it was capitulating. One conveyed frustration that her university didn't seem to be taking academic misconduct seriously any more; she had received a "whispered warning" that she was no longer to refer cases where AI was suspected to the central disciplinary board.

The Guardian notes one teacher's idea of more one-to-one teaching and live lectures — though he added an obvious flaw: "But that would mean hiring staff, or reducing student numbers." The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.
The article points out "More than half of students now use generative AI to help with their assessments, according to a survey by the Higher Education Policy Institute, and about 5% of students admit using it to cheat." This leads to a world where the anti-cheating software Turnitin "has processed more than 130m papers and says it has flagged 3.5m as being 80% AI-written. But it is also not 100% reliable; there have been widely reported cases of false positives and some universities have chosen to opt out. Turnitin says the rate of error is below 1%, but considering the size of the student population, it is no wonder that many have found themselves in the line of fire."

There is also evidence that AI detection tools disadvantage certain demographics. One study at Stanford found that a number of AI detectors are biased against non-native English speakers, flagging their work 61% of the time, as opposed to 5% of the time for native English speakers (Turnitin was not part of this particular study). Last month, Bloomberg Businessweek reported the case of a student with autism spectrum disorder whose work had been falsely flagged by a detection tool as being written by AI. She described being accused of cheating as like a "punch in the gut". Neurodivergent students, as well as those who write using simpler language and syntax, appear to be disproportionately affected by these systems.
Thanks to Slashdot reader Bruce66423 for sharing the article.

Comments:
  • by Harvey Manfrenjenson ( 1610637 ) on Sunday December 15, 2024 @04:39PM (#65015437)

    For as long as we've had universities, some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them. It's only very recently that we developed computerized "anti-plagiarism tools" to try to catch the students. So historically, a certain number of students have always gotten away with it.

    Using ChatGPT to write your paper is not really different from copying paragraphs out of an encyclopedia. It should be treated the same way (extremely seriously, with penalties up to and including expulsion). And just like there is no "magic bullet" to detect ordinary plagiarism or ghostwriting, there is no magic bullet to detect this kind of cheating. Why should we expect there to be one?

    • The problem of cheating isn't new; the problem of non-cheaters being accused of cheating, however, is. At no point in our history have we had universities use automated systems to *falsely* accuse students of cheating. Automated false positives in detection systems are new.

      • One concerning problem is that AI is, and increasingly will be, used to grade human work. Often you will not even know that you are being graded, how the work is being graded, or why you got poor job performance ratings (because AI grading fed into the performance feedback).

        Think of AI-based code quality ratings and AI-based code reviews feeding into some job performance spreadsheet (a sketch of such a pipeline follows this thread).

        Or school-administered standardized tests

        https://www.aacrao.org/edge/em... [aacrao.org]

        Texas Education Agency using AI to grade parts of STAAR tests
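
        A minimal sketch of the kind of pipeline being imagined here. The scoring function and CSV layout are hypothetical stand-ins for whatever opaque LLM-based rater an employer might actually wire up; no real service or API is involved.

          # Hypothetical sketch: an "AI code review" score feeding a
          # performance spreadsheet. score_diff() is a stand-in for an
          # opaque LLM-based rater; no real service is being called.
          import csv

          def score_diff(diff):
              # Stand-in heuristic: penalize long diffs, score 0.0-5.0.
              penalty = min(len(diff.splitlines()) / 100.0, 1.0)
              return round(5.0 * (1.0 - penalty), 1)

          reviews = {
              "alice": "+ fixed off-by-one in pager\n",
              "bob":   "+ rewrote module\n" * 250,  # big diff, scores 0.0
          }

          with open("performance.csv", "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["engineer", "ai_code_score"])
              for name, diff in reviews.items():
                  # The engineer may never learn this number exists,
                  # or how it was produced.
                  writer.writerow([name, score_diff(diff)])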

  • by xack ( 5304745 ) on Sunday December 15, 2024 @04:44PM (#65015443)
    Students have already just looked up everything on Google and Wikipedia for 20+ years now. Do we want education, or just to teach people to write assignments in a "human" way? I feel that assignments should be scrapped, that assessment should rely only on human-to-human interaction, and that computers should be cut out entirely.
    • I think the ability for students to look things up on the internet has detracted from teaching. Instead of being forced to teach children the material, teachers have offloaded that to a Google search, where the quality of results varies wildly. How are students meant to know whether those results are any good? They are just learning. To me, you need a lot of knowledge about a subject area before you can tell if someone is talking nonsense.

    • There is a massive difference between looking something up and expressing what it is you looked up. Plagiarism has always been considered a form of cheating. The issue here is that students are not plagiarising, but getting an automated system to do what they should be able to do.

      Any idiot can look something up; that's usually not what is being graded. From the typical business school looking for the ability to justify the unjustifiable in a rational way to woo investors, to the engineering assignments asse

    • by AmiMoJo ( 196126 )

      This was a problem even when I was at school, pre-internet. We were taught the things that exam markers look for, the phrases and terminology to use. In fairness, they have a marking guide, so of course you could play to that rather than to general knowledge of the subject.

  • by Wolfling1 ( 1808594 ) on Sunday December 15, 2024 @04:45PM (#65015445) Journal
    There seem to be two fundamental problems:

    1. Late stage capitalism has turned the colleges into cookie cutters instead of institutions of learning and education.
    2. LLMs and AI are changing the face of human knowledge and education. Pumping facts in and out of malleable brains was great for a long time, but it won't serve the next generation. We need a completely different philosophy and it is up to the colleges to lead the way. Unfortunately, refer to point 1.
    • need more trades and less college for all!

      • by Rujiel ( 1632063 )
        Trade schools are great for oligarchs who want a worker base of replaceable automata
      • need more trades and less college for all!

        Why, so you can be surrounded by out of work welders and plumbers who don't know how anything outside of their trade works? That's a great recipe for more shitty presidential picks, but it's definitely not going to improve anything.

        • At least they will not have a $250K to $500K student loan that they can't get rid of. And they can use Chapter 11 and Chapter 7 when their trade business goes under.

      • by Anonymous Coward
        Better yet, they should make a trade university. Learn your trade and earn an AA at the same time.
      • by will4 ( 7250692 )

        Except auto repair, due to the:

        1) High cost of education and training (a large investment every year),
        2) Need to buy thousands of dollars in tools,
        3) Flat-rate book hours per repair job,
        4) Effectively low pay, less than $25 per hour, while the dealership charges $100+ per hour,
        5) Working in unheated and uncooled garages,
        6) Having to do recall work for nearly $0,
        7) Cars becoming less reliable (plastic parts on the hot engine instead of metal),
        8) Cars becoming much more complex.

        Google "Why are auto mechanics leavin

  • Any academic who places faith in AI detection, or in AI generation producing anything of value worth detecting, is either unqualified, incompetent, confused, or lying. AI can't think critically; you can't ask it a question such as "Please give me an honest opinion of Lord of the Flies" and expect to get an original analysis.

    Since AI can only produce text that has already been used, written, or compiled, the chance it can detect original work is so low as to be meaningless. What is it using
  • by timholman ( 71886 ) on Sunday December 15, 2024 @05:22PM (#65015509)

    My opinion is that universities, university faculty members, and society as a whole are drastically underestimating the long-term effects of generative AI. Arguably it will completely undermine the value of many college degrees within a decade.

    Any university instructor can tell you that students will tend to fall into three categories:

    (1) The ones who will cheat at every opportunity, no matter what others do.

    (2) The ones who are scrupulously honest regardless of what others do.

    (3) The ones who will not cheat as long as they perceive a fair playing field, but will resort to cheating if they see other students flagrantly doing it who are not being caught.

    Most faculty put their effort into dealing with the students in category (1), but generative AI has thrown a huge wrench in the works. ChatGPT is already a much better writer than 95% of college students (or professionals, for that matter), and it is getting better all the time. There's no end to the arms race between cheating and cheating detection, and more and more students in category (3) will resort to using ChatGPT because they will know that their peers are using it and not being caught.

    We are heading towards a world where almost all students will begin using generative AI to write their papers and do their homework starting in junior high school. Cheating levels in many subjects will approach 100%, especially at exclusive private schools where parents will turn a blind eye to what is happening so long as their children get a step up on the competition. Then those same students will go to college, keep right on cheating, and graduate with highest honors while only functioning at a 6th to 8th grade intellectual level.

    At that point two things will happen. First, employers will realize that almost all students from University XX who majored in YY are functionally illiterate without the help of generative AI, and will cease hiring them. Second, those same employers will realize the generative AI itself is actually all they need; the college graduates themselves have become redundant. And at that point everyone will realize that it is pointless to go to an expensive private college and incur $250K to $500K in debt to earn a liberal arts degree that is literally not worth the paper it's written on.

    STEM-based programs won't be impacted quite as badly by the trend (at least in the short term), but classic liberal arts is doomed. Generative AI will be a better writer, researcher, and teacher than any human could possibly hope to be. (Try asking the paid version of ChatGPT if it can help your children learn how to read if you're not convinced.)

    It will be a very different world for most universities a decade from now unless academic programs are completely redesigned from top to bottom, and I doubt most faculty will be willing or able to adapt quickly enough. A great many small private schools will find themselves closing their doors, and many top-ranked elite schools will find themselves hit hardest.

  • That doesn't mean that if they flag one there is a 99% chance it is AI generated. Based on there numbers about 3% is flagged as AI generated. With a 1% error rate, that means a flagged paper has about a 1 in 3 chance of being a false positive; but saying 1% makes it seem like "if you are flagged you are guilty." (See the sketch after this thread.)
    • Based on there numbers about 3% is flagged as AI generated.

      And that sentence was all I needed to read to know that this post wasn't written by an AI.
      • Based on there numbers about 3% is flagged as AI generated. And that sentence was all I needed to read to know that this post wasn't written by an AI.

        On /., they're many was for someone to prove there posts were posted their by a human.
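
    Spelling out the parent comment's arithmetic in a short sketch, taking the summary's Turnitin figures (130m papers processed, 3.5m flagged) and the claimed "below 1%" error rate at face value:

      # Base-rate arithmetic behind "about a 1 in 3 chance."
      total   = 130_000_000          # papers processed (from the summary)
      flagged =   3_500_000          # flagged as AI-written (~2.7%)
      fp_rate = 0.01                 # worst case of Turnitin's "below 1%"

      false_flags = fp_rate * total  # up to ~1.3m papers wrongly flagged
      print(f"flagged: {flagged / total:.1%} of all papers")             # 2.7%
      print(f"false positives among flags: {false_flags / flagged:.0%}") # 37%
      # A tiny-sounding error rate still means roughly 1 in 3 flagged
      # papers could be a false positive.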

  • The Guardian notes one teacher's idea of more one-to-one teaching and live lectures — though he added an obvious flaw:
    "But that would mean hiring staff, or reducing student numbers." The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.

    Believe it or not, there are lots of pedagogical approaches besides "assign and grade homework assignments."

    And without getting into the many alternatives, it's pretty easy to solve this problem: (a) all work is optional, graded only for those who want feedback; (b) all evaluation is conducted via end-of-term -- or end-of-degree -- in-person testing. In that case there would be zero incentive or benefit to use generative AI for assignments (other than as a study aid).

    The reason that would not be exc

  • From one of the links:
    "ChatGPT generated exam answers, submitted for several undergraduate psychology modules, went undetected in 94% of cases and, on average, attained higher grades than real student submissions."

    I suspect we all will have to get used to the idea that the machines already do better than we can at certain kinds of things that used to be exclusively in the realm of human beings. They can already offload some of our thinking for us. I sniff that they are getting into our intellectual space an

  • by couchslug ( 175151 ) on Sunday December 15, 2024 @06:36PM (#65015623)

    Aircraft mechanics are required to perform practical tests to standard, because no amount of regurgitating text is a physical performance test.

    Diploma mills love electronic testing and computer courses, but they don't show performance.

  • Much like when a New York Times reporter gets the front-page story completely wrong and the retraction, if there is one, is on page 19. There are no consequences until the system collapses and everyone looks around, covered in ash, and epiphanically utters... "wait, there was a fire?" Because the stakes for baseless accusations are extremely low and the review process is so onerous for the accused, you end up with a one-sided grinder. Now if the TA had to raise their concerns with the profe
    • That said, having ChatGPT write a student's homework is on the rise, and learning standards at universities continue to decline. It's so bad that the last time I taught college, in 2014, 3 of the 14 students in the class couldn't write a coherent paper if their lives depended on it... I wasn't the English professor, so I didn't grade them on their communication skills.
  • The answer to this is simple and has been around for decades.

    Prof gives a topic. You are instructed to show up able to write a paper on that topic without notes.

    Papers are then collected and graded.

    Simplest is easiest.
