Are AI-Powered Tools - and Cheating-Detection Tools - Hurting College Students? (theguardian.com)
A 19-year-old wrongfully accused of using AI told the Guardian's reporter that "to be accused of it because of 'signpost phrases', such as 'in addition to' and 'in contrast', felt very demeaning." And another student "told me they had been pulled into a misconduct hearing — despite having a low score on Turnitin's AI detection tool — after a tutor was convinced the student had used ChatGPT, because some of his points had been structured in a list, which the chatbot has a tendency to do."
Dr Mike Perkins, a generative AI researcher at British University Vietnam, believes there are "significant limitations" to AI detection software. "All the research says time and time again that these tools are unreliable," he told me. "And they are very easily tricked." His own investigation found that AI detectors could detect AI text with an accuracy of 39.5%. Following simple evasion techniques — such as minor manipulation to the text — the accuracy dropped to just 22.1%. As Perkins points out, those who do decide to cheat don't simply cut and paste text from ChatGPT, they edit it, or mould it into their own work. There are also AI "humanisers", such as CopyGenius and StealthGPT, the latter which boasts that it can produce undetectable content and claims to have helped half a million students produce nearly 5m papers...
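Accuracy figures like Perkins's come from benchmarking detectors against a corpus of texts whose true origin is known. A minimal sketch of that evaluation loop, with a deliberately naive signpost-phrase heuristic standing in for a commercial detector (the phrases and sample corpus here are illustrative, not from the study):

```python
# Toy benchmark of an AI-text "detector" against a labeled corpus.
# The detector is a naive heuristic (flagging "signpost phrases"),
# standing in for a real commercial tool; corpus and phrases are
# invented for illustration.

SIGNPOSTS = ("in addition to", "in contrast", "furthermore")

def naive_detector(text: str) -> bool:
    """Flag text as AI-written if it contains any signpost phrase."""
    lower = text.lower()
    return any(phrase in lower for phrase in SIGNPOSTS)

def accuracy(detector, corpus) -> float:
    """Fraction of (text, is_ai) pairs the detector labels correctly."""
    correct = sum(detector(text) == is_ai for text, is_ai in corpus)
    return correct / len(corpus)

corpus = [
    ("In addition to the above, the model shows...", True),
    ("In contrast, prior work focused on...", True),
    ("honestly i just think the essay was fine", False),
    ("In addition to my notes, I read the textbook.", False),  # false positive
]

print(f"accuracy: {accuracy(naive_detector, corpus):.0%}")  # accuracy: 75%
```

The last corpus entry shows exactly the failure mode the accused student describes: an honest writer who happens to use a signpost phrase gets flagged, and the heuristic's headline accuracy hides it.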
Many academics seem to believe that "you can always tell" if an assignment was written by an AI, that they can pick up on the stylistic traits associated with these tools. Evidence is mounting to suggest they may be overestimating their ability. Researchers at the University of Reading recently conducted a blind test in which ChatGPT-written answers were submitted through the university's own examination system: 94% of the AI submissions went undetected and received higher scores than those submitted by the humans...
Many universities are already adapting their approach to assessment, penning "AI-positive" policies. At Cambridge University, for example, appropriate use of generative AI includes using it for an "overview of new concepts", "as a collaborative coach", or "supporting time management". The university warns against over-reliance on these tools, which could limit a student's ability to develop critical thinking skills. Some lecturers I spoke to said they felt that this sort of approach was helpful, but others said it was capitulating. One conveyed frustration that her university didn't seem to be taking academic misconduct seriously any more; she had received a "whispered warning" that she was no longer to refer cases where AI was suspected to the central disciplinary board.
The Guardian notes one teacher's idea of more one-to-one teaching and live lectures — though he added an obvious flaw: "But that would mean hiring staff, or reducing student numbers." The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.
The article points out "More than half of students now use generative AI to help with their assessments, according to a survey by the Higher Education Policy Institute, and about 5% of students admit using it to cheat." This leads to a world where the anti-cheating software Turnitin "has processed more than 130m papers and says it has flagged 3.5m as being 80% AI-written. But it is also not 100% reliable; there have been widely reported cases of false positives and some universities have chosen to opt out. Turnitin says the rate of error is below 1%, but considering the size of the student population, it is no wonder that many have found themselves in the line of fire." There is also evidence that suggests AI detection tools disadvantage certain demographics. One study at Stanford found that a number of AI detectors have a bias towards non-English speakers, flagging their work 61% of the time, as opposed to 5% of native English speakers (Turnitin was not part of this particular study). Last month, Bloomberg Businessweek reported the case of a student with autism spectrum disorder whose work had been falsely flagged by a detection tool as being written by AI. She described being accused of cheating as like a "punch in the gut". Neurodivergent students, as well as those who write using simpler language and syntax, appear to be disproportionately affected by these systems.
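The scale problem in those Turnitin numbers is simple base-rate arithmetic: even a small error rate applied to an enormous volume of papers yields a large absolute number of misclassifications. A quick sketch using the figures quoted above (treating "below 1%" as an upper bound on the error rate is an assumption; the article does not break the figure down):

```python
# Base-rate arithmetic: a "below 1%" error rate over 130 million
# papers still means a very large absolute number of errors.
# (Whether the quoted figure is a false-positive rate is not
# specified; we use it as an upper bound purely for illustration.)

papers_processed = 130_000_000
error_rate = 0.01  # upper bound implied by "below 1%"

misclassified = papers_processed * error_rate
print(f"Up to {misclassified:,.0f} papers misclassified")
```

On those assumptions the ceiling is over a million papers, which is why "many have found themselves in the line of fire" even with a nominally tiny error rate.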
Thanks to Slashdot reader Bruce66423 for sharing the article.
Re: (Score:3)
I just think the best approach is not simply to flag work as AI or not, but to combine the paper with a personal presentation of excerpts from it and questions about the content.
After all, what college is about is learning and understanding what you study; if you can also talk about it, you prove that you have understood it.
Re: (Score:3, Interesting)
Correct. Doing some form of oral defense is the traditional filter, not only to prevent someone from simply having someone else do the work for them, but to dynamically assess and probe areas of weakness that might be papered over or concealed in a written exam. (And for those pointing out that this disadvantages people who do badly in these scenarios: you could set up accommodations, but let's leave that aside for this example.)
Unfortunately this takes time and effort. Something that these "educational i
Re: (Score:2)
The simplest & easiest counter-measure to deliberate plagiarism is to get to know your students
So the question is: 'what are universities for?' (Score:2)
And, indeed, lectures when there is no meaningful feedback from the listeners. What is the point of putting 600 18yos into a class together? Online teaching - or even, shock horror, BOOKS - surely do a better job for the most part. Add in access to AIs to sharpen skills.
Re: (Score:2)
Re: (Score:2)
So...going back to analog methods to prove you (as a student) have done the work to your professor?
It seems AI only appears to be the way forward; it is actually societal regression in disguise.
Re: Stop the witch hunts (Score:2)
Using tools is societal regression? Even monkeys disagree.
I use ChatGPT a lot in my work. So will these students. I think, in some of these cases, we may be dealing with teachers who already find Wikipedia and Google hard to understand.
Require oral performance tests in person (Score:2)
Automating school is purely to make money.
This is not really a new problem (Score:5, Insightful)
For as long as we've had universities, some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them. It's only very recently that we developed computerized "anti-plagiarism tools" to try to catch the students. So historically, a certain number of students have always gotten away with it.
Using ChatGPT to write your paper is not really different from copying paragraphs out of an encyclopedia. It should be treated the same way (extremely seriously, with penalties up to and including expulsion). And just like there is no "magic bullet" to detect ordinary plagiarism or ghostwriting, there is no magic bullet to detect this kind of cheating. Why should we expect there to be one?
This IS a new problem. (Score:2)
The problem of cheating isn't new; the problem of non-cheaters being accused of cheating, however, is. At no point in our history have universities used automated systems to *falsely* accuse students of cheating. Automated false positives in detection systems are new.
This is not the only problem (Score:2)
One concerning problem is that AI is, and will be, used to grade human work. Often you will not even know that you are being graded, how the work is being graded, or why you got poor job-performance ratings (because AI grading was used in the performance feedback).
Think of AI-based code-quality ratings and AI-based code reviews feeding into some job-performance spreadsheet.
Or school-administered standardized tests:
https://www.aacrao.org/edge/em... [aacrao.org]
Texas Education Agency using AI to grade parts of STAAR tests
Re: This IS a new problem. (Score:2)
Automated false positives in detection systems may be new to university teachers outside the fields that actually do deal with this, but it certainly isn't news to anyone dealing with statistical algorithms.
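The statistical point can be made concrete with Bayes' rule: when only a small fraction of students actually cheat, even a fairly accurate detector produces a meaningful share of wrongful flags. The prevalence below is taken from the survey figure in the article; the sensitivity and false-positive rate are purely assumed for illustration, not vendor claims:

```python
# Bayes' rule: of the students a detector flags, how many are innocent?
prevalence  = 0.05   # ~5% of students admit using AI to cheat (survey above)
sensitivity = 0.90   # assumed: detector catches 90% of actual AI text
fpr         = 0.01   # assumed: 1% of honest work is wrongly flagged

# Total probability a random submission gets flagged.
p_flag = prevalence * sensitivity + (1 - prevalence) * fpr

# Posterior: probability a flagged student actually cheated.
p_cheat_given_flag = prevalence * sensitivity / p_flag

print(f"P(actually cheated | flagged) = {p_cheat_given_flag:.1%}")
print(f"P(innocent | flagged)         = {1 - p_cheat_given_flag:.1%}")
```

Under these assumed numbers, roughly one in six flagged students is innocent, which is exactly the "line of fire" problem the article describes.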
Re: (Score:2)
some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them.
This reminded me that I used to get paid, in beer, in 1987 to type people's papers into a word processor (WordPerfect back then). The kids didn't really know how to use a computer, let alone the intricacies of using a word processor. But the main reason they had me do it for them was because I was familiar with all the different methods of elongating their paper: margins, font size, line spacing, paragraph spacing, and so on. The awesome thing was that my price was one beer per page, so when they asked
Re: (Score:2)
> And just like there is no "magic bullet" to detect ordinary plagiarism
Not for plagiarism, but there IS a GOOD way to prove your innocence and show that it is an original work -- provide video evidence of you completing 100% of the assignment from start to finish.
A good word processor has the ability to "undo" after saving, so it is relatively easy to "unwind time", showing HOW the document changed over time.
Also, it isn't THAT hard to just TALK to the student about a subject and gauge their level of understanding.
Students already had Google. (Score:5, Interesting)
Re: (Score:2)
I think that students' ability to look things up on the internet has detracted from teaching: instead of being forced to teach children the material, teachers have offloaded that to a Google search, where students can get wildly varying quality of results. How are students meant to know if those results are good? They are just learning. To me, you need a lot of knowledge about a subject area before you can tell if someone is talking nonsense.
Re: (Score:3)
There is a massive difference between looking something up and expressing what it is you looked up. Plagiarism has always been considered a form of cheating. The issue here is that students are not plagiarising, but getting an automated system to do what they should be able to do.
Any idiot can look something up; that's usually not what is being graded. From the typical business school looking for the ability to justify the unjustifiable in a rational way to woo investors, to the engineering assignments asse
Re: (Score:2)
This was a problem even when I was at school, pre-internet. We were taught the things that exam markers look for, the phrases and terminology to use. For fairness they have a marking guide, so of course you could play to that rather than to general knowledge of the subject.
Change is in the wind (Score:5, Interesting)
1. Late stage capitalism has turned the colleges into cookie cutters instead of institutions of learning and education.
2. LLMs and AI are changing the face of human knowledge and education. Pumping facts in and out of malleable brains was great for a long time, but it won't serve the next generation. We need a completely different philosophy and it is up to the colleges to lead the way. Unfortunately, refer to point 1.
need more trades and less college for all! (Score:2)
need more trades and less college for all!
Re: (Score:3)
Re: (Score:2)
need more trades and less college for all!
Why, so you can be surrounded by out of work welders and plumbers who don't know how anything outside of their trade works? That's a great recipe for more shitty presidential picks, but it's definitely not going to improve anything.
Re: (Score:2)
At least they will not have a $250K to $500K student loan that they can't get rid of. But they can use Chapter 11 and Chapter 7 when their trade business goes under.
Re: (Score:1)
Re: (Score:2)
Except auto repair, due to the:
1) High cost of education and training (a large investment every year),
2) Need to buy thousands of dollars in tools,
3) Flat-rate hours-per-repair-job book,
4) Effectively low pay, less than $25 per hour, while the dealership charges $100+ per hour,
5) Working in unheated and uncooled garages,
6) Having to do recall work for nearly $0,
7) Cars becoming less reliable (plastic parts on the hot engine instead of metal parts),
8) Cars becoming much more complex.
Google "Why are auto mechanics leaving"
Re: (Score:2)
I have a lot of mechanics in my family and I agree with everything except #7. Cars are getting more reliable despite the cheap plastic parts. The cheap plastic parts just make things difficult for the mechanics.
The problem with being an auto mechanic is that during your younger years you are constantly learning, and with flat rate pay that can be costly. Then you have to basically take out a second mortgage to buy all your tools. In your late 20s and in your 30s—when you have the tools, know-how, and
Is unreliable AI, unreliable? (Score:2, Interesting)
Since AI can only produce text that has already been used, written, or compiled, the chance it can detect original work is so low as to be meaningless. What is it using
It will hurt colleges, not just students (Score:4, Interesting)
My opinion is that universities, university faculty members, and society as a whole are drastically underestimating the long-term effects of generative AI. Arguably it will completely undermine the value of many college degrees within a decade.
Any university instructor can tell you that students will tend to fall into three categories:
(1) The ones who will cheat at every opportunity, no matter what others do.
(2) The ones who are scrupulously honest regardless of what others do.
(3) The ones who will not cheat as long as they perceive a fair playing field, but will resort to cheating if they see other students flagrantly doing it who are not being caught.
Most faculty put their effort into dealing with the students in category (1), but generative AI has thrown a huge wrench in the works. ChatGPT is already a much better writer than 95% of college students (or professionals, for that matter), and it is getting better all the time. There's no end to the arms race between cheating and cheating detection, and more and more students in category (3) will resort to using ChatGPT because they will know that their peers are using it and not being caught.
We are heading towards a world where almost all students will begin using generative AI to write their papers and do their homework starting in junior high school. Cheating levels in many subjects will approach 100%, especially at exclusive private schools where parents will turn a blind eye to what is happening so long as their children get a step up on the competition. Then those same students will go to college, keep right on cheating, and graduate with highest honors while only functioning at a 6th to 8th grade intellectual level.
At that point two things will happen. First, employers will realize that almost all students from University XX who majored in YY are functionally illiterate without the help of generative AI, and will cease hiring them. Second, those same employers will realize the generative AI itself is actually all they need; the college graduates themselves have become redundant. And at that point everyone will realize that it is pointless to go to an expensive private college and incur $250K to $500K in debt to earn a liberal arts degree that is literally not worth the paper it's written on.
STEM-based programs won't be impacted quite as badly by the trend (at least in the short term), but classic liberal arts is doomed. Generative AI will be a better writer, researcher, and teacher than any human could possibly hope to be. (Try asking the paid version of ChatGPT if it can help your children learn how to read if you're not convinced.)
It will be a very different world for most universities a decade from now unless academic programs are completely redesigned from top to bottom, and I doubt most faculty will be willing or able to adapt quickly enough. A great many small private schools will find themselves closing their doors, and many top-ranked elite schools will find themselves hit hardest.
Less than 1% error rate (Score:2)
Re: (Score:3)
And that sentence was all I needed to read to know that this post wasn't written by an AI.
Re: (Score:3)
Based on there numbers about 3% is flagged as AI generated. And that sentence was all I needed to read to know that this post wasn't written by an AI.
On /., they're many was for someone to prove there posts were posted their by a human.
Only a problem if you make it one (Score:2)
The Guardian notes one teacher's idea of more one-to-one teaching and live lectures — though he added an obvious flaw:
"But that would mean hiring staff, or reducing student numbers." The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.
Believe it or not there are lots of pedagogical approaches besides "assign and grade homework assignments."
And without getting into the many alternatives, it's pretty easy to solve this problem just by making (a) all work optional, graded only for those who want feedback, and (b) all evaluation conducted via end-of-term -- or end-of-degree -- in-person testing. In that case there would be zero incentive or benefit to use generative AI for assignments (other than as a study aid).
The reason that would not be exc
But that's discriminatory (Score:2)
This will be trotted out - and there's some evidence that females don't do as well as males in written exams, which is one of the reasons home-written essays arrived....
A transition period (Score:2)
From one of the links:
"ChatGPT generated exam answers, submitted for several undergraduate psychology modules, went undetected in 94% of cases and, on average, attained higher grades than real student submissions."
I suspect we all will have to get used to the idea that the machines already do better than we can at certain kinds of things that used to be exclusively in the realm of human beings. They can already offload some of our thinking for us. I sniff that they are getting into our intellectual space an
Give oral and practical exams for what matter (Score:3)
Aircraft mechanics are required to perform those to standard, because no amount of regurgitated text can substitute for a physical performance test.
Diploma mills love electronic testing and computer courses, but they don't show performance.
One-sided power and lack of consequences when wrong (Score:1)
Re: One sided power and lack of consequences when (Score:1)
Proctored paper writing (Score:2)
The answer to this is simple and has been around for decades.
Prof gives a topic. You are instructed to show up able to write a paper on that topic without notes.
Papers are then collected and graded.
Simplest is easiest.
Nah - that just benefits the good memory (Score:2)
Get ChatGPT to write your essay. Learn it by heart. Trot it out.
Short-sighted ignorance and hypocrisy. (Score:2)
AI is used/abused to write a paper. AI is then used as a detection tool by the “teacher” to find those who are cheating at the task at hand? Who’s bullshitting who here? Sounds like no one actually knows what the fuck they’re doing without the help of AI. Should we use AI to start finding teachers cheating at their job too? Or should we start re-defining what educating a human means, since AI isn’t going anywhere but to the workplace to gobble jobs?
We’re worried abou