Are AI-Powered Tools - and Cheating-Detection Tools - Hurting College Students? (theguardian.com)
A 19-year-old wrongfully accused of using AI told the Guardian's reporter that "to be accused of it because of 'signpost phrases', such as 'in addition to' and 'in contrast', felt very demeaning." And another student "told me they had been pulled into a misconduct hearing — despite having a low score on Turnitin's AI detection tool — after a tutor was convinced the student had used ChatGPT, because some of his points had been structured in a list, which the chatbot has a tendency to do."
Dr Mike Perkins, a generative AI researcher at British University Vietnam, believes there are "significant limitations" to AI detection software. "All the research says time and time again that these tools are unreliable," he told me. "And they are very easily tricked." His own investigation found that AI detectors could detect AI text with an accuracy of 39.5%. Following simple evasion techniques — such as minor manipulation to the text — the accuracy dropped to just 22.1%. As Perkins points out, those who do decide to cheat don't simply cut and paste text from ChatGPT; they edit it, or mould it into their own work. There are also AI "humanisers", such as CopyGenius and StealthGPT, the latter of which boasts that it can produce undetectable content and claims to have helped half a million students produce nearly 5m papers...
Many academics seem to believe that "you can always tell" if an assignment was written by an AI, that they can pick up on the stylistic traits associated with these tools. Evidence is mounting to suggest they may be overestimating their ability. Researchers at the University of Reading recently conducted a blind test in which ChatGPT-written answers were submitted through the university's own examination system: 94% of the AI submissions went undetected and received higher scores than those submitted by the humans...
Many universities are already adapting their approach to assessment, penning "AI-positive" policies. At Cambridge University, for example, appropriate use of generative AI includes using it for an "overview of new concepts", "as a collaborative coach", or "supporting time management". The university warns against over-reliance on these tools, which could limit a student's ability to develop critical thinking skills. Some lecturers I spoke to said they felt that this sort of approach was helpful, but others said it was capitulating. One conveyed frustration that her university didn't seem to be taking academic misconduct seriously any more; she had received a "whispered warning" that she was no longer to refer cases where AI was suspected to the central disciplinary board.
The Guardian notes one teacher's idea of more one-to-one teaching and live lectures — though he added an obvious flaw: "But that would mean hiring staff, or reducing student numbers." The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.
The article points out "More than half of students now use generative AI to help with their assessments, according to a survey by the Higher Education Policy Institute, and about 5% of students admit using it to cheat." This leads to a world where the anti-cheating software Turnitin "has processed more than 130m papers and says it has flagged 3.5m as being 80% AI-written. But it is also not 100% reliable; there have been widely reported cases of false positives and some universities have chosen to opt out. Turnitin says the rate of error is below 1%, but considering the size of the student population, it is no wonder that many have found themselves in the line of fire." There is also evidence that suggests AI detection tools disadvantage certain demographics. One study at Stanford found that a number of AI detectors have a bias towards non-English speakers, flagging their work 61% of the time, as opposed to 5% of native English speakers (Turnitin was not part of this particular study). Last month, Bloomberg Businessweek reported the case of a student with autism spectrum disorder whose work had been falsely flagged by a detection tool as being written by AI. She described being accused of cheating as like a "punch in the gut". Neurodivergent students, as well as those who write using simpler language and syntax, appear to be disproportionately affected by these systems.
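The scale argument in that passage can be made concrete with some back-of-envelope arithmetic. A quick sketch using the article's own figures; note that treating "below 1%" as exactly a 1% false-positive rate is an assumption used here only to compute an upper bound:

```python
# Upper-bound estimate of wrongly flagged papers, using figures from the article.
papers_processed = 130_000_000   # papers Turnitin says it has processed
false_positive_rate = 0.01       # "below 1%" -- treat 1% as the ceiling

max_false_positives = int(papers_processed * false_positive_rate)
print(f"Up to {max_false_positives:,} papers could be false positives")
# → Up to 1,300,000 papers could be false positives

# Stanford study: non-native English writing was flagged far more often.
non_native_flag_rate = 0.61
native_flag_rate = 0.05
print(f"Non-native work flagged ~{non_native_flag_rate / native_flag_rate:.0f}x as often")
```

Even at a sub-1% error rate, the sheer volume of submissions implies false accusations on the order of a million papers, which is why "below 1%" offers little comfort to any individual student.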
Thanks to Slashdot reader Bruce66423 for sharing the article.
Re:Stop the witch hunts (Score:4, Insightful)
I just see that the best way is not to simply flag for AI or not, but to combine the paper with a personal presentation of excerpts of its content and answering questions about it.
After all, what college is about is learning and understanding what you study, and if you can also actually talk about it, then you can prove that you have understood it.
Re:Stop the witch hunts (Score:4, Interesting)
Correct. Doing some form of oral defense is the traditional filter, not only to prevent someone from just having someone else do the work for them, but to dynamically assess and probe areas of weakness that might be papered over or concealed with a written exam. (And for those pointing out that this disadvantages people who do badly in these scenarios: you could set up accommodations, but let's leave that aside for this example.)
Unfortunately this takes time and effort. Something that these "educational institutions" are apparently unable or unwilling to put in:
"They all agreed that a shift to different forms of teaching and assessment – one-to-one tuition, viva voces and the like – would make it far harder for students to use AI to do the heavy lifting. “That’s how we’d need to do it, if we’re serious about authentically assessing students and not just churning them through a £9,000-a-year course hoping they don’t complain,” one lecturer at a redbrick university told me. “But that would mean hiring staff, or reducing student numbers.”"
What is a degree really worth these days, if the educational institution granting it resembles the diploma mills of yore?
Re: (Score:2)
The simplest & easiest counter-measure to deliberate plagiarism is to get to know your students
So the question is: 'what are universities for?' (Score:2)
And, indeed, lectures when there is no meaningful feedback from the listeners. What is the point of putting 600 18yos into a class together? Online teaching - or even, shock horror, BOOKS - surely do a better job for the most part. Add in access to AIs to sharpen skills.
Re: (Score:2)
Re: (Score:2)
Lectures alone are insufficient but the vast majority of courses aren't just lectures. Reading BOOKS & other materials typically takes up most of most students' time.
Off the cuff, I’d estimate at least 50% of a four-year education is both pointless and worthless to an employer. That’s based on the sheer amount of mandatory courses that students are forced to take that have fuck-all to do with their chosen career.
Given that work experience is often accepted as a replacement to education, I’m probably lowballing the shit out of that 50%.
50% of THAT price tag, isn’t just a waste of time for students. It becomes a financial fucking nightmare follow
Re: (Score:2)
Re: (Score:2)
That other 50% is important but isn't really being given its fair share. [slashdot.org]
Re: So the question is: 'what are universities for (Score:2)
Yes. But which 50%? Some of that knowledge may not be directly applicable to your current job, but it is a good indicator of how well you can self-educate on new subjects throughout your career. It can also be used by you as an indicator of the quality of your employer. Anecdote:
Some time ago, an employer of mine put me through their "executive training" program, to test my suitability to rise through their ranks. The first class (run by a middle-management rusing star) gave us an exercise: to list on a 3x5 card
Re: So the question is: 'what are universities fo (Score:2)
rusing star
rising star (sorry for the Freudian slip)
Re: (Score:2)
What is a degree really worth these days, if the educational institution granting it resembles the diploma mills of yore?
The entire system is designed as cheaply as it can be to turn out as many diplomas as possible. The goal isn't education; it isn't to impart social skills, cultural expectations, or job training. It's to print a receipt that the student will work off for the next decade or more of their lives as indentured servants of the wealthy. The wealthy who rigged the education system long ago for their own benefit, and the politicians who let them, for easy campaign contributions, scapegoats, and political talking points.
Re: (Score:2)
So...going back to analog methods to prove you (as a student) have done the work to your professor?
It seems AI only appears to be the way forward; it is actually societal regression in disguise.
Re: Stop the witch hunts (Score:2)
Using tools is societal regression? Even monkeys disagree.
I use ChatGPT a lot in my work. So will these students. I think we may be dealing with teachers who already find Wikipedia and Google hard to understand, in some of these cases.
Re: (Score:2)
Going back to testing knowledge rather than completion of busywork?
Heavens to betsy.
Re: (Score:2)
I just see that the best way is not to simply flag for AI or not, but to combine the paper with a personal presentation of excerpts of its content and answering questions about it.
After all, what college is about is learning and understanding what you study, and if you can also actually talk about it, then you can prove that you have understood it.
When a four-year degree is reduced to a job application checkbox, I doubt we can still prove the point of college.
Especially when equivalent work experience is often accepted as the alternative and is even preferred at times.
Re:Stop the [vacuous Subjects] (Score:2)
And why did you feed the troll?
I have a substantive comment from Generative Deep Learning by David Foster, but... Not worth it.
Require oral performance tests in person (Score:2)
Automating school is purely to make money.
Re: (Score:2)
Making money is the root of all enshittification. It is no surprise it also happens in the education system.
This is not really a new problem (Score:5, Insightful)
For as long as we've had universities, some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them. It's only very recently that we developed computerized "anti-plagiarism tools" to try to catch the students. So historically, a certain number of students have always gotten away with it.
Using ChatGPT to write your paper is not really different from copying paragraphs out of an encyclopedia. It should be treated the same way (extremely seriously, with penalties up to and including expulsion). And just like there is no "magic bullet" to detect ordinary plagiarism or ghostwriting, there is no magic bullet to detect this kind of cheating. Why should we expect there to be one?
This IS a new problem. (Score:4, Informative)
The problem of cheating isn't new; the problem of non-cheaters being accused of cheating, however, is. At no point in our history have we had universities use automated systems to *falsely* accuse students of cheating. Automated false positives in detection systems are new.
This is not the only problem (Score:3)
One concerning problem is that AI is and will be used to grade human work, and often you will not even know that you are being graded, how the work is being graded, or why you got poor job-performance ratings (due to AI grading used in performance feedback).
Think of AI based code quality ratings, AI based code reviews, and that feeding into some job performance spreadsheet.
Or school-administered standardized tests
https://www.aacrao.org/edge/em... [aacrao.org]
Texas Education Agency using AI to grade parts of STAAR tests
Re: This IS a new problem. (Score:2)
Automated false positives in detection systems may be new to university teachers outside the fields that actually do deal with this, but it certainly isn't news to anyone dealing with statistical algorithms.
Re: (Score:2)
Correct. Completely off topic since we're talking specifically about *checks notes* "hurting college students", but otherwise correct.
Re: (Score:3)
The problem of cheating isn't new; the problem of non-cheaters being accused of cheating, however, is. At no point in our history have we had universities use automated systems to *falsely* accuse students of cheating. Automated false positives in detection systems are new.
If an educator was accused of “cheating” by using AI to “read” the paper instead of them using their own (highly educated expert?) brain to simply determine if a paper is acceptable, what would be the accepted response? Would they even be offended if that were *not* true?
Yes. To your point I’d say we have a whole new set of problems. Starting with whatever the hell we’re defining as integrity these days.
Re: (Score:2)
The educator isn't the one being taught here, nor are they being graded on their ability to read assignments. (Many universities actually do solicit student feedback on their educators, but it is usually about how materials were taught, not what software was used during the grading of assignments.)
AI is a tool. It should be able to be used where it is capable of doing a job and where it doesn't conflict with the goal of the job itself. It is this latter concept that you're missing when complaining
Re: (Score:2)
some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them.
This reminded me that I used to get paid, in beer, in 1987 to type people's papers into a word processor (WordPerfect back then). The kids didn't really know how to use a computer, let alone the intricacies of using a word processor. But the main reason they had me do it for them was because I was familiar with all the different methods of elongating their paper: margins, font size, line spacing, paragraph spacing, and so on. The awesome thing was that my price was one beer per page, so when they asked
Re: (Score:2)
> And just like there is no "magic bullet" to detect ordinary plagiarism
Not plagiarism, but there IS a GOOD bullet to prove your innocence, that it is original work: provide video evidence of you completing 100% of the assignment from start to finish.
A good word processor has the ability to "undo" after saving, so it is relatively easy to "unwind time", showing HOW the document changed over time.
Also, it isn't THAT hard to just TALK to the student about a subject and gauge their lev
Re: (Score:2)
Colleges and universities want to spend the bare minimum detecting cheaters. Unfortunately that means there WILL be some false positives but they don't seem to recognize this.
They also assume they can sustain the problem of false positives indefinitely.
Those aren’t just students being wrongly accused. Those are paying customers. Good luck finding enough of them when your institution becomes infamous for false accusations by comparison. Plenty of places to get an education without having to be abused.
Re: (Score:2)
Colleges and universities want to spend the bare minimum detecting cheaters. Unfortunately that means there WILL be some false positives but they don't seem to recognize this.
Looks like these people are neither smart nor particularly educated. And then you remember that the "Peter Principle" was found in education organizations...
Old Problem. New Society. (Score:2)
For as long as we've had universities, some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them. It's only very recently that we developed computerized "anti-plagiarism tools" to try to catch the students. So historically, a certain number of students have always gotten away with it.
Using ChatGPT to write your paper is not really different from copying paragraphs out of an encyclopedia. It should be treated the same way (extremely seriously, with penalties up to and including expulsion). And just like there is no "magic bullet" to detect ordinary plagiarism or ghostwriting, there is no magic bullet to detect this kind of cheating. Why should we expect there to be one?
We wrote a tool to detect cheating by insiders trading information illegally. We then highlighted the fact that American lawmakers were (and still are) engaged in the worst of that behavior. What was the response from lawmakers? Dismissing cheating as a fucking job perk.
The problem may be the same, but our society sure as shit isn’t. The hell do we ignorantly expect from teenage college students when American lawmakers behave like that?
We used to remember the fucking point of leading by example.
Re: (Score:2)
We used to remember the fucking point of leading by example.
Do you mean like when we sold fuel to the S.S.?
Maybe you mean like when Prescott Bush ran a company whose entire purpose was to send money to Hitler.
Students already had Google. (Score:5, Interesting)
Re: (Score:2)
I think that the ability for students to look things up on the internet has detracted from teaching. Instead of being forced to teach the children material, teachers have offloaded that to a Google search, where students can get wildly varying quality of results. How are students meant to know if those results are good? They are just learning. To me, you need a lot of knowledge about a subject area before you can tell if someone is talking nonsense.
Re:Students already had Google. (Score:4, Insightful)
There is a massive difference between looking something up and expressing what it is you looked up. Plagiarism has always been considered a form of cheating. The issue here is that students are not plagiarising, but getting an automated system to do what they should be able to do.
Any idiot can look something up; that's usually not what is being graded. From the typical business school looking for the ability to justify the unjustifiable in a rational way to woo investors, to engineering assignments assessing people's ability to write a technical report with brevity while still covering all the important concepts.
In many university assignments the actual answer isn't interesting; everyone knows answers will be looked up, or heck, sometimes they are even given in the assignment itself. It's about showing your working and your thought process.
Re: (Score:2)
This was a problem even when I was at school, pre-internet. We were taught the things that exam markers look for, the phrases and terminology to use. For fairness they have a marking guide, so of course you could play to that rather than to general knowledge of the subject.
Re: (Score:2)
I feel that assignments should be gotten rid of
The result given by all of those who were never taught the purpose of the assignments to begin with. We don't care if you can look something up or not. We care about whether or not you can link the things you look up to new concepts and problems without being told they could relate to one another. In a way that makes logical sense. The assignments are meant to encourage developing that skill, but like everything else, they were degraded to the point that they were rendered unable to do so by those who woul
Change is in the wind (Score:5, Interesting)
There seem to be two fundamental problems:
1. Late stage capitalism has turned the colleges into cookie cutters instead of institutions of learning and education.
2. LLMs and AI are changing the face of human knowledge and education. Pumping facts in and out of malleable brains was great for a long time, but it won't serve the next generation. We need a completely different philosophy and it is up to the colleges to lead the way. Unfortunately, refer to point 1.
need more trades and less college for all! (Score:2)
need more trades and less college for all!
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
need more trades and less college for all!
Why, so you can be surrounded by out of work welders and plumbers who don't know how anything outside of their trade works? That's a great recipe for more shitty presidential picks, but it's definitely not going to improve anything.
Re: (Score:2)
At least they will not have a $250K to $500K student loan that they can't get rid of. But they can use Chapter 11 and Chapter 7 when their trade business goes under.
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Except auto repair, due to the
1) High cost of education and training (large investment every year),
2) Need to buy thousands of dollars in tools,
3) Flat-rate hours-per-repair-job book,
4) Effectively low pay, less than $25 per hour while the dealership charges $100+ per hour,
5) Working in unheated and uncooled garages,
6) Having to do recall work for nearly $0,
7) Cars becoming less reliable (plastic parts on the hot engine instead of metal parts),
8) Cars becoming much more complex.
Google "Why are auto mechanics leaving"
Re: (Score:2)
I have a lot of mechanics in my family and I agree with everything except #7. Cars are getting more reliable despite the cheap plastic parts. The cheap plastic parts just make things difficult for the mechanics.
The problem with being an auto mechanic is that during your younger years you are constantly learning, and with flat rate pay that can be costly. Then you have to basically take out a second mortgage to buy all your tools. In your late 20s and in your 30s—when you have the tools, know-how, and
Re: (Score:2)
There seem to be two fundamental problems: 1. Late stage capitalism has turned the colleges into cookie cutters instead of institutions of learning and education. 2. LLMs and AI are changing the face of human knowledge and education. Pumping facts in and out of malleable brains was great for a long time, but it won't serve the next generation. We need a completely different philosophy and it is up to the colleges to lead the way. Unfortunately, refer to point 1.
You’re worried about LLMs and AI changing the face of education? Rest assured college viability will be the least of our concerns in a society suffering violently with a 25% unemployment rate. AI is changing the face of human employment. Also known as that thing that sustains human life until we create an answer for the unemployable human problem. Which Greed won’t. Best Greed will do is pretend UBI isn’t Welfare redefined, while probably killing off a billion or two in the process.
Re: (Score:2)
We're sliding into the time where knowledge for knowledge's own sake is no longer going to be of value. Knowledge will become an anchor around your neck, because it will become malleable to the larger society. Whatever the machines say is the truth today, and if you aren't on the correct pipeline of information, you won't be "up to date." I remember reading about such societies in sci-fi books as a kid and thinking what an amazing way to live that would be. All knowledge, at your fingertips, or directly in
Is unrelible AI, unreliable? (Score:2, Interesting)
Since AI can only produce text that has already been used, written or compiled, the chances it can detect original work are so low as to be meaningless. What is it using
Re: (Score:2)
I don't know if that's a good book or no
Re: (Score:2)
It will hurt colleges, not just students (Score:5, Insightful)
My opinion is that universities, university faculty members, and society as a whole are drastically underestimating the long-term effects of generative AI. Arguably it will completely undermine the value of many college degrees within a decade.
Any university instructor can tell you that students will tend to fall into three categories:
(1) The ones who will cheat at every opportunity, no matter what others do.
(2) The ones who are scrupulously honest regardless of what others do.
(3) The ones who will not cheat as long as they perceive a fair playing field, but will resort to cheating if they see other students flagrantly doing it who are not being caught.
Most faculty put their effort into dealing with the students in category (1), but generative AI has thrown a huge wrench in the works. ChatGPT is already a much better writer than 95% of college students (or professionals, for that matter), and it is getting better all the time. There's no end to the arms race between cheating and cheating detection, and more and more students in category (3) will resort to using ChatGPT because they will know that their peers are using it and not being caught.
We are heading towards a world where almost all students will begin using generative AI to write their papers and do their homework starting in junior high school. Cheating levels in many subjects will approach 100%, especially at exclusive private schools where parents will turn a blind eye to what is happening so long as their children get a step up on the competition. Then those same students will go to college, keep right on cheating, and graduate with highest honors while only functioning at a 6th to 8th grade intellectual level.
At that point two things will happen. First, employers will realize that almost all students from University XX who majored in YY are functionally illiterate without the help of generative AI, and will cease hiring them. Second, those same employers will realize the generative AI itself is actually all they need; the college graduates themselves have become redundant. And at that point everyone will realize that it is pointless to go to an expensive private college and incur $250K to $500K in debt to earn a liberal arts degree that is literally not worth the paper it's written on.
STEM-based programs won't be impacted quite as badly by the trend (at least in the short term), but classic liberal arts is doomed. Generative AI will be a better writer, researcher, and teacher than any human could possibly hope to be. (Try asking the paid version of ChatGPT if it can help your children learn how to read if you're not convinced.)
It will be a very different world for most universities a decade from now unless academic programs are completely redesigned from top to bottom, and I doubt most faculty will be willing or able to adapt quickly enough. A great many small private schools will find themselves closing their doors, and many top-ranked elite schools will find themselves hit hardest.
Re: (Score:1)
My opinion is that universities, university faculty members, and society as a whole are drastically underestimating the long-term effects of generative AI. Arguably it will completely undermine the value of many college degrees within a decade.
Degrees have already been undermined by the colleges themselves, by passing students who would have failed in the past. They do this for the revenue. Additionally, they take on a lot of foreign full-fee-paying students who would never pass the normal admissions process in Western countries, again for the revenue.
First, employers will realize that almost all students from University XX who majored in YY are functionally illiterate without the help of generative AI, and will cease hiring them. Second, those same employers will realize the generative AI itself is actually all they need; the college graduates themselves have become redundant. And at that point everyone will realize that it is pointless to go to an expensive private college and incur $250K to $500K in debt to earn a liberal arts degree that is literally not worth the paper it's written on.
Many employers looking for real talent based on merit don't look at degrees anymore, for the first reason above: they know many students should never have been allowed into college/university.
Most people who do liber
Re: (Score:2)
Liberal arts is supposed to teach the skills (arts) of citizens that are necessary for a thriving democratic society (liberal). It goes back to ancient Greek democracies. The original seven liberal arts were Grammar, Logic, & Rhetoric; and Arithmetic, Geometry, Astronomy, & Music. The usefulness of those is mostly self-evident, but back then astronomy was useful for keeping time/calendars and fo
Re: (Score:2)
Generative AI will be a better writer, researcher, and teacher than any human could possibly hope to be.
Based on what? We're definitely into the levelling off phase of LLMs, after the initial massive performance boosts. Improvements are now more incremental and more hard won than they were.
Half of writing is the mechanics of writing; the other half is having something to say. Gen AI falls completely short on the latter, which is why it produces this samey slop all the time.
And researchers need to be
Re: (Score:2)
What happens when every one of your arguments against the “dumb” AI, becomes extinct?
We hire good enough humans today. They will be replaced by good enough AI in the future. Anyone assuming we will wait for some kind of “perfect” AI to prove itself, doesn’t grasp the reality of good enough.
Self-driving cars were certainly good enough at driving to become legal. Let’s not assume AI needs to be anywhere close to smarter than us. It only has to be chea
Re: (Score:2)
What happens is that GenAI is a better writer than something like 90% of all people. Turns out these 90% are really crappy writers. Of course, if some thinking is required, that number drops to something like 80% as that is about the number of people who are really crappy thinkers.
Re: (Score:2)
A great many small private schools will find themselves closing their doors, and many top-ranked elite schools will find themselves hit hardest.
Totally agree with your concerns, but we should remember one fact with those we assume will be hit hardest. Many elite schools can survive the education apocalypse because they’re sitting on massive endowments that are heavily invested. If they go under, we probably have much larger problems in society than attending college.
I suspect as the education apocalypse comes down on higher education, the only colleges left standing will be those who survived financially because of a metric fuckton of endow
Re: (Score:2)
1) The purpose of student assignments and homework is to practice thought patterns and facts. If the students don't want to do them by themselves, why force them? We will eventually go back to the historical practice of elite universities: lecture the students for 3 years, and at the end do one (1) in-person oral questioning exam covering everything. If they fail, they fail. If they succeed, they succeed. Meanwhile, the students are free to do as much or as little as they like. Some of th
Re: (Score:2)
We had something a bit like (1) in most courses after the first two years (5-year degree, awarding a CS masters-equivalent, although no fixed time was stated): there were exercises, and there sometimes were solutions or result discussions, but exercises went completely ungraded and there was one exam at the end of the course. Doing that on a larger scale would certainly weed out those who do not get what the whole experience is about.
I do agree about the prompt engineers and the "prompt engineer herders". They
Re: (Score:2)
Well, I do IT security teaching academically, and we are throwing AI out of one course where it was allowed in exams for a year. Most other courses I teach are my responsibility, and exams are on paper. Period. This one was not, and I sort of expected that doing "open Internet" exams would stop working in the age of AI. Turns out you get too many of type (1) and (2) not learning anything and still passing exams. And that is not good at all.
Less than 1% error rate (Score:2)
Re:Less than 1% error rate (Score:4, Insightful)
And that sentence was all I needed to read to know that this post wasn't written by an AI.
Re: (Score:3)
Based on there numbers about 3% is flagged as AI generated. And that sentence was all I needed to read to know that this post wasn't written by an AI.
On /., they're many was for someone to prove there posts were posted their by a human.
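The error-rate numbers in this subthread are worth putting in perspective. Here is a minimal base-rate sketch, using illustrative assumptions (a 1% false-positive rate as in the post title, the 39.5% detection accuracy Perkins reported, and a hypothetical cohort where 10% of submissions actually used AI) to show how many honest students still get flagged at scale:

```python
# Base-rate arithmetic for an AI detector. All numbers are assumptions
# for illustration, not measurements from any specific tool.
cohort = 10_000
ai_rate = 0.10   # assumed fraction of submissions actually written with AI
fpr = 0.01       # assumed false-positive rate on human-written work
tpr = 0.395      # detection accuracy on AI text (Perkins' reported figure)

ai_subs = cohort * ai_rate
human_subs = cohort - ai_subs

false_flags = human_subs * fpr   # innocent students flagged
true_flags = ai_subs * tpr       # actual AI users caught

# Of everyone flagged, what fraction is a false accusation?
innocent_share = false_flags / (false_flags + true_flags)
print(f"flagged innocent: {false_flags:.0f}, flagged guilty: {true_flags:.0f}")
print(f"share of flags that are false accusations: {innocent_share:.1%}")
```

Under these assumptions, 90 of the 485 flagged submissions (roughly one in five) are honest work, even with a detector whose error rate sounds reassuringly small.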
Only a problem if you make it one (Score:2)
The Guardian notes one teacher's idea of more one-to-one teaching and live lectures — though he added an obvious flaw:
"But that would mean hiring staff, or reducing student numbers." The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.
Believe it or not there are lots of pedagogical approaches besides "assign and grade homework assignments."
And without getting into the many alternatives, it's pretty easy to solve this problem just by making (a) all work optional, graded only for those who want feedback, and (b) all evaluation conducted by end-of-term -- or end-of-degree -- in-person testing. In that case there would be zero incentive or benefit to using generative AI for assignments (other than as a study aid).
The reason that would not be exc
But that's discriminatory (Score:2)
This will be trotted out - and there's some evidence that females don't do as well as males in written exams. Which is one of the reasons home-written essays arrived...
Re: (Score:2)
Oh? I have not noticed that at all. Of course, I teach IT Security with a focus on insight, so that difference may not be present there.
Here's an example (Score:2)
But it's something I've heard generally
https://www.sciencedirect.com/... [sciencedirect.com]
Re: (Score:2)
The reason that would not be exceptionally popular is because universities, rather than being forums for the great ideas of the world to be pursued by the adept, are now a place to shove listless young adults, stamp them with the correct views, and push them out with a coming of age certificate. Impediments due to lack of ability, dedication, or character are considered problems for the university to deal with, usually by finding a way to make those issues less relevant to the final outcome.
Sad but true. In a very real sense, many universities are not universities anymore.
A transition period (Score:2)
From one of the links;
"ChatGPT generated exam answers, submitted for several undergraduate psychology modules, went undetected in 94% of cases and, on average, attained higher grades than real student submissions."
I suspect we all will have to get used to the idea that the machines already do better than we can at certain kinds of things that used to be exclusively in the realm of human beings. They can already offload some of our thinking for us. I sniff that they are getting into our intellectual space an
Re: (Score:2)
Undergrad modules in the US are typically pretty shallow and generic. Do not read too much into these results.
Re: (Score:2)
I expect they are shallow, but then the humans should have handled them more easily as well.
Give oral and practical exams for what matter (Score:3)
Aircraft mechanics are required to perform those to standard, because no amount of regurgitating text can substitute for a physical performance test.
Diploma mills love electronic testing and computer courses, but they don't show performance.
One sided power and lack of consequences when wron (Score:1)
Re: One sided power and lack of consequences when (Score:1)
Proctored paper writing (Score:2)
The answer to this is simple and has been around for decades.
Prof gives a topic. You are instructed to show up able to write a paper on that topic without notes.
Papers are then collected and graded.
Simplest is easiest.
Nah - that just benefits the good memory (Score:2)
Get ChatGPT to write your essay. Learn it by heart. Trot it out.
Re: (Score:2)
It's amazing the sheer level of effort some people will pour into not learning something. But most people aren't good rote memorizers anyway.
Re: (Score:2)
That only works if you know the exact topic beforehand. And, incidentally, that is obvious. Guess you do not qualify.
Short-sighted ignorance and hypocrisy. (Score:2)
AI is used/abused to write a paper. AI is then used as a detection tool by the “teacher” to find those who are cheating at the task at hand? Who’s bullshitting who here? Sounds like no one actually knows what the fuck they’re doing without the help of AI. Should we use AI to start finding teachers cheating at their job too? Or should we start re-defining what educating a human means, since AI isn’t going anywhere but to the workplace to gobble jobs?
We’re worried abou
More like stupid "educators" are harmful... (Score:2)
If you want to prevent AI use, then prevent AI use. Yes, that means in-class work and exams. For homework, you cannot assure or detect anything reliably, deal with it.
U-shaped exams (Score:2)
I tell my students they are free to use AI, but they need to understand what it suggests and why. Learn from it. Also, catch it, when it screws up.
The effect I am noticing is that the students who would pass anyway are passing with better grades. Meanwhile, those who would have failed before AI now fail dramatically. I've had exams with U-shaped distributions: only top or flop, with almost no grades in the middle.
Maybe that's actually a good thing?
There's a reason why we have exams... (Score:2)
Cheating has always been a big thing with assignments and essays. Students can cheat off each other. They can find assignments from previous years...
What is new is that 'newer' teaching methods want to de-emphasize exams. There are reasons for this, in that some students may not show their best during exams. In reality, though, a proctored exam is simply unmatched in terms of grading a student.
Even things like essays can be done in proctored exams. I remember taking a philosophy and history elective when I went to u