Are AI-Powered Tools - and Cheating-Detection Tools - Hurting College Students? (theguardian.com) 89

A 19-year-old wrongfully accused of using AI told the Guardian's reporter that "to be accused of it because of 'signpost phrases', such as 'in addition to' and 'in contrast', felt very demeaning." And another student "told me they had been pulled into a misconduct hearing — despite having a low score on Turnitin's AI detection tool — after a tutor was convinced the student had used ChatGPT, because some of his points had been structured in a list, which the chatbot has a tendency to do." Dr Mike Perkins, a generative AI researcher at British University Vietnam, believes there are "significant limitations" to AI detection software. "All the research says time and time again that these tools are unreliable," he told me. "And they are very easily tricked." His own investigation found that AI detectors could detect AI text with an accuracy of 39.5%. Following simple evasion techniques — such as minor manipulation to the text — the accuracy dropped to just 22.1%. As Perkins points out, those who do decide to cheat don't simply cut and paste text from ChatGPT; they edit it, or mould it into their own work. There are also AI "humanisers", such as CopyGenius and StealthGPT, the latter of which boasts that it can produce undetectable content and claims to have helped half a million students produce nearly 5m papers...

Many academics seem to believe that "you can always tell" if an assignment was written by an AI, that they can pick up on the stylistic traits associated with these tools. Evidence is mounting to suggest they may be overestimating their ability. Researchers at the University of Reading recently conducted a blind test in which ChatGPT-written answers were submitted through the university's own examination system: 94% of the AI submissions went undetected and received higher scores than those submitted by the humans...

Many universities are already adapting their approach to assessment, penning "AI-positive" policies. At Cambridge University, for example, appropriate use of generative AI includes using it for an "overview of new concepts", "as a collaborative coach", or "supporting time management". The university warns against over-reliance on these tools, which could limit a student's ability to develop critical thinking skills. Some lecturers I spoke to said they felt that this sort of approach was helpful, but others said it was capitulating. One conveyed frustration that her university didn't seem to be taking academic misconduct seriously any more; she had received a "whispered warning" that she was no longer to refer cases where AI was suspected to the central disciplinary board.

The Guardian notes one teacher's idea of more one-to-one teaching and live lectures — though he added an obvious flaw: "But that would mean hiring staff, or reducing student numbers." The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.
The article points out "More than half of students now use generative AI to help with their assessments, according to a survey by the Higher Education Policy Institute, and about 5% of students admit using it to cheat." This leads to a world where the anti-cheating software Turnitin "has processed more than 130m papers and says it has flagged 3.5m as being 80% AI-written. But it is also not 100% reliable; there have been widely reported cases of false positives and some universities have chosen to opt out. Turnitin says the rate of error is below 1%, but considering the size of the student population, it is no wonder that many have found themselves in the line of fire." There is also evidence that suggests AI detection tools disadvantage certain demographics. One study at Stanford found that a number of AI detectors have a bias towards non-English speakers, flagging their work 61% of the time, as opposed to 5% of native English speakers (Turnitin was not part of this particular study). Last month, Bloomberg Businessweek reported the case of a student with autism spectrum disorder whose work had been falsely flagged by a detection tool as being written by AI. She described being accused of cheating as like a "punch in the gut". Neurodivergent students, as well as those who write using simpler language and syntax, appear to be disproportionately affected by these systems.
Thanks to Slashdot reader Bruce66423 for sharing the article.


Comments Filter:
  • by Harvey Manfrenjenson ( 1610637 ) on Sunday December 15, 2024 @04:39PM (#65015437)

    For as long as we've had universities, some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them. It's only very recently that we developed computerized "anti-plagiarism tools" to try to catch the students. So historically, a certain number of students have always gotten away with it.

    Using ChatGPT to write your paper is not really different from copying paragraphs out of an encyclopedia. It should be treated the same way (extremely seriously, with penalties up to and including expulsion). And just like there is no "magic bullet" to detect ordinary plagiarism or ghostwriting, there is no magic bullet to detect this kind of cheating. Why should we expect there to be one?

    • by thegarbz ( 1787294 ) on Sunday December 15, 2024 @07:52PM (#65015717)

      The problem of cheating isn't new; the problem of non-cheaters being accused of cheating, however, is. At no point in our history have universities used automated systems to *falsely* accuse students of cheating. Automated false positives in detection systems are new.

      • One concerning problem is that AI is and will be used to grade human work, and often you will not even know that you are being graded, how the work is being graded, or why you got poor job performance ratings (due to AI grading used in performance feedback)

        Think of AI based code quality ratings, AI based code reviews, and that feeding into some job performance spreadsheet.

        Or school administered standardized tests

        https://www.aacrao.org/edge/em... [aacrao.org]

        Texas Education Agency using AI to grade parts of STAAR tests

      • Automated false positives in detection systems may be new to university teachers outside the fields that actually do deal with this, but it certainly isn't news to anyone dealing with statistical algorithms.

        • Correct. Completely off topic since we're talking specifically about *checks notes* "hurting college students", but otherwise correct.

        The problem of cheating isn't new; the problem of non-cheaters being accused of cheating, however, is. At no point in our history have universities used automated systems to *falsely* accuse students of cheating. Automated false positives in detection systems are new.

        If an educator was accused of “cheating” by using AI to “read” the paper instead of them using their own (highly educated expert?) brain to simply determine if a paper is acceptable, what would be the accepted response? Would they even be offended if that were *not* true?

        Yes. To your point I’d say we have a whole new set of problems. Starting with whatever the hell we’re defining as integrity these days.

        • The educator isn't the one being taught here, nor are they being graded on their ability to read assignments. Many universities actually do solicit student feedback on their educators, but it is usually about how materials were taught, not what software was used during the grading of assignments.

          AI is a tool. It should be able to be used where it is capable of doing a job and where it doesn't conflict with the goal of the job itself. It is this latter concept that you're missing when complaining

    • by lsllll ( 830002 )

      some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them.

      This reminded me that I used to get paid, in beer, in 1987 to type people's papers into a word processor (Word Perfect back then). The kids didn't really know how to use a computer, let alone the intricacies of using a word processor. But the main reason they had me do it for them was because I was familiar with all the different methods of elongating their paper: margins, font size, line spacing, paragraph spacing, and so on. The awesome thing was that my price was one beer per page, so when they asked

    • > And just like there is no "magic bullet" to detect ordinary plagiarism

      Not plagiarism, but there IS a GOOD way to prove your innocence, i.e. that it is an original work -- provide video evidence of you completing 100% of the assignment from start to finish.

      A good word processor has the ability to "undo" after saving, so it is relatively easy to "unwind time", showing HOW the document changed over time.

      Also, it isn't THAT hard to just TALK to the student about a subject and gauge their lev

      • Colleges and universities want to spend the bare minimum detecting cheaters. Unfortunately that means there WILL be some false positives but they don't seem to recognize this.

        They also assume they can sustain the problem of false positives indefinitely.

        Those aren’t just students being wrongly accused. Those are paying customers. Good luck finding enough of them when your institution becomes infamous for false accusations by comparison. Plenty of places to get an education without having to be abused.

      • by gweihir ( 88907 )

        Colleges and universities want to spend the bare minimum detecting cheaters. Unfortunately that means there WILL be some false positives but they don't seem to recognize this.

        Looks like these people are neither smart nor particularly educated. And then you remember that the "Peter Principle" was found in education organizations...

    • For as long as we've had universities, some students have cheated on their written work, either by plagiarizing other authors or by paying someone to write it for them. It's only very recently that we developed computerized "anti-plagiarism tools" to try to catch the students. So historically, a certain number of students have always gotten away with it.

      Using ChatGPT to write your paper is not really different from copying paragraphs out of an encyclopedia. It should be treated the same way (extremely seriously, with penalties up to and including expulsion). And just like there is no "magic bullet" to detect ordinary plagiarism or ghostwriting, there is no magic bullet to detect this kind of cheating. Why should we expect there to be one?

      We wrote a tool to detect cheating by insiders trading information illegally. We then highlighted the fact that American lawmakers were (and still are) engaged in the worst of that behavior. What was the response from lawmakers? Dismissing cheating as a fucking job perk.

      The problem may be the same, but our society sure as shit isn’t. The hell do we ignorantly expect from teenage college students when American lawmakers behave like that?

      We used to remember the fucking point of leading by example.

      • We used to remember the fucking point of leading by example.

        Do you mean like when we sold fuel to the S.S.?

        Maybe you mean like when Prescott Bush ran a company whose entire purpose was to send money to Hitler.

  • by xack ( 5304745 ) on Sunday December 15, 2024 @04:44PM (#65015443)
    Students have already just looked up everything on Google and Wikipedia for 20+ years now. Do we want education, or just to teach people to write assignments in a "human" way? I feel that assignments should be gotten rid of, that only human-to-human interaction should be used for assessment, and that computers should be cut out entirely.
    • I think that the ability for students to look up things on the internet has detracted from teaching. Instead of being forced to teach the children stuff, teachers have offloaded that to a Google search, where students can get wildly varying quality of results. How are students meant to know if those results are good? They are just learning. To me, you need a lot of knowledge about a subject area before you can tell if someone is talking nonsense.

    • by thegarbz ( 1787294 ) on Sunday December 15, 2024 @07:56PM (#65015723)

      There is a massive difference between looking something up and expressing what it is you looked up. Plagiarism has always been considered a form of cheating. The issue here is that students are not plagiarising, but getting an automated system to do what they should be able to do.

      Any idiot can look something up; that's usually not what is being graded. Assessment ranges from the typical business school looking for the ability to justify the unjustifiable in a rational way to woo investors, to engineering assignments assessing people's ability to write a technical report with brevity while still covering all important concepts.

      In many cases the actual answer to a university assignment isn't interesting; everyone knows the answers will be looked up, or heck, sometimes they are even given in the assignment itself. It's about showing your working and thought process.

    • by AmiMoJo ( 196126 )

      This was a problem even when I was at school, pre-internet. We were taught the things that exam markers look for, the phrases and terminology to use. For fairness they have a marking guide, so of course you could play to that rather than to general knowledge of the subject.

    • I feel that assignments should be gotten rid of

      The result given by all of those who were never taught the purpose of the assignments to begin with. We don't care if you can look something up or not. We care about whether or not you can link the things you look up to new concepts and problems without being told they could relate to one another. In a way that makes logical sense. The assignments are meant to encourage developing that skill, but like everything else, they were degraded to the point that they were rendered unable to do so by those who woul

  • by Wolfling1 ( 1808594 ) on Sunday December 15, 2024 @04:45PM (#65015445) Journal
    There seem to be two fundamental problems:

    1. Late stage capitalism has turned the colleges into cookie cutters instead of institutions of learning and education.
    2. LLMs and AI are changing the face of human knowledge and education. Pumping facts in and out of malleable brains was great for a long time, but it won't serve the next generation. We need a completely different philosophy and it is up to the colleges to lead the way. Unfortunately, refer to point 1.
    • need more trades and less college for all!

      • by Rujiel ( 1632063 )
        Trade schools are great for oligarchs who want a worker base of replaceable automata
      • need more trades and less college for all!

        Why, so you can be surrounded by out of work welders and plumbers who don't know how anything outside of their trade works? That's a great recipe for more shitty presidential picks, but it's definitely not going to improve anything.

          • At least they will not have a $250K to $500K student loan that they can't get rid of. And they can use Chapter 11 and 7 when their trade business goes under.

          • And who's paying for that? You know it won't be the wall street clowns who handed out the loans.
          • by Rujiel ( 1632063 )
            Not pictured is the fact that in many cases the loan itself has ballooned far past its prior value.
      • by Anonymous Coward
        Better yet, they should make a trade university. Learn your trade and earn an AA at the same time.
      • by will4 ( 7250692 )

        Except auto repair, due to the:

        1) High cost of education and training (a large investment every year)
        2) Need to buy thousands of dollars in tools
        3) Flat-rate hours per repair job book
        4) Effectively low pay, less than $25 per hour while the dealership charges $100+ per hour
        5) Working in unheated and uncooled garages
        6) Having to do recall work for nearly $0
        7) Cars becoming less reliable (plastic parts on the hot engine instead of metal parts)
        8) Cars becoming much more complex

        Google "Why are auto mechanics leavin

        • I have a lot of mechanics in my family and I agree with everything except #7. Cars are getting more reliable despite the cheap plastic parts. The cheap plastic parts just make things difficult for the mechanics.

          The problem with being an auto mechanic is that during your younger years you are constantly learning, and with flat rate pay that can be costly. Then you have to basically take out a second mortgage to buy all your tools. In your late 20s and in your 30s—when you have the tools, know-how, and

    • There seem to be two fundamental problems: 1. Late stage capitalism has turned the colleges into cookie cutters instead of institutions of learning and education. 2. LLMs and AI are changing the face of human knowledge and education. Pumping facts in and out of malleable brains was great for a long time, but it won't serve the next generation. We need a completely different philosophy and it is up to the colleges to lead the way. Unfortunately, refer to point 1.

      You’re worried about LLMs and AI changing the face of education? Rest assured college viability will be the least of our concerns in a society suffering violently with a 25% unemployment rate. AI is changing the face of human employment. Also known as that thing that sustains human life until we create an answer for the unemployable human problem. Which Greed won’t. Best Greed will do is pretend UBI isn’t Welfare redefined, while probably killing off a billion or two in the process.

    • We're sliding into the time where knowledge for knowledge's own sake is no longer going to be of value. Knowledge will become an anchor around your neck, because it will become malleable to the larger society. Whatever the machines say is the truth today, and if you aren't on the correct pipeline of information, you won't be "up to date." I remember reading about such societies in sci-fi books as a kid and thinking what an amazing way to live that would be. All knowledge, at your fingertips, or directly in

  • Any academic who places faith in AI detection or generation to produce anything of value worth detecting is either unqualified, incompetent, confused, or lying. AI can't critically think; you can't ask it a question such as “Please give me an honest opinion of Lord of the Flies” and expect to get an original analysis.

    Since AI can only produce text that has already been used, written, or compiled, the chance it can detect original work is so low as to be meaningless. What is it using
    • by jbengt ( 874751 )

      “One of the more boring, stupid, idiotic, and dull books ever written. To call it, a literary masterpiece would equate masterpieces with literal piles of shit, to which, I've made one this morning, which we should admire. That book should be removed from any required reading list because to enjoy it is, to have a mental disability, and at that point use that shit as finger paint.” That's pretty close to what I wrote; I got in trouble for hating the book.

      I don't know if that's a good book or no

      • That was the summary of what I wrote; the paper was a few pages, but the book is still terrible. The overall point is that academics don't want original thought, they want regurgitated thought. They don't want students using AI because it will spit out existing work, but that's what they want.
  • by timholman ( 71886 ) on Sunday December 15, 2024 @05:22PM (#65015509)

    My opinion is that universities, university faculty members, and society as a whole are drastically underestimating the long-term effects of generative AI. Arguably it will completely undermine the value of many college degrees within a decade.

    Any university instructor can tell you that students will tend to fall into three categories:

    (1) The ones who will cheat at every opportunity, no matter what others do.

    (2) The ones who are scrupulously honest regardless of what others do.

    (3) The ones who will not cheat as long as they perceive a fair playing field, but will resort to cheating if they see other students flagrantly doing it who are not being caught.

    Most faculty put their effort into dealing with the students in category (1), but generative AI has thrown a huge wrench in the works. ChatGPT is already a much better writer than 95% of college students (or professionals, for that matter), and it is getting better all the time. There's no end to the arms race between cheating and cheating detection, and more and more students in category (3) will resort to using ChatGPT because they will know that their peers are using it and not being caught.

    We are heading towards a world where almost all students will begin using generative AI to write their papers and do their homework starting in junior high school. Cheating levels in many subjects will approach 100%, especially at exclusive private schools where parents will turn a blind eye to what is happening so long as their children get a step up on the competition. Then those same students will go to college, keep right on cheating, and graduate with highest honors while only functioning at a 6th to 8th grade intellectual level.

    At that point two things will happen. First, employers will realize that almost all students from University XX who majored in YY are functionally illiterate without the help of generative AI, and will cease hiring them. Second, those same employers will realize the generative AI itself is actually all they need; the college graduates themselves have become redundant. And at that point everyone will realize that it is pointless to go to an expensive private college and incur $250K to $500K in debt to earn a liberal arts degree that is literally not worth the paper it's written on.

    STEM-based programs won't be impacted quite as badly by the trend (at least in the short term), but classic liberal arts is doomed. Generative AI will be a better writer, researcher, and teacher than any human could possibly hope to be. (Try asking the paid version of ChatGPT if it can help your children learn how to read if you're not convinced.)

    It will be a very different world for most universities a decade from now unless academic programs are completely redesigned from top to bottom, and I doubt most faculty will be willing or able to adapt quickly enough. A great many small private schools will find themselves closing their doors, and many top-ranked elite schools will find themselves hit hardest.

    • by Anonymous Coward

      My opinion is that universities, university faculty members, and society as a whole are drastically underestimating the long-term effects of generative AI. Arguably it will completely undermine the value of many college degrees within a decade.

      Degrees have already been undermined by the colleges themselves, by passing students who would have failed in the past. They do this for the revenue. Additionally, they take on a lot of foreign full-fee-paying students who would never pass the normal admissions process in western countries, again for the revenue.

      First, employers will realize that almost all students from University XX who majored in YY are functionally illiterate without the help of generative AI, and will cease hiring them. Second, those same employers will realize the generative AI itself is actually all they need; the college graduates themselves have become redundant. And at that point everyone will realise that it is pointless to go to an expensive private college and incur $250K to $500K in debt to earn a liberal arts degree that is literally not worth the paper it's written on.

      Many employers looking for real talent based on merit don't look at degrees anymore, for the first reason above: they know many students should never have been allowed into College/University.

      Most people who do liber

      • by jbengt ( 874751 )

        Most people who do liberal arts end up doing menial jobs or are doing the only course they can pass

        Liberal arts is supposed to teach the skills (arts) of citizens that are necessary for a thriving democratic society (liberal). It goes back to ancient Greek democracies. The original seven liberal arts were Grammar, Logic, & Rhetoric; and Arithmetic, Geometry, Astronomy, & Music. The usefulness of those is mostly self-evident, but back then astronomy was useful for keeping time/calendars and fo

    • Generative AI will be a better writer, researcher, and teacher than any human could possibly hope to be.

      Based on what? We're definitely into the levelling off phase of LLMs, after the initial massive performance boosts. Improvements are now more incremental and more hard won than they were.

      Half of writing is the mechanics of writing, the other half is having something to say. Gen AI falls completely short on the latter which is why it produces this samey slop all the time.

      And researchers need to be

      • What happens when every one of your arguments against the “dumb” AI, becomes extinct?

        We hire good enough humans today. They will be replaced by good enough AI in the future. Anyone assuming we will wait for some kind of “perfect” AI to prove itself, doesn’t grasp the reality of good enough.

        Autonomous driving cars were certainly good enough at driving to become legal. Let’s not assume about AI needing to even be anywhere close to smarter than us. It only has to be chea

      • by gweihir ( 88907 )

        What happens is that GenAI is a better writer than something like 90% of all people. Turns out these 90% are really crappy writers. Of course, if some thinking is required, that number drops to something like 80% as that is about the number of people who are really crappy thinkers.

    • A great many small private schools will find themselves closing their doors, and many top-ranked elite schools will find themselves hit hardest.

      Totally agree with your concerns, but we should remember one fact with those we assume will be hit hardest. Many elite schools can survive the education apocalypse because they’re sitting on massive endowments that are heavily invested. If they go under, we probably have much larger problems in society than attending college.

      I suspect as the education apocalypse comes down on higher education, the only colleges left standing will be those who survived financially because of a metric fuckton of endow

    • You're not wrong.

      1) The purpose of student assignments and homework is to practice thought patterns and facts. If the students don't want to do them by themselves, why force them? We will eventually go back to the historical practice of elite universities: lecture the students for 3 years, and at the end do one (1) in-person oral questioning exam covering everything. If they fail, they fail. If they succeed, they succeed. Meanwhile, the students are free to do as much or as little as they like. Some of th

      • by gweihir ( 88907 )

        We had something a bit like (1) in most courses after the first two years (5-year degree, awarding a CS masters-equivalent, although no fixed time was stated): there were exercises, there sometimes were solutions or result discussions, but exercises went completely ungraded and there was one exam at the end of the course. Doing that on a larger scale would certainly weed out those who do not get what the whole experience is about.

        I do agree about the prompt engineers and the "prompt engineer herders". They

    • by gweihir ( 88907 )

      Well, I do IT security teaching academically, and we are throwing out AI in one course where it was allowed in exams for a year. Most others I teach are in my responsibility and exams are on paper. Period. This one was not, and I sort of expected that doing "open Internet" exams stops working in the age of AI. Turns out you get too many of type (1) and (2) not learning anything and still passing exams. And that is not good at all.

  • That doesn't mean that if they flag a paper there is a 99% chance it is AI-generated. Based on their numbers, about 3% of papers are flagged as AI-generated. With a 1% error rate, a flagged paper has about a 1 in 3 chance of being a false positive; but saying 1% makes it seem like "if you are flagged, you are guilty."
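    The base-rate arithmetic behind that comment can be sketched quickly. The figures are the ones quoted in the article (roughly 130m papers processed, 3.5m flagged, a claimed error rate below 1%, read here as a false-positive rate); the variable names and the reading of "error rate" are assumptions, not Turnitin's own accounting:

```python
# Base-rate sketch: why a "1% error rate" does not mean a flagged paper
# is 99% likely to be AI-written.
papers = 130_000_000        # papers processed (article's figure)
flagged = 3_500_000         # flagged as >= 80% AI-written (article's figure)
false_positive_rate = 0.01  # claimed error rate, assumed to be a false-positive rate

flag_rate = flagged / papers                     # share of all papers flagged
false_positives = papers * false_positive_rate   # honest papers wrongly flagged
share_of_flags_false = false_positives / flagged # chance a given flag is wrong

print(f"{flag_rate:.1%} of papers flagged")                   # ~2.7%
print(f"up to {share_of_flags_false:.0%} of flags could be false positives")  # ~37%, about 1 in 3
```

Under these assumptions about 1.3m of the 3.5m flags could be false positives, which is where the "about 1 in 3" figure comes from.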
  • The Guardian notes one teacher's idea of more one-to-one teaching and live lectures — though he added an obvious flaw:
    "But that would mean hiring staff, or reducing student numbers." The pressures on his department are such, he says, that even lecturers have admitted using ChatGPT to dash out seminar and tutorial plans. No wonder students are at it, too.

    Believe it or not there are lots of pedagogical approaches besides "assign and grade homework assignments."

    And without getting into the many alternatives, it's pretty easy to solve this problem just by (a) making all work optional, graded only for those who want feedback, and (b) conducting all evaluation via end-of-term -- or end-of-degree -- in-person testing. In that case there would be zero incentive or benefit to use generative AI for assignments (other than as a study aid).

    The reason that would not be exc

    • This will be trotted out, and there's some evidence that female students don't do as well as male students in written exams, which is one of the reasons home-written essays arrived....

    • by gweihir ( 88907 )

      The reason that would not be exceptionally popular is because universities, rather than being forums for the great ideas of the world to be pursued by the adept, are now a place to shove listless young adults, stamp them with the correct views, and push them out with a coming of age certificate. Impediments due to lack of ability, dedication, or character are considered problems for the university to deal with, usually by finding a way to make those issues less relevant to the final outcome.

      Sad but true. In a very real sense, many universities are not universities anymore.

  • From one of the links:
    "ChatGPT generated exam answers, submitted for several undergraduate psychology modules, went undetected in 94% of cases and, on average, attained higher grades than real student submissions."

    I suspect we all will have to get used to the idea that the machines already do better than we can at certain kinds of things that used to be exclusively in the realm of human beings. They can already offload some of our thinking for us. I sniff that they are getting into our intellectual space an

    • by gweihir ( 88907 )

      Undergrad modules in the US are typically pretty shallow and generic. Do not read too much into these results.

  • by couchslug ( 175151 ) on Sunday December 15, 2024 @06:36PM (#65015623)

    Aircraft mechanics are required to perform practical tasks to standard, because no amount of regurgitating text substitutes for a physical performance test.

    Diploma mills love electronic testing and computer courses, but they don't show performance.

  • Much like when a New York Times reporter gets the front page story completely wrong and the retraction, if there is one, is on page 19. There are no consequences until the system collapses and everyone looks around covered in ash and epiphanically utters... "wait, there was a fire?". Because the stakes for baseless accusations are extremely low and the review process so onerous for the accused, you end up with a one-sided grinder. Now if the TA had to raise their concerns with the profe
    • That said, having ChatGPT write a student's homework is on the rise, and learning standards at universities continue to decline. It's so bad that the last time I taught college, in 2014, 3 of the 14 students in the class couldn't write a coherent paper if their lives depended on it.... I wasn't the English professor so I didn't grade them on their communication skills.
  • The answer to this is simple and has been around for decades.

    Prof gives a topic. You are instructed to show up able to write a paper on that topic without notes.

    Papers are then collected and graded.

    Simplest is easiest.

  • AI is used/abused to write a paper. AI is then used as a detection tool by the “teacher” to find those who are cheating at the task at hand? Who’s bullshitting who here? Sounds like no one actually knows what the fuck they’re doing without the help of AI. Should we use AI to start finding teachers cheating at their job too? Or should we start re-defining what educating a human means, since AI isn’t going anywhere but to the workplace to gobble jobs?

    We’re worried abou

  • If you want to prevent AI use, then prevent AI use. Yes, that means in-class work and exams. For homework, you cannot assure or detect anything reliably, deal with it.

  • I tell my students they are free to use AI, but they need to understand what it suggests and why. Learn from it. Also, catch it, when it screws up.

    The effect I am noticing is that the students who would pass anyway are passing with better grades. Meanwhile, those who would have failed before AI now fail dramatically. I've had exams with U-shaped distributions: only top or flop, with almost no grades in the middle.

    Maybe that's actually a good thing?

  • Cheating has always been a big thing with assignments and essays. Students can cheat off each other. They can find assignments from previous years...

    What is new is 'newer' teaching methods want to de-emphasize exams. There are reasons for this in that some students may not show their best during exams. In reality though, a proctored exam is simply unmatched in terms of grading a student.

    Even things like essays can be done in proctored exams. I remember taking philosophy and history elective when I went to u
