AI Education

Cheating Fears Over Chatbots Were Overblown, New Research Suggests (nytimes.com) 55

Natasha Singer reports via The New York Times: According to new research from Stanford University, the popularization of A.I. chatbots has not boosted overall cheating rates in schools (Warning: source may be paywalled; alternative source). In surveys this year of more than 40 U.S. high schools, some 60 to 70 percent of students said they had recently engaged in cheating -- about the same percent as in previous years, Stanford education researchers said. "There was a panic that these A.I. models will allow a whole new way of doing something that could be construed as cheating," said Denise Pope, a senior lecturer at Stanford Graduate School of Education who has surveyed high school students for more than a decade through an education nonprofit she co-founded. But "we're just not seeing the change in the data."

ChatGPT, developed by OpenAI in San Francisco, began to capture the public imagination late last year with its ability to fabricate human-sounding essays and emails. Almost immediately, classroom technology boosters started promising that A.I. tools like ChatGPT would revolutionize education. And critics began warning that such tools -- which liberally make stuff up -- would enable widespread cheating, and amplify misinformation, in schools. Now the Stanford research, along with a recent report from the Pew Research Center, is challenging the notion that A.I. chatbots are upending public schools.

This discussion has been archived. No new comments can be posted.

  • by SmaryJerry ( 2759091 ) on Thursday December 14, 2023 @07:46PM (#64082715)
    When the cheating rate is 100%, it can't go up any more.
    • Re:It's peaked! (Score:4, Insightful)

      by ShanghaiBill ( 739463 ) on Thursday December 14, 2023 @07:53PM (#64082725)

      When the cheating rate is 100%, it can't go up any more.

      The percentage in TFA is the number of students who cheated at least once. Although the number who cheat can't go over 100%, the frequency of their cheating can still increase.

      Disclaimer: I only cheated in English classes.

      • When the cheating rate is 100%, it can't go up any more.

        The percentage in TFA is the number of students who cheated at least once.

        Once, eh?

        Dunno why this translated in my mind to "I cheated, but I didn't inhale." Still gave me a laugh.

      • by DarkOx ( 621550 )

        I was going to make the same comment. The sort of person who decides to cheat once is probably a lot more likely to cheat again the next time motive and opportunity present themselves, while the person who refuses to cheat will probably resist the temptation the next time too, or will at least do so at a higher rate than the general population.

        https://www.onlineeducation.co... [onlineeducation.com] -> we see lots of evidence that students are making heavy use of LLMs. Which is not to say their use of them is cheating, but we have to assume they are havin

  • Cheating rate (Score:5, Insightful)

    by Calydor ( 739835 ) on Thursday December 14, 2023 @07:51PM (#64082723)

    I'm sorry, but if 70% of the students engage in cheating you have a serious problem.

    And what about the kind of cheating? There's a difference between sneaking in a cheat sheet of formulas but still having to do the math, and just asking a robot what the answer is.

    • Re:Cheating rate (Score:4, Interesting)

      by ShanghaiBill ( 739463 ) on Thursday December 14, 2023 @08:00PM (#64082735)

      I'm sorry, but if 70% of the students engage in cheating you have a serious problem.

      When I hire a coder, I'd much rather hire someone who knows how to use Google and cut-n-paste, and is done in an hour, than someone who takes a week to write an original solution.

      In academia, when you cheat, you get an F.

      In the real world, when you cheat, you get a raise.

      • Re:Cheating rate (Score:4, Insightful)

        by Firethorn ( 177587 ) on Thursday December 14, 2023 @08:03PM (#64082741) Homepage Journal

        In academia, when you cheat, you get an F.

        Actually, you get an F when you're caught cheating, which makes it a lot more like "the real world", where getting caught can cost you your job, but getting away with it gets you your raise.

        • In academia, when you cheat, you get an F.

          Actually, you get an F when you're caught cheating, which makes it a lot more like "the real world", where getting caught can cost you your job, but getting away with it gets you your raise.

          I believe the high point of my academic career in this regard was in a particularly painful graduate engineering class where in one assignment we had to formulate a system of differential equations across several variables. Written out by hand it was about 10-15 pages of math, and we were working in small groups. After formulating the system and starting to solve 4 pages in, two of the group left. Just me and one guy went on to about page 8 but by this time it was around 11pm. I told him I was done, us

          • This reminds me of some of the stuff I encountered in college: The awareness that if you ever did this stuff out "in the real world", you would not only be allowed to use said calculation tools, they'd insist that you use them. Probably much more extensive tools at that.

            As for the sign, in calculus class we used to joke about needing "Remedial Integer Addition" and how the more math we learned, the less capable of the basics we were.

            You didn't cheat. The other student just didn't fully understand the req
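
            A minimal sketch of what those calculation tools look like in practice (the two-equation system below is made up, not the parent's actual assignment); a computer algebra system such as SymPy can set up and solve a small coupled linear ODE system symbolically:

            ```python
            # Hypothetical example: a tiny coupled linear ODE system handed to SymPy,
            # the kind of symbolic grunt work a computer algebra system does for you.
            import sympy as sp

            t = sp.symbols('t')
            x, y = sp.Function('x'), sp.Function('y')

            # x'(t) = 3*x + 4*y,  y'(t) = -4*x + 3*y  (made-up coefficients)
            eqs = [
                sp.Eq(x(t).diff(t), 3 * x(t) + 4 * y(t)),
                sp.Eq(y(t).diff(t), -4 * x(t) + 3 * y(t)),
            ]

            for solution in sp.dsolve(eqs):
                print(solution)  # general solutions containing constants C1 and C2
            ```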

      • Re:Cheating rate (Score:5, Insightful)

        by iAmWaySmarterThanYou ( 10095012 ) on Thursday December 14, 2023 @09:41PM (#64082817)

        You want code from someone who has no idea what their code does, where it came from, what bugs it has, how it works or how to fix it?

        Uhm.... ok. I hope you're not producing software for anything important.

        • Re:Cheating rate (Score:5, Insightful)

          by ShanghaiBill ( 739463 ) on Thursday December 14, 2023 @10:05PM (#64082835)

          I don't cut-and-paste from Stackoverflow because I am incapable of writing the code myself. I cut-and-paste so I can do in an hour what takes you a week.

          I put the StackOverflow URL in a comment at the beginning of the file, so if there is ever a question of provenance, I have a reference.

          As for bugs, that's what unit tests are for. Code from a Stackoverflow post by someone with 10,000 karma points is a lot less likely to have bugs than some roll-yer-own solution by the noob in the next cubicle.
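
          For illustration, a minimal sketch of the practice described above (the function, the URL placeholder, and the tests are all hypothetical): keep the provenance comment next to the pasted code and cover it with a unit test rather than trusting it blindly.

          ```python
          # Minimal sketch of the practice described above. The function, the URL
          # placeholder, and the tests are hypothetical.
          import unittest


          def chunk_list(items, size):
              """Split a list into consecutive chunks of at most `size` elements.

              Adapted from a StackOverflow answer; keep the link for provenance,
              e.g. https://stackoverflow.com/q/XXXXXXX (placeholder URL).
              """
              return [items[i:i + size] for i in range(0, len(items), size)]


          class ChunkListTest(unittest.TestCase):
              def test_even_split(self):
                  self.assertEqual(chunk_list([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

              def test_uneven_split(self):
                  self.assertEqual(chunk_list([1, 2, 3], 2), [[1, 2], [3]])

              def test_empty_input(self):
                  self.assertEqual(chunk_list([], 3), [])


          if __name__ == "__main__":
              unittest.main()
          ```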

          • Re: (Score:3, Insightful)

            If unit tests caught all bugs, there'd be no bugs, but I'm not here to argue with you about your "coding style".

            Just assure me you don't work on code for anything related to public safety, construction, transportation, medical or anything else that will get people killed when it breaks.

            • Just assure me you don't work on code for anything related to public safety, construction, transportation, medical

              Riiiiight. Because that stuff should always be rewritten from scratch by the lowest bidder. Sure. Whatever.

              • No, it just shouldn't be copy pasted from god knows who by someone incapable of writing their own code for anything important.

                At least your silence on the question has assured me your code isn't important.

          • Good that you're citing your sources. But how does it work legally? Does the CC license in SE allow you to copy and reuse code that is more than a short snippet? Does your product have a compatible license?
          • I don't cut-and-paste from Stackoverflow because I am incapable of writing the code myself. I cut-and-paste so I can do in an hour what takes you a week.

            {...} As for bugs, that's what unit tests are for. Code from a Stackoverflow post by someone with 10,000 karma points is a lot less likely to have bugs than some roll-yer-own solution by the noob in the next cubicle.

            Exactly this.

            I guess these same people in other jobs would demand that you just use a rock and a stick, and never look at how anybody else does things.

      • OK, that's on the job. But there's no reason whatsoever to go to Google during your Data Structures and Algorithm Analysis course during school. If you do, you're gonna be a shit programmer later.

      • In academia, when you cheat, you get an F.

        Only if you get caught. But what makes cheating pointless is that if you copy the result and cite the original source, you still get to use it and there's no cheating at all. In real life, when you cheat and get caught, you can end up fired and facing a lawsuit, and if you want to use someone else's code you'll need to pay for it.

      • Great... you hire the cheater who blindly mimes others and doesn't understand what he is doing and I'll hire the honest guy who can reason about systems and solutions. It's a whole lot easier to teach the latter how to Google than it is to teach the former how to be competent.
      • See the recent stories about the president of Harvard.

        Cheating is fundamental in academia; accusations of cheating made out of grudges are what kill careers, not actual cheating. Actual cheating is usually covered up and lied about as part of the great war of producing more bits of paper than China.

      • This is just stupid. You're conflating wildly different things.

        In the "real world" are you happy to hire someone who lied on their resume and can't do what they claimed?

        I assume "yes", because that's cheating and you appear to be very much in favour of cheating.

      • by evanh ( 627108 )

        You're being dumb. The customer is not testing your knowledge. Using how-to snippets for getting you up to speed on a particular language or algorithm is not a cheat. It's simply doing further learning on the job. You probably should be giving a discount for that learning time though.

      • I disagree. Let me correct your statement:

        In the real world, when you cheat and get the right answer, you get a raise.

        A few days ago I had to interview a candidate remotely, and it was obvious that he was cheating. And the answers were still wrong. Later, I checked my questions against ChatGPT and found that the candidate repeated the answers almost verbatim.

        So, my own viewpoint is: if you, as a teacher, can design tests that ChatGPT gets wrong (and that's easy), then ChatGPT-based cheating is not a problem.

      • It's not cheating to build on someone else's good work. All established scientific knowledge is based on someone else's work. It's called standing on the shoulders of Giants. What is cheating is not doing the work and claiming you did. The sad part is that the person who you are cheating is yourself. You are cheating yourself of understanding and future capability.
      • by RobinH ( 124750 )
        No. In the real world, the programmers who only know how to cut and paste do entry-level work, and the programmers who truly understand systems in depth form the core team and work on hard problems, and are paid more for their value. Those same experienced programmers are happy to go lookup quick solutions to known problems on StackOverflow, but the difference is that they immediately grok what they're reading.
    • by kmoser ( 1469707 )
      Also, this assumes students consider the use of ChatGPT to be cheating. If they don't consider it cheating, then of course the numbers are the same.
  • by ewibble ( 1655195 ) on Thursday December 14, 2023 @07:56PM (#64082727)

    They asked people if they cheated, which doesn't seem like an accurate measure; maybe that's just the percentage of people willing to admit they cheated, even if anonymous. Or maybe the people who cheated are cheating more because of chatbots.

    • by ShanghaiBill ( 739463 ) on Thursday December 14, 2023 @08:07PM (#64082745)

      They asked people if they cheated, which doesn't seem like an accurate measure

      What other measure can they use?

      They can't look at the number caught cheating because that's a tiny percentage.

      When I was in school, I wrote programs for other students (my excuse: I was broke), and none were ever caught. I always asked them what grade they wanted. The most common answer was a "B". Sometimes they would say an "A", and then I'd explain that would be a red flag.

      "Ok, how about a C minus?"

      "It'll be ready in an hour."

      • >They can't look at the number caught cheating because that's a tiny percentage.

        No, it's not. Cheating is very common, and catching cheating is very common.

        This semester saw the cheating rate in intro CS classes at our local colleges rise from 20% to 80%.

        OP is wrong.

  • In other words, (Score:4, Interesting)

    by fredrated ( 639554 ) on Thursday December 14, 2023 @07:58PM (#64082731) Journal

    the students that didn't cheat before still don't cheat.

  • I tried using it in my Java class (no pun intended). It was not able to produce code that met the problem requirements.
    • by ShanghaiBill ( 739463 ) on Thursday December 14, 2023 @09:13PM (#64082799)

      It was not able to produce code that met the problem requirements.

      You're doing it wrong. ChatGPT isn't going to give you a polished program ready to submit. But it can often give you a good first draft of a program. Then you fix the obvious bugs, run some unit tests, and fix the problems.

      If you are used to working with ChatGPT output, which will have different types of bugs than a human would make, it can cut your dev time in half.
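
      A hedged illustration of that workflow (the function and tests below are hypothetical, not from any particular class): treat the chatbot's draft as a starting point and let unit tests confirm the edge cases before relying on it.

      ```python
      # Hypothetical illustration of the workflow above: take a chatbot-drafted
      # function, then write unit tests for the edge cases (empty input, window
      # larger than the data, bad arguments) that a first draft typically misses.
      import unittest


      def moving_average(values, window):
          """Trailing moving average; the kind of small routine a chatbot drafts."""
          if window <= 0:
              raise ValueError("window must be positive")
          result = []
          for i in range(len(values)):
              start = max(0, i - window + 1)
              chunk = values[start:i + 1]
              result.append(sum(chunk) / len(chunk))
          return result


      class MovingAverageTest(unittest.TestCase):
          def test_happy_path(self):
              self.assertEqual(moving_average([1, 2, 3, 4], 2), [1.0, 1.5, 2.5, 3.5])

          def test_window_larger_than_input(self):
              self.assertEqual(moving_average([2, 4], 5), [2.0, 3.0])

          def test_empty_input(self):
              self.assertEqual(moving_average([], 3), [])

          def test_rejects_bad_window(self):
              with self.assertRaises(ValueError):
                  moving_average([1, 2, 3], 0)


      if __name__ == "__main__":
          unittest.main()
      ```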

      • I think for the moment, AI might even help some people by giving you code that you then have to analyze to look for bugs or hallucinations. It may even help some slower learners, who might otherwise have given up, stay the course.

        For the moment anyway.

    • I think OpenAI have done a good job of making sure their product is not very useful for getting good answers to questions. It's probably less effort to do the work yourself than to craft a query that makes ChatGPT do something useful, only to have to double-check its output anyway.

      When it was released in January, it seemed to have a deep understanding of English. The only thing it has a deep understanding of now is a reminder that some topics are very complicated and, as an AI language model, it's import

  • and they asked the students if they had been cheating...

    They expect an honest answer?

    And of course 'cheating' can have multiple meanings; what about cheating on their significant other (and these days that could be with someone of the opposite sex, or the same sex, or something else...)?

    • Even if they were lying, if more were doing it, you'd expect an uptick in the 'yes' answers, right? I think ATM, AI just isn't good enough to be used as a cheating tool. The risk of being exposed is too high, and you end up doing the work yourself during the verification phase anyway!

    • Found the guy that brings gender issues into every single thing.

  • by Junta ( 36770 )

    Rather than how many tried to cheat, I'd be curious as to how often they got away with it. Does the output of an LLM make it harder to catch cheating?

  • ChatGPT and Google Gemini are not the best at programming anyway; they get a lot wrong. Who would depend 100% on a chatbot for programming and essay writing?

    • We're starting to use it as a tool at work, but in very specific ways that increase code quality and consistency. I fully expect it to improve in a few years, though, and take all our jobs. It's just a matter of solving enough of the hallucinations.

    • Who would depend 100% on a chatbot for programming and essay writing?

      Well, off the top of History's head (circa dawn of time to yesterday), I'd say it's the same procrastinating humans who were bored/distracted during the learning phase, and abuse every plagiaristical tool available the night before the assignment is due.

      Could be just spitballing too...at a human accurate level...

    • On the other hand, teachers don't read essays when marking them; just imagine how long that would take. At best they skim them, looking for some expected points. At school I watched groups of 6 to 8 kids submit the exact same essay in different handwriting and the teacher never noticed it was the same.

  • Meanwhile, I've spent all day following up on a student pulling up ChatGPT in a CS final exam I proctored yesterday. Even with recorded video evidence the school still requires I spend days on this process and get the student to accept any penalty I give out. Must be nice to be at Stanford.

  • You also have to factor in all the false positives from LLM output "detectors," which OpenAI itself says do not work. It could be the case that some (smart, well-informed) students learn to use LLMs to make their studying more efficient & therefore learn to produce the required written submissions more quickly & easily. I suspect that'd be a minority of particularly well-motivated & savvy students, though with training & guidance it could be more.
