
Professor Failed More Than Half His Class After ChatGPT Falsely Claimed It Wrote Their Final Papers (rollingstone.com)

A Texas A&M professor failed more than half of his class after ChatGPT falsely claimed the students used the software to write their final assignments. Rolling Stone reports: A number of seniors at Texas A&M University-Commerce who already walked the stage at graduation this year have been temporarily denied their diplomas after a professor ineptly used AI software to assess their final assignments, the partner of a student in his class -- known as DearKick on Reddit -- claims to Rolling Stone. Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email on Monday to a group of students informing them that he had submitted grades for their last three essay assignments of the semester. Everyone would be receiving an 'X' in the course, Mumm explained, because he had used "Chat GTP" (the OpenAI chatbot is actually called "ChatGPT") to test whether they'd used the software to write the papers -- and the bot claimed to have authored every single one. "I copy and paste your responses in [ChatGPT] and [it] will tell me if the program generated the content," he wrote, saying he had tested each paper twice. He offered the class a makeup assignment to avoid the failing grade -- which could otherwise, in theory, threaten their graduation status.

There's just one problem: ChatGPT doesn't work that way. The bot isn't made to detect material composed by AI -- or even material produced by itself -- and is known to sometimes emit damaging misinformation. With very little prodding, ChatGPT will even claim to have written passages from famous novels such as Crime and Punishment. Educators can choose among a wide variety of effective AI and plagiarism detection tools to assess whether students have completed assignments themselves, including Winston AI and Content at Scale; ChatGPT is not among them. And OpenAI's own tool for determining whether a text was written by a bot has been judged "not very accurate" by a digital marketing agency that recommends tech resources to businesses.

In an amusing wrinkle, Mumm's claims appear to be undercut by a simple experiment using ChatGPT. On Tuesday, redditor Delicious_Village112 found an abstract of Mumm's doctoral dissertation on pig farming and submitted a section of that paper to the bot, asking if it might have written the paragraph. "Yes, the passage you shared could indeed have been generated by a language model like ChatGPT, given the right prompt," the program answered. "The text contains several characteristics that are consistent with AI-generated content." At the request of other redditors, Delicious_Village112 also submitted Mumm's email to students about their presumed AI deception, asking the same question. "Yes, I wrote the content you've shared," ChatGPT replied. Yet the bot also clarified: "If someone used my abilities to help draft an email, I wouldn't have a record of it."
"A&M-Commerce confirms that no students failed the class or were barred from graduating because of this issue," the school said in a statement. "Dr. Jared Mumm, the class professor, is working individually with students regarding their last written assignments. Some students received a temporary grade of 'X' -- which indicates 'incomplete' -- to allow the professor and students time to determine whether AI was used to write their assignments and, if so, at what level." The university also confirmed that several students had been cleared of any academic dishonesty.

"University officials are investigating the incident and developing policies to address the use or misuse of AI technology in the classroom," the statement continued. "They are also working to adopt AI detection tools and other resources to manage the intersection of AI technology and higher education. The use of AI in coursework is a rapidly changing issue that confronts all learning institutions."


  • That went well...
    • He's (apparently) a Professor of Rodeo, what were you expecting?

      Also, I mean I know it's Texas and all, but a rodeo instructor as a professor? Can my manicurist cousin also get a professorship there?

      • Re:That went well... (Score:5, Informative)

        by Zak3056 ( 69287 ) on Thursday May 18, 2023 @07:44AM (#63531977) Journal

        He's (apparently) a Professor of Rodeo, what were you expecting?

        Also, I mean I know it's Texas and all, but a rodeo instructor as a professor? Can my manicurist cousin also get a professorship there?

        The A&M in "Texas A&M University" stands for Agricultural and Mechanical, and dude has a doctorate from KSU in Animal Behavior and Welfare. He also appears to be an adjunct and not an associate or full professor. Here [tamuc.edu] is his CV. His duties at TAMU appear to largely relate to coaching, but he also teaches "Introduction to Animal Science."

        So... doesn't seem like a fair characterization on your part.

  • by ewibble ( 1655195 ) on Wednesday May 17, 2023 @05:13PM (#63530591)

    Shouldn't the professor be fired for using ChatGPT to do his job instead of doing the actual work himself?

    On a more serious note, this is the type of thing that really worries me: people believing that these chatbots are authoritative and acting on that belief, because AI is smart, right? All the movies say it is.

    • by timeOday ( 582209 ) on Wednesday May 17, 2023 @05:20PM (#63530615)
      Asking the thing, "did you write this?" is actually not the craziest thing to do. It would be trivially easy for OpenAI to check and give a "yes" or "no" answer - not by doing any "intelligent" analysis but simply by checking against transcripts (which I think they already keep).

      Would that work if the students bothered to remix the results a little? No. Would it create some sort of security risk by making it possible to find out what somebody else had been asking? Possibly.

      Still, it's not the dumbest assumption a pig farmer could make. I could imagine it becoming a proposed regulation.
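The exact-match lookup suggested above could, in principle, be a simple hash check against stored outputs. This is a purely hypothetical sketch - OpenAI exposes no such API, and `TranscriptIndex` and its methods are invented names:

```python
# Hypothetical sketch of the transcript-lookup idea: the provider keeps
# hashes of normalized output snippets and answers yes/no on exact matches.
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and case so trivial edits don't dodge the check.
    return " ".join(text.lower().split())

class TranscriptIndex:
    def __init__(self):
        self._hashes = set()

    def record(self, generated_text: str) -> None:
        # Called (hypothetically) every time the model emits text.
        digest = hashlib.sha256(normalize(generated_text).encode()).hexdigest()
        self._hashes.add(digest)

    def did_generate(self, snippet: str) -> bool:
        digest = hashlib.sha256(normalize(snippet).encode()).hexdigest()
        return digest in self._hashes

index = TranscriptIndex()
index.record("The pig farming industry has evolved considerably.")
print(index.did_generate("THE pig farming  industry has evolved considerably."))  # True
print(index.did_generate("Pig farming has changed a lot."))                       # False
```

As the comment notes, even lightly remixed text defeats an exact-match check; the normalization here only absorbs trivial whitespace and case changes.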

      • Asking the thing, "did you write this?" is actually not the craziest thing to do. Actually it would be trivially easy for OpenAI to check and give a "yes" or "no" answer - not by doing any "intelligent" analysis but simply by checking against transcripts (which I think they already keep).

        This opens up another can of worms, which is the privacy of your ChatGPT queries. In your scenario, anyone can ask ChatGPT about anyone else's queries.

        As an analogy, what would someone find if anyone could ask similar questions about your browser history(*)?

        Also, can your queries be used against you (as evidence of wrongdoing) in a court of law?

        (*) Mine is chock full of tentacle porn and furries, not because I like that sort of thing, only because I find the study of such sociological phenomena fascinating.

        • by timeOday ( 582209 ) on Wednesday May 17, 2023 @05:58PM (#63530779)

          As an analogy, what would someone find if anyone could ask similar questions about your browser history(*)?

          It would only be necessary to ask whether a given snippet had ever been generated by ChatGPT (or say within the last 90 days), not whether it had been provided to anybody in particular.

          • ..and you see no privacy violations with that
            • I already said above I thought it could arise as a risk.

              Although I'd be interested if anybody could specifically come up with a good example.

        • by nasch ( 598556 )

          Also, can your queries be used against you (as evidence of wrongdoing) in a court of law?

          Of course.

      • That's like asking ChatGPT to grade your students' final exam essays without ever reading them. It will come back with a grade, probably. But that grade will be based on absolutely nothing. It might as well just be assigning random grades. ChatGPT doesn't have any clue what it's saying. It's all "hallucinations" all the time; not just when you catch it in error. It cannot answer questions. People who rely on it to do so for their job should be fired.

    • by Healer_LFG ( 10260770 ) on Wednesday May 17, 2023 @05:21PM (#63530625)
      I wouldn't say he should be fired, as all of this is breaking new ground, and plenty of people are trying to deal with this issue without any real knowledge of the subject. Without policies and procedures in place, this kind of thing will only become more and more common. What it *does* highlight is that institutions of *any* sort need to work with professionals who are knowledgeable on the subject to create official guidelines, policies, and procedures for how to deal with it. Most places we're hearing about that make these kinds of mistakes don't have anything aside from a vague "AI-generated works are not allowed" rule.
      • He should be fired and the story should be widely reported as a cautionary tale to future professors. The more public examples of people being fired for using it, the less excuse there is for others to make the same mistakes.

    • by Arethan ( 223197 ) on Wednesday May 17, 2023 @05:24PM (#63530641) Journal

      On a more serious note, this is the type of thing that really worries me: people believing that these chatbots are authoritative and acting on that belief, because AI is smart, right? All the movies say it is.

      This is exactly the immediate danger of AI - humans making real-world decisions based on misplaced trust in generative LLMs because "it sounds smart". I hope this professor is summarily reprimanded - article is tldr

      • by Roger W Moore ( 538166 ) on Thursday May 18, 2023 @01:14AM (#63531535) Journal

        This is exactly the immediate danger of AI

        That's NOT a danger of artificial intelligence; it's a danger of natural stupidity. When GPS first became common you would regularly hear stories of idiots blindly following its directions despite the fact that those directions were obviously wrong.

        Any new technology will lead to idiots finding something dangerous to do with it. Even something as innocuous as putting dishwasher detergent in a dissolvable plastic sachet led to the "tide pod challenge".

    • by ffkom ( 3519199 )

      Shouldn't the professor be fired for using ChatGPT to do his job instead of doing the actual work himself?

      Yes.

      this is the type of thing that really worries me: people believing that these chatbots are authoritative and acting on that belief, because AI is smart, right? All the movies say it is.

      And this was a professor demonstrating blind, unjustified trust into ChatGPT, not any random low-life bozo.

      This is truly a symptom of the first of the two Robot Apocalypses, which we are now in: people lowering their standards in order to transfer work to AI.

      (The second Robot Apocalypse will be after the Singularity, when humans become mere tools to their robot overlords. But maybe by then nobody will be smart enough to notice anymore.)

    • by geekmux ( 1040042 ) on Wednesday May 17, 2023 @05:54PM (#63530755)

      Shouldn't the professor be fired for using ChatGPT to do his job instead of doing the actual work himself?

      And what work exactly would that be? Let me know when machine-level analysis becomes an Educator 101 class. I'd love to know how and why you think current teachers are even remotely capable of this kind of cheating analysis without a computer involved somehow. Hundreds of students to validate as well, so it's not exactly an easy "manual" effort these days.

      Bottom line is teaching has become a lot harder than you assume. I'm not even a teacher and I can see that.

    • by SvnLyrBrto ( 62138 ) on Wednesday May 17, 2023 @05:56PM (#63530769)

      ChatGPT isn't why he should be fired. It's a tool, nothing more, and there's nothing wrong with using tools to do your job. The reason he should 100% absolutely unequivocally be fired... with prejudice and for cause, with all perks, health plans, pensions, references, et cetera forfeited... is for falsely accusing others of wrongdoing; especially a wrongdoing like academic dishonesty, which could have affected their graduation and future employment prospects. THAT, not ChatGPT, is what makes this beyond-the-pale intolerable. Hell, I'm not normally one to cite scripture. But the stunt he tried to pull has been considered so heinous, so universally, and for so long, that there's a commandment on the topic.

    • by gweihir ( 88907 )

      Probably not for using ChatGPT, but definitely for using it incompetently in something that has real negative consequences for his students.

    • by thegarbz ( 1787294 ) on Wednesday May 17, 2023 @07:54PM (#63531097)

      Shouldn't the professor be fired for using ChatGPT to do his job

      Your facetiousness aside, this IS the professor's job: using the tools at their disposal. The fact that this specific tool was used for the wrong purpose is an issue, but he was doing his job, albeit a bit poorly.

    • by Barny ( 103770 )

      By the sound of it the college is dragging its feet on policy, tools, and enforcement of AI related content.

      Should he be fired? No. A teacher who covers fucking rodeo riding should not be expected to understand all this.

      The college is entirely to blame.

      They need to:
      Develop clear policies on what is and what isn't acceptable levels of AI tooling.
      Provide up-to-date tools to ensure the grading teachers can apply those policies.
      Have someone on staff with final-say authority on the issue who can be consulted by

    • Gee, I thought most movies say AI is insane and will kill us all...

  • Rodeo instructor? (Score:5, Insightful)

    by Okian Warrior ( 537106 ) on Wednesday May 17, 2023 @05:19PM (#63530611) Homepage Journal

    Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email...

    You had me at "rodeo instructor".

    Also: rodeo instruction and/or agricultural classes require writing multiple essays?

    • by Okian Warrior ( 537106 ) on Wednesday May 17, 2023 @05:37PM (#63530675) Homepage Journal

      Per previous slashdot story [slashdot.org] (full disclosure: my submission) I recommended that AI not be used as an excuse for medical mistakes.

      I'm now thinking that AI should never be used as an excuse for *any* mistake.

      In other words, the blame for anything bad that comes of this should rest entirely with the professor; he doesn't get to put the blame on AI and remain innocent of wrongdoing.

      In this particular instance, we can suppose that numerous students were unfairly accused of wrongdoing based on faulty data returned by ChatGPT. A professor who did that outside of AI would be reprimanded after investigation, but all the students would be made whole by the university.

      Most of the time AI mistakes will be minor and of no consequence, some will cause intermediate distress, and some (medical diagnosis, for instance) might cause catastrophic harm. In all cases the *person* using the AI should be held responsible: the journalist who posts erroneous information, the professor who relies on AI to fail his students, the human resources person who never hires a minority, and so on.

      I can see a carve-out for companies that put an AI in charge of human safety: surgical robots and self-driving cars, for example. In these cases the software would be certified by the government to be a) better than a human operator, b) developed to a high standard, and c) improved whenever problems are identified. Very much like aircraft software today: we expect some software problems going forward. If there's a bug and someone gets killed, the company isn't at fault because the system was safer than a human operator to begin with, and we can analyze the root cause of the fault and update every unit in the field, making everything safer from that single incident.

      • by jythie ( 914043 )
        Expanding on this... stats-based AI should not be used for anything that matters. Great for recommendation systems, great for advertising, great anywhere that giving the wrong answer doesn't have consequences. It should not be used anywhere correctness is more than an entertainment-related inconvenience.
      • by gweihir ( 88907 )

        Exactly. Chat-AI is a tool (and not a very good one regarding result reliability and quality), and a tool is not to be blamed for being used incompetently. That is solely on the tool-user. If this nitwit had used dice to determine the grades, the dice would not have been at fault either, but he would have been very much so. As he is now. Probably a lazy-ass idiot, because after what ChatGPT told him, he should very much have tried to verify that and to find out whether that result was reliable in any w

    • Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email...

      You had me at "rodeo instructor".

      Same. Guess it's not surprising that the instructor is giving his students the run-around. :-)

    • Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email...

      You had me at "rodeo instructor".

      A "rodeo instructor" teaching agriculture, probably has a lot more hands-on experience than 99% of educators today teaching shit they've never experienced themselves first-hand. And every human on this planet understands there is NO substitute for first-hand experience. None.

      If you don't think agriculture requires considerable analysis (as in justifying "multiple essays") then I welcome you to give it a shot. Let's see how well your uneducated ass does by comparison.

    • An Aggie misunderstands technology. This ain't news, this is business as usual in Texas.
      • by taustin ( 171655 )

        An Aggie misunderstands technology.

        More like "a college professor misunderstands . . . almost anything, except, perhaps, the material in the textbook he wrote.

        This ain't news, this is business as usual in Texas.

        And everywhere else, except in your bigoted liberal imagination.

    • by taustin ( 171655 )

      You had me at "rodeo instructor".

      Also: rodeo instruction and/or agricultural classes require writing multiple essays?

      You clearly have no idea what the business of agriculture is like. Most farmers - family-owned farms, that is, not just corporate employees - are college educated, often with a double major in agriculture and finance, because even a small family farm is a pretty big business, with millions (or tens of millions) a year in revenue and a delicate financial balance between spending hundreds of thousands in one part of the year and paying off the loans in another. (And A&M is a premier ag school.)

      So yeah, I'm sure the students do, in fact, have to write multiple essays a year, just like pretty much all college students.

    • I thought this was an Aggie joke. BTW, as sort of a final assignment before you can graduate at A&M you have to write an essay in a classroom setting. So I can't see how you would have the opportunity to use any kind of software. At least that's how it was when I graduated back in the dark ages. Hell, we weren't even allowed to use electronic calculators in exams.
  • by 93 Escort Wagon ( 326346 ) on Wednesday May 17, 2023 @05:21PM (#63530621)

    It responded "It was the best of times, it was the worst of times". Then it told me to "Call me Ishmael".

  • by Bobknobber ( 10314401 ) on Wednesday May 17, 2023 @05:26PM (#63530645)

    There is a certain irony in the sense that schools spent millions over the past few decades to transition to digital/online courses, only to suddenly be confronted with LLMs. Now any online output, be it essay, drawing, or even speech, can be faked with a reasonable degree of accuracy. Not saying all the students faked their papers, but it is symptomatic of a big issue here.

    Two ways I see schools tackling this problem:
    1. Re-emphasize in-class learning. That means more in-class projects/assignments, and less emphasis on homework. Math and science courses might actually be best prepared for this already, because most of their exams are done in-class. If there is a discrepancy between homework and exam grades, teachers can put two-and-two together. Might be the best short-term solution going forward.

    2. Incorporate AI as a teaching assistant. Think I have heard of some instructors doing this, where AI is used to create learning materials on the fly so that teachers can focus more on, you know, teaching. Ideally it leads to less rote memorization and more emphasis on creative thinking and improvisation. A longer-term solution with high payoffs but high risks as well.
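The homework-vs-exam discrepancy check in option 1 is easy to mechanize. A rough sketch, where the grade data and the 20-point threshold are purely illustrative choices:

```python
# Flag students whose in-class exam average falls far below their
# homework average -- a prompt for a closer look, not proof of cheating.
grades = {
    "alice": {"homework": 95, "exam": 91},
    "bob":   {"homework": 98, "exam": 52},  # large gap -> worth reviewing
}

def flag_discrepancies(grades, threshold=20):
    # Return the names whose homework-minus-exam gap exceeds the threshold.
    return [name for name, g in grades.items()
            if g["homework"] - g["exam"] > threshold]

print(flag_discrepancies(grades))  # ['bob']
```

Unlike asking a chatbot, this compares two things the teacher actually measured, and the threshold can be tuned to the course.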

    That all said, my concern is that US culture emphasizes doing things quick, cheap, and easy. Colleges already feel like degree mills, with an emphasis on processing as many students out the door as possible. In turn, many students do not take their studies as seriously as they should, which ends up creating lower-quality workers. Add a serious illiteracy problem on top of that, and you run the risk of this technology making people dumber and lazier than they already are.

    Technology alone does not make a society better or worse. It amplifies aspects that already existed. If people were already hard-working and creative, then LLMs will make them even more so. If they were lazy and incompetent, however, it will only make that worse. People on Twitter are already boasting about how they make ChatGPT read and summarize books for them so they do not have to. Same thing with writing, be it rote business emails or full-length books. When you do not practice those skills, you will lose those skills. Simple as.

    • Re: (Score:3, Insightful)

      by migos ( 10321981 )
      Yeah, in person quizzes and final exams should weed out a lot of the cheaters.
      • Interestingly enough, I recall how during the pandemic some college professors mandated that students install a type of exam software that, when logged in, effectively locked them out of the internet and other applications on their device for the duration of the online exam. This was used to try to simulate a private exam room, so as to prevent students from cheating. Some would even have students include a webcam so they could be recorded while taking the exam.

        Not necessarily foolproof, and o

    • You're overlooking root-cause analysis. I'd love to talk to all of the adults who benefited greatly in life from the writing assignments given to them in high school or college. Let's see how well the entire fucking point stands up.

      Once again, the Higher Education Complex is desperate to justify their obscene costs and salaries by pretending to be offended over the idea that their customers are NOT getting the same quality of education from the internet as they are on any overpriced campus.

      Bullshit, is b

      • I did suggest that teachers consider incorporating AI/LLMs as a sort of TA used to generate teaching materials so teachers can focus more on actual teaching. As in, teaching kids to value and examine old knowledge in a manner that is conducive to their academic growth. The teaching field has put too much focus on rote memorization, which just does not fly in a world where working smarter is better.

        Believe me, I have written my fair share of essays throughout the decades. They were not necessarily enjoyable.

        • I did suggest that teachers consider incorporating AI/LLMs as a sort of TA used to generate teaching materials so teachers can focus more on actual teaching. As in, teaching kids to value and examine old knowledge in a manner that is conducive to their academic growth. The teaching field has put too much focus on rote memorization, which just does not fly in a world where working smarter is better.

          Yes, but it does fly in a world where children's test scores are corruptly tied to school budgets. Teachers want the "best" salaries for themselves? It's easy to do. Give the same damn "test" over and over again until rote memorization has filled those financial coffers, making you the "best" damn school out there.

          That's the American edumucashun system in a nutshell. Broken by the worst kind of greed in capitalism, and condemned by the policy of leaving no mentally challenged mind behind, to the detriment

    • The transition to digital/online courses in schools has indeed been a new challenge with the emergence of language models such as GPT. While it is important to recognise that not all students are falsifying their work, it is true that this technology does raise issues of academic integrity. To address this problem, schools can consider two approaches. First, they can re-emphasize classroom learning by increasing the number of projects and assignments completed in class, reducing reliance on homework. This m
  • by Sebby ( 238625 ) on Wednesday May 17, 2023 @05:30PM (#63530659)

    Dr. Jared Mumm, [...] used "Chat GTP" (the OpenAI chatbot is actually called "ChatGPT") to test whether they'd used the software to write the papers -- and the bot claimed to have authored every single one

    So, he fully admits to not reading the assignments, instead relying on some unreliable tool's "assessment" without questioning it, and denying students their proper grades.

    I can't speak for those students, but I'm pretty sure they're not paying for faculty to not do their work, or worse yet as in this case, do it so ineptly that it denies them their grade.

    Don't know if it's legally accurate, but I'd consider this fraud, as students are clearly not getting what they've paid for.


    • Re: (Score:3, Insightful)

      by thegarbz ( 1787294 )

      So, he fully admits to not reading the assignments

      He said nothing of the sort. There's a difference between reading an assignment and asking someone (or searching) whether the work was original.

      I can't speak for those students

      Please don't speak for anyone, at least not until you have a basic understanding of what went on. And for fuck's sake, can you sue-happy morons calm down for a moment? A mistake was made and is being corrected without any enduring impact on the people involved. I'm beginning to feel like I need to sue you for reading your stupid post.

      • by Anonymous Coward

        I can't speak for those students

        Please don't speak for anyone

        That's what the poster said they're already doing, dumbass!

        And for fuck's sake, can you sue-happy morons calm down for a moment? A mistake was made and is being corrected without any enduring impact on the people involved. I'm beginning to feel like I need to sue you for reading your stupid post.

        Given your posting history, you're obviously an armchair-expert-troll with waaaaayyyyy too much time on your hands. I'm sure you have plenty of time to troll others in court too (including wasting everyone's time). Can't wait to see your Trump-like tactics in the courtroom.

        You need to calm the fuck down. Also need to STFU already.

      • by Sebby ( 238625 )

        Please don't speak for anyone,

        I explicitly said I wasn't, moron.

        I'm beginning to feel like I need to sue you for reading your stupid post.

        Go ahead troll, I triple dog dare you! [youtube.com]

    • It's not that simple. I've done it as well. You do it AFTER you read the essay and it looks fishy, when you start to question whether the student REALLY wrote it. I've asked ChatGPT as well, for a few segments of some dissertations, and it confidently blurted that it had generated them. At some point I gave it something that was certainly not generated by ChatGPT but looked similar, and ChatGPT claimed authorship as well. It's total garbage. I learnt my lesson not to ask it such questions.
  • Hey, look, another "educated" person who is literally too stupid to do their job correctly. I'm sure he knows a good amount about agriculture, but that's not enough if you're going to teach it. This kind of behavior would make me question if Mr. Munn actually did the work to earn his own degree. He definitely hasn't kept up on his continued education as new tech has come along, that much is obvious.

    • by Bahbus ( 1180627 )

      Mumm*. Stupid autocorrect to Munn. Stupid last name.

    • by gweihir ( 88907 )

      Educated does not assure smart. Smart does not assure educated. To actually understand how things work you need both, and some real-world experience on top.

      • by Bahbus ( 1180627 )

        Yes, but I expected the educated to have some minimum qualifications. For example, if you're hired as a professor, I assume you have some experience in educating or were educated in...educating. And that comes with a basic understanding of what tools teachers/professors have at their disposal.

        But I have come to expect most college professors (especially those with PhDs) to have next to zero teaching capabilities making them functionally useless.

        • by gweihir ( 88907 )

          Professors routinely have no education qualifications. It has gotten worse with selection criteria that prefer people who can bring in research money. Education qualifications (and actual research qualifications) are things that prevent you from becoming a professor, because they slow you down and take real time to acquire. Just know the right people and get some good grants, and nobody cares what you can actually do in education and research.

    • Comment removed based on user account deletion
      • by Bahbus ( 1180627 )

        You can lose the quotes. Or you can keep them, and the next time your nerdy self is asked a question about agriculture that the professor in this story could answer easily and you can't, we can say you're another "educated" person who is literally too stupid to do their job properly too.

        You are missing the point, and your comparison doesn't make sense either. His job as a professor is to pass on his knowledge (the topic or expertise is irrelevant). Him attempting to use a completely inappropriate tool for his profession is 100% his own stupid fault. Don't use tools you don't understand (and that aren't even marketed) for a job that directly impacts others. There are tools advertised as being able to detect whether something was AI-written. A cursory Google would have to

        • Comment removed based on user account deletion
          • by Bahbus ( 1180627 )

            You're saying that someone is a moron because they don't understand that a technical product that is marketed as a thing that answers questions, and that appears to answer questions, isn't actually capable of answering those questions, not even the ones someone might think it should know the answer to. (Serious question: why is it unreasonable for a non-technical person to assume that a computer that answers questions wouldn't know what questions it has answered before and how it answered them?)

            When it's been all over the news, when the warnings are inside EVERY single chat window, yes. Two or three months ago, I would agree with you that he might not have known. And again, expertise means nothing in this discussion. He's not doing agriculture work, he's teaching, and as a teacher he should know how to do certain things, like basic research. If you can't do that, you don't deserve to be a teacher of any kind.

      • their job isn't to answer questions but to string together words that would appear to look like an answer to a question

        I think a lot of people really don't realise how literally this is true. There is a long reinforcement learning step when the model is trained to produce answers which people like. As in people are given the input and output and get a yes/no choice on whether they like it.

        That's a minimum-wage job, or at least not a highly paid one, so you don't get armies of experts doing that fine-tuning.
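        To make the point above concrete, here is a tiny, made-up Python sketch of that preference step: raters give a yes/no verdict on model outputs, and a "reward model" is fit to predict what people will *like*, not what is true. (Real RLHF trains a neural reward model on huge comparison datasets; the word-overlap scoring here is purely illustrative.)

```python
# Toy sketch of preference labeling: raters answered yes (1) / no (0)
# on each model output. All data here is invented for illustration.
ratings = [
    ("Photosynthesis lets plants turn sunlight into chemical energy.", 1),
    ("Plants use sunlight to make energy through photosynthesis.", 1),
    ("Photosynthesis is when plants do stuff, probably.", 0),
    ("No idea, ask someone else.", 0),
]

def reward(candidate: str) -> float:
    """Score a candidate by the verdict on the most word-similar rated output.

    This stands in for a learned reward model: it predicts whether people
    would *like* the answer, which is not the same as whether it is true.
    """
    words = set(candidate.lower().split())

    def overlap(text: str) -> int:
        return len(words & set(text.lower().split()))

    best_text, best_label = max(ratings, key=lambda pair: overlap(pair[0]))
    return float(best_label)
```

        The model being tuned to maximize this kind of reward is exactly why it produces confident, pleasing answers regardless of their truth.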

  • Because somebody is clearly not very smart or competent.

    • Not a problem at Texas A&M, AKA sheepfucker U.

      College Station (where it is located) is the only place I was ever in any significant danger in Texas, and I went pretty much everywhere.

  • by Anubis IV ( 1279820 ) on Wednesday May 17, 2023 @05:59PM (#63530785)

    As a proud former student of Texas A&M, I nearly banged my head on the table when I read this. Then I realized it was—stick with me here—Texas A&M University–Commerce [wikipedia.org], not Texas A&M University [wikipedia.org], and everything made sense again.

    Except that whole thing about the professor teaching rodeo. What the hell is up with that?!

    • Texas A&M College Station certainly had its share of good ol' boys, for instance the old Range Science Department. I'm sure TAES is still riddled with them, but they don't teach.
  • He isn't a professor (Score:4, Informative)

    by 1s44c ( 552956 ) on Wednesday May 17, 2023 @06:00PM (#63530787)

    His title is "Instructor/Judging Team Coordinator,
    Agricultural Sciences and Natural Resources"

    He does not appear to be a professor, or to hold a PhD.

  • ... Yeah, that's the ticket.

  • by Locke2005 ( 849178 ) on Wednesday May 17, 2023 @06:15PM (#63530831)
    GPT means "Generative Pre-trained Transformer". It's really more of a language processor that uses statistical analysis to predict text based on its training input. It understands nothing! The old "Garbage In, Garbage Out" rule applies: ChatGPT output is no better than its training input. So, while I think GPT would be great for translating between languages, I wouldn't trust it to tell me the truth about anything, because I don't think it understands the concept of truth.
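    The "statistical prediction" point above can be shown with a deliberately tiny toy: a bigram model that predicts the next word purely from counts in its training text. (GPT uses a vastly larger transformer over subword tokens, but the principle, predict the likely continuation, not the true one, is the same; the training text here is invented.)

```python
# Toy next-word predictor: count which word follows which in training
# text, then always emit the most frequent continuation. It "understands"
# nothing; garbage in, garbage out.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

follows: defaultdict[str, Counter] = defaultdict(Counter)
tokens = training_text.split()
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    return follows[word].most_common(1)[0][0]
```

    Ask it what follows "sat" and it says "on", not because it knows anything about sitting, but because that is what the counts say.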
  • My wife takes online courses at Pasadena City College. They use a system called "TurnItIn" that tries to determine whether or not someone plagiarized an essay. It's not perfect either, but the teacher can fine-tune the sensitivity. Her last essay was flagged at something like 20% plagiarism, but it was a shorter essay, and when I looked, what it flagged as "similarity" was actually insignificant. She ended up with full points.

    Maybe the prof should be using that instead.

    (Also, I have a friend who is a high school teacher

    • Turnitin does not detect plagiarism. It recommends parts of text for manual review. It detects plenty of things that are not anywhere close to plagiarism. Turnitin is a widely abused tool, just like ChatGPT.

  • saying he had tested each paper twice.
    Pure laziness, I'm sure! He should have done it three times and calculated the average and median by hand!

  • by Walt Dismal ( 534799 ) on Wednesday May 17, 2023 @07:13PM (#63530975)
    Look homie, I have it on good authority that ChatGPT wrote the original Bible. It even says in Bot 9:13 that "He who doubts my divinity can kiss my shiny metal ass." What more proof do you morons need?
  • A professor who accused his entire class of cheating by getting ChatGPT to do their work was, in fact, cheating by getting ChatGPT to do HIS work?

    Will he be getting a failing grade? Particularly since, so far, he is the only one who definitely cheated?

    • by F.Ultra ( 1673484 ) on Wednesday May 17, 2023 @09:35PM (#63531247)
      Please explain how you imagine he should check to see if any of the papers had been constructed by a AI or by the actual person without using some form of technical tool. That part is not what was wrong here.
      • by sjames ( 1099 )

        Certainly not using ChatGPT, a tool well noted for its ability to tell a believable fiction. I'm not saying I know a foolproof way to do it, but I can say that consulting the magic 8-ball is NOT the answer.

        The best (and far from perfect) method might be actually reading the paper and checking for factual correctness and well-supported arguments. Since ChatGPT cannot actually reason, that will be its weakness.

        His first clue, which he ignored, was that ChatGPT claimed ALL of the papers as its work. Even a magic 8-bal

        • Well, this was not your original claim (which was that he was letting ChatGPT do HIS work); with this new comment of yours I'm 100% in agreement.
          • by sjames ( 1099 )

            Instead of reading the papers and following the reasoning to the conclusions, he just chucked it into ChatGPT and asked if they cheated.

            • That you don't know; it's just your assumption. He clearly chucked them into ChatGPT to ask the service if it had written them, but we have zero idea what other vetting he did.
              • by sjames ( 1099 )

                Clearly none, since everyone "reported" by ChatGPT got an incomplete, and each student subsequently investigated on appeal has been acquitted so far.

                If nothing else, the unusually high number of 'cheats' should have tipped the prof. off that more investigation was needed before throwing accusations around.

                • That is because he failed them for being written by ChatGPT; clearly the 50% of papers he failed on that basis had passed whatever other validation he performed. That he should have reacted to 50% being far too high a number still does not show that he performed no other validation of the papers.
  • This is the real danger of AI: inept people without a clue trusting it completely, from this guy to managers who think they can get rid of programmers because some other guy claims ChatGPT wrote a game for him.
  • ChatGPT has been trained to please the users, not necessarily to always tell the truth. So that professor has failed to use the tool correctly.
  • Is a confession

    All we know for sure is that at least one person used ChatGPT to do their work.

  • ... when people would get someone who had already taken the course to give them their papers, then copy them over while changing the wording. The answer is to make 80-90% of the grade in-class exams, which is mostly what I saw during my undergrad about 20 years ago, apart from computer programming courses that did have heavily weighted take-home projects. My classes were small enough (~20-30) that the professor knew each person's style. Or in the bigger intro courses we'd program in a lab in front of TAs

  • ... as he proceeds to (dis)prove his point by lazily using ChatGPT (incorrectly) in an attempt to avoid actually doing his own job.

    Dang... This isn't even a case of "the pot calling the kettle black" -- it's more like a case of "the pot calling the silverware black."

  • or perhaps even better: artificial professor works artificially.

    Let's face it: a professor who uses AI to check his students' work [a] does not know his own field and/or [b] does not know his students and/or [c] does not know how to properly test and evaluate his students.

    It's really just that simple.

    It's a total lazy doofus play for a so-called professor to not want to expend the time and effort to properly evaluate his students (what the hell are you thinking if you hate this task and yet CHOOSE to be a p

  • Well, if nobody has time to write or read essays, then let's go back to the old tradition of oral exams.
    Your grasp of the material can be graded much faster. Maybe some feedback or a pep talk can be given to help those nervous about speaking in public.
    Maybe the student should do it in front of a "jury" of instructors or TAs so there is no doubt of fair scoring.
    Instructors would not have to carry a suitcase of hundreds of papers to read and grade over the weekend.
  • On two occasions I have been asked,

    – "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?"

    "... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

    Charles Babbage

    There's so much confusion about what ChatGPT and such models do and are that people have really dumb expectations.

    These things have very little memory, the context window of a GPT-4 session is 8192 tokens, and that's private to the current user.
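    A toy Python sketch of what a fixed context window means (the whitespace "tokenizer" and the tiny window size are simplifications; real models use subword tokenizers over thousands of tokens): once a conversation exceeds the window, the oldest tokens simply fall out, so the model cannot "remember" them, let alone what it wrote for other users in other sessions.

```python
# Illustrative fixed context window: the model only ever "sees" the most
# recent CONTEXT_WINDOW tokens of the conversation, nothing before them.
CONTEXT_WINDOW = 8  # toy stand-in for e.g. GPT-4's 8192 tokens

def visible_context(conversation: str) -> list[str]:
    """Return only the most recent tokens that still fit in the window."""
    tokens = conversation.split()  # toy whitespace tokenizer
    return tokens[-CONTEXT_WINDOW:]
```

    Anything outside that returned slice is invisible to the model, which is why asking it "did you write this paper?" is asking about text it has no record of ever having seen.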

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...