Professor Failed More Than Half His Class After ChatGPT Falsely Claimed It Wrote Their Final Papers (rollingstone.com)
A Texas A&M professor failed more than half of his class after ChatGPT falsely claimed the students used the software to write their final assignments. Rolling Stone reports: A number of seniors at Texas A&M University-Commerce who already walked the stage at graduation this year have been temporarily denied their diplomas after a professor ineptly used AI software to assess their final assignments, the partner of a student in his class -- known as DearKick on Reddit -- claims to Rolling Stone. Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email on Monday to a group of students informing them that he had submitted grades for their last three essay assignments of the semester. Everyone would be receiving an 'X' in the course, Mumm explained, because he had used "Chat GTP" (the OpenAI chatbot is actually called "ChatGPT") to test whether they'd used the software to write the papers -- and the bot claimed to have authored every single one. "I copy and paste your responses in [ChatGPT] and [it] will tell me if the program generated the content," he wrote, saying he had tested each paper twice. He offered the class a makeup assignment to avoid the failing grade -- which could otherwise, in theory, threaten their graduation status.
There's just one problem: ChatGPT doesn't work that way. The bot isn't made to detect material composed by AI -- or even material produced by itself -- and is known to sometimes emit damaging misinformation. With very little prodding, ChatGPT will even claim to have written passages from famous novels such as Crime and Punishment. Educators can choose among a wide variety of effective AI and plagiarism detection tools to assess whether students have completed assignments themselves, including Winston AI and Content at Scale; ChatGPT is not among them. And OpenAI's own tool for determining whether a text was written by a bot has been judged "not very accurate" by a digital marketing agency that recommends tech resources to businesses.
In an amusing wrinkle, Mumm's claims appear to be undercut by a simple experiment using ChatGPT. On Tuesday, redditor Delicious_Village112 found an abstract of Mumm's doctoral dissertation on pig farming and submitted a section of that paper to the bot, asking if it might have written the paragraph. "Yes, the passage you shared could indeed have been generated by a language model like ChatGPT, given the right prompt," the program answered. "The text contains several characteristics that are consistent with AI-generated content." At the request of other redditors, Delicious_Village112 also submitted Mumm's email to students about their presumed AI deception, asking the same question. "Yes, I wrote the content you've shared," ChatGPT replied. Yet the bot also clarified: "If someone used my abilities to help draft an email, I wouldn't have a record of it." "A&M-Commerce confirms that no students failed the class or were barred from graduating because of this issue," the school said in a statement. "Dr. Jared Mumm, the class professor, is working individually with students regarding their last written assignments. Some students received a temporary grade of 'X' -- which indicates 'incomplete' -- to allow the professor and students time to determine whether AI was used to write their assignments and, if so, at what level." The university also confirmed that several students had been cleared of any academic dishonesty.
"University officials are investigating the incident and developing policies to address the use or misuse of AI technology in the classroom," the statement continued. "They are also working to adopt AI detection tools and other resources to manage the intersection of AI technology and higher education. The use of AI in coursework is a rapidly changing issue that confronts all learning institutions."
That went well... (Score:1)
Re: (Score:1)
He's (apparently) a Professor of Rodeo, what were you expecting?
Also, I mean I know it's Texas and all, but a rodeo instructor as a professor? Can my manicurist cousin also get a professorship there?
Re:That went well... (Score:5, Informative)
He's (apparently) a Professor of Rodeo, what were you expecting?
Also, I mean I know it's Texas and all, but a rodeo instructor as a professor? Can my manicurist cousin also get a professorship there?
The A&M in "Texas A&M University" stands for Agricultural and Mechanical, and dude has a doctorate from KSU in Animal Behavior and Welfare. He also appears to be an adjunct and not an associate or full professor. Here [tamuc.edu] is his CV. His duties at TAMU appear to largely relate to coaching, but he also teaches "Introduction to Animal Science."
So... doesn't seem like a fair characterization on your part.
Shouldn't the professor be fired (Score:5, Insightful)
Shouldn't the professor be fired for using ChatGPT to do his job instead of doing the actual work himself?
On a more serious note, this is the type of thing that really worries me: people believing that these chatbots are authoritative and acting on that, because AI is smart, right? All the movies say it is.
Re:Shouldn't the professor be fired (Score:4, Insightful)
Would that work if the students bothered to remix the results a little? No. Would it create some sort of security risk by making it possible to find out what somebody else had been asking? Possibly.
Still, it's not the dumbest assumption a pig farmer could make. I could imagine it becoming a proposed regulation.
And personal privacy? (Score:2)
Asking the thing, "did you write this?" is actually not the craziest thing to do. In fact, it would be trivially easy for OpenAI to check and give a "yes" or "no" answer - not by doing any "intelligent" analysis but simply by checking against transcripts (which I think they already keep).
This opens up another can of worms, which is the privacy of your ChatGPT queries. In your scenario, anyone can ask ChatGPT about anyone else's queries.
As an analogy, what would someone find if anyone could ask similar questions about your browser history(*)?
Also, can your queries be used against you (as evidence of wrongdoing) in a court of law?
(*) Mine is chock full of tentacle porn and furries, not because I like that sort of thing, only because I find the study of such sociological phenomena fascinating.
Re:And personal privacy? (Score:4, Informative)
It would only be necessary to ask whether a given snippet had ever been generated by ChatGPT (or say within the last 90 days), not whether it had been provided to anybody in particular.
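A transcript lookup like this can be sketched in a few lines, which also shows its main weakness. Nothing below is a real OpenAI API - the store, function names, and normalization are all assumptions for illustration: fingerprint everything the bot emits, then answer membership queries against the fingerprints. As noted upthread, the moment a student remixes the output even slightly, the exact match fails.

```python
import hashlib

def normalize(text: str) -> str:
    """Collapse whitespace and case so trivial edits don't change the fingerprint."""
    return " ".join(text.lower().split())

def fingerprint(text: str) -> str:
    """SHA-256 digest of the normalized snippet."""
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()

# Hypothetical transcript store: fingerprints of everything the bot emitted,
# deliberately with no link back to the user who requested it.
generated_fingerprints: set[str] = set()

def record_output(text: str) -> None:
    generated_fingerprints.add(fingerprint(text))

def was_generated(text: str) -> bool:
    """Exact-match check: was this snippet ever emitted verbatim (modulo case/spacing)?"""
    return fingerprint(text) in generated_fingerprints

record_output("The pig is a remarkably intelligent animal.")
print(was_generated("The  pig is a remarkably INTELLIGENT animal."))  # True: same normalized text
print(was_generated("A lightly paraphrased sentence about pigs."))    # False: exact matches only
```

The privacy property the parent wants falls out of the design: the store holds only hashes of outputs, not who asked for them. But a paraphrase, a synonym swap, or a reordered clause produces a different hash, so this only catches verbatim copy-paste.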
Re: (Score:2)
Although I'd be interested if anybody could specifically come up with a good example.
Re: (Score:2)
Also, can your queries be used against you (as evidence of wrongdoing) in a court of law?
Of course.
Re: Shouldn't the professor be fired (Score:3)
That's like asking ChatGPT to grade your students' final exam essays without ever reading them. It will come back with a grade, probably. But that grade will be based on absolutely nothing. It might as well just be assigning random grades. ChatGPT doesn't have any clue what it's saying. It's all "hallucinations" all the time; not just when you catch it in error. It cannot answer questions. People who rely on it to do so for their job should be fired.
Re: Shouldn't the professor be fired (Score:4, Interesting)
No, it's a way to check for plagiarism, not grade the paper.
Re: Shouldn't the professor be fired (Score:2)
A bad grade brings down your GPA. Plagiarism can get you an immediate fail or expulsion. It's much worse.
Re: Shouldn't the professor be fired (Score:1)
He should be fired and the story should be widely reported as a cautionary tale to future professors. The more public examples of people being fired for using it, the less excuse there is for others to make the same mistakes.
Re:Shouldn't the professor be fired (Score:5, Insightful)
On a more serious note, this is the type of thing that really worries me: people believing that these chatbots are authoritative and acting on that, because AI is smart, right? All the movies say it is.
This is exactly the immediate danger of AI - humans making real-world decisions based on misplaced trust in a generative LLM because "it sounds smart". I hope this professor is summarily reprimanded - article is tldr
Natural Stupidity not Artificial Intelligence (Score:4, Insightful)
This is exactly the immediate danger of AI
That's NOT a danger of artificial intelligence it's a danger of natural stupidity. When GPS first became common you would regularly hear stories of idiots blindly following its directions despite the fact that those directions were obviously wrong.
Any new technology will lead to idiots finding something dangerous to do with it. Even something as innocuous as putting dishwasher detergent in a dissolvable plastic sachet led to the "tide pod challenge".
Re: (Score:2)
Shouldn't the professor be fired for using ChatGPT to do his job instead of doing the actual work himself?
Yes.
this is the type of thing that really worries me: people believing that these chatbots are authoritative and acting on that, because AI is smart, right? All the movies say it is.
And this was a professor demonstrating blind, unjustified trust in ChatGPT, not any random low-life bozo.
This is truly the symptom of the first of the two Robot Apocalypses, the first of which we are now in: People lowering their standards in order to transfer work to AI.
(The second Robot Apocalypse will be after the Singularity, when humans become mere tools to their robot overlords. But maybe by then nobody will be smart enough to notice anymore.)
Re: Shouldn't the professor be fired (Score:5, Insightful)
And this was a professor demonstrating blind, unjustified trust in ChatGPT, not any random low-life bozo.
A rodeo instructor at Texas A&M is absolutely a low-life bozo.
Re:Shouldn't the professor be fired (Score:5, Insightful)
Shouldn't the professor be fired for using ChatGPT to do his job instead of doing the actual work himself?
And what work exactly would that be? Let me know when machine-level analysis becomes an Educator 101 class. I'd love to know how and why you think current teachers are even remotely capable of this kind of cheating analysis without a computer involved somehow. Hundreds of students to validate as well, so it's not exactly an easy "manual" effort these days.
Bottom line is teaching has become a lot harder than you assume. I'm not even a teacher and I can see that.
Re: (Score:2, Informative)
yes, when you are asked to write something yourself and then you copy and paste something that you didn't write (regardless of where you found it), that is definitely considered cheating.
if a math teacher adds a requirement to not use a calculator during a test and you decide fuck that im gonna use a calculator, yes, you are once again cheating.
Re:Shouldn't the professor be fired (Score:5, Insightful)
ChatGPT isn't why he should be fired. It's a tool, nothing more, and there's nothing wrong with using tools to do your job. The reason he should 100% absolutely unequivocally be fired... with prejudice and for cause, with all perks, health plans, pensions, references, et cetera forfeited... is for falsely accusing others of wrongdoing; especially a wrongdoing like academic dishonesty, which could have affected their graduation and future employment prospects. THAT, not ChatGPT, is what makes this beyond-the-pale intolerable. Hell, I'm not normally one to cite scripture, but the stunt he tried to pull has been considered so heinous, so universally, and for so long that there's a commandment on the topic.
Re: (Score:1)
Probably not for using ChatGPT, but definitely for using it incompetently in something that has real negative consequences for his students.
Re:Shouldn't the professor be fired (Score:4, Insightful)
Shouldn't the professor be fired, for using ChatGPT to do his job
Your facetiousness aside, this IS the professor's job: using the tools at their disposal. The fact that this specific tool was used incorrectly, for the wrong purpose, is an issue, but he was doing his job, albeit a bit poorly.
Re: (Score:2)
By the sound of it the college is dragging its feet on policy, tools, and enforcement of AI related content.
Should he be fired? No. A teacher who covers fucking rodeo riding should not be expected to understand all this.
The college is entirely to blame.
They need to:
Develop clear policies on what is and what isn't acceptable levels of AI tooling.
Provide up-to-date tools so that grading teachers can apply those policies.
Have someone on staff with final-say authority on the issue who can be consulted by
Re: (Score:2)
Gee, I thought most movies say AI is insane and will kill us all...
Re: (Score:2)
To be pedantic about it... " Chat behaves very similar to a person, a reasonable person" is incorrect. Chat may produce output that *on the surface* resembles output from a reasonable person, but it is not. Especially as Chat does not REASON in any way shape or form. It has no idea what it is "saying", as it does not think.
That's the real disconnect I believe.
Rodeo instructor? (Score:5, Insightful)
Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email...
You had me at "rodeo instructor".
Also: rodeo instruction and/or agricultural classes require writing multiple essays?
Instructor liability or AI liability? (Score:4, Interesting)
Per previous slashdot story [slashdot.org] (full disclosure: my submission) I recommended that AI not be used as an excuse for medical mistakes.
I'm now thinking that AI should never be used as an excuse for *any* mistake.
In other words, the blame for anything bad that comes of this should rest entirely with the professor; he doesn't get to put the blame on AI and remain innocent of wrongdoing.
In this particular instance, we can suppose that numerous students were unfairly accused of wrongdoing based on faulty data returned by ChatGPT. A professor who did that outside of AI would be reprimanded after investigation, but all the students would be made whole by the university.
Most of the time AI mistakes will be minor and of no consequence, some will cause intermediate distress, and some (medical diagnosis, for instance) might cause catastrophic harm. In all cases the *person* using the AI should be held responsible: the journalist who posts erroneous information, the professor who relies on AI to fail his students, the human resources person who never hires a minority, and so on.
I can see a carve-out for companies that put an AI in charge of human safety: surgical robots and self-driving cars, for example. In these cases the software would be certified by the government to be a) better than a human operator, b) developed to a high standard, and c) built so that any identified problem leads to a safer system. Very much like aircraft software today: we expect some software problems going forward; if there's a bug and someone gets killed, the company isn't at fault because the system was safer than a human operator to begin with, and we can analyze the root cause of the fault and update every unit in the field to make everything safer from a single incident.
Re: (Score:2)
Exactly. Chat-AI is a tool (and not a very good one regarding result reliability and quality), and a tool is not to be blamed for being used incompetently. That is solely on the tool-user. If this nitwit had used a die to determine the grades, the die would not have been at fault either, but he would have been very much so. As he is now. Probably a lazy-ass idiot, because after what ChatGPT told him, he should very much have tried to verify that and to find out whether that result was reliable in any w
Re: (Score:2)
Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email...
You had me at "rodeo instructor".
Same. Guess it's not surprising that the instructor is giving his students the run-around. :-)
Re: (Score:2)
The whole situation is a bucking mess.
Re: (Score:1)
Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes, sent an email...
You had me at "rodeo instructor".
A "rodeo instructor" teaching agriculture, probably has a lot more hands-on experience than 99% of educators today teaching shit they've never experienced themselves first-hand. And every human on this planet understands there is NO substitute for first-hand experience. None.
If you don't think agriculture requires considerable analysis (as in justifying "multiple essays") then I welcome you to give it a shot. Let's see how well your uneducated ass does by comparison.
Re: (Score:1)
An Aggie misunderstands technology.
More like "a college professor misunderstands . . . almost anything, except, perhaps, the material in the textbook he wrote."
This ain't news, this is business as usual in Texas.
And everywhere else, except in your bigoted liberal imagination.
Re: (Score:2)
You had me at "rodeo instructor".
Also: rodeo instruction and/or agricultural classes require writing multiple essays?
You clearly have no idea what the business of agriculture is like. Most farmers - family-owned farms, that is, not just corporate employees - are college educated, often with a double major in agriculture and finance, because even a small family farm is a pretty big business, with millions (or tens of millions) a year in revenue, and a delicate financial balance between spending hundreds of thousands in one part of the year and paying off the loans in another. (And A&M is a premier ag school.)
So yeah, I'm sure the students do, in fact, have to write multiple essays a year, just like pretty much all college students.
I asked ChatGPT about this story (Score:3, Funny)
It responded "It was the best of times, it was the worst of times". Then it told me to "Call me Ishmael".
Growing Pains (Score:3)
There is a certain irony in the sense that schools spent millions over the past few decades to transition to digital/online courses only to suddenly be confronted with LLMs.
Now any online output, be it essay, drawing, or even speech, can be faked with a reasonable degree of accuracy. Not saying all the students faked their papers, but it is symptomatic of a big issue here.
Two ways I see schools tackling this problem:
1. Re-emphasize in-class learning. That means more in-class projects/assignments, and less emphasis on homework. Math and science courses might actually be best prepared for this already, because most of their exams are done in-class. If there is a discrepancy between homework and exam grades, teachers can put two-and-two together. Might be the best short-term solution going forward.
2. Incorporate AI as a teaching assistant. Think I have heard of some instructors doing this, where AI is used to create learning materials on the fly so that teachers can focus more on, you know, teaching. Ideally it leads to less rote memorization and more emphasis on creative thinking and improvisation. A longer-term solution with high payoffs but high risks as well.
That all said, my concern is that US culture emphasizes doing things quick, cheap, and easy. Colleges already feel like degree mills, with an emphasis on processing as many students out the door as possible. In turn, many students do not really take their studies as seriously as they should, which ends up creating lower-quality workers. Add a serious illiteracy problem, and you run the risk of this technology making people dumber and lazier than they already are.
Technology alone does not make a society better or worse. It amplifies aspects that already existed. If people were already hard, creative workers then LLMs will make them even more so.
If they were lazy and incompetent however, it will only make that worse. People on twitter are already boasting about how they make chat-gpt read and summarize books for them so they do not have to. Same thing with writing, be it rote business emails or full-length books. When you do not practice those skills, you will lose those skills. Simple as.
Re: Growing Pains (Score:2)
Interestingly enough, I recall how during the pandemic some college professors mandated that students install a type of exam software that, when logged in, effectively locked them out of the internet and other applications on their device for the duration of the online exam. This was used to try to simulate a private exam room, so as to prevent students from cheating. Some would even have students include a webcam so they could be recorded while they took the exam.
Not necessarily foolproof, and o
Re: (Score:2)
You're overlooking root-cause analysis. I'd love to talk to all of the adults who benefited greatly in life from the writing assignments given to them in high school or college. Let's see how well the entire fucking point stands up.
Once again, the Higher Education Complex is desperate to justify their obscene costs and salaries by pretending to be offended over the idea that their customers are NOT getting the same quality of education from the internet, as they are on any overpriced campus.
Bullshit, is b
Re: Growing Pains (Score:2)
I did suggest that teachers consider incorporating AI/LLMs as a sort of TA used to generate teaching materials so teachers can focus more on actual teaching. As in, teaching kids to value and examine old knowledge in a manner that is conducive to their academic growth. The teaching field has put too much focus on rote memorization, which just does not fly in a world where working smarter is better.
Believe me, I have written my fair share of essays throughout the decades. They were not necessarily enjoyable.
Re: (Score:2)
I did suggest that teachers consider incorporating AI/LLMs as a sort of TA used to generate teaching materials so teachers can focus more on actual teaching. As in, teaching kids to value and examine old knowledge in a manner that is conducive to their academic growth. The teaching field has put too much focus on rote memorization, which just does not fly in a world where working smarter is better.
Yes, but it does fly in a world where children's test scores are corruptly tied to school budgets. Teachers want the "best" salaries for themselves? It's easy to do. Give the same damn "test" over and over again until rote memorization has filled those financial coffers, making you the "best" damn school out there.
That's the American edumucashun system in a nutshell. Broken by the worst kind of greed in capitalism, and condemned by the policy of leaving no mentally challenged mind behind, to the detriment
Time for a fraud lawsuit (Score:5, Interesting)
Dr. Jared Mumm, [...] used "Chat GTP" (the OpenAI chatbot is actually called "ChatGPT") to test whether they'd used the software to write the papers -- and the bot claimed to have authored every single one
So, he fully admits to not reading the assignments, instead using some unreliable tool's "assessment" and not questioning it and denying students their proper grades.
I can't speak for those students, but I'm pretty sure they're not paying for faculty to not do their work, or worse yet as in this case, do it so ineptly that it denies them their grade.
Don't know if it's legally accurate, but I'd consider this fraud, as students are clearly not getting what they've paid for.
Re: (Score:3, Insightful)
So, he fully admits to not reading the assignments
He said nothing of the sort. There's a difference between reading an assignment and asking someone (or searching) whether the work was original.
I can't speak for those students
Please don't speak for anyone, at least not until you have a basic understanding of what went on. And for fuck's sake, can you sue-happy morons calm down for a moment? A mistake was made and is being corrected without any enduring impact on the people involved. I'm beginning to feel like I need to sue you for reading your stupid post.
Re: (Score:1)
I can't speak for those students
Please don't speak for anyone
That's what the poster said they're already doing, dumbass!
And for fuck's sake, can you sue-happy morons calm down for a moment? A mistake was made and is being corrected without any enduring impact on the people involved. I'm beginning to feel like I need to sue you for reading your stupid post.
Given your posting history, you're obviously an armchair-expert troll with waaaaayyyyy too much time on your hands. I'm sure you have plenty of time to troll others in court too (including wasting everyone's time). Can't wait to see your Trump-like tactics in the courtroom.
You need to calm the fuck down. Also need to STFU already.
Re: (Score:3)
Please don't speak for anyone,
I explicitly said I wasn't, moron.
I'm beginning to feel like I need to sue you for reading your stupid post.
Go ahead troll, I triple dog dare you! [youtube.com]
Lawsuit? (Score:2)
So where's that lawsuit you promised me, moron?
Oh boy (Score:1)
Hey, look, another "educated" person who is literally too stupid to do their job correctly. I'm sure he knows a good amount about agriculture, but that's not enough if you're going to teach it. This kind of behavior would make me question if Mr. Munn actually did the work to earn his own degree. He definitely hasn't kept up on his continued education as new tech has come along, that much is obvious.
Re: (Score:2)
Mumm*. Stupid autocorrect to Munn. Stupid last name.
Re: (Score:2)
Educated does not assure smart. Smart does not assure educated. To actually understand how things work you need both, and some real-world experience on top.
Re: (Score:2)
Yes, but I expected the educated to have some minimum qualifications. For example, if you're hired as a professor, I assume you have some experience in educating or were educated in...educating. And that comes with a basic understanding of what tools teachers/professors have at their disposal.
But I have come to expect most college professors (especially those with PhDs) to have next to zero teaching capabilities making them functionally useless.
Re: (Score:3)
Professors routinely have no education qualifications. It has gotten worse with selection criteria that prefer people who can bring in research money. Education qualifications (and actual research qualifications) are things that prevent you from becoming a professor, because they slow you down and take real time to acquire. Just know the right people and get some good grants, and nobody cares what you can actually do in education and research.
Re: (Score:2)
You can lose the quotes. Or you can keep them, and the next time your nerdy self is asked a question about agriculture that the professor in this story could answer easily and you can't, we can say you're another "educated" person who is literally too stupid to do their job properly, too.
You are missing the point, and your comparison doesn't make sense. His job as a professor is to pass on his knowledge (the topic of expertise is irrelevant). Him attempting to use a completely inappropriate tool for his profession is 100% his own stupid fault related to doing the job appropriately. Don't use tools you don't understand (and that aren't even marketed) for your job when it directly impacts others. There are tools advertised as being able to detect whether text was AI-written. A cursory Google would have to
Re: (Score:2)
You're saying that someone is a moron because they don't understand that a technical product that is marketed as a thing that answers questions, and that appears to answer questions, isn't actually capable of answering those questions, not even the ones someone might think it should know the answer to. (Serious question: why is it unreasonable for a non-technical person to assume that a computer that answers questions wouldn't know what questions it has answered before and how it answered them?)
When it's been all over the news, when the warnings are inside EVERY single chat window, yes. Two or three months ago, I would agree with you that he might not have known. And again, expertise means nothing in this discussion. He's not doing agriculture work, he's teaching, and as a teacher he should know how to do certain things, like basic research. If you can't do that, you don't deserve to be a teacher of any kind.
Re: (Score:2)
their job isn't to answer questions but to string together words that would appear to look like an answer to a question
I think a lot of people really don't realise how literally this is true. There is a long reinforcement learning step when the model is trained to produce answers which people like. As in people are given the input and output and get a yes/no choice on whether they like it.
That's a minimum-wage, or at least not highly paid, job right there, so you don't get armies of experts doing that fine
Somebody needs to be fired... (Score:2)
Because somebody is clearly not very smart or competent.
Re: (Score:2)
Not a problem at Texas A&M, AKA sheepfucker U.
College Station (where it is located) is the only place I was ever in any significant danger in Texas, and I went pretty much everywhere.
Whoops (Score:2)
Not that Texas A&M, it's the one in Commerce.
Still probably sheepfuckers tho
Thankfully not my alma mater (Score:5, Interesting)
As a proud former student of Texas A&M, I nearly banged my head on the table when I read this. Then I realized it was—stick with me here—Texas A&M University–Commerce [wikipedia.org], not Texas A&M University [wikipedia.org], and everything made sense again.
Except that whole thing about the professor teaching rodeo. What the hell is up with that?!
Re: (Score:2)
He isn't a professor (Score:4, Informative)
His title is "Instructor/Judging Team Coordinator,
Agricultural Sciences and Natural Resources"
He does not appear to be a professor, or a PhD.
Re: (Score:2)
Re:He isn't a professor (Score:4, Informative)
His title is "Instructor/Judging Team Coordinator,
Agricultural Sciences and Natural Resources"
He does not appear to be a professor, or a PhD.
Err... Here [tamuc.edu] is his CV, and he has a doctorate from KSU in Animal Behavior and Welfare. You're correct that he isn't a professor, though; he appears to just be an adjunct.
ChatGPT says ... (Score:2)
Re: ChatGPT says ... (Score:2)
Would I LIE?????
Is ChatGPT really Artificial Intelligence? (Score:3)
There Are Better Tools. (Score:2)
My wife takes online courses at Pasadena City College. They use a system called "TurnItIn" that tries to determine whether or not someone plagiarized an essay. It's not perfect either, but the teacher can fine-tune the threshold. Her last essay was flagged as something like 20% plagiarism, but it was a shorter essay, and when I looked, what it flagged as "similarity" was actually insignificant. She ended up with full points.
Maybe the prof should be using that instead.
(Also, I have a friend who is a high school teacher
Re: There Are Better Tools. (Score:2)
Turnitin does not detect plagiarism. It recommends parts of text for manual review. It detects plenty of things that are not anywhere close to plagiarism. Turnitin is a widely abused tool, just like ChatGPT.
He failed at the three times rule! (Score:2)
saying he had tested each paper twice.
Pure laziness, I'm sure! He should have done it three times and calculated the average and median by hand!
Re: (Score:2)
Re: He failed at the three times rule! (Score:2)
ChatGPT has a random element. You often don't get the same output from the same input.
Re: (Score:2)
That was my point; I just wanted to phrase it in a funnier way :P
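The randomness the parent describes comes from sampling: the model produces a probability distribution over possible next tokens and draws from it, rather than always picking the single most likely one. A toy sketch of temperature sampling (not OpenAI's actual code, just the general technique):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Softmax over temperature-scaled logits, then a random draw.
    Lower temperature -> closer to deterministic; higher -> more random."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Same input, repeated calls: you usually get more than one distinct choice.
logits = [2.0, 1.9, 0.5]
draws = {sample_next_token(logits) for _ in range(200)}
print(draws)  # very likely contains at least two different token indices
```

Which is exactly why pasting the same paper in twice proves nothing: two runs can disagree with each other, never mind with reality.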
totally believable really (Score:3)
Let's see if I have it right (Score:2)
A professor who accused his entire class of cheating by getting ChatGPT to do their work was, in fact, cheating by getting ChatGPT to do HIS work?
Will he be getting a failing grade? Particularly since, so far, he is the only one who definitely cheated?
Re:Let's see if I have it right (Score:4, Interesting)
Re: (Score:2)
Certainly not by using ChatGPT, a tool well noted for its ability to tell a believable fiction. I'm not saying I know a foolproof way to do it, but I can say that using the magic 8-ball is NOT the answer.
The best (and far from perfect) method might be actually reading the paper and checking for factual correctness and well-supported arguments. Since ChatGPT cannot actually reason, that will be its weakness.
His first clue, which he ignored, was that ChatGPT claimed ALL of the papers as its work. Even magic 8-bal
Re: (Score:2)
Re: (Score:2)
Instead of reading the papers and following the reasoning to the conclusions, he just chucked it into ChatGPT and asked if they cheated.
Re: (Score:2)
Re: (Score:2)
Clearly none, since everyone 'reported' by ChatGPT got an incomplete, and every student subsequently investigated on appeal has been cleared so far.
If nothing else, the unusually high number of 'cheats' should have tipped the prof off that more investigation was needed before throwing accusations around.
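The base-rate arithmetic makes that point concrete. Even a genuinely good detector (which ChatGPT is not) falsely flags some honest papers, so a 100% hit rate is itself evidence the "detector" is broken. A sketch with assumed, illustrative numbers:

```python
# Assumed numbers for illustration: a class of 30 honest papers checked by a
# hypothetical detector with a 5% false-positive rate.
honest_papers = 30
false_positive_rate = 0.05

expected_false_flags = honest_papers * false_positive_rate
print(expected_false_flags)  # 1.5

# Probability a working 5%-FPR detector flags ALL 30 honest papers:
all_flagged = false_positive_rate ** honest_papers
print(all_flagged)  # astronomically small -- so "everyone cheated" is implausible
```

One or two flags would merit a closer look; flagging the entire class should have prompted him to question the tool, not the students.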
Re: (Score:2)
real danger of AI (Score:2)
ChatGPT will often lie to please its users (Score:2)
every accusation (Score:2)
Is a confession
All we know for sure is that at least one person used ChatGPT to do their work
not much different than the olden days (Score:2)
... when people would get someone who had already taken the course to give them their papers, then copy them over while changing the wording. The answer is to make 80-90% of the grade come from in-class exams, which is mostly what I saw during my undergrad about 20 years ago, apart from computer programming courses that did have heavily weighted 'take home' projects. My classes were small enough (~20-30) that the professor knew each person's style. Or in the bigger intro course we'd program in a lab in front of TAs
"Students are lazy!" crows professor... (Score:2)
... as he proceeds to (dis)prove his point by lazily using ChatGPT (incorrectly) in an attempt to avoid actually doing his own job.
Dang... This isn't even a case of "the pot calling the kettle black" -- it's more like a case of "the pot calling the silverware black."
Better title: incompetent prof phones it in... (Score:2)
or perhaps even better: artificial professor works artificially.
Let's face it: a professor who uses AI to check his students' work [a] does not know his own field, and/or [b] does not know his students, and/or [c] does not know how to properly test and evaluate his students.
It's really just that simple.
It's a total lazy doofus play for a so-called professor to not want to expend the time and effort to properly evaluate his students (what the hell are you thinking if you hate this task and yet CHOOSE to be a p
Oral exams should be used way more, then (Score:2)
Grading of your knowledge can be done much faster. Maybe some feedback or a pep talk can be given to help those nervous about speaking in public.
Maybe the student should do it in front of a "jury" of instructors or TAs so there is no doubt of fair scoring.
Instructors will not have to carry a suitcase of hundreds of papers to read and grade over the weekend.
Obligatory Charles Babbage quote (Score:2)
There's so much confusion about what ChatGPT and such models do and are that people have really dumb expectations.
These things have very little memory: the context window of a GPT-4 session is 8192 tokens, and that's private to the current user.
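That limited window is why the model can't "remember" papers from other sessions: anything beyond the token budget simply isn't in the input at all. A rough sketch of the kind of history trimming a chat client might do (token counts here are approximated by whitespace-split words, which is NOT how real BPE tokenizers count; this is only a sketch):

```python
CONTEXT_BUDGET = 8192  # GPT-4's base context window, in tokens

def rough_token_count(text):
    # Crude proxy: real tokenizers split text into subword pieces, not words.
    return len(text.split())

def trim_history(messages, budget=CONTEXT_BUDGET):
    """Keep the most recent messages that fit the budget; drop the oldest.
    Whatever gets dropped is invisible to the model on the next turn."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = rough_token_count(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["old message " * 5000, "recent question"]
print(trim_history(history, budget=100))  # the oversized old message is dropped
```

So when the professor pasted a paper in and asked "did you write this?", the model had no record of anything outside that one conversation; its "yes" was a guess dressed up as a memory.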