Education AI

Professors Are Now Using AI to Grade Essays. Are There Ethical Concerns? (cnn.com) 102

A professor at Ithaca College runs part of each student's essay through ChatGPT, "asking the AI tool to critique and suggest how to improve the work," reports CNN. (The professor said "The best way to look at AI for grading is as a teaching assistant or research assistant who might do a first pass ... and it does a pretty good job at that.")
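For readers curious what that "first pass" might look like in practice, here is a minimal sketch using OpenAI's Python client. The article doesn't say how the professor actually invokes ChatGPT, so the prompt wording, model choice, and function name below are illustrative assumptions, not the professor's workflow:

    from openai import OpenAI

    # Assumes the OPENAI_API_KEY environment variable is set.
    client = OpenAI()

    def first_pass_critique(essay_text: str) -> str:
        """Ask the model to critique an essay and suggest improvements,
        without assigning a grade -- the 'teaching assistant' role
        described above."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed; any chat-capable model works
            messages=[
                {"role": "system",
                 "content": "You are a teaching assistant. Critique this essay "
                            "and suggest concrete improvements. Do not assign "
                            "a grade."},
                {"role": "user", "content": essay_text},
            ],
        )
        return response.choices[0].message.content

The instructor would then read the essay alongside the critique, treating the model's output as a starting point rather than a verdict.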

And the same professor then requires their class of 15 students to run their drafts through ChatGPT to see where they can make improvements, according to the article: Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism-detection platform Turnitin, found half of college students used AI tools in fall 2023. Fewer faculty members used AI, but the share grew to 22% in fall 2023, up from 9% in spring 2023.

Teachers are turning to AI tools and platforms — such as ChatGPT, Writable, Grammarly and EssayGrader — to assist with grading papers, writing feedback, developing lesson plans and creating assignments. They're also using the burgeoning tools to create quizzes, polls, videos and interactives to "up the ante" for what's expected in the classroom. Students, on the other hand, are leaning on tools such as ChatGPT and Microsoft Copilot — which is built into Word, PowerPoint and other products.

But while some schools have formed policies on how students can or can't use AI for schoolwork, many do not have guidelines for teachers. The practice of using AI for writing feedback or grading assignments also raises ethical considerations. And parents and students who are already spending hundreds of thousands of dollars on tuition may wonder if an endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.

A professor of business ethics at the University of Virginia "suggested teachers use AI to look at certain metrics — such as structure, language use and grammar — and give a numerical score on those figures," according to the article. ("But teachers should then grade students' work themselves when looking for novelty, creativity and depth of insight.")
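A hedged sketch of that division of labor: the model is asked only for numeric scores on the surface metrics named above, returned as JSON, while novelty, creativity, and depth are left entirely to the human grader. The metric names and 1-10 scale are assumptions, not the professor's actual rubric:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def surface_metric_scores(essay_text: str) -> dict:
        """Score only structure, language use, and grammar; ideas and
        insight are deliberately out of scope for the model."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            response_format={"type": "json_object"},  # constrain output to JSON
            messages=[
                {"role": "system",
                 "content": "Score the essay from 1 to 10 on exactly these "
                            "metrics: structure, language_use, grammar. Reply "
                            "with a JSON object such as "
                            '{"structure": 7, "language_use": 6, "grammar": 9}. '
                            "Do not comment on ideas, novelty, or depth."},
                {"role": "user", "content": essay_text},
            ],
        )
        return json.loads(response.choices[0].message.content)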

But a writer's workshop teacher at the University of Lynchburg in Virginia "also sees uploading a student's work to ChatGPT as a 'huge ethical consideration' and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms..."

Even the Ithaca professor acknowledged to CNN that "If teachers use it solely to grade, and the students are using it solely to produce a final product, it's not going to work."
  • Lol. (Score:5, Insightful)

    by waspleg ( 316038 ) on Sunday April 07, 2024 @02:38PM (#64376720) Journal

    Assuming AI knows best is the first error. Paying some asshole as much as a house to teach you something, only for them to use some shitty website, means you're getting scammed on your "education" too - enjoy being a debt slave.

    • Using an AI is better than the old-fashioned way of tossing the essays down a staircase and grading them based on which step they landed on.
      • by Anonymous Coward

        > tossing the essays down a staircase and grading them based on which step they landed on.

        I was always skeptical of subjective courses like liberal arts. But this is starting to make sense.

        • In all seriousness this is a good development. If people are using AI to grade essays, you can imagine a future where the coursework is given to you as a series of tasks and the university can get your essay and determine your level of subject-matter comprehension without the house-per-year expense of a professor. Lectures can be videos, assignments can be automated. The AI can, and if used will, undoubtedly get better and better at grading. Imagine if your essay was trained on how well you did on subsequent
          • by Calydor ( 739835 )

            In all seriousness this is a good development. If people are using AI to write essays, you can imagine a future where the coursework given to you as a series of tasks (...) can be automated. The AI can, and if used will, undoubtedly get better and better at writing. Imagine if your essay was trained on how well you did on asking the AI to write it, even extending into your career. Successful professionals will have their AI-written essays ranked higher than those who failed for future students, or some other

    • AI grammar checkers are very good.

      AI isn't as good at checking the overall structure and consistency of an essay but still does a better job than a bored and overworked TA.

      The output of any AI system should not be blindly trusted, but it is plenty good enough for a first pass.

      • The thing is, checking for local ("surface") errors like grammar, spelling, punctuation, etc., should be the last pass, not the first. Why correct the presentation of the ideas before you've checked the ideas themselves?

        Additionally, local/surface errors typically contribute something like less than 5% of the overall submission score. That's alongside following citation & bibliography conventions. It's really not that important & it doesn't take a knowledgeable marker very long to identify strengths & weaknesses in a submission.
        • by NFN_NLN ( 633283 )

          > The thing is, checking for local ("surface") errors like grammar, spelling, punctuation, etc., should be the last pass, not the first.

          Honestly, if you're too stupid to run your paper through a similar AI to the one the professor would be using, you don't have what it takes.

        • by ShanghaiBill ( 739463 ) on Sunday April 07, 2024 @06:12PM (#64377056)

          The thing is, checking for local ("surface") errors like grammar, spelling, punctuation, etc., should be the last pass

          It is the easiest pass, and if the student didn't even bother to check their grammar, they don't deserve a good grade.

          Why correct the presentation of the ideas before you've checked the ideas themselves?

          Because atrocious grammar often makes the paper unreadable.

          it doesn't take a knowledgeable marker very long to identify strengths & weaknesses in a submission

          You'd be surprised how little time a typical TA spends grading a paper. If they're grading 100 papers in an hour, that's 36 seconds per paper.

          They'll often skim for a few keywords, maybe read the last few sentences to see the conclusion, and that's it

          Some don't even do that. In many classes, the grades for homework essays are either "turned it in" or "didn't turn it in".

          • You'd be surprised how little time a typical TA spends grading a paper. If they're grading 100 papers in an hour, that's 36 seconds per paper.

            I was taught the stairs method. You simply throw the papers down the stairs, and grade based on how far they go.

          • Re: (Score:2, Insightful)

            Something tells me you don't have much experience of marking & giving feedback on students' writing or if you do, you give terrible feedback. Don't be surprised if your students don't read it & turn to "other means" of submitting higher-scoring writing.
          • The thing is, checking for local ("surface") errors like grammar, spelling, punctuation, etc., should be the last pass

            It is the easiest pass, and if the student didn't even bother to check their grammar, they don't deserve a good grade.

            IMO, that makes sense only in a class where the topic is grammar, spelling, and punctuation. In, say, a philosophy course, the point is to evaluate the student's grasp of the philosophical concepts being covered, and leeway is given for deficiencies in language skills -- especially considering the multinational nature of higher education. Presuming that a pass with a software checker will catch all errors is a faulty presumption.

            Liberal Arts disciplines are different from the sciences. I would agree 100%

            • Cut 'em some slack.

              Only after they've matured to a suitable level of "gaminess". Which, in my area, is determined by the head coming off and the nicely ripe body falling to the floor.

      • They would still need to read it.

        Will they? Probably not, and if they do, they'll use the AI as the basis, biasing the grading.

        Besides that, the students themselves will run it through the same thing, either by getting access to the same model or by giving a fiver to the TA.

        Now if the essays don't matter because they're just literal filler on a BS subjective subject, and this is okay, then it's a separate problem. In all cases, if higher education is based on this and costs tens of thousands a year, it's a

      • Back in my day (many many years ago) we were encouraged to use Grammarly to prevent spelling errors.

        I think Grammarly can do much more. It would be kind of interesting to see what it does. I wouldn't assume that Grammarly's score correlates with the grade on any technical subject, for pretty much anything ... I could see many math and engineering papers failing miserably.

        Also, I do have a problem with Turnitin getting a license to do anything with my work. I imagine today's students are just screwed over.

      • But here is the problem: even as a first pass it is bad. Suppose we do a first pass and somebody has rigged their paper to game the system. If we see a B or even an A and assume the system's assessment is correct, then we have a false negative. If this happens often enough, what then? How can you trust a system like that? You have to read each and every one of the papers, which puts us right back at square one.

      • Part of the cost of undergrad education is meant to subsidize grad education by paying for TAs. If TAs are replaced with AI, grad education becomes financially non-viable; undergrad tuition also needs to be reduced, rather than the professors and university just pocketing the savings as excess profit.
    • Education in the US was a scam long before Artificial Stupidity came along. For quite some time one has been better off learning a trade and being debt-free, rather than going into debt for a degree that most probably will not pay for itself. And that's even before the horror scenarios of the lenders often spiralling that debt into bottomless pits of lifetime servitude.

      An important part of that scam, other than the fact that universities have turned into vehicles for manufacturing student debt, is that the money in the universities has also for quite some time now been increasingly going into administration, not faculty.

      • by gweihir ( 88907 )

        An important part of that scam, other than the fact that universities have turned into vehicles for manufacturing student debt, is that the money in the universities has also for quite some time now been increasingly going into administration, not faculty.

        Essentially late-stage enshittification, academic version. If you stop understanding that your teachers and researchers are your core assets as an academic institution, everything turns to crap.

      • one has been better off learning a trade and being debt-free

        Unless you are paying $1.2 million for that college degree, then no, there is still a huge earnings premium for earning a college degree (or $800,000 vs associate's degree). This finding is persistent and consistent and the gap has grown in recent years (see, for example: https://www.forbes.com/sites/m... [forbes.com]).

        There's absolutely nothing wrong with going into the trades and working for a living. The work itself can also be really interesting and fun. But the big money in the trades comes when you own an operat

          That 1.2 million, when spread across a career in which you will work 60 hours a week but only be paid for 40, doesn't really amount to much. When you consider the toll that such a lifestyle will take on your health, you won't even break even.

          A 60- vs 40-hour work week is 1,000 unpaid hours every year. Over the course of a 40-year career, that's 40k hours, or about $30/hour. Which means that any blue collar worker making $20/hour or more will actually be making more per hour whenever they work overtime.
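          The arithmetic holds up; a quick sanity check in Python (the dollar figure comes from this thread, and the 50-week working year is an assumption):

              premium = 1_200_000        # earnings premium cited upthread, in dollars
              extra_hours_per_week = 20  # a 60-hour week paid as 40
              weeks_per_year = 50        # assumption: two weeks off per year
              career_years = 40

              unpaid_hours = extra_hours_per_week * weeks_per_year * career_years
              print(unpaid_hours)            # 40000
              print(premium / unpaid_hours)  # 30.0 dollars per unpaid hour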

      • by ranton ( 36917 )

        For quite some time one has been better off learning a trade and being debt-free, rather than going into debt for a degree that most probably will not pay for itself.

        This isn't true at all. A college education still provides a positive ROI for most people who go to college. But it greatly depends on your major and aptitude. Research by The Foundation for Research on Equal Opportunity shows the median Engineering graduate fully recovers the costs of college within 4 years of graduation. However, the median Visual Arts and Music graduate never recovers the cost of college. So if you are going to go to college, you either need to choose and complete a lucrative major and/or

    • by vlad30 ( 44644 )
      First question regarding every essay

      Is this essay an output the AI produced?

  • by xack ( 5304745 ) on Sunday April 07, 2024 @02:42PM (#64376724)
    Aka Human Intelligence. I'd expect a human to grade my work. Any university using AI for marking work should lose its accreditation and have its students' loans refunded. Just as a student can be expelled for citing AI in their work (worse than citing Wikipedia!), so-called "professors" relying on AI are huge idiots.
    • Sounds silly to me. One side of the equation is supposed to be learning a discipline. The other is doing a job. These are not equivalent.

      There's a reason that exams are closed book when the rest of your life is open book.

      • by NFN_NLN ( 633283 ) on Sunday April 07, 2024 @04:52PM (#64376938)

        > There's a reason that exams are closed book when the rest of your life is open book.

        When you're presenting to a customer and they ask a question, you make sure to whip out your book and start thumbing through the pages for the answer. Customers love that.

        • I know engineers have done exactly that ...

          What the code says is a major preoccupation for health and safety people and inspectors (government, UL, electrical, etc.).

        • At least all the software developers I know don't ever google anything when doing their work. And there are only customer-facing jobs, where the customer is constantly following what you do. Sarcasm aside, often the point of education is to make you remember what concepts there are and how to apply them, but on the job you won't usually need everything you have been taught. So you will know which reference you're after, and with a small refresher can apply it. And the things that you'll need frequently you will
        • > There's a reason that exams are closed book when the rest of your life is open book.

          When you're presenting to a customer and they ask a question, you make sure to whip out your book and start thumbing through the pages for the answer. Customers love that.

          Depends.

          I do that all the time, in a way, in customer meetings. "I don't know, off the top of my head, but I can look into that and get back to you."

          Dumb customers might not like it, but smart ones appreciate the honesty. It's better than just making stuff up and getting caught.

        • In the 90s we did. Now we just look it up on our laptop, which we're already doing the customer presentation from. If I have to be subtle, I can look on my phone too.

          We're not in 1989 anymore. We don't have to work like a cliche 80s corporate villain who fires the main character because they didn't print the right number of TPS reports.
        • by mysidia ( 191772 )

          When you're presenting to a customer and they ask a question, you make sure to whip out your book

          Customers will in fact appreciate that you take the care to confirm the information they are asking about, instead of whipping out a response that may be wrong or in error.

          Have you not heard of Powerpoint? Even sales presentations to customers are open book. Of course they are. If the question's answer is not clear from the Presentation or pamphlet; the presenter may even very well call one of their co-wo

    • by Bahbus ( 1180627 )

      The only problem with citing Wikipedia is that it's already a collection of various citations, so you'd be stupid to cite Wikipedia directly instead of the source that section of Wikipedia is based on.

      Other schools have useless AI policies that don't work. UCF, as an example, has strict no-AI policies. They'll run your work through AI detection tools, which don't work by the way, and if they get a positive result, you're expelled. If the tool produced a false positive, tough luck, still expelled. Have the stu

      • A friend of mine who teaches at a local uni noticed a few other lecturers trying this approach with "AI detectors". Knowing that they are somewhat unreliable, he intervened and they came up with a fairly good approach.

        Essentially, using a pair of them, and if BOTH claim it's a plagiarized or AI-written essay, then the student is invited to do a "defence" of the essay. Essentially they are interrogated on the subject of the essay. If they understood it and can demonstrate they have the knowledge to write that essay,
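        Requiring both detectors to agree is the right instinct: if they erred independently with false-positive rates p and q, only p*q of honest essays would ever reach the oral defence stage. A toy calculation (the 10% rates are illustrative, and independence is a strong assumption, since detectors trained on similar data tend to make correlated mistakes):

            def combined_false_positive_rate(fpr_a: float, fpr_b: float) -> float:
                """False-positive rate when a student is accused only if BOTH
                detectors flag the essay, assuming independent errors."""
                return fpr_a * fpr_b

            # Two detectors that each wrongly flag 10% of honest essays:
            print(combined_false_positive_rate(0.10, 0.10))  # 0.01 -> 1% face a defence

        And the defence itself then catches that remaining 1%, since an honest author can demonstrate the knowledge behind the essay.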

    • Huh? I don't know anyone who goes to university for the purpose of acquiring intelligence from humans. People go to university to acquire knowledge that is ideally suited for them to have a successful career. Who cares what grades my work? I care that my work is graded accurately and that I am taught to the maximum of my learning capability.

    • by gweihir ( 88907 )

      Indeed. Although citing Wikipedia is entirely unproblematic in most STEM areas. But what I expect from an academic teacher is grading and feedback based on actual insight. As an academic teacher, I expect myself to be able to deliver that. And one of the most important sources to know how to improve your lecture is seeing what those taught do with it. If you are cutting yourself off from that then you do not understand your job or have stopped caring.

    • Agree. AI is good for mechanical stuff, not human stuff. Fire him.
      HI (Human Intelligence) is a new term we need to throw around and get into the vernacular.
      And the Turing test is BS.
    • by Tom ( 822 )

      Aka Human Intelligence. I'd expect a human to grade my work.

      Agreed.

      What if he uses a tool to do that? Where is the line? wc to check if you satisfied the word count requirement? A spell-checker? An AI?

      Assuming that the actual grading is still done by a human and AI is just one of several tools used in the process?

  • by chrism238 ( 657741 ) on Sunday April 07, 2024 @02:54PM (#64376740)
    From the article summary, the quote: "...sees uploading a student's work to ChatGPT as a 'huge ethical consideration' and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms..." Yet TurnitIn uses uploaded student essays to populate its database, access to TurnitIn is available for a licence fee, and students are *required* to submit their work to TurnitIn to receive their grades.
    • by godrik ( 1287354 )

      TurnitIn has proper FERPA protections as they are a subcontractor for the universities. The rights and permissions given to the company are carefully negotiated.

      None of that happens with ChatGPT as far as I know.

      • FERPA protections mean little outside of the US.
      • Oh, they made a dealeo with the universities? Well that makes it all well and ethical, I'm sure.
        • Politicians: "Oh, the Student Bit [wikipedia.org] is set in the row! They can't use that data from the database!"
          Reality / Literally Any CEO: "Just modify the query to ignore that bit and fake the logs. Not like they expect us to be able to do it anyway for "National Security" reasons....."
      • It isn't so much a FERPA issue as it is a copyright issue....

  • AI grading essays that students used AI to write, to graduate so they can get jobs where they'll use AI to replace workers

  • Solved problem (Score:5, Interesting)

    by Anubis IV ( 1279820 ) on Sunday April 07, 2024 @02:57PM (#64376752)

    AI tools like ChatGPT use such entries to train their algorithms...

    OpenAI already has licenses available wherein they do not use your prompts for training purposes. If you’re working at an institution, they should already be using the appropriate license.

    • OpenAI already has licenses available wherein they do not use your prompts for training purposes.

      You have a lot of faith in the integrity of that company. I have no idea what makes you believe this company has integrity, but ok then. Enjoy your trust. Oh my.

  • Teachers are using AI to grade essays that students likely used AI to generate.

    Weird porn intro, but ok.
  • by TheNameOfNick ( 7286618 ) on Sunday April 07, 2024 @03:00PM (#64376758)

    Way to prove you're redundant.

  • by twoallbeefpatties ( 615632 ) on Sunday April 07, 2024 @03:25PM (#64376786)

    There's a post on the Teachers subreddit where a teacher brags that they just gave their entire class a 0% grade for cheating on a paper. The teacher says they fed all of the kids' papers through an AI program called TurnItIn, and the AI confirmed that most of the students' papers had similarities to each other and had AI-generated text. https://www.reddit.com/r/Teachers/comments/1bwojmm/kids_think_chatgpt_is_going_to_save_them_turnitin/ [reddit.com]

    In the replies, someone said they tested TurnItIn by submitting a paper that they wrote entirely on their own; TurnItIn said it was 94% written by AI. A college student claims that all of their papers were declared 50%–90% written by AI, except for when they submitted an AI-generated paper as a test and it came back as 25% written by AI.

    Whenever someone claims that AI is about to replace a job function, ask for the rate of false positives.
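    To see why the false-positive rate is the right question, here is a back-of-the-envelope Bayes calculation (all three rates below are illustrative assumptions, not measurements):

        def share_of_flags_that_are_honest(fpr: float, tpr: float, cheat_rate: float) -> float:
            """Of all flagged papers, the fraction that were written honestly.

            fpr: false-positive rate (honest papers flagged as AI)
            tpr: true-positive rate (AI-written papers correctly flagged)
            cheat_rate: base rate of AI-written papers in the class
            """
            flagged_honest = fpr * (1 - cheat_rate)
            flagged_cheats = tpr * cheat_rate
            return flagged_honest / (flagged_honest + flagged_cheats)

        # 10% FPR, 80% TPR, one student in five cheating:
        print(share_of_flags_that_are_honest(0.10, 0.80, 0.20))  # 0.333...

    With those numbers, a third of all accusations land on honest students - which is exactly what the TurnItIn anecdotes above suggest.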

    • by gweihir ( 88907 )

      There's a post on the Teachers subreddit where a teacher brags that they just gave their entire class a 0% grade for cheating on a paper.

      So teachers brag that they are crap at their job? These people are not teachers. They failed to make it clear to their class why the subject matter was important and why it was important to do the work themselves to learn how to do it.

      Whenever someone claims that AI is about to replace a job function, ask for the rate of false positives.

      Indeed. There are desk-jobs that are low-skill, high-error-rate, no-insight-required and these may be subject to replacement by Artificial Incompetence. Other jobs are not.

    • Whenever someone claims that AI is about to replace a job function, ask for the rate of false positives.

      Why? Ignorant people don't care and it is clear as day that our leaders are ignorant... so what does it gain us to realize that everything is fucked and that there is nothing we can do about it?

  • by Artem S. Tashkinov ( 764309 ) on Sunday April 07, 2024 @03:29PM (#64376792) Homepage
    AI grades itself.
  • by Midnight Thunder ( 17205 ) on Sunday April 07, 2024 @03:46PM (#64376820) Homepage Journal

    So we will have students submitting AI generated essays and professors checking them with AI.

    At what point does everyone make themselves irrelevant?

    • They already are. If it wasn't for the fact that the owner class lacks their own AI robot army to defend themselves with, we'd all be in terrafoam housing / 6 feet under by now.
  • by Baron_Yam ( 643147 ) on Sunday April 07, 2024 @04:02PM (#64376844)

    Having an AI grade your work is fraud, and should be pursued in court. You're paying more than enough for feedback from a skilled human professional.

    If they want to cut your tuition 95%, then fine. Otherwise, not so much.

    • You're paying more than enough for feedback from a skilled human professional.

      Oh, but you agreed to it in section 2446567 of the 200 page student enrollment agreement written in Ancient Egyptian. Didn't you read that contract in full before signing it? What do you mean that contract only goes up to page 24 and is written in English? We updated it three days ago and it took effect the second you logged into your class this morning. We can't update it? What do you mean? The original contract gave us the right to update it at our sole discretion and without notification to you.

      Having an AI grade your work is fraud, and should be pursued in court.

      Class a

      • by chefren ( 17219 )

        Oh, but you agreed to it in section 2446567 of the 200 page student enrollment agreement written in Ancient Egyptian.

        Right, and when you are finally able to read and understand it, you graduate. Ingenious!

    • You pay for school so that they can teach you. Who or what evaluates whether you learned something is less relevant.
      • No, you pay to have that education certified by a reputable organization so others will recognize it.

        You can teach yourself whatever you want with more effort and probably a lot less money, but typically unless you already have a proven record you're going to want a diploma or degree to present to a potential employer. That is what you are ultimately paying for; the education you get is to maintain the institution's credibility so that piece of paper has value.

  • Are the students paying for an AI to teach them, or an AI service? It can be both, but only when used together explicitly.
  • Do they at least remove the students' names?
  • I know former college students who were subjected to previous attempts at automated grading of assignments.
    They learned to game the system, to produce the absolute minimum required to get a good grade from the program.
    I can't believe that it would be different here.

    I think in general, it should not be an issue that someone uses modern technology to make their job easier: whether it is a student or a teacher.
    But the user has to understand the limits of the tool, and not rely on it functioning outside those limits.

    • Long ago we had public high school driving instruction in a simulator. Some of us discovered empirically that one could get a passing score by simply holding the brake pedal down for the duration of the exercise. Word got around quickly.
  • If the AI that the student uses to write the paper is the same AI that the teacher uses to grade the paper, does it get a higher score?
  • And they are not "ethical", but very practical. Essay grading is not about finding a grade. It is about understanding what the strengths and weaknesses of each essay are and about giving the people that wrote the essays pointers and help. Artificial Incompetence cannot do any of those things. Apparently, these "professors" do not understand what it means to teach.

    As an academic teacher, I am deeply offended by such shoddy, lazy and outright dumb practices.

  • Do students get downgraded if they put copyright notices on their assignments?
  • Did people think professors graded student papers themselves? Most of the time it was the TAs that did the first pass, and professors would at best do the final edits.

    If done properly, this could be a net positive for the students. In other words, AI does the draft of the work, professor looks at the high level structure.

    (But of course it's a negative hit on the TAs' own advancement, as they would be losing out on this experience.)

    • Back in the day I was fortunate to have first-rate graders for student homework in an advanced Maxwell's Equations lecture. I assured myself they could solve difficult book problems. Spreadsheet-type student solutions were simply rejected. Of course I graded all tests and quizzes, and questionable solutions got face-to-face attention. Can't imagine it's any different nowadays.
  • If profs used AI to both write and grade exams, as well as solicit feedback from AI and students on both tasks, maybe exams would improve. Of course profs should be able to spend less time on the boring parts of teaching (esp. exams and term papers) while becoming more inventive at it. Ideally they could also use AI to improve the broader learning experience itself, since jettisoning the lecture model of teaching is decades overdue, not to mention multiple choice and true/false tests.

  • What could go wrong?

    Seriously, teaching is the most human-intensive interaction you can have - it is what makes us human. And we want to go and automate it? This is a disaster in the making. Only a few generations of this and everyone's knowledge will be worse than a chatbot's. This is taking human innovation away from the future.

    Congratulations everyone, we have witnessed peak human and can all die knowing we lived at the peak of humanity.
  • After reading a lot of similar stuff, people kinda turn into robots, especially if what they are reading is bad or uninteresting.
    It's plausible that actual robots can be more fair and consistent.
    Of course, today's AI is very immature and may do a really bad job.

  • AI used right (Score:5, Insightful)

    by Tom ( 822 ) on Monday April 08, 2024 @03:23AM (#64377554) Homepage Journal

    Don't understand the hate. This is actually AI being used in the right way. As an assistant. Not to replace a human, but to help with the repetitive ordinary tasks that are part of the job.

    My own experience is similar. When I ask AI to generate some text for a purpose, the result is meh. But as a text critic or to get suggestions for improvements, as a proofreader, it's pretty good.

    What should happen is that we don't take an AI output and just use it as-is, but use it as an input for a human who does the actual job. AI isn't magic, it's just a tool. Nobody complains that a lever enables us to exert more force than our muscles alone could.

    • The problem of course is that the pressures of capitalism reward the teacher for getting more done in less time, and thus they won't use AI as an assistant but rather take everything verbatim.

      • by Tom ( 822 )

        The pressures of capitalism can also work to force good quality for the money spent on education. Not everything in capitalism is about the lowest price. I know that cheap-cheap-cheap has become a kind of mantra, but that's not god-given.

  • But a writer's workshop teacher at the University of Lynchburg in Virginia "also sees uploading a student's work to ChatGPT as a 'huge ethical consideration' and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms..."

    I appreciate and agree with the whole statement, but I want to focus on the last sentence.

    Suppose a class, for whatever reason, even just randomness, has a bunch of bad students, or bad teaching, or just writes shitty papers for no special reason. Whatever the issue, they write a class full of C, D, and E grade papers. If the teacher grades them, there is opportunity for the students to learn and improve. But if the AI does the grading, and does not grade correctly in the first place, e.g. has a pre-eval

    • by Veretax ( 872660 )
      What if they gamify it so that you earn more points on various submissions? You could see the bigger papers constructed over time, and earn a better score.
  • Reminds me of this: https://edsurge.imgix.net/uplo... [imgix.net]
  • AI does some nice stuff, but it is not (yet, anyway) reliable. If a prof asks an AI for some specifics that can then be checked? No problem. Trusting the AI's results directly? No.

    I use an AI frequently, asking it to re-work a draft text that I want to send. Sometimes it finds better ways to say something. Sometimes it misunderstands the text. Sometimes it introduces errors. In the end, it's not quite as useful as asking another person for an opinion, but it's a lot more convenient. And if you asked someo

  • Students write their essays using AI, profs grade those essays using AI.

    That's like buying two chess computers so they can play each other and you have time for better things.

  • Fine, just allow the students to pay their tuition with artificial money, for classes where this is done. Problem solved!
  • Of course there are: allowing the thing that wrote the paper to grade itself?

  • 1) the teachers that would be fine with this are probably phoning that shit in anyway; would we REALLY expect such a teacher - when told they can't use AI - to scrupulously follow the instructions and laboriously/ethically go through the papers carefully themselves instead?

    2) re point 1, let's maybe revisit that whole 'tenure' bullshit and avoidance of merit pay raises/not-raises for teachers in general as some sacrosanct profession? I know if I hand a chunk of my job to a machine, I need to understand tha

  • ChatGPT might be less biased than your typical humanities prof in grading essays. The prof knows who submitted the essay.

  • Putting aside the overly broad terms that students have to agree to, is using AI to check students' work a form of moral or ethical IP violation? Feeding a student's work into an AI to check it could be considered plagiarizing.

    Plagiarizing: to steal and pass off (the ideas or words of another) as one's own : use (another's production) without crediting the source

    from: https://www.merriam-webster.co... [merriam-webster.com].

    If the AI uses that work as a form of “learning” and the student didn't explicitly agree to that beforehand, and if the prof doesn't credit the work, effectively they stole it, and misappropriated that work. Again, I know t

  • Students are paying for a trained human to do this.

    And using the chatbot, instead of grad students?

    Oh, right, first go from 70% tenured in the 1970s to 30% now, and grad student instructors with *ZERO* job security - let's get rid of them, so we can pay the president and the coach more. Who cares about grad students?

  • The AI would ensure each paper gets evaluated equally, without any unintended bias by the professor. The AI could evaluate the papers poorly or evaluate them well, but at least you could say it would evaluate the papers equally, regardless of what it uses as a reference for grading the papers.
  • When you submit your longhand essay, which didn't involve AI at all, the OCR engine is going to introduce a lot of errors once the TA has scanned it in. That's on top of the errors from the TA not dusting the platen of the scanner down before feeding your pages in.

    Use both sides of the paper, double-spaced so you can see the other side of the page from the side you're scanning.

    • Even in the 1980's I remember that some professors would not accept longhand and college students had to get their hands on a typewriter. So high school graduates would get a suitcase "portable" electric typewriter as a gift. If not one of those fancier and more expensive word processors.
      I made a few bucks back in the day as a middle schooler typing out term papers for college students. My version of the lemonade stand.

      • Never encountered that in the 1980s. Though I did have a friend who, like you, made a little money on the side typing up people's theses for them. And inserting up to 11 "Judas pages" with deliberate erors [sic] on them, to encourage the recipient to do their proof reading carefully. (He'd say there were a dozen "Judas pages", not 11.)

        What you describe would have been fought by the SRC (Student's Representative Council : trade union, in effect) as being biased against poorer students, and in favour of rich

  • The only concern I have is they should quit bitching about students who use ChatGPT to write essays if they're going to use AI to grade them.

    It's fucking hypocritical.
