Education AI

ChatGPT Outperforms Undergrads In Intro-Level Courses, Falls Short Later (arstechnica.com) 93

Peter Scarfe, a researcher at the University of Reading's School of Psychology and Clinical Language Sciences, conducted an experiment testing the vulnerability of their examination system to AI-generated work. Using ChatGPT-4, Scarfe's team submitted over 30 AI-generated answers across multiple undergraduate psychology modules, finding that 94 percent of these submissions went undetected and nearly 84 percent received higher grades than human counterparts. The findings have been published in the journal PLOS One. Ars Technica reports: Scarfe's team submitted AI-generated work in five undergraduate modules, covering classes needed during all three years of study for a bachelor's degree in psychology. The assignments were either 200-word answers to short questions or more elaborate essays, roughly 1,500 words long. "The markers of the exams didn't know about the experiment. In a way, participants in the study didn't know they were participating in the study, but we've got necessary permissions to go ahead with that," Scarfe claims. Shorter submissions were prepared simply by copy-pasting the examination questions into ChatGPT-4 along with a prompt to keep the answer under 160 words. The essays were solicited the same way, but the required word count was increased to 2,000. Setting the limits this way, Scarfe's team could get ChatGPT-4 to produce content close enough to the required length. "The idea was to submit those answers without any editing at all, apart from the essays, where we applied minimal formatting," says Scarfe.
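(A minimal sketch of what that pipeline amounts to, assuming the OpenAI Python SDK; the prompt wording and model identifier below are illustrative, not the study's exact ones:)

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    exam_question = "..."  # pasted verbatim from the exam paper

    # Hypothetical prompt: per the report, the team simply asked for an
    # answer under a word limit and submitted the output unedited.
    prompt = f"Answer the following exam question in no more than 160 words:\n{exam_question}"

    response = client.chat.completions.create(
        model="gpt-4",  # stand-in for the "ChatGPT-4" used in the study
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)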

Overall, Scarfe and his colleagues slipped 63 AI-generated submissions into the examination system. Even with no editing or efforts to hide the AI usage, 94 percent of those went undetected, and nearly 84 percent got better grades (roughly half a grade better) than a randomly selected group of students who took the same exam. "We did a series of debriefing meetings with people marking those exams and they were quite surprised," says Scarfe. Part of the reason they were surprised was that most of those AI submissions that were detected did not end up flagged because they were too repetitive or robotic -- they got flagged because they were too good.

Out of five modules where Scarfe's team submitted AI work, there was one where it did not receive better grades than human students: the final module taken by students just before they left the university. "Large language models can emulate human critical thinking, analysis, and integration of knowledge drawn from different sources to a limited extent. In their last year at the university, students are expected to provide deeper insights and use more elaborate analytical skills. The AI isn't very good at that, which is why students fared better," Scarfe explained. All those good grades ChatGPT-4 got were in the first- and second-year exams, where the questions were easier. "But the AI is constantly improving, so it's likely going to score better in those advanced assignments in the future. And since AI is becoming part of our lives and we don't really have the means to detect AI cheating, at some point we are going to have to integrate it into our education system," argues Scarfe. He said the role of a modern university is to prepare the students for their professional careers, and the reality is they are going to use various AI tools after graduation. So, they'd be better off knowing how to do it properly.

  • Easy solution (Score:5, Insightful)

    by Viol8 ( 599362 ) on Saturday June 29, 2024 @05:11AM (#64587315) Homepage

In fact one invented by the Victorians and still used on proper courses (i.e. STEM) - put the students in an invigilated exam room where they have to do the exam on the spot.

Coursework has always been a cheat's paradise, and I myself colluded with friends on my course. I never did understand what it was supposed to prove other than your ability to write or type.

    • Re:Easy solution (Score:5, Interesting)

      by VeryFluffyBunny ( 5037285 ) on Saturday June 29, 2024 @06:10AM (#64587357)
      I teach students how to write syntheses, i.e. taking info from different sources & writing a coherent, cohesive, accurate, & appropriate response to a question or prompt. Students learn a great deal about the subject matter from this process. It's a valuable part of a good curriculum.

      However, if you want to assess what somebody has learnt about the subject matter, asking them to do this is not an optimal format. IMHO, there's no substitute for timed, invigilated tests. That alone informs students that they at least have to remember what they're studying but through the process they hopefully learn that it's easier to remember stuff that they understand & that connecting the concepts & ideas together into schemas tends to work pretty well. It also transfers better into practical applications, i.e. it's easier to develop a working knowledge of the subject matter.
      • by Moryath ( 553296 )
        "timed, invigilated tests" are a measure of memorization regurgitation, not synthesis and application.
        • by Viol8 ( 599362 )

Yeah, maybe you can get away with that in history or English; good luck just regurgitating on a maths or physics paper where you're expected to actually solve stuff.

Not necessarily history either. The tests should not simply ask on what day X happened; they should ask something like what factors influenced the decision makers behind X. The former and the latter represent the difference between an elementary-school social studies class and an upper-level history class. I'd hope a college-level class is operating in the "history" manner.

I expect an English class could operate at a higher, non-regurgitative level too.
        • You obviously didn't process what I wrote: The process I described requires synthesis. I design this stuff for a living. I know what I'm talking about.
In fact one invented by the Victorians and still used on proper courses (i.e. STEM) - put the students in an invigilated exam room where they have to do the exam on the spot.

Coursework has always been a cheat's paradise, and I myself colluded with friends on my course. I never did understand what it was supposed to prove other than your ability to write or type.

Most of my professors encouraged collusion on projects and assignments. They expected each student to learn the material and not just copy the others' work, but working together to understand a problem and come up with a solution was acceptable. Perhaps the only exceptions were tests, and when the prof said not to work together, such as on homework assignments like an essay or simple problems in courses such as engineering dynamics.

It's better to have the students collaborate amongst themselves, shore up the different gaps in different individuals, and effectively teach themselves to a degree. It's teamwork: people with complementary skills benefiting from each other, learning from each other.

        This is far better than having lone students bother you during office hours with questions. :-)
    • One of the lifeguards I chat with at my school is in his capstone Masters project course/team project. The focus of the project is writing serial prompts to reduce the "word salad" generated by LLMs when trying to generate longer essays on technical subjects. Their customer is a post-doc working on a pub.

      This is never going "back in the bottle" and will infiltrate even most STEM courses, eventually.

      Professors are not incentivized to teach, and this will be an easy way out for schools/departments, the prof

    • by mspohr ( 589790 )

      Sounds like you benefited from group work.
      It's a good way to learn.

    • Good coursework is not where one looks up the correct answer in the book and copies the answer. It is when you take the lessons learned from the book/lecture and apply them to a problem not identical to those in the book/lecture. The application of knowledge helps set the knowledge in the student's mind. It also provides feedback to the student as to whether they are progressing satisfactorily.

      Yes one could cheat, but that undermines one's learning.

A collaborative effort is not necessarily cheating.
  • Are we surprised? (Score:4, Interesting)

    by Mr. Dollar Ton ( 5495648 ) on Saturday June 29, 2024 @05:26AM (#64587325)

NLPs are random string generators, modded so that the strings they generate resemble other strings, the ones the thing learned on. So maybe they work well with rote, but in general they don't work at all when one needs to apply reason.

The problem is what to do with the ever-increasing number of people who don't understand how "AI" works, wrongly believe they have a true "intelligence" talking to them from the chat prompt, and take whatever comes out of it at face value. I see many, for example, who are writing "code" by asking the chat-gpt random string generators to give them "code samples" for every line they have to write. Often a question that is half a page long is composed just for a one-liner like os.chdir(path).

Just yesterday one of my students came to me with some i2c commands that "didn't work". It was easy to see why: they were simply wrong. We went over the datasheet briefly, the student worked a bit to build the proper commands and, unsurprisingly, all was fine. What was curious, however, was how this person could have come up with the strings they showed me. I mean, it is a 5-page explanation of several simple bytestrings, not rocket science.

Turns out the person didn't read the datasheet at all, but asked a chat service to generate the code to drive the device. The chat complied, providing randomly generated "answers" for what to feed the device. The answers looked somewhat like a Python script using the python smbus interface, except, of course, the actual i2c commands were completely bonkers and the API was used incorrectly in some cases.
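For reference, a correct transaction via the python smbus interface is only a few lines once you've actually read the datasheet (a sketch; the bus number, device address, and register below are hypothetical, not the student's actual device):

    import smbus

    bus = smbus.SMBus(1)   # I2C bus 1, typical on a Raspberry Pi
    DEVICE_ADDR = 0x48     # hypothetical 7-bit device address from a datasheet
    CONFIG_REG = 0x01      # hypothetical configuration register

    bus.write_byte_data(DEVICE_ADDR, CONFIG_REG, 0x60)   # write a config byte
    value = bus.read_byte_data(DEVICE_ADDR, CONFIG_REG)  # read it back

The hard part is the five pages of datasheet, not the three lines of Python.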

Time wasted, lessons unlearned, garbage code, and failure all increase in proportion to chat-gpt usage. And as more of it appears (and more of it gets into the "learning" feedback loop), it will only get worse.

    The future is bleak.

    • LLMs, dear auto-correct "AI", not NLPs.

    • by Viol8 ( 599362 )

      " So, maybe it works well with rote, but in general they don't work at all when one needs to apply reason."

      Sounds like a large part of the human population frankly. Great at remembering facts, not so great when applying them.

Yes, it does, because at some stage it makes sense to get a cached and simplified picture of the basic facts of the world to get you bootstrapped; it beats reinventing the wheel from scratch. But the sooner you are pushed out of the comfort zone of rote, the better you become at dealing with change and challenges and doing something new and important. And the more work you put into it, the better you get.

        Or at least this is my experience.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      The future is bleak.

Last year, ChatGPT went from infancy to toddler.

      This year, ChatGPT is embarrassing undergrads.

      The future is bleak alright. For paid educators.

Life will always prove we are perpetual students who have two options to learn: the easy way or the hard way. But the days of being financially raped for an overpriced piece of paper that proves how dumb people still are after 4 years are hopefully coming to an end.

      The secret no one says out loud but everyone knows? 90% of jobs could likely be done with little mor

      • by Mr. Dollar Ton ( 5495648 ) on Saturday June 29, 2024 @06:38AM (#64587381)

        Sam Altman, please log in.

      • "The future is bleak alright." For the human race Unable to even think for themselves. Way worse than normal human stupidity, which was bad enough
I think that mostly applies to the rsilvergun types that basically just live for the sake of existing and bleat every time something doesn't go their way. But the word "bleak" doesn't seem right for them; it's just what it has been since time immemorial: just some animals eating and breeding. Kind of like looking at a rock and saying it has a bleak future -- sure, I guess, but so what?

    • Re: (Score:2, Insightful)

      by Rei ( 128717 )

      but in general they don't work at all when one needs to apply reason.

There are literally benchmarks on this topic.

      I just made this one up yesterday - complete with fictional topics, shuffled sentences, and distractors thrown in.

      ---
      1. If something is nahu then it's blurgy.
      2. John is a wajut.
      3. Greebles are nahu.
      4. Emily is a wajut.
      5. All wajut are nahu.
      6. Cindy is not nahu.
      7. Some greebles are wajut.

      Is John blurgy?

      Claude:

      Let's approach this step-by-step:

      We know that if something is nahu, then it's blurgy. (Gi

      • Re: (Score:3, Insightful)

        by gweihir ( 88907 )

        Your "benchmark" is broken. It looks like it is using implications (which are critical for finding rational arguments), but it really only is using equivalence (which LLMs usually can do).

        • by gweihir ( 88907 )

The person who voted this down is broken as well. Well, most people are not rational. For example, only about 20% can be convinced by rational argument. The rest are not really capable of rational thinking.

      • I just made this one up yesterday

and I made a counterexample based on your other example, where the LLM fell completely flat because the inference it needed to make was logically weird.

        LLMs are self-assembled Turing-complete machines.

        https://x.com/ylecun/status/17... [x.com]

        Everything operates in a vast conceptual (latent) space of thousands of dimensions, wherein operations are done on concepts, not words. "King - Man + Woman ~= Queen", etc.

Kinda, but this is as usual a gross oversimplification and the devil is i

        • by Rei ( 128717 )

          and I made a counter example based on your other example where the LLM fell completely flat because the inference it needed to make logically was weird.

          *YOU* fell flat, being outperformed by the LLM, with YOUR mistake that "some" could mean "zero", which it absolutely cannot.

          You beat the LLM in catching one possibility, while failing in one other metric. And I failed to catch the possibility that you caught and the LLM didn't. You ultimately conceded that LLMs *can* do logic.

          https://x.com/ylecun/status/17

          • *YOU* fell flat, being outperformed by the LLM, with YOUR mistake that "some" could mean "zero", which it absolutely cannot.

            I have literally no idea what you're talking about now, but you seem really defensive on behalf of Chat GPT.

            You beat the LLM in catching one possibility, while failing in one other metric. And I failed to catch the possibility that you caught and the LLM didn't. You ultimately conceded that LLMs *can* do logic.

            Again, let me introduce you to the concept of an "internet forum". Anyone ca

            • by Rei ( 128717 )

              Then now I have no idea what *you* are talking about, since you claimed to have been the one discussing this with me the other day with a counterexample. Except that was apparently vux984 [slashdot.org], not you. I see zero replies from you on the topic of the above wajut / nahu / greebles example.

              You are confusing the ML term "concept" with the human reasoning term "concept". They're related and often align quite well, but the mapping is not nearly so clean as you are asserting.

              That humans may lack a term for a concept

Ok, not going to give a big reply because I'm beginning to feel it's fruitless.

I didn't say they can't do logic. You kept saying I did. I didn't, and then you dropped it without even the usual apology of "oh sorry, I mixed you up with a different poster". So I feel you are not discussing in good faith.

We were discussing the other day; you had an example with bowling and probabilities, and I attempted to get it to logically reason about the numbers it generated and it fell completely flat. You didn't r

      • by Junta ( 36770 )

        Let's rewrite your example of the LLM "reasoning":

class nahu:
    blurgy = True        # 1. if something is nahu, it's blurgy

class wajut(nahu):       # 5. all wajut are nahu
    pass

John = wajut()           # 2. John is a wajut

print(John.blurgy)       # True

        Is that "reasoning"? It's a porting of the question from natural language to very trivial programming and has been possible for decades.

        The natural language part is certainly incredible, but the "reasoning" portion of the example hardly qualifies as demonstrating reasoning beyond what programmers could do historically, and trivially.

        • by Viol8 ( 599362 )

The point you're missing is that it HASN'T been specifically programmed to understand these relationships.

          • by Junta ( 36770 )

            The point is it grasps natural language grammar, which is amazing, but the "thought" process, or manipulation in accordance with that grammar is rudimentary.

In this context, it's effectively a basic program written in a human language. The prompt *is* the programming for these specific relationships. "X is a Y" is, for lack of better terminology, parsed the same way as "X = Y", and so forth. It's a set of declarations of variables and values, roughly, and then it can traverse those declarations like an inter

            • by Rei ( 128717 )

In this context, it's effectively a basic program written in a human language. The prompt *is* the programming for these specific relationships. "X is a Y" is, for lack of better terminology, parsed the same way as "X = Y",

              This is not how LLMs work.

              I tried a few different ways

              There are literal benchmarks on the topic; we don't have to take your "I tried a few examples" remark for anything.

              • by Junta ( 36770 )

                > This is not how LLMs work.
Perhaps the better phrase is "practically a basic program". The fact remains that this isn't a deeply considered processing problem; it's utterly trivial compute wrapped up in natural language. It's not just regurgitating learned material rote, but neither is it 'reasoning' about the data; your examples have not demonstrated 'reasoning', and people have replied to you with research papers specifically addressing the phenomenon.

                > There are literal benchmarks on the to

NLPs are random string generators, modded so that the strings they generate resemble other strings, the ones the thing learned on. So maybe they work well with rote, but in general they don't work at all when one needs to apply reason.

The problem is what to do with the ever-increasing number of people who don't understand how "AI" works, wrongly believe they have a true "intelligence" talking to them from the chat prompt, and take whatever comes out of it at face value. I see many, for example, who are writing "code" by asking the chat-gpt random string generators to give them "code samples" for every line they have to write. Often a question that is half a page long is composed just for a one-liner like os.chdir(path).

Just yesterday one of my students came to me with some i2c commands that "didn't work". It was easy to see why: they were simply wrong. We went over the datasheet briefly, the student worked a bit to build the proper commands and, unsurprisingly, all was fine. What was curious, however, was how this person could have come up with the strings they showed me. I mean, it is a 5-page explanation of several simple bytestrings, not rocket science.

Turns out the person didn't read the datasheet at all, but asked a chat service to generate the code to drive the device. The chat complied, providing randomly generated "answers" for what to feed the device. The answers looked somewhat like a Python script using the python smbus interface, except, of course, the actual i2c commands were completely bonkers and the API was used incorrectly in some cases.

Time wasted, lessons unlearned, garbage code, and failure all increase in proportion to chat-gpt usage. And as more of it appears (and more of it gets into the "learning" feedback loop), it will only get worse.

      The future is bleak.

      That's a pretty long way of saying "someone was using a tool incorrectly".

    • by gweihir ( 88907 )

      Yep. When I put my last secure applications exam into ChatGPT, it got 100% on the "can look it up" questions and a whopping 0% (total failure) on the questions that needed a small bit of thinking. It got something like 50% on "can look it up but need to apply it to a very simple scenario".

I think what we may be getting here is students who fly through the first year and then fail hard when AI does not cut it anymore and they have not learned anything in that first year. Not good.

    • ... "learning" feedback loop

Is that code for "decides to cheat"? Because that's what this student did. He didn't find an out-of-date answer, or jam several code snippets together: he asked someone/something else to write the answer. Computing courses usually have an end-of-year exam: how was your student planning to cheat on that?

You learned that he doesn't understand the subject material or the machine he ordered to do the work for him. The student learned that pretending to know the subject requires work.

      ... works well with rote ...

      I guess, feed the LL

      • Is that code for "decides to cheat"

        No, it is a statement of fact. People publish what "AI" "creates" for them, often without bothering to check it. This "work" is scraped by the responsible "AI" companies and used to train the "next generation" of "AI". So the "next generation AI" feeds on the hallucinations of the current generation. You can imagine how that "improves" it.

        , because that's what this student did.

They didn't, and this is not an exam - you would not want to test anyone on the content of a datasheet. Using aids when you decipher one is completely OK; the problem is th

  • by TheNameOfNick ( 7286618 ) on Saturday June 29, 2024 @05:58AM (#64587347)

    Intro-level courses are for you to learn, not to perform. You're not supposed to know but to understand. It's the basics, so you can get by parroting like an LLM, but then, like an LLM, you'll faceplant when you get to the interesting stuff.

    • +5

I was going to post that lower-division courses are about memorize-and-regurgitate, which is what computers are good at. That's the key function of every database: store and retrieve the information provided, with no analysis. But upper-division and postgraduate work requires understanding. The higher you go, the more understanding is required and the worse a computer program will do.

A $5 hand-held calculator scores 100% on all K-12 math. So what? Should we call it an AI?

This needs neither exams nor a ban on AIs.

Instead, it requires us to distinguish between synthesis (something that LLMs are good at) and creative analysis (something that they suck at). The way in which universities ask questions of students needs to change - but this should have been done decades ago - any test that essentially expects a student to regurgitate facts has missed the point.

    Undergraduate degrees are particularly prone to this - because (I believe that) nobody in academia sees the first degree as anything other than a money spinner // entry requirement to a higher degree.

Creativity is, after all, what is sought after. Creative analysis is not the forte of AI. AIs are great at pattern matching. But discovering a novel pattern and being able to encode it into a resonant narrative are not things that AIs are any good at. Yet.

A great example - try getting an AI to write a good novel. It cannot. This isn't to do with the number of words, but with understanding what captivates, what is resonant, and how we are persuaded or moved by a piece of narrative.

Eloquence, elegance, intuition, creativity, novelty - all of these can be measured - but it's much harder to engineer a way to mark and measure such aspects using anything other than human effort and human expertise in the field.
  • by fluffernutter ( 1411889 ) on Saturday June 29, 2024 @06:27AM (#64587371)
ChatGPT has the benefit of everything it has scraped from the internet. How would a student do if he had Google at his disposal during the exam, and had the exam time limit extended in proportion to how much faster the computer is at searching?
    • by m00sh ( 2538182 )

ChatGPT has the benefit of everything it has scraped from the internet. How would a student do if he had Google at his disposal during the exam, and had the exam time limit extended in proportion to how much faster the computer is at searching?

      Why don't we hamstring the student some more then?

      Let's make it so that the student has to do the exam submerged to the chest in a cold pool while holding a 20lb weight in one hand and can only answer with a touch phone on the other hand.

    • by Rei ( 128717 )

      LLMs are not databases.

A stone tablet isn't a database either, but it still holds information. If you put the answers to a test on a stone tablet, the student couldn't have it with them in the test; yet the LLM holds billions of times more information than a stone tablet.
  • Tools (Score:4, Interesting)

    by cascadingstylesheet ( 140919 ) on Saturday June 29, 2024 @06:50AM (#64587389) Journal

    He said the role of a modern university is to prepare the students for their professional careers, and the reality is they are going to use various AI tools after graduation. So, they'd be better off knowing how to do it properly.

    Yep.

    They are tools. Refusing to use a better tool is stupid. (Yes, yes, worshiping a tool is stupid too, but who's doing that?)

    Latest cool thing I did with a tool (ChatGPT) -

    "Given this form {url to the front end of a giant form on the client's old website}, can you generate an import file for {form tool we are going to use} to recreate it on a new website?"

    "Sure! Here is the import json file for recreating this form in {form tool} ..."

    Did I need to tweak a few of the 100+ fields after importing the file? Sure. But did it save me a few hours? Yep.

    A tool. Don't be a Luddite. Learn how to use it.

Sure, so the tool does work you might otherwise delegate to entry-level folks. Remind me where experienced people like yourself come from? Yes, compilers and IDEs and so forth didn't break the system, but sooner or later we're out of seed corn and paying more in taxes for welfare support rather than paying for the inefficiencies of redundant workers.

When CAS (computer algebra systems) came in, they could solve integrals and differential equations in seconds. A one-hour exam created by a professor could be solved in under a minute by a CAS (most of the time was spent typing the questions into it).
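(A sketch of the point with SymPy as the CAS; the integral and ODE are arbitrary examples:)

    import sympy as sp

    x = sp.symbols('x')
    f = sp.Function('f')

    # a classic integration-by-parts exam question, solved instantly
    print(sp.integrate(x * sp.exp(x), x))              # (x - 1)*exp(x)

    # a simple ordinary differential equation, likewise
    print(sp.dsolve(sp.Eq(f(x).diff(x), f(x)), f(x)))  # Eq(f(x), C1*exp(x))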

I pointed out the absurdity to the professor, but he just said they can't use CAS during exams. That was not the point, though. The point was that it was a useless task to learn; there was no practical purpose for 99.99% of people to learn how to do it.

    What was worse w

    • by Baron_Yam ( 643147 ) on Saturday June 29, 2024 @08:26AM (#64587489)

I think you missed the point. You may have a tool that does the work faster and even more reliably, but you weren't supposed to be learning how to do the work - you were supposed to be learning how it works and why, to give you a deeper understanding of the uncountable other things to which those concepts apply.

      I was a finite math geek back in the day. Now I would struggle to calculate all but the simplest probabilities, but I still understand the concepts behind the calculations and when they apply. That was the real lesson.

      • by m00sh ( 2538182 )

Actually, it was just to pass the class, which was in turn to get that CS or engineering degree. It was to gate-keep the degree, make it hard to get, and keep up the standard of the university.

The last thing you want is someone who couldn't learn integration by parts getting a CS or engineering degree and then sullying the name of the university.

Doing these operations by hand gives you insight into, and an actual understanding of, what they are. You might not use it, but understanding what matrix operations actually are can give you insight into how they're useful.
The same goes for calculus. Understanding what the first and second derivatives and the integral actually are enhances your ability to interpret graphs, and helps you ask better questions about functions.

Now, getting my degree I guess I had a similar experience to yours: mindless droning on exercises, where I ha

        • This is a specific flaw in the US style of university system. This is not to say it's better or worse overall than the British system, merely that they have different flaws.

In the US system you get the advantage of figuring out what you want to do once you get there, after broader exposure to other topics, plus the ability to essentially bail from a course which doesn't suit you. On the minus side, you often have to do irrelevant things to study the course you want.

          In the British system, you don't get the irrelevancies. Ma

Totally agree. Interestingly enough, I was a math geek too; I have a degree in it (one of the most otherwise useless degrees to have, save the day-to-day applicability of statistics in business). I was out for a walk just yesterday, trying to remember how to do long division like I learned in grade 3. I remembered, but it took a while. How many of us educated folk here could work out 37850 / 43 on paper if needed?
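(For the record: 43 into 378 goes 8 times (344), remainder 34; bring down the 5 to make 345, 43 goes in 8 times (344), remainder 1; bring down the 0 to make 10, 43 goes in 0 times. So 37850 / 43 = 880 remainder 10, or about 880.23.)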
Being completely honest, I never learned it. My teachers simply passed me through high school and my first (admittedly for-profit) degree without bothering to ensure I knew it.

It took an optional after-class study group headed by my second degree's (state-run) algebra instructor, and three different attempts at explaining that I could understand the "X" in the numerator just fine, before anyone took the time to teach me.

          FYI: No, that wasn't a remedial course. When the instructor finally had the light bulb g
    • Hmm.

Writing exam questions is hard. Mindlessly cranking an error-prone handle isn't useful.

With that said, mindlessly plugging runes into a computer algebra system to get more runes proves what, exactly? You need to actually understand calculus in order to do physics and engineering.

I don't mind if engineers use a CAS, but I wouldn't trust one who couldn't manage without one.

The 3x3 matrix exam question always struck me as insufferably lazy, and I even got into a debate with the exam board at undergrad over th

      • by m00sh ( 2538182 )

        You're assuming that if someone can't invert a 3x3 matrix, then they don't know how matrices work at all.

Or, if someone can't find the square of a number to the 10th decimal place, then they don't know how numbers work.

You can easily know how integrals work without remembering how integration by parts works or the various integration tricks. Or just do it numerically. I don't see when an engineer would even need to symbolically integrate, much less learn dozens of tricks for a small subset of problems.
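(The numerical route is one call with SciPy's quad; the integrand here is just an example:)

    import numpy as np
    from scipy.integrate import quad

    # numerically integrate x*e^x over [0, 1] instead of doing it symbolically
    value, abs_err = quad(lambda x: x * np.exp(x), 0.0, 1.0)
    print(value)   # ~1.0 (the exact answer, by integration by parts, is 1)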

        Yes,

Yes, if you can't invert a 3x3 matrix by hand, then no, you don't know much linear algebra.

It's not a very good test, because it's tedious, error-prone, and it's possible to basically memorize an answer with no understanding. But if you can't do it by hand (modulo errors in cranking the handle), you are missing some knowledge. It's elementary row ops (a variety of methods), or Cramer's rule. If you are feeling especially sparky or masochistic, you could have a crack at it via the SVD. Though you might come unst
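(Checking a hand-computed inverse mechanically is a one-liner; a sketch with NumPy and an arbitrary invertible matrix:)

    import numpy as np

    A = np.array([[2., 1., 0.],
                  [1., 3., 1.],
                  [0., 1., 2.]])   # arbitrary invertible 3x3 (det = 8)

    A_inv = np.linalg.inv(A)
    print(np.allclose(A @ A_inv, np.eye(3)))   # True: A times its inverse is I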

A university is not a trade school; it's not there to teach you to use a tool, like CAS. It's there to teach you fundamental principles that won't change over time. Learning to use a tool is an exercise left to the student.

As a gateway class, the ability to learn these fundamental principles may be a good proxy for whether one is able to learn upcoming fundamental principles in a different topic. Allowing the CAS tool fails to screen out those lacking a necessary level of aptitude.

      As a holder of BS and
  • by dv82 ( 1609975 ) on Saturday June 29, 2024 @09:02AM (#64587513)
    "He said the role of a modern university is to prepare the students for their professional careers..." How sad. A university professor believes the university is a trade school, not a place to learn critical thinking and become educated enough to understand something of the ways of the world and one's place in it. So go ahead and simply teach the sheep AI so they can earn money and become good consumers.
    • "He said the role of a modern university is to prepare the students for their professional careers..." How sad. A university professor believes the university is a trade school, not a place to learn critical thinking and become educated enough to understand something of the ways of the world and one's place in it. So go ahead and simply teach the sheep AI so they can earn money and become good consumers.

      I would argue a good professor includes:

      learn[ing] critical thinking and becom[ing] educated enough to understand something of the ways of the world and one's place in it

      as part of:

      the role of a modern university [in] ... prepar[ing] the students for their professional careers...

    • by gweihir ( 88907 )

      Yes, that was my first thought as well. How the mighty have fallen and they do not even notice.

      • Damned if you do damned if you don't. If he didn't say that, he'd have been excoriated for being out of touch.

And just because he's stating reality as it is right now, and isn't whining, doesn't actually imply he thinks it's a good state of affairs.

Some 4-year schools are oriented towards preparation for grad school. Other 4-year schools are oriented towards preparation for industry. The latter does not reflect a trade-school mentality. Both still require a robust set of general education classes outside of the major. Both are fulfilling the traditional role of the university by trying to produce a well-rounded individual. Preparation for the needs of industry is not incompatible with a well-rounded individual. Nor does it prevent instruction on the f
  • by west ( 39918 ) on Saturday June 29, 2024 @09:26AM (#64587561)

Most of what LLMs generate feels like an over-achieving grade-12 student writing an essay on a topic they know little about.

    The sentences are cogent and it feels like it makes sense, but if you read closely, there's no there there. It's mostly hand-wavy sorta plausible nothings.

    Hmm... I just noticed that that applies to marketing literature and political speeches.

    I guess I better look forward to our new AI overlords.

  • by Registered Coward v2 ( 447531 ) on Saturday June 29, 2024 @09:32AM (#64587573)
Back when you wrote answers in a little blue book, the TA had to grade a few hundred tests in an intro course, so you knew he or she wasn't reading it all that carefully. I would write a lot of stuff that was sort of relevant, capitalize key words and phrases, and be sure to fill up at least two books. What I wrote was not nonsense, just a lot of filler and enough real info that it seemed to make sense. Usually a few of those key words and phrases would be circled with a positive comment. Got an A in those classes. Never was sure if it was simply because the TA only read the capitalized stuff or simply figured if I wrote that much, what the hell, mark it an A. Of course, when I got into the real coursework for my major, that didn't fly.

    Sort of like ChatGPT today.

  • ...in Intro-Level Courses, Falls Short Later."

    Sounds a lot like my own college experience.

In that our education only grades good regurgitation; LLMs are not capable of reasoning.
ChatGPT actually READ the material.

Eh, education used to be little more than a hobby for rich people and a way to distinguish rich from poor in hiring and social mobility. For a short time it became a path for social mobility, back when the GI Bill underwrote the system and boomers saw the value in their own education. Now? I guess if you can pay a machine to do your work, it just becomes a matter of who can afford the best service and who cannot.
