
ChatGPT Passes MBA Exam Given By a Wharton Professor (nbcnews.com) 155

An anonymous reader quotes a report from NBC News: New research (PDF) conducted by a professor at the University of Pennsylvania's Wharton School found that the artificial intelligence-driven chatbot GPT-3 was able to pass the final exam for the school's Master of Business Administration (MBA) program. Professor Christian Terwiesch, who authored the research paper "Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course," said that the bot scored between a B- and B on the exam.

The bot's score, Terwiesch wrote, shows its "remarkable ability to automate some of the skills of highly compensated knowledge workers in general and specifically the knowledge workers in the jobs held by MBA graduates including analysts, managers, and consultants." The bot did an "amazing job at basic operations management and process analysis questions including those that are based on case studies," Terwiesch wrote in the paper, which was published on Jan. 17. He also said the bot's explanations were "excellent." The bot is also "remarkably good at modifying its answers in response to human hints," he concluded.

While Chat GPT3's results were impressive, Terwiesch noted that Chat GPT3 "at times makes surprising mistakes in relatively simple calculations at the level of 6th grade Math." The present version of Chat GPT is "not capable of handling more advanced process analysis questions, even when they are based on fairly standard templates," Terwiesch added. "This includes process flows with multiple products and problems with stochastic effects such as demand variability." Still, Terwiesch said ChatGPT3's performance on the test has "important implications for business school education, including the need for exam policies, curriculum design focusing on collaboration between human and AI, opportunities to simulate real world decision making processes, the need to teach creative problem solving, improved teaching productivity, and more."
The latest findings come as educators become increasingly concerned that AI chatbots like ChatGPT could inspire cheating. Earlier this month, New York City's education department banned access to ChatGPT. While the education department cited "safety and accuracy" as reasons for the decision, the Washington Post notes how some teachers are "in a near-panic" about the technology enabling students to cheat on assignments.

Yesterday, for example, The Stanford Daily reported that a large number of Stanford students have already used ChatGPT on their final exams. It's prompting anti-plagiarism software Turnitin to build a tool to detect text generated by AI.
This discussion has been archived. No new comments can be posted.

  • by bussdriver ( 620565 ) on Tuesday January 24, 2023 @10:43PM (#63238142)

    It's just an MBA... ever spend some time talking with those graduates?

    Didn't Trump pass one of those tests?

    • by ShanghaiBill ( 739463 ) on Wednesday January 25, 2023 @12:23AM (#63238290)

      Didn't Trump pass one of those tests?

      Trump is an impulsive narcissist with a poor attention span and no filter between his brain and his mouth.

      But he is not stupid.

      • by 93 Escort Wagon ( 326346 ) on Wednesday January 25, 2023 @01:18AM (#63238356)

        But he is not stupid.

        [Citation needed]

        • Re: (Score:2, Insightful)

          [Citation needed]

          He's a billionaire, and you're not. He may have made his billions by being an unethical slimeball, but that doesn't make him stupid.

          He was a successful TV star, and you weren't.

          He defeated an entire field of supposedly intelligent people to become President of the United States. He was ingenious at demagoguing the electorate's prejudices, phobias, and insecurities. No other candidate had a clue how to counter that.

          • by Potor ( 658520 ) <farker1 AT gmail DOT com> on Wednesday January 25, 2023 @02:41AM (#63238452) Journal
            You can be stupid and successful, as long as your stupidity is consonant with the viewers, or the electorate, or whatever is your target audience.
            • You can be stupid and successful, as long as your stupidity is consonant with the viewers, or the electorate, or whatever is your target audience.

              It also helps to have money so that people tend to say "yes" to you.

              (or at least the appearance of it, the "tax returns" never actually appeared...)

          • by orzetto ( 545509 ) on Wednesday January 25, 2023 @02:53AM (#63238470)

            He's a billionaire, and you're not.

            He inherited his wealth; I could be a billionaire too if I didn't have to work for it. That, and he underperformed index funds (though figures are disputed, since it's difficult to quantify his actual wealth).

            Your argument is a bit like "Charles III is a king and you're not". Well duh.

            • by King_TJ ( 85913 )

              Only on Slashdot could a ChatGPT discussion evolve into a Trump debate!

              But no, I don't think the "Trump was handed everything" argument holds as much water as many people would like it to. Take a look at the success rates of lottery winners in the USA, for example. It's incredible how many of them lose everything in a really short period of time and wind up practically homeless.

              The mere fact Trump invested in real estate shows he was making smarter financial decisions than many people who are just given

          • Re: (Score:3, Insightful)

            by sir1963nz ( 4560389 )
            Trump airlines
            Trump steaks
            Trump Casino
            Trump Beverages
            Trump The game
            Trump University
            The list goes on...
            Trump makes George Santos look like an honest man
            To Help Get Him Into Wharton, Trump Allegedly Paid Someone To Take The SAT

            As for being a TV star... only for those with incredibly low standards; he was nasty back then too
            https://slate.com/news-and-pol... [slate.com]

            If Trump had not inherited his wealth he would be running scams on tourists somewhere, and be in and out of prison.
          • Re: (Score:2, Insightful)

            by quonset ( 4839537 )

            He's a billionaire, and you're not.

            False. We have no evidence he is a billionaire as he has refused to provide any such evidence. He keeps saying he is, but he lies about a lot of things.

          • by gtall ( 79522 ) on Wednesday January 25, 2023 @06:39AM (#63238788)

            He says he's a billionaire. His relationship with the truth is in negative territory.

          • by tlhIngan ( 30335 )

            He's a billionaire, and you're not.

            So is Elon Musk. Who seems to care more about tweeting than actually running a company competently. I think he's less brains and more "money mule" given recent antics.

          • by nagora ( 177841 )

            [Citation needed]

            He's a billionaire, and you're not.

            So his dad was smart. And, in fact, Trump is probably insolvent rather than being a billionaire.

            He was a successful TV star, and you weren't.

            Interesting view of TV stars.

            He defeated an entire field of supposedly intelligent people to become President of the United States. He was ingenious at demagoguing the electorate's prejudices, phobias, and insecurities. No other candidate had a clue how to counter that.

            Well, he lost the vote so they seem to have had some clue. The garbage electoral college system in the US handed him the office, but that was dumb luck; he didn't design the system.

            The thing about having 350m+ people in your country is that the one that happens to be luckier than everyone else will be very bloody lucky indeed. If Trump had been born to different parents he would have

          • A reality TV star needs brains?? The production makes the show; "the talent" is the spectacle in front of the camera that they help create.

            Investigators estimate Trump inherited and swindled from his father and uncle approximately a billion. He was a massive failure the whole time; it's hard to screw up that much inheritance into the poor house. One could buy slow-growth bonds and significantly outperform Trump. He has probably only actually grown since the kids started taking over and they got all those Russi

          • You don't need to not be stupid to be a billionaire either. The two are unrelated. Also, it's likely he's not a billionaire; he doesn't have much proof beyond his own assertions. He's been bankrupt many times, each time a way to shed the debt he ran up onto others. That's smart, but that's also likely advice from others.

            There are stupid people who are TV stars. Big deal. He had a lame TV show. He was only there because he spent decades building up his public image by appearing all the time in media,

      • Trump is an impulsive narcissist with a poor attention span and no filter between his brain and his mouth.

        But he is not stupid.

        Yes he is.

        He also only did undergrad at Wharton.

      • by quenda ( 644621 )

        But he is not stupid.

        No, he is clearly above average, but that is a low bar for a president. Most past presidents and contenders have been exceptionally smart people.
        Obama, Romney, Bush the first, Clinton x2, Nixon, Kerry, McCain - lots of really smart people.
        GWB less so, but he was a legacy hire. And a decent guy who knew his limits. Ronald Reagan was probably in decline; he seemed smarter in his younger days. Unlike Bush or Trump, he was an awesome public speaker.
        Actually, if you watch some old interviews of Trump from the 8

    • It's just an MBA... ever spend some time talking with those graduates?

      Didn't Trump pass one of those tests?

      We don't know. The con artist has threatened to sue anyone who releases any of his supposed college information.

      Because he was such a great student and all. The best. Top of his class. No one does class better than him.

    • Well, what's telling to me is that Terwiesch noted Chat GPT3 "at times makes surprising mistakes in relatively simple calculations at the level of 6th grade Math." And yet the bot got a B. It shows there is a problem with the test if you can get a B at a master's level in an area where math is important and Jethro Bodine can do better.
    • How hard is it to answer "screw over the workers and take more money for the management" to every question?
    • Trump also went to Wharton. He seemed to have others do much of the work for him. Possibly ChatGPT was also allowed to cheat in many ways, and the whole test was likely set up as a showpiece rather than being a real test in a real examination room with real restrictions on what source material you could use. Was ChatGPT marked down for plagiarism at all here?

  • by Virtucon ( 127420 ) on Tuesday January 24, 2023 @10:52PM (#63238158)

    That MBA degrees are the most useless shit on the planet.

  • by Tony Isaac ( 1301187 ) on Tuesday January 24, 2023 @11:07PM (#63238170) Homepage

    They say the robots are coming for our jobs, and even for our programming jobs. The MBA jobs will undoubtedly go first. Once that happens, then I might start to become concerned about software development!

    • by Moridineas ( 213502 ) on Wednesday January 25, 2023 @01:39AM (#63238374) Journal

      They say the robots are coming for our jobs, and even for our programming jobs. The MBA jobs will undoubtedly go first. Once that happens, then I might start to become concerned about software development!

      You've got it wrong (unfortunately). The professional-managerial class will protect its turf the hardest. Automation of factories, grocery store checkouts, gas stations, farms, etc? Fuck those jobs. And yet, as an example, lawyers and the ABA screech at the hint of anything that might give power or agency to non-lawyers in the legal system. Professional licensure is another big stumbling block across many fields. MBAs will undoubtedly find a reason, and laws to back it up, as to why they need to have a human present.

      Hopefully the herd at least gets thinned a good bit.

      • You are absolutely right, they'll fight tooth and nail. They won't be automated in companies where they are already entrenched. Instead, it will be new upstart businesses that want to focus on just making a great product or delivering a great service that will have no use for the kind of business managers who can be automated. Those new companies will save so much money through automation that they'll begin to displace the entrenched bureaucracy of the big corporations. And so the business cycle goes.

    • by sir1963nz ( 4560389 ) on Wednesday January 25, 2023 @03:49AM (#63238552)
      It's actually come for the lawyers.
      AI can read, analyse and report on huge amounts of data quickly.
      The old "bury them with paperwork" trick to hide what was important is rapidly going away; the AI works 24/7.
      Because of this, junior lawyers who used to do all that donkey work are no longer needed.
      • I do so hope you are correct! Yes, automation could replace a lot of what lawyers do. But lawyers know how to make themselves important legally. These days you can already buy software to help you with routine legal filings, such as last wills. But lawyers in many cases have found ways to make themselves a required step in the process, even if you did all the work yourself already (with the help of software).

    • by sinij ( 911942 )

      They say the robots are coming for our jobs, and even for our programming jobs. The MBA jobs will undoubtedly go first. Once that happens, then I might start to become concerned about software development!

      Why? You think ChatGPT could possibly do worse than MBAs at managing software development?

  • ...fire one million workers instead of five hundred thousand like Jean-Baptiste Emanuel Zorg?

    • by tragedy ( 27079 )

      It will know how to automatically lay off everyone at Brawndo (The THIRST MUTILATOR!) when the stock price dips.

      This post brought to you by Brawndo (TM) "It's what plants crave!"

  • BS generator (Score:5, Insightful)

    by khchung ( 462899 ) on Tuesday January 24, 2023 @11:26PM (#63238194) Journal

    So ChatGPT is good enough to generate vision statements and other BS coming out of the exec suite, often after a whole week of offsite brainstorming in a faraway ski resort?

    Nice, the next job class to be automated away could be the upper management.

    • I for one welcome our new AI CEOs.

      At least they'll be using data.

    • I am more worried about the jobs of people working at ski resorts now
    • Re:BS generator (Score:4, Interesting)

      by MMC Monster ( 602931 ) on Wednesday January 25, 2023 @05:12AM (#63238686)

      Certainly creatively focused jobs can be downsized a bit.

      Need a slogan for a new food or car? Ask ChatGPT for 10 slogans and then brainstorm with those options as a starting point.

      We'll still need creative people. But fewer of them, and they can use their time more effectively.

    • by Tom ( 822 )

      Upper, but not top-level management.

      Yes, but we've all been aware for decades that most management could be replaced by fairly simple scripts.

      What ChatGPT can't do is provide the "human interface" that people need. We're social animals. Much of management is simply there to provide a human face to the company processes.

  • by Rosco P. Coltrane ( 209368 ) on Tuesday January 24, 2023 @11:29PM (#63238196)

    I tried ChatGPT with real, actual, precise technical questions: the thing was confidently giving me very verbose, very well argued, very well presented, totally incorrect answers. When I corrected it, it apologized, acknowledged the correct bit of information I had just fed it and proceeded to give me more wrong answers, making shit up as it went.

    But if you didn't know the answer was wrong, the thing was really quite convincing.

    ChatGPT is a cross between a webcrawler and BonziBuddy. It's not intelligent: it feeds you any old incorrect data it finds on the internet that matches what you ask it. It doesn't understand what it's saying. But what it's very good at is packaging its response into something convincing. I wouldn't be surprised if ChatGPT won an election some day...

    I'm not surprised at all it was able to bullshit an MBA exam jury. But it would never pass muster with a proper engineering exam jury asking basic engineering questions.

    • by Moridineas ( 213502 ) on Wednesday January 25, 2023 @01:33AM (#63238372) Journal

      Your comment is so vague and nonspecific that it's, well, meaningless. I have been utterly fascinated watching people attack chatGPT, including people who should know better. I'm not going to say anything like it's approaching general purpose intelligence, but it's shockingly beyond anything we've seen before. It's like when Google's Go AI came out of absolutely nowhere and crushed the best human players. Exponential change is hard to deal with, and we're in the middle of some exponential change right here.

      Here's something I just tried. There was additional explanatory text I cut. Tell me with a straight face that chatGPT is no better than a "cross between a webcrawler and BonziBuddy"

      If you think math and engineering (and physics, etc.) aren't on the improvement plan, and attainable, you're completely deluding yourself.

      Write a C program that picks a random number between 0-100, then allows the user 3 guesses to find the number. If the user gets it right, the program prints out "Yay!". If the user does not find the number it says "ERROR"
      (Note: you could also have said ASM, Pascal, Ada, PL/I, Cobol, etc.)

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      int main() {
          srand(time(0));
          int secret = rand() % 101;
          int guess;
          int guesses = 0;

          while (guesses < 3) {
              printf("Guess the number between 0 and 100: ");
              scanf("%d", &guess);

              if (guess == secret) {
                  printf("Yay!\n");
                  return 0;
              }
              else {
                  printf("Nope, try again\n");
              }
              guesses++;
          }
          printf("ERROR\n");
          return 0;
      }

      Edit the program so that after each guess, the program tells the user if their guess was too high or too low

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      int main() {
          srand(time(0));
          int secret = rand() % 101;
          int guess;
          int guesses = 0;

          while (guesses < 3) {
              printf("Guess the number between 0 and 100: ");
              scanf("%d", &guess);

              if (guess == secret) {
                  printf("Yay!\n");
                  return 0;
              }
              else if (guess < secret) {
                  printf("Too low, try again\n");
              }
              else {
                  printf("Too high, try again\n");
              }
              guesses++;
          }
          printf("ERROR\n");
          return 0;
      }
      • by vyvepe ( 809573 )
        This is a very common and simple problem. It is not hard to expect ChatGPT to "memorize" it from the internet.
        Try something more complicated. Something which requires ChatGPT to construct an equation based on verbal description and solve it to return the right answer.
        • by Tom ( 822 )

          It would do that easily, provided that fragments of solutions are somewhere in its model.

          It can find common programming errors in code you provide it as well, which is an interesting use case.

          But we'll find its weak spots by experimenting, the way we were first astonished by Stable Diffusion, then figured out it doesn't know how many fingers people have and other silly stuff.

          What worries me is that lots of the "knowledge" it has ingested is likely to be outdated or outright wrong. We already have trouble en

          • Interesting use case. I edited the "guess" program to change the comparison from "guess == secret" to "guess = secret" and asked if there was a problem with the program. It correctly found the bug. I came up with another contrived off-by-one error in array indexing, and it located that bug too.

            Cool stuff.
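
            For illustration, the edits described would have looked roughly like this (a reconstruction of the kinds of bugs described, not the exact code that was tested):

            #include <stdio.h>

            int main(void) {
                int secret = 42;
                int guess = 0;
                scanf("%d", &guess);

                /* Bug 1: assignment instead of comparison. This assigns secret to
                   guess, and the (nonzero) result makes the branch always taken,
                   so it always prints "Yay!". */
                if (guess = secret) {
                    printf("Yay!\n");
                }

                /* Bug 2: contrived off-by-one in array indexing. i <= 3 reads one
                   element past the end of the 3-element array. */
                int scores[3] = {1, 2, 3};
                int sum = 0;
                for (int i = 0; i <= 3; i++) {
                    sum += scores[i];
                }
                printf("%d\n", sum);
                return 0;
            }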

        • by Moridineas ( 213502 ) on Wednesday January 25, 2023 @08:50AM (#63239020) Journal

          This is a very common and simple problem.

          That certainly depends on the definitions of "common" and "simple!"

          Maybe what we're really finding out is that a very large percentage of the tasks that most humans undertake on a daily basis are "common" and "simple," even when they seem more complicated.

          • by vyvepe ( 809573 )

            Most tasks we do are common and simple. If somebody sees them as complicated, that is a wrong assessment on that person's part.

            It will be great when AIs do all the common and simple things for us. That will leave only the interesting tasks to be done by humans.

        • by sinij ( 911942 )

          This is a very common and simple problem.

          What percentage of coding problems are not simple problems? I think most coding tasks are routine. Plus, ChatGPT could likely be trained not to allow overflows and such, while humanity on the whole cannot be.
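
          As a concrete illustration of the "overflows and such" being referred to, here is a minimal C sketch (nothing to do with ChatGPT's internals, just the class of routine mistake):

          #include <stdio.h>
          #include <limits.h>

          int main(void) {
              int a = INT_MAX;
              int b = 1;

              /* int bad = a + b;  <- signed overflow here is undefined behaviour,
                 and it is exactly the kind of slip humans keep writing. */

              /* Widening before the addition avoids the overflow. */
              long long safe = (long long)a + b;
              printf("widened sum: %lld\n", safe);
              return 0;
          }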

      • by laxguy ( 1179231 )

        I have been utterly fascinated watching people defend chatGPT, including people who should know better.

        fixed.

      • Here's something I just tried. There was additional explanatory text I cut.

        ...(and is actually critical to the whole process, as it is necessary to load the current context.)

        Tell me with a straight face that chatGPT is no better than a "cross between a webcrawler and BonziBuddy"

        {...}

        Write a C program that picks a random number between 0-100, then allows the user 3 guesses to find the number. If the user gets it right, the program prints out "Yay!". If the user does not find the number it says "ERROR"
        (Note: you could also have said ASM, Pascal, Ada, PL/I, Cobol, etc.)

        Now just realize that the language model of chatGPT and the context in the dialog don't necessarily need to be in English.

        It's a realistic output, according to the language model, that fits the cues present in the dialog context.
        That's it.

        We can be amazed at the emergent properties of the system (there's no specialised english-to-C transpiler inside chatGPT, that's just a natural output of the model trained on

    • Actually, we had a conversation this afternoon where a staff member told us ChatGPT passed a 2nd-year university statistics exam when they tried it, and based on actual students' marks it would have received an A.
      The exam was online due to COVID so it was likely dumbed down, but there are discussions being held as to how to mitigate this.
      At this stage they are working on the basis that the spelling and grammar are well above the usual student level, so it's relatively easy to spot.
    • Agree 1000%. The danger of ChatGPT is that the answers it gives sound right and (to some) have the appearance of authority because they were generated by a computer.

    • I wouldn't be surprised if ChatGPT won an election some day...

      Just tried using ChatGPT now and their "we're too busy now" page reads:

      Write an inspirational speech about the status of ChatGPT.

      My fellow citizens, the time has come. The future is here, and it is called ChatGPT. This revolutionary AI chatbot website is here to change the world, and we must embrace it with all our strength and determination.

      I know that many of you are eager to try ChatGPT for yourselves. And I assure you, we are doing everything in our power to make that happen. Our team is working tirelessly to accommodate the incredible demand for this groundbreaking technology.

      But I want to remind you that greatness is never achieved without sacrifice. And the road to the future of AI will not be without its challenges. So please, be patient. Trust that the right time for you to try ChatGPT will come.

      And when it does, let us seize this opportunity with all our might. Let us embrace the power of AI and use it to build a brighter, more prosperous future for ourselves and for generations to come. Thank you.

      Sure sounds presidential enough! Why wouldn't a chatbot be able to write inspiring speeches, using some A/B testing to find out what resonates best with the public?

    • You're stretching a young tool until it breaks. Good for you. Not unexpected. I can get it to tie itself in knots too. Just understand you're not proving anything, really. Just that it's not perfect. And it never will be. But it'll get better and better.

  • nerfed (Score:4, Interesting)

    by bob_jenkins ( 144606 ) on Tuesday January 24, 2023 @11:34PM (#63238200) Homepage Journal

    ChatGPT is nerfed by being unable to do math. It can clearly recognize math, and usually knows what formulas to apply, but it almost never does the calculations right. If it called out to WolframAlpha when it recognized it was doing math, it'd get it right closer to 95% of the time. It can't even multiply correctly. A computer is clearly nerfed if it can recognize the need to multiply but can't multiply correctly.

    What's more, if it could do math correctly, it could up the importance of doing math-based sanity checks, because doing math correctly would make those sanity checks useful.
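
    The routing idea can be sketched as a toy in C (this is not how ChatGPT or WolframAlpha actually plug together; it only illustrates "if the prompt parses as arithmetic, hand it to an exact calculator instead of the text generator"):

    #include <stdio.h>

    int main(void) {
        const char *prompt = "What is 1234 * 5678?";
        long a, b;
        char op;

        /* Crude recognizer: does the prompt contain "<number> <op> <number>"? */
        if (sscanf(prompt, "What is %ld %c %ld", &a, &op, &b) == 3) {
            long result;
            switch (op) {
                case '+': result = a + b; break;
                case '-': result = a - b; break;
                case '*': result = a * b; break;
                default:  printf("unsupported operator\n"); return 1;
            }
            /* Exact answer from the calculator path, not a guessed string of digits. */
            printf("%ld\n", result);
        } else {
            printf("no arithmetic recognized, fall back to the language model\n");
        }
        return 0;
    }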

  • While Chat GPT3's results were impressive, Terwiesch noted that Chat GPT3 "at times makes surprising mistakes in relatively simple calculations at the level of 6th grade Math."

    In other words, ChatGPT was an even more convincing substitute for a typical MBA than Terwiesch expected.

  • by Xylantiel ( 177496 ) on Wednesday January 25, 2023 @12:00AM (#63238244)
    It is really hard to know just how "good" ChatGPT is while its training set is unclear. If there are a few actual answers to MBA questions in its training set, all it is doing is rewording them and spitting them out. That's pretty much expected to be easy for a bot at this point. People seem to be comparing its output to what a person would produce when taking an exam, but I think a better analogy is what a person would produce when taking an exam while having access to the internet, where they can search for an answer posted online and then take the time to reword it. Obviously that's cheating, not because anyone is using a bot, but because they are just recycling someone else's answer to the prompt. This is compounded by the fact that many "standard" exams have perfectly good answers posted online somewhere if you look for them. This is not some miraculous feature of ChatGPT; it is just a fact of the modern internet.
    • Re: (Score:3, Informative)

      by Bumbul ( 7920730 )

      a better analogy is what a person would produce when taking an exam while having access to the internet where they can search for an answer posted online and then take the time to reword it.

      Did you read the paper? Those were not essay-type questions, but more like optimization problems: "identify the bottleneck", etc... Even though it failed to do some of the math in the more complex problems (the math was not complex, but the question setup was), it was able to "solve the riddle". I repeat, it is not just copying answers from the internet. This is best proven by giving it coding tasks. There is some "understanding" of the problem logic, and it is able to generate code from scratch for new problems.

      • by gweihir ( 88907 )

        I repeat, it is not just copying answers from the internet.

        You really cannot be sure of that. There is a lot of information on the Internet. Well, maybe it has some Wolfram-Alpha-like capabilities as well (or actually licensed that engine, as one can), but that does not mean it has any clue what it is doing.

    • by gweihir ( 88907 )

      These MBA questions will be in there. This stunt here was probably planned a long time ago. A lot of the marketing for ChatGPT is carefully orchestrated misdirection.

    • by Tom ( 822 )

      ChatGPT has the interesting feature of essentially repeating your prompt back to you, then elaborating. That is exactly how exam questions are often phrased. When the question is "Explain how XY does AB." then you can't go wrong by doing what ChatGPT does: "XY does AB by ..." and then add whatever your memory digs up about XY and AB. Putting a bunch of connected facts through a natural language generator to put them into sentences isn't that hard.

      Drawing (correct) conclusions from your known facts is what i

  • by Skapare ( 16644 ) on Wednesday January 25, 2023 @12:02AM (#63238256) Homepage

    there would be less worry about cheating if the new technology we will be using in the very near future and need to understand well how to use is made to be part of the coursework. my math teacher back when calculators first came out (mine was an HP-35) included them in his teaching under the idea that we would be using them and needed to learn how to work with this subject with calculators. he even arranged for loaners for those who could not afford one. he encouraged everyone to bring one to each class. he "required" one on each exam with questions specifically about using a calculator to solve the problems.

    we will be using AI from now on (until it replaces us). we need to, for now, learn how to incorporate and integrate AI into the near future work we do. the course needs to include this. the exams need to include this. use of AI needs to be mandatory in all future subjects.

  • Unfortunately he just got sacked by G..gle so they must have figured that out as well.

  • This is like taking an open book exam.

    I'm sure I could pass any exam you give me, as long as you let me Google whatever I want.

    ChatGPT is trained on the world's knowledge. It literally has access to any question it wants.

  • hmm... (Score:5, Interesting)

    by Megahurts ( 215296 ) on Wednesday January 25, 2023 @12:46AM (#63238326)

    I can't tell if this is a story about the advancement of AI technology or the steady decline of MBAs.

  • All it proves is that to pass a test what you have to do is parrot back what you have "learned". Understanding or the ability to create new information or products is something else altogether. This matches my experience with MBAs. They parrot the latest buzzwords and have no idea how to run a business. It always made me want to both laugh and cry to see the leaders of a billion-dollar company listening to these academics, who have never run even a hot dog stand, tell them how to run their company.
  • Once something like ChatGPT can work interactively with historic and/or real-time data analytics, a significant portion of middle managers will become completely useless.
    • by gweihir ( 88907 )

      As that is probably not going to happen anytime soon, the middle managers will sadly stay. Remember, this thing was trained on a set of carefully curated data and I do not even want to know how often they had to reset it to an earlier stage. Even so, it is often really badly off. Training it on uncurated data will likely blow it up within a very short time.

      While ChatGPT is an impressive stunt, it really is nothing more than a very early, very experimental prototype. It is human to go the animistic pat

  • Quote: Terwiesch noted that Chat GPT3 "at times makes surprising mistakes in relatively simple calculations at the level of 6th grade Math."

    So it's superior to most MBAs then?

  • Everyone keeps squawking about how ChatGPT will, or will not, replace X job because #reason.

    Either way, this thing needs to keep learning to remain relevant. So where are the docs on how this thing is trained, and how it acquires and retains relevant knowledge to perform within a particular role or skill? I keep seeing disclaimers that it's trained on 2020 information, etc.

    This toy is trained on past data, so it might be able to theorize about potential futures based on that past data, but it needs to be able to

    • by gweihir ( 88907 )

      Indeed. This thing can serve as an insight-less and not-so-rarely badly wrong oracle, but it cannot do daily business. It will also get outdated rapidly.

  • by Opportunist ( 166417 ) on Wednesday January 25, 2023 @02:44AM (#63238458)

    You can go to your manager and say "STFU or I'll replace you with a small script".

  • by gweihir ( 88907 ) on Wednesday January 25, 2023 @04:31AM (#63238622)

    Because this is pretty much what that means. ChatGPT can only pass an exam if that exam requires no insight or understanding.

    Not that I am surprised an MBA program has bad exams...

    • I suspect it would only take a Perl script to pass some of my engineering and mechanics exams. Especially the early ones like statics; I remember the final was just copying values from the diagram into a matrix for the simultaneous solver on my calculator.
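
      What that "copy the diagram into a matrix" step boils down to is roughly the following (a minimal sketch with made-up coefficients: two equilibrium equations in two unknown forces, solved by Cramer's rule):

      #include <stdio.h>

      int main(void) {
          /* a11*F1 + a12*F2 = b1  (sum of horizontal forces = 0)
             a21*F1 + a22*F2 = b2  (sum of vertical forces = applied load) */
          double a11 = 0.6, a12 = -0.8, b1 = 0.0;
          double a21 = 0.8, a22 =  0.6, b2 = 100.0;

          double det = a11 * a22 - a12 * a21;
          if (det == 0.0) {
              printf("singular system\n");
              return 1;
          }
          double F1 = (b1 * a22 - a12 * b2) / det;
          double F2 = (a11 * b2 - a21 * b1) / det;
          printf("F1 = %.2f, F2 = %.2f\n", F1, F2);
          return 0;
      }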

      • by gweihir ( 88907 )

        For introductory and early classes that is not good, but not nearly as bad. This was apparently a final exam in their MBA program.

        • It was a subtle jab, in that I'm equating the difficulty of first-year engineering with the completion of an MBA. Even so, I likely had to use more integrals than an MBA. Those even showed up in the chemistry classes I took "for fun".

  • by OrangeTide ( 124937 ) on Wednesday January 25, 2023 @07:45AM (#63238878) Homepage Journal

    The problem with exams is that they are written assuming a human college student is taking them. Those assumptions let us skip over a lot of tedious probing and focus on the incremental knowledge that the course provides. All of these machine learning systems are quite specialized and unbalanced in their knowledge set when compared to a human adult with two or more decades of experience. Sadly I suspect some professors will feel the need to adjust exams in order to filter out ML systems, probably with some trap questions that will just end up tripping up humans with false accusations of cheating.

  • MBA (Score:3, Funny)

    by MitchDev ( 2526834 ) on Wednesday January 25, 2023 @07:47AM (#63238888)

    Isn't that just parroting business buzzwords that have no real meaning?

  • by RUs1729 ( 10049396 ) on Wednesday January 25, 2023 @09:17AM (#63239080)
    It's an MBA exam, after all.
  • What is the possibility of a Mechanical Turk here? I admit I have not looked at ChatGPT (since they want my real live mobile number only; no VoIP, nothing else will do, so that's a non-starter). But it's curious from what I've seen. Seems too good to be true.
  • Don't count your chicks before you've bought the eggs.

  • by whitroth ( 9367 ) <whitroth@@@5-cent...us> on Wednesday January 25, 2023 @12:25PM (#63239686) Homepage

    ... that the MBA is bs.

  • You know that ChatGPT is basically a hallucination, right? It occasionally exhibits overtly hallucinatory behaviour in its output. It's great fun to play with!
  • by pesho ( 843750 ) on Wednesday January 25, 2023 @02:54PM (#63240162)

    Just yesterday I tested ChatGPT on questions for a molecular genetics exam I plan to give students this week. ChatGPT has a decent chance at a B grade. The answers are very well written. If only reproduction of well-established knowledge is required, the answers are excellent most of the time (except when it tries to "volunteer" information that the question is not specifically asking for). In follow-up questions that aim to clarify details or logic it fails very quickly, and in a way that is very similar to a liar that has been caught or a student trying to mask that they don't know the answer: it starts providing verbose, repetitive and circuitous answers that pile on flaws. The reasons are twofold: an algorithm based on associations that has no means of distinguishing fact from fiction, and reliance on information that has been repeated multiple times on the internet.

    Whether it will undermine education depends on how people in education adapt to it. I have been transitioning to take-home, open-book exams for my students because I feel it is impossible to know everything, and it is more important to be able to find, critically review and synthesize information. This has until now been a slow process because it requires a lot more work, both in the way we teach and the way we test. It is much easier to run the multiple-choice exam conveyor. I guess I will be forced to move faster now, and perhaps modify the exam to add verbal Q&A sessions with the students (more work ....). Other faculty are reacting by moving to tightly proctored exams or looking for tools that would detect AI-generated answers.
