
AI Tool Usage 'Correlates Negatively' With Performance in CS Class, Estonian Study Finds (phys.org)

How do AI tools impact college students? 231 students in an object-oriented programming class participated in a study at Estonia's University of Tartu (conducted by an associate professor of informatics and a recently graduated master's student). They were asked how frequently they used AI tools and for what purposes. The data were analyzed using descriptive statistics, and Spearman's rank correlation analysis was performed to examine the strength of the relationships. The results showed that students mainly used AI assistance for solving programming tasks — for example, debugging code and understanding examples. A surprising finding, however, was that more frequent use of chatbots correlated with lower academic results. One possible explanation is that struggling students were more likely to turn to AI. Nevertheless, the finding suggests that unguided use of AI and over-reliance on it may in fact hinder learning.
The researchers say their report provides "quantitative evidence that frequent AI use does not necessarily translate into better academic outcomes in programming courses."
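
For context, a minimal sketch of what the analysis described above looks like in practice, assuming hypothetical survey data (the numbers below are illustrative, not the study's):

    # Spearman rank correlation between self-reported AI-use frequency and
    # course results, as in the study. The data here are made up.
    from scipy.stats import spearmanr

    # Hypothetical per-student records: AI-use frequency
    # (0 = never ... 4 = weekly) and final course score (0-100).
    ai_use = [0, 1, 3, 4, 2, 0, 4, 1, 3, 2]
    scores = [88, 81, 70, 62, 75, 90, 58, 84, 66, 72]

    rho, p = spearmanr(ai_use, scores)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
    # A negative rho means higher reported AI use tends to accompany lower
    # scores -- a correlation that by itself says nothing about causation.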

Other results from the survey:
  • 47 respondents (20.3%) never used AI assistants in this course.
  • Only 3.9% of the students reported using AI assistants weekly, "suggesting that reliance on such tools is still relatively low."
  • "Few students feared plagiarism, suggesting students don't link AI use to it — raising academic concerns."


Comments Filter:
  • Surprising? (Score:4, Insightful)

    by liqu1d ( 4349325 ) on Sunday September 07, 2025 @12:51PM (#65644926)
Perhaps if you frequently offload your thinking to AI you would also find it surprising…
    • Re:Surprising? (Score:4, Insightful)

      by gweihir ( 88907 ) on Sunday September 07, 2025 @12:54PM (#65644932)

      Well, as most people cannot fact-check, they are forever surprised by actual facts. And I have a nagging suspicion that AI, dumb and without insight as it is, may actually do better than many people at "insight" about common and well-understood topics. But a good encyclopedia basically does the same.

    • by will4 ( 7250692 ) on Sunday September 07, 2025 @03:26PM (#65645140)

The surprising statistic is that the UK has 260 colleges and universities and 2,053,520 students (undergraduate and postgraduate), with 663,355 international students from outside the UK and the EU. https://www.universitiesuk.ac.... [universitiesuk.ac.uk]

      When the top universities are excluded, the student enrollment from non-UK and non-EU countries is much higher than the 30% overall number.

      A guess is that the lower tier universities/colleges will have a much higher AI use.

A second guess is that there are many colleges/universities that are fully dependent on international student tuition to keep their doors open.

  • Such a surprise (Score:5, Interesting)

    by gweihir ( 88907 ) on Sunday September 07, 2025 @12:51PM (#65644928)

    I have two personal data points:

1. My IT security students (several different classes and academic institutions) all view AI as a last resort, or as something to be used only after they have solved a task, to verify they got it all. This comes from negative experiences they have had. They say AI misses important aspects, prioritizes wrongly, hallucinates (apparently IT security is niche enough that this happens often), and that it generally takes more time to check its results than to come up with things directly. They also dislike that you often do not get references and sources for AI claims.

2. I taught a Python coding class in the 2nd semester for engineering students (they needed a lecturer and I had time). The students there told me that AI can at most be asked to explain one line of code; it routinely failed already at two connected lines. And for anything larger it was completely unusable. They also found that AI was often clueless and hallucinated some crap.
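
To illustrate the kind of coupling they meant (my reconstruction, not one of their actual exercises): two lines that are each trivial on their own, but where an explanation is only correct if it notices the aliasing between them:

    a = [1, 2, 3]
    b = a          # not a copy: b and a are the same list object
    b.append(4)    # mutates the one shared list
    print(a)       # prints [1, 2, 3, 4], not [1, 2, 3]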

Hence I conclude that average-to-smarter students are well aware of the dangers and keep a safe distance. Below-average ones are struggling anyway and may just try whatever they can get their hands on. And at least here, 30-50% of the initial participants drop out of academic STEM courses because it is too much for them. AI may have the unfortunate effect of having them drop out later, but overall I do not think it will create incompetent graduates.

Oh, and I do my exams on paper whenever possible, or online-no-AI for coding exams (so they can use the compiler). The latter is getting more problematic because of integrated AI tools. I expect we will have to move coding exams to on-site project work or something similar in the near future, perhaps taking a full day, maybe with group work and pass/fail grading. As I no longer teach coding from this year on, I am not involved in any of the respective decisions, though.

    • Re:Such a surprise (Score:4, Insightful)

      by TurboStar ( 712836 ) on Sunday September 07, 2025 @01:14PM (#65644970)

The students there told me that AI can at most be asked to explain one line of code; it routinely failed already at two connected lines.

      The smart students are saying this because they can get through your busy work faster with AI and don't want you to know. The dumb students are saying this because they aren't smart enough to manage the AI. You are believing this because why?

      • Re:Such a surprise (Score:4, Interesting)

        by gweihir ( 88907 ) on Sunday September 07, 2025 @02:03PM (#65645026)

I believe this because these were discussions about non-graded exercises, in which the students demonstrated both failed and successful AI results, and because they can get as much help as they want from me on those exercises. Also, I have a good relationship with most of my students, and discussions are frank and open. On top of that, about 50% (in the coding class, 100%) of my students are working while studying and have professional experience, and this type of student is primarily interested in learning things.

        Your level of mistrust, on the other hand, seems to indicate something completely different.

IMO, it's not "cheating" if 1. it's sanctioned by the requestor, 2. the "cheater" discloses their methods and results, and 3. an honest evaluation and discussion transpires about the meta of the whole ordeal.
        • Your level of mistrust, on the other hand, seems to indicate something completely different.

The word you're looking for is experience. Something you've demonstrated a lack of with regard to AI and CS competency.

          • by gweihir ( 88907 )

            Ah, yes. 35 years in the field with an on-target engineering PhD and teaching all along clearly has me completely inexperienced. Not.

            • For what it's worth, my own experiences match your groups 1 and 2. Though I'm self-taught with no CS background whatsoever. The guy in my signature claims to be, and so does this guy:

              https://slashdot.org/comments.... [slashdot.org]

You gotta love the way he asserts that he will code four times as fast, and how he relies on his programs to crash and burn when he makes a mistake, but then proceeds to talk about how his potential semantic error is superior because it involves typing a whole four fewer characters. And that's wi

              • by gweihir ( 88907 )

Does not surprise me. The biggest obstacle to good engineering is big egos. Other engineering disciplines try their best to filter these idiots out. CS is not quite there yet, and a lot of coders do not have a CS education in the first place.

                I have found uses for AI in just two areas though: Assisting in reverse engineering code output as they tend to be able to spot patterns in ways I occasionally overlook, and converting documentation into usable data structures. And that to me makes them basically just the next evolution of the search engine, ...

                Agreed. "Better search" is basically the main thing LLMs can do. Verifying whether you overlooked something is low risk because it can only improve your work (if you did it honestly). Finding and combining of essentially generic data structures from a description is pre

Your issue is that the state of the art is advancing so rapidly that even 6-12-month-old anecdotal experiences are out of date. I'm having AI build something for me while I type this, and I guarantee you it's stringing more than a few lines together. I would not rely on second-hand data for this; I'd go figure it out for yourself. Every single person that hacks out code for a living right now needs to go see what the state of the art is and figure out how to deal with it. To me it sounds like you've chosen "door ost

          • by gweihir ( 88907 )

The tech really is not advancing. It just gets better at hiding the defects. Incidentally, this is several-decades-old tech with known fundamental limitations. Oh, and there are really bad drawbacks to letting LLMs do your coding, like a high prevalence of security problems. But I guess people like you need to run into a wall before they see it is there. Well, good luck in the next few years; you are going to need it.

        • Re:Such a surprise (Score:4, Insightful)

          by tlhIngan ( 30335 ) <slashdot@w[ ].net ['orf' in gap]> on Sunday September 07, 2025 @06:10PM (#65645390)

          It could be as simple as students using AI where they used to use someone else.

          After all, in the "pre-AI" age, students would routinely copy code and other things from other students, and you can tell they did it because their grades generally were worse.

These days, I'm sure that instead of asking other students, they are asking AI, and all we're seeing is the same thing: the students of the past simply copied one another, while the students of today ask AI instead. Just like students of the past used paper-mill sites to write their homework, students now use ChatGPT to do the same.

Of course, I can't say I am completely innocent of the practice. We were in a group doing a group project. I and someone else smarter than me were working on something extremely complex for the rest of the group, while they worked on something much simpler for another class. In the end we figured out the complex work and presented it to the rest of the group to learn from, and they submitted the group project with all our names on it. I saw what they submitted and studied from it, so I did learn, and the rest of the group learned from us, and we all got good grades in the end. If the work had been divided equally among everyone, the two projects would have been a mess; it was simpler to split the work the other way.

          • by gweihir ( 88907 )

            It could be as simple as students using AI where they used to use someone else.

Sure. But AI cuts out the social aspect, like a comment "you do know that you need to pass an exam, right?" from the person you copy from. And AI makes this far too easy to do. Also, I had students angry at me when I graded copied "solutions" with no points and they asked "how can you know?". Then I showed them the example where, over several copying generations, an index became a factor and then became an exponent. Never got any more complaints after that.
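
To make that drift concrete (a Python rendering of the idea, not the actual copied work):

    seq = [5, 7, 11, 13]
    a, i = 2, 3

    gen0 = seq[i]   # the original: an index, "a sub i"       -> 13
    gen1 = a * i    # one hand-copy later: read as a factor   -> 6
    gen2 = a ** i   # two copies later: read as an exponent   -> 8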

            Bottom line, some students will self-sabotage and AI m

            • by tlhIngan ( 30335 )

              But AI cuts out the social aspect, like a comment "you do know that you need to pass an exam, right?" from the person you copy from. And AI makes this far too easy to do.

              Of course. When I was in university (pre dot-com era) the Internet was around to communicate with (everyone exchanging ICQ numbers was a thing), but if you wanted to copy off someone you either had to bump into them on campus, or phone them at home, and if you did that, you had to do it at reasonable times because parents and other things

        • I wouldn't be so quick to jump down his throat. The population of students you're teaching sounds vastly different than what most people imagine when they think of an intro CS class. You have working professionals and everyone else is remembering the unwashed masses of freshmen taking the class because they like Facebook (or whatever was the equivalent of the time). It sounds more like you're teaching a bootcamp class where most of the people taking it already have an engineering or science degree. That's a
          • by gweihir ( 88907 )

            That is possible. I am also doing this in a country where only about 20% of all people get the qualification for academic studies at all. This does not invalidate my experience though, just the generalization.

  • Having a machine do something for you doesn't make you as proficient when doing it yourself? Shocking!

    Next you'll tell me that using a calculator doesn't help you remember your times tables.

  • by Fons_de_spons ( 1311177 ) on Sunday September 07, 2025 @01:08PM (#65644958)
    Don't jump to conclusions... Maybe the students that struggle are driven to LLMs to get things done.
    • by znrt ( 2424692 )

it's really simple: if you want to get more shit done, use more ai. if you want to learn, use less.

      • It's kinda like laying work off on bottom feeder outsourced labor consultancies. You'll absolutely generate a lot of "activity," but generating better outcomes???...not so much.

        • by gweihir ( 88907 )

The question is how much outsourcing actually helps. From my experience, as soon as you need to get actual work done, you need to use local, small outsourcers, because they will care about you and will actually try to do good work. Forget about any well-known names. They just want your money and want to string you along for as long as possible. Actually solving your problems is not part of their business model and would decrease their profits.

          • by gweihir ( 88907 )

            Apparently there are blithering idiots that have not noticed this blatantly obvious problem yet and disagree enough to mod this down. No surprise.

it's really simple: if you want to get more shit done, use more ai. if you want to learn, use less.

I will modify that a bit to fit reality. If you want to get more shit done, but results don't matter, use AI. If you want to learn, or if you want a higher correctness ratio, forgo AI.

        • by gweihir ( 88907 )

Indeed. There is nothing wrong with using AI as an additional plausibility check, but that is essentially it for any work that requires insight for good results. Note that a lot of big names get filthy rich by not caring about the quality of their work. These are clearly evil (doing massive damage to others for a comparatively much smaller gain to themselves), but that seems to be something the human race can well afford at this time. Or not.

        • by znrt ( 2424692 )

fair enough, but that reality isn't really in the scope of the article. it isn't about the work being of good quality or not, but about students doing said work with ai apparently not having learned enough in the process. the reason is unclear, but this is the logical outcome when the process involves less mental exercise and intimate contact with the subject matter. still, while this is true in general, i don't know that it is exactly what happened here (as gp points out).

          then again ai is just a tool. if not used jud

      • by gweihir ( 88907 )

        For the first case, that really depends on what you want to get done. For the second part, yes.

      • More shit, damn the accuracy
Allow me to clarify... I teach a basic programming course. I see the talented ones writing games in a matter of weeks. A little chatgpt, some googling, usually when they are stuck. Then there are the ones who just do not like computers. Everything is a struggle. They want to spend as little time as possible on the course. Chatgpt is their best friend. So I am not surprised by the results of the study. They are kind of obvious to me. Of course there should be no people in a CS course that do not like co
    • But this is *negative* correlation, maybe that does mean causation???

  • by Slicker ( 102588 ) on Sunday September 07, 2025 @01:15PM (#65644978)

Initially, I found the same in myself--a real degradation overall in my productivity. I am a software engineer. It has not been easy learning how to use generative AI to actually increase and improve productivity. At first you think it can do almost anything for you, but gradually, over time, you realize it greatly over-promises.

Overall, the key is that you need to remain in charge of your work. It is an assistant that can, at best, be entrusted with small tasks under oversight. For example, frame out your project and clearly define your rules and preferences in fine detail. Then...

It's good at:
- Researching, summarizing, and giving examples of protocols, best practices, etc.
- Helping you identify considerations you might have overlooked.
- Writing bits of code where the inputs/outputs and constraints are clearly defined (see the sketch below).

It's bad at:
- Full projects.
- Writing anything novel (it knows common patterns and can't work much beyond them).
- Being honest and competent -- it cheats on writing code and on writing tests for that code; when you catch it red-handed, it will weasel its way out.
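
A sketch of what "clearly defined" means in the good case above (a hypothetical task of my own, not output from any assistant):

    # A spec tight enough to hand off and verify mechanically: exact input
    # and output types, the edge-case rule, and tests.
    def clamp(value: float, low: float, high: float) -> float:
        """Return value limited to the closed interval [low, high]."""
        if low > high:
            raise ValueError("low must not exceed high")
        return min(max(value, low), high)

    assert clamp(5.0, 0.0, 3.0) == 3.0
    assert clamp(-1.0, 0.0, 3.0) == 0.0
    assert clamp(2.0, 0.0, 3.0) == 2.0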

    The bottom line: you are in charge. You need to review what it gives you. You need to work back and forth with it.

    Also -- I am still learning.

    --Matthew

    • by gweihir ( 88907 )

      While I mostly agree, there is another aspect: Coding is a skill that needs to be practiced. If you stop writing the simple things yourself, you will get worse at writing the harder stuff.

I feel like a lot of these same observations can be made about people who over-rely on libraries and frameworks and don't actually understand or have any control over what they're doing. At a certain point, the AI is just patching frameworks together and implementing libraries without understanding them... just like some programmers do.

Well put. It's only good for certain use cases. *You* have to discover which cases those are, because it's like a blank canvas in front of an artist. You're starting from scratch. My opinion is that whoever targets and creates vertical applications, or, say, AI that is expert in a much narrower subject matter, i.e. "Python programming"... or perhaps just "programming" only, will capture market share and reach profitability. Then python programmers can get good value out, and Big Brains can charge you an arm an
  • It turns out that even though you can cover 5 miles quicker in a car, it negatively correlates with health outcomes compared to running or cycling the same distance. Using AI is like taking a taxi.

    • by gweihir ( 88907 )

      That is actually a very good comparison. Skills and insights need to be used to maintain them and even more so to improve them.

I have worked with a number of lazy engineers who use the google search bar as a calculator. As such, they don't write shit down and can't remember their answer five minutes later. They tend to be smart, but lazy as fuck. I suppose the AI kids have the same ranks: there are folks who take notes and make an effort regardless of the tools they use, and then there are the folks who will copy from the chatgpt window and not read or recall the "answer"...
  • Students that don't do the work! Don't learn much!
  • My grandmother talked about her grandfather when I was a boy. She described how he was a scribe at the local courthouse. He would take the judgements and write them up on parchment in beautiful handwriting. It seems he lamented the introduction of typewriters, and how they would cheapen the documents, with people using soulless mechanical machines rather than the love and precision of the human hand. People would lose the ability to write he would say.

    At school we were not allowed calculators at first, as i

  • Kids like my daughter are obsessed with learning. They avoid AI because they'd rather get a lower grade but learn the topics. Other kids want top grades but figure they can learn it later if they choose.

I never look at grades when evaluating CVs. I look at GitHub.
  • You mean I am supposed to fit all this information in my head, practice it, and eventually understand it? Sounds really hard

Runners who strap their fitbit to the dog's tail for an hour each day have less aerobic fitness.

  • AI is total rubbish at logic.

If you want AI to make code for you and you do anything beyond pseudocode (A does B, returns C), you're going to get rubbish. AI is great for boilerplate, repetitive stuff.
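
By "boilerplate" I mean fully pattern-determined code like this (my own illustration, not anything AI-generated): serialization that follows a convention and needs no real reasoning:

    from dataclasses import dataclass, asdict

    @dataclass
    class Student:
        name: str
        year: int
        score: float

        @classmethod
        def from_dict(cls, d: dict) -> "Student":
            # Mechanical field-by-field reconstruction -- the kind of
            # repetitive code an assistant usually gets right.
            return cls(name=d["name"], year=int(d["year"]),
                       score=float(d["score"]))

    s = Student("Mari", 2, 87.5)
    assert Student.from_dict(asdict(s)) == s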

CS students in 101 classes shouldn't be using AI for assignments, for the same reason you don't give 10-year-olds calculators: you want them to learn the process first before using other tools.

    You need to be able to recognize when it's spitting garbage back out at you.
