Education Programming

Why One Computer Science Professor is 'Feeling Cranky About AI' in Education (acm.org) 63

Long-time Slashdot reader theodp writes: Over at the Communications of the ACM, Bard College CS Prof Valerie Barr explains why she's Feeling Cranky About AI and CS Education. Having seen CS education go through a number of we-have-to-teach-this moments over the decades — introductory programming languages, the Web, Data Science, etc. — Barr turns her attention to the next hand-wringing "what will we do" CS education moment with AI.

"We're jumping through hoops without stopping first to question the run-away train," Barr writes...

Barr calls for stepping back from "the industry assertion that the ship has sailed, every student needs to use AI early and often, and there is no future application that isn't going to use AI in some way" and instead thoughtfully "articulate what sort of future problem solvers and software developers we want to graduate from our programs, and determine ways in which the incorporation of AI can help us get there."

From the article: In much discussion about CS education:

a.) There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers.

b.) There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.

c.) There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold—but we've perturbed education to such an extent that our students can no longer function without their AI helpers.

Comments Filter:
  • Great article (Score:4, Insightful)

    by dskoll ( 99328 ) on Sunday September 21, 2025 @07:53PM (#65674992) Homepage

    That was an excellent, well thought-out article. Everyone should read it and not just rely on the summary on Slashdot.

    • by Anonymous Coward
      Everyone should read it and not just rely on the summary on Slashdot.

      If you didn't have a five digit UID I would ask if you're new here.
    • I honestly gained nothing from reading the full article compared to the Slashdot summary. What did I miss?
    • by alvinrod ( 889928 ) on Sunday September 21, 2025 @08:04PM (#65675016)
      Look at this guy trying to brag about reading the summary. Real /.ers don't even read past the third word in the title before commenting. After all, why have one computer when you can have two? That's just basic math.
    • by will4 ( 7250692 ) on Sunday September 21, 2025 @10:44PM (#65675162)

      The whole AI question, from the use of copyrighted material for training to ethical use to usability, is way behind the main hidden topic:

      Governments will push AI as far as it can go because it is to be integrated into national defense, the military, and military strategy.

      Expect a vigorous debate, with narrowly accepted and approved opinions on both sides, that keeps spinning while the military part progresses.

      • by gtall ( 79522 )

        The thing I think they like best about AI is no humans. Humans are messy: they raise ethical concerns, environmental concerns, concerns about the future, etc. AI is a dictator's dream. So if your military is populated by bots, then orders get acted upon immediately, with no pushback or thought for consequences.

        "What?" you say. "All those things can be built into AI." And we'll get AI versions of those elements that will only cover what the bots deem necessary.

        An added benefit is that if you replace taxpayers wi

        • by neoRUR ( 674398 )

          I keep saying this is the last president, in any country that can afford one, who won't have their own robot army. Good or bad, that is how it will be going. There should be a new Robot Command force for autonomous robot units that follows conventional military command, not one order from the top to all at the bottom, which is what all dictators want. Right now China is leading the production of humanoid robots.

    • It's worth a read but it's more a blog post than an article. Still, it's better than most of the stuff they post. As dskoll said, read the article.

  • There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers

    Since when has anyone (not counting the people downstream, the theft victims, or the workers themselves) worried or cared about that?

    There's little interest in considering the extent to which, by incorporating generative AI into our teaching, we end up supporting a handful of companies that are burning billions in a vain attempt to each achieve performance that is a scintilla better than everyone else's.

    I fail to see how this is any different than now or at any other point in CS education since at least the 1980s and possibly before.

    There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold

    Same thing that happened the last dozen times. Everyone will be chasing after the next wave to ride instead of caring about AI. Does anyone even remember, much less actively think about, all of the blockchain companies?

    • Yes, it's the same old song, but the AI bubble does it on an unprecedented scale in terms of energy use, environmental and societal impact, the enormous amounts of money involved, and the concentration of the industry in the hands of a few giant players.

    • Re:Same old song (Score:5, Insightful)

      by rudy_wayne ( 414635 ) on Sunday September 21, 2025 @08:15PM (#65675022)
      For a very long time, one of the biggest problems we have had is FOMO -- Fear Of Missing Out.

      If something becomes popular for more than 7 minutes, everyone immediately rushes to jump on board. $Billions are spent and wasted, a few people might get rich from it, and then it all collapses. Lather, Rinse, Repeat.
      • Re:Same old song (Score:5, Insightful)

        by gweihir ( 88907 ) on Monday September 22, 2025 @05:22AM (#65675406)

        Indeed. Almost like people do not even know anymore that you do not have to be part of every hype.

        At the same time, Azure probably got completely compromised. Again. And they do not even know who got in and did what, as they have no logs. Maybe invest some real money into IT security? But no, empty promises of "Security is our highest priority" are the extent of what they do about that. But billions go into AI. As if extending the fragile house of cards even more was a sane idea.

        • by DarkOx ( 621550 )

          A lot of that has to do with how rapidly the world is changing these days. I don't think it is fear of missing out as much as it is fear of being obsoleted.

          It was a lot easier to say 'maybe I'll sit this one out' when you were not thinking your entire vocational industry might vanish overnight; imagine being in the video rental business or a music store in 2010. How many shop owners would really have imagined, after 60 years of people successively collecting records, tapes, VHS, DVD, Blu-ray, media t

          • by gweihir ( 88907 )

            A lot of that has to do with how rapidly the world is changing these days. I don't think it is fear of missing out as much as it is fear of being obsoleted.

            These two sound quite similar to each other. And no, the world is not moving that fast. But some already far-too-rich megacorps profit immensely from creating that impression.

    • by sjames ( 1099 )

      I fail to see how this is any different than now or at any other point in CS education since at least the 1980s and possibly before.

      There is a difference. If you learned Pascal as the wave of the future, you could always do FORTRAN or, with a little re-training, C (pointers always left Pascal programmers a bit befuddled at first). If you bet on Java, you could always migrate to C or Python. Some of the IDEs do leave people a bit brain-dead, but not so much that they can't make the jump to a simple text editor and a command-line compiler. Even BASIC was OK, though you'd have to un-learn a few bad habits.

      But if you learn 'vibe coding', you are dea

  • by Tschaine ( 10502969 ) on Sunday September 21, 2025 @08:21PM (#65675032)

    The promises started long before the technology could fulfill them. Who is going to do the vibe-cleanup coding if it takes a decade or three for the tech to catch up to the hype?

    People who understand how to write reliable maintainable code, of course... But the world seems poorly positioned to produce more of those.

    The tricky thing is that LLMs are actually pretty good at implementing homework assignments. It's when you need code beyond that scope that the illusion of competence starts to fall apart.

    • by dskoll ( 99328 )

      Who is going to do the vibe-cleanup coding if it takes a decade or three for the tech to catch up to the hype?

      There are consultants [donado.co] for that.

      • by sjames ( 1099 )

        Sooner or later, we'll run out of consultants. People eventually retire or quit the industry in disgust and the new crop of vibe coders certainly aren't going to fill those roles.

    • by gweihir ( 88907 )

      The promises started long before the technology could fulfill them. Who is going to do the vibe-cleanup coding if it takes a decade or three for the tech to catch up to the hype?

      People who understand how to write reliable maintainable code, of course... But the world seems poorly positioned to produce more of those.

      I would argue that it may be impossible to produce many more of those, as it requires a specific mind-set, specific skills and specific motivations. Obviously, treating the existing ones badly does make the problem worse.

      The tricky thing is that LLMs are actually pretty good at implementing homework assignments. It's when you need code beyond that scope that the illusion of competence starts to fall apart.

      Exactly. Homework is simple because you need to be able to learn from it. Fail on the homework (or use an LLM) and you will not get anywhere.

    • The tricky thing is that LLMs are actually pretty good at implementing homework assignments. It's when you need code beyond that scope that the illusion of competence starts to fall apart.

      What I've found, honestly, as an occasional user of them, is that LLMs are great at writing small functions: one small, discrete, easily describable task with clear inputs and an output (something like the sketch below is roughly the sweet spot). They suck at doing things more complex than that.

      You still need somebody who knows how to actually take all those functions, check them over, and then implement the actual program logic to make something useful happen overall.
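
      A toy sketch of that kind of task, assuming Python; the function name, spec wording, and sample data are all hypothetical illustrations, not taken from the comment:

```python
# Toy illustration (hypothetical spec, names, and data):
# "Given an iterable of ISO-8601 date strings, return a dict keyed 'YYYY-MM'
# counting dates per calendar month, skipping strings that do not parse."
from collections import Counter
from datetime import date

def count_by_month(dates: list[str]) -> dict[str, int]:
    counts = Counter()
    for s in dates:
        try:
            d = date.fromisoformat(s)
        except ValueError:
            continue  # skip malformed entries, per the spec
        counts[f"{d.year:04d}-{d.month:02d}"] += 1
    return dict(counts)

print(count_by_month(["2025-01-03", "2025-01-17", "2025-02-01", "not-a-date"]))
# {'2025-01': 2, '2025-02': 1}
```

      Stringing dozens of such functions into a coherent program, and checking that each one actually matches its spec, is the glue work the parent says still needs a human.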

    • by sjames ( 1099 )

      That matches my (limited) experience. Just for giggles, I let Copilot (on GitHub) have a crack at a function in some of my code. Its suggested improvement made some sense in a vacuum, but in context it read more like someone who feels they must 'contribute' something and that's all they could find. It didn't seem to understand that the function would always be called in the context of a transaction, and that raising an exception would roll it back.
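
      A hypothetical illustration of that contract (not the GP's actual code): a helper that assumes the caller has already opened a transaction, so raising an exception is its entire error-handling strategy. The sketch uses Python's sqlite3, where a connection used as a context manager commits on success and rolls back if an exception escapes the block; the accounts(id, balance) table is an assumption for the example.

```python
import sqlite3

def debit_account(cur: sqlite3.Cursor, account_id: int, amount: int) -> None:
    """Assumes it always runs inside a transaction owned by the caller."""
    cur.execute("SELECT balance FROM accounts WHERE id = ?", (account_id,))
    row = cur.fetchone()
    if row is None or row[0] < amount:
        # No cleanup here on purpose: letting the exception propagate is what
        # undoes everything else done so far in the caller's transaction.
        raise ValueError("insufficient funds")
    cur.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                (amount, account_id))

conn = sqlite3.connect("bank.db")
try:
    # sqlite3 connections used as context managers commit on success and
    # roll back if an exception escapes the block.
    with conn:
        cur = conn.cursor()
        cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
        debit_account(cur, 1, 100)   # if this raises, the credit above is undone
except ValueError:
    print("transfer rejected; nothing was applied")
```

      A context-blind "improvement" that catches the exception inside debit_account and returns an error code instead would defeat the rollback and leave the transfer half-applied, which is exactly the kind of in-context knowledge the suggestion lacked.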

  • by phantomfive ( 622387 ) on Sunday September 21, 2025 @09:30PM (#65675088) Journal
    If I were in charge of a Computer Science curriculum at a university, I would address the LLM problem like this:

    I would offer a class (a third- or fourth-year class) that starts from the basics of neural networks, and by the end of the class the students have built their own LLM. By building their own LLM, they will deepen their understanding, have a solid foundation, and avoid a lot of the nonsense that gets propagated about LLMs. The amount of code involved is not huge; it's actually quite doable.
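
    As a rough sense of scale, here is a minimal sketch of the very first rung such a course might start from: a character-level bigram language model in PyTorch, trained with the same loop shape (batch, cross-entropy loss, backprop, sample) a full transformer-based LLM would use later in the course. The corpus file name, hyperparameters, and model choice are illustrative assumptions, not part of the parent's proposal.

```python
# Minimal sketch: character-level bigram language model (assumes PyTorch and
# any plain-text corpus saved as corpus.txt).
import torch
import torch.nn as nn
import torch.nn.functional as F

text = open("corpus.txt").read()
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class BigramLM(nn.Module):
    def __init__(self, vocab_size):
        super().__init__()
        # row i holds the logits for "which character follows character i"
        self.table = nn.Embedding(vocab_size, vocab_size)

    def forward(self, idx):          # idx: (batch,)
        return self.table(idx)       # logits: (batch, vocab)

model = BigramLM(len(chars))
opt = torch.optim.AdamW(model.parameters(), lr=1e-2)

for step in range(5000):
    ix = torch.randint(len(data) - 1, (32,))   # random positions in the corpus
    x, y = data[ix], data[ix + 1]              # current char -> next char
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: start anywhere and repeatedly draw the next character.
idx = torch.randint(len(chars), (1,))
out = [itos[int(idx)]]
for _ in range(200):
    probs = F.softmax(model(idx), dim=-1)      # (1, vocab)
    idx = torch.multinomial(probs, num_samples=1).squeeze(1)
    out.append(itos[int(idx)])
print("".join(out))
```

    From there a course could swap the lookup table for attention blocks, add a tokenizer, and scale up, while the surrounding training and sampling code stays recognizably the same.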
  • by hdyoung ( 5182939 ) on Sunday September 21, 2025 @09:32PM (#65675090)
    And every time I mention it, some angry soul mods me down.

    This is exactly why universities can seem aloof to industry needs. Industry decides they need everything to be focused on *insert most recent shiny thing*, and suddenly every company is demanding that every new employee aBsOlUTeLY MuST have at least 10 years experience in something that basically didn’t exist 4 years ago. And they blame universities for a “disconnect between the ivory tower and the real world”. The universities roll their eyes, and start making noises about catering to current needs, knowing full well that the bubble will pop and the focus will be replaced by the next shiny thing.

    Universities need to be prepping people to use AI, for sure, but it’ll be one skill among many. You know what else employers like to see in their new hires? Decent speaking skills, decent writing skills, an understanding of basic professionalism, the ability to work on a team, basic mastery of the standard CS topics developed over the last 25 years, and an ability to work with non-CS types. And, yes, some skill with upcoming AI/ML/LLM tools. Oh, and all this has to be taught in 4 years. Any university that actually listens to the industry screaming about AI will dump all those other skills, implement 4 full years of AI-centric content, and their CS program will crater like the Tunguska event when the AI bubble pops.

    No, LLM models are NOT the start of the singularity. Sam Altman and all the other AI-tech-bros want you to believe that because they want investors to cough up all-the-dollars so they can play in the big leagues of the most recent computing fad.

    Maybe I’m wrong and the world will blast past me while I grumble about people on my lawn. I acknowledge that AI will have a significant impact. But, 30 years from now, workplaces will look a lot like they do right now. The main difference is that people will have one more useful tool in their belt to use.
    • by evanh ( 627108 )

      I can't make much sense of that rant. Here we have a university professor highlighting that we need to take this AI craze a bit slower, and you seem to agree, but you still start out by complaining about the universities rather than the Kool-Aid pushers.

      • by gtall ( 79522 )

        Did you even read what the GP wrote? He concentrated on Universities because the article concentrated on those.

        Where I quibble a bit with the GP is that I am unsure what companies WILL like to see in their employees once they have drunk the AI Kool-Aid. We want to believe they will value the things the GP mentions, but I do not believe it is a given they will value those things in the future, especially since they tend to be run by MBA-bots who really only want to get rich and retire until they die of heat

        • My response would be this: most university programs don't cater to a single employer or even a single sector. Yeah, there are a few CS companies going all-in on AI, and a few CS programs that will try to feed that need. But, without looking at the numbers, I'm gonna guess that the vast majority of CS jobs are NOT AI-centric; they will use AI as a tool (along with dozens of other tools) to accomplish other tasks, but those places don't make the news because they don't generate likes/clicks.

          There's alw
  • No real CS student (aka hacker in the vernacular sense) is ever concerned with A let alone B or C. They are only interested in exploration of technology and how to make it do things that the designers and gatekeepers never intended. Read Steven Levy's book.

  • It seems that the downsides of AI as discussed by the professor are mostly about social or economic impact. While these are valid points, they read like a discussion by a social science professor rather than a computer science professor. "Exploitation of (data) workers" is exactly the kind of phrase commonly used by social science professors. It would be helpful to CS education if there were more discussion of the pitfalls of AI from a computer science standpoint. I can think of two main problems after
    • by DarkOx ( 621550 )

      I think what is actually interesting about the latest wave of machine learning is that we might be approaching a point where we don't need to create 'new software' for many tasks, even new ones.

      I don't think we can vibe-code or prompt-engineer our way to things that require high precision and absolute correctness; I don't think you'll want your bank keeping a ledger in a vibe-coded database engine. But... there are lots of computing tasks where, if you could get the error rate down to that of

  • by Tony Isaac ( 1301187 ) on Sunday September 21, 2025 @11:48PM (#65675212) Homepage

    I got my CS degree in 1988, just as the personal computer revolution was washing over the world. There was a lot of hand-wringing back then too, about how the computer would take away people's ability to think and do things on their own.

    But I didn't make something of my career by wringing my hands about the downsides of the new computer technologies. I was *excited* about the possibilities I could see, and dove headlong into it. The result was a very fun, long, and exciting, not to mention well-paying, career.

    Today, I'm again excited about this latest new technology. Yeah, it will have some down sides, I get it. But that's not going to stop me from leveraging it to its fullest potential. And, I'm having fun doing it.

  • In my experience the machines are very clever indeed at programming, far better at it than the average person who does know how to write software. And they are constantly improving; I am seeing this just month over month. You can get a huge amount of good code out of LLMs if you know what you're doing. An experienced programmer can just fly. There's no sense in whining about it. That train has left the station.

    Uses way too much energy and resources? I totally agree. "the data theft impact, the treatment and exploi

    • Re:Overwrought (Score:5, Informative)

      by serviscope_minor ( 664417 ) on Monday September 22, 2025 @03:53AM (#65675364) Journal

      You can get a huge amount of good code out of LLMs if you know what you're doing. An experienced programmer can just fly.

      This does not appear to be holding up in practice, at least not reliably.

      https://developers.slashdot.or... [slashdot.org]

      Clearly the value being generated is very large. Not just my perception but in the opinion of the most wealthy investors.

      You may have thought tulip bulb growing was generating very large value too...

      The machines are already able to do most coding and in some cases all of it.

      Again, not my experience. I'm inveterately lazy, and have tried it repeatedly. It's... OK, I guess. Definitely faster for some stuff; it seems to actually slow me down on others. Trouble is, you never know which in advance.

      • by Tom ( 822 )

        This does not appear to be holding up in practice, at least not reliably.

        It holds up in some cases, not in others, and calculating an average muddles that.

        Personally, I use AI coding assists for two purposes quite successfully: a) more intelligent auto-complete and b) writing a piece of code using a common, well understood algorithm (i.e. lots of sources the AI could learn from) in the specific programming language or setup that I need.

        It turns out that it is much faster and almost as reliable to have the AI do that than to find a few examples on GitHub and Stack Overflow, checki

      • >> This does not appear to be holding up in practice

        That article is about a study of "experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years". I can assure you that 1M+ lines of legacy code which the developer is intimately familiar with is not a typical programming scenario. Also, the study did not state that the developers had previous experience working with LLMs, which is critical.

        >> Again,

        • Maybe you just suck at it. Coding with AI assistance is a learned skill, and if you don't know how to do it you may very well get bad results.

          It's possible, but the problem is the AI writes buggy code and invents APIs which don't exist. Those are fairly widely reported problems.

          • >> Those are fairly widely reported problems

            Have you actually used it yourself? AI has written many thousands of lines of rock-solid code for me, complete with unit tests and extensive documentation. And it isn't just me; AI assistance is standard procedure for professional software developers these days.

            https://www.c-sharpcorner.com/... [c-sharpcorner.com]
            "According to GitHub’s 2024 report, 92% of developers in the U.S. already use AI coding tools in some form. Stack Overflow’s 2024 Developer Survey showed ov

        • "Maybe you just suck at it. Coding with AI assistance is a learned skill"

          How do you get the software to not hallucinate functions or even entire libraries which do not exist at all? Even when writing a very small shell script, the software makes these kinds of errors, and then some more, like hallucinating flags and arguments which do not exist either.

          • >> not hallucinate functions or even entire libraries which do not exist at all

            It never does that in my experience. But hey, don't use AI. Who cares?

      • by TheBAFH ( 68624 )

        For better results, you could use AI to write the prompts for you.

  • by MpVpRb ( 1423381 ) on Monday September 22, 2025 @12:48AM (#65675240)

    Real computer science is not the same as teaching the fashionable language or tool of the moment.
    It should be about teaching students how to think about code and design useful and reliable software.
    Current AI tools are interesting, not because they allow the clueless to quickly "vibe code" buggy, insecure crap, but because they give hope for a future where experts can build better software and use it to solve previously intractable problems.
    The best things to teach are the conceptual fundamentals and the ability to apply the fundamentals to whatever tech comes along tomorrow.

  • by Anonymous Coward

    Seems to me like there's a reason she's a professor at Bard, and not at an institution held in higher regard, let alone one known for its technical prowess rather than liberal arts. Bard is barely in the top 1/3 of liberal arts colleges.

    She's missing the entire point: a good CS degree is not taught in one language, or with one focus. When I graduated with my bachelor's from a very well-known university, I had learned about a dozen languages. I wasn't at a trade school; I was at a university where we were taught

  • smoke and mirrors (Score:5, Interesting)

    by Tom ( 822 ) on Monday September 22, 2025 @05:23AM (#65675408) Homepage Journal

    Hey, industry, I've got an idea: If you need specific, recent, skills (especially in the framework-of-the-month class), how about you train people in them?

    That used to be the norm. Companies would hire apprentices, train them in the exact skills needed, then at the end hire them as proper employees. These days, though, the training part is outsourced to the education system. And that's just dumb in so many ways.

    Universities should not train the flavour of the moment. Because by the time people graduate, that may have already shifted elsewhere. Universities train the basics and the thinking needed to grow into nearby fields. Yes, thinking is a skill that can be trained.

    Case in point: When I was in university, there was one short course on cybersecurity. And yet that's been my profession for over two decades now. There were zero courses on AI. And yet there are whitepapers on AI with me as a co-author. And of the seven programming languages I learnt in university, I haven't used even one of them ever professionally and only one privately (C, of course. You can never go wrong learning C. If you have a university diploma in computer science and they didn't teach you C, demand your money back). Ok, if you count SQL as a programming language, it's eight and I did use that professionally a few times. But I consider none of them a waste of time. Ok, Haskell maybe. The actual skill acquired was "programming", not a particular language.

    Should universities teach about AI? Yes, I think so. Should they teach how to prompt engineer for ChatGPT 4? Totally not. That'll be obsolete before they even graduate.

    So if your company needs people who have a specific AI-related skill (like prompt engineering) and know a specific AI tool or model - find them or train them. Don't demand that other people train them for you.

    FFS, we complain about freeloaders everywhere, but the industry has become a cesspool of freeloaders these days.

    • These days, though, the training part is outsourced to the education system. And that's just dumb in so many ways.

      Never mind apprentices, even just normal on-the-job training. Personally, I've always been a fan, and if I can do it in a tiny startup, then bigger companies certainly can.

    • Re:smoke and mirrors (Score:4, Interesting)

      by fuzzyfuzzyfungus ( 1223518 ) on Monday September 22, 2025 @06:39AM (#65675456) Journal
      As best I can tell, most of the complaining about freeloaders is a sideshow in the battle over who deserves subsidies, not objections in principle. I'm less clear on whether there's also a positive correlation between whining about the subsidies going to people who aren't you and actively seeking them yourself, or whether the cases of people who do both are disproportionately irksome and so appear more common than a dispassionate analysis of the numbers would reveal them to be.
  • There's little interest in thinking about what's going to happen when the LLM companies decide that they have plateaued, that there's no more money to burn/spend, and a bunch of them fold—but we've perturbed education to such an extent that our students can no longer function without their AI helpers.

    Er, they'll switch to local LLMs maybe? Some adjustments of course but same basic idea.

    (Also, I've heard this discussed plenty; there has been plenty of interest in this ... can't speak for the universities though, lol)

  • They are very much not alone. See this https://doi.org/10.5281/zenodo... [doi.org]
  • a.) There's little interest in interrogating the downsides of generative AI, such as the environmental impact, the data theft impact, the treatment and exploitation of data workers.

    That's all the press ever fucking talks about, to the point where you've got people who use the cloud for everything bitching about AI like the rest of their cloud use isn't impacting the environment. Also, analyzing data isn't theft.

    b.) There's little interest in considering the extent to which, by incorporating generative AI i

  • > what’s going to happen when the LLM companies decide that they have plateaued, that there’s no more money to burn/spend, and a bunch of them fold—but we’ve perturbed education to such an extent that our students can no longer function without their AI helpers.

    That's a bit silly considering that
    1. there are local AI models that don't depend on online services, that are pretty powerful, and that can already be used for programming -- at least for the things that students need.
    2. there are o

  • Simple as that.

    YOU are supposed to be obtaining a skill. How can you judge what AI creates, when you're not even able to read what it creates... errr... copies.

  • by Big Hairy Gorilla ( 9839972 ) on Monday September 22, 2025 @10:08AM (#65675666)
    loss of fundamental skills
    Without a foundation in fundamental ideas, and some practice, you aren't "smart" enough to vet the outputs.
    The Kernighan quote comes to mind; I think it's applicable here:

    Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
    Brian Wilson Kernighan

    Then we end up with a vacuum without anybody with enough knowledge to understand what the robot spit out.
