Education AI Programming

AI's Impact on CS Education Likened to Calculator's Impact on Math Education (acm.org) 102

In Communications of the ACM, Google's VP of Education notes how calculators impacted math education — and wonders whether generative AI will have the same impact on CS education: "Teachers had to find the right amount of long-hand arithmetic and mathematical problem solving for students to do, in order for them to have the 'number sense' to be successful later in algebra and calculus. Too much focus on calculators diminished number sense. We have a similar situation in determining the 'code sense' required for students to be successful in this new realm of automated software engineering. It will take a few iterations to understand exactly what kind of praxis students need in this new era of LLMs to develop sufficient code sense, but now is the time to experiment."
Long-time Slashdot reader theodp notes it's not the first time the Google executive has had to consider "iterating" curriculum: The CACM article echoes earlier comments Google's Education VP made in a featured talk called The Future of Computational Thinking at last year's Blockly Summit. (Blockly is the Google technology that powers the drag-and-drop coding IDEs used for K-12 CS education, including Scratch and Code.org.) Envisioning a world where AI generates code and humans proofread it, Johnson explained: "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals. So, we see a shift with students to focus more on reading and understanding and assessing generated code and less about actually writing it. [...] I don't anticipate that the need for understanding code is going to go away entirely right away [...] I think there will still be, at least in the near term, a need to read and understand code so that you can assess the reliability, the correctness, of generated code. So, I think in the near term there's still going to be a need for that." In the following Q&A, Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be.
  • that could teach you math, and provide solutions to math problems.
    • by fibonacci8 ( 260615 ) on Saturday April 06, 2024 @02:53PM (#64375186)
      No, it's more like a calculator that can copy someone else's homework, present that to you, claim that that's close enough to teaching, and doesn't have any recourse when it's incorrect.
We already have calculators that do that. When I was in community college 20 years ago, I had a TI-92 that would solve integrals. Basically made much of calculus 2 pointless.

      • by bn-7bc ( 909819 )
Well, IIRC the answers that came out of the TIs were rather easy to spot, as the calculator did some stuff that was computationally efficient but a pain to do manually, so the answer the calc gave, while correct, always looked rather different from what we got when doing it manually (yes, the manual answers were correct). At any rate you always needed to show what steps you used, and the calc (at least in its default state) was unable to do that.
Well, if you did all your homework with that calculator and never had to show your work, did you actually learn anything? In college at least, it's assumed that the student _wants_ to learn, because the student is paying for the education instead of being forced to go. Now when you get to Calculus 3 and they don't allow calculators, or you have to show all your work, do you then complain to the professor that it's unfair because you never learned how to do integration?

        • No calculators (other than a basic scientific calculator) were allowed during the exams. You literally can't pass the class with such a calculator. Anyways, calc 3 was much easier, which isn't unexpected as it's generally understood that calc 2 is the hardest calculus course.

      • "Pointless" except now you can't do advanced mathematics. Learning calculus and basic differential equations are the arpeggios of real value mathematics. But, if your purpose in taking calculus was to get a degree, sure, it's pointless.
Did a quick test by asking ChatGPT to do a simple division by pi. Apparently it used a value close to pi = 3.142507.

Were you using 3.5? I tried it in GPT-4 and it gave me a perfect answer.

        Me: divide pi by 10.

        ChatGPT:
        # Calculating pi divided by 10
        3.141592653589793 / 10
        0.314
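For comparison, the same division is easy to check locally. A minimal Python sketch (standard library only, shown for reference rather than as ChatGPT output):

import math

# Python's math module carries pi to full double precision.
print(math.pi)        # 3.141592653589793
print(math.pi / 10)   # 0.3141592653589793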
  • Nope. (Score:5, Informative)

    by Nrrqshrr ( 1879148 ) on Saturday April 06, 2024 @02:45PM (#64375170)

    I disagree.
    The calculator gives you the correct answer. If your math is wrong you will get an answer that makes no sense or straight up an error.
    The "AI" gives you an answer and tries hard to convince you it's correct. You can only tell whether it's a good answer or not if you're capable of writing the good answer yourself, anyway.

    • by Njovich ( 553857 )

      Ok, tell me one question that would realistically be asked in school that ChatGPT 4 will get wrong?

      • Re:Nope. (Score:4, Informative)

        by ffkom ( 3519199 ) on Saturday April 06, 2024 @04:58PM (#64375476)
There was plenty of news coverage with examples, like https://www.businessinsider.co... [businessinsider.com]

        Of course, once this year's test questions have become part of the training data, it will likely not fail them again.
        • by Njovich ( 553857 )

          This article was from before ChatGPT 4 and it seems version 4 gets them all correct. How about an actual example instead of what version 3.5 could not do (which is a vastly inferior product).

If 3.5 was someone paraphrasing what they overheard at Starbucks, maybe 4 is someone paraphrasing what they overheard at a TED talk. But it's still just paraphrasing. It's still just language. It isn't a subject-matter expert. It will always be dangerous to trust a system of this type with anything important.
            • Just like humans are always paraphrasing what they experienced before. It is always dangerous to trust them with anything important.

              Out of billions of humans on the planet, once in a while someone would get an original thought.

              Otherwise they're all just stochastic parrots.

            • by Njovich ( 553857 )

I don't even know what subject-matter expert means in this context. We are talking about like high-school-level learning, right? I don't think you have many experts at hand there in general.

              I'm not saying ChatGPT should be a university professor.

I was just asking "one question that would realistically be asked in school that ChatGPT 4 will get wrong". It's not like a rhetorical question, I'm legitimately curious about this. I suspect it would be like mathematics or so. A lot of school exercises and textbooks

              • I am not paying for 4.0, but ask it this if you can:
                "A shoe can fit in a room, and a room can fit in a house. Can it fit in a shoe?"

There are only two right answers for this. A 5- and a 7-year-old each got it in a second. 3.5 gave a long "it could be this or this" answer, which is wrong; the right answer then was "I don't know."

                You may say this is an ambiguous question but that is life. Answers become simpler as the question becomes specific. But normally, we don't have enough information about the answer to m

                • by Njovich ( 553857 )

                  It gives me this:

                  Your statement plays with the concept of scale and containment in an interesting way. When you say "a shoe can fit in a room, and a room can fit in a house," you're talking about objects that can contain one another based on their size. However, the twist comes with the question "Can it fit in a shoe?" Here, the "it" presumably refers to the house or the room, which obviously cannot fit inside a shoe due to the significant difference in size. This question cleverly flips the perspective, ch

The problem is that from a "social" and "human" perspective, the answer is not really useful. Many answers can be right, very few are useful. Overly explanatory answers or ones that cover all permutations are rarely useful. The correct answer from the AI should have been "I don't know."... because it doesn't know, without a more constrained question.

I asked this question now of a 3-, 5-, 12-, and 37-year-old. I got a No from all but the 5-year-old, who said he didn't know. Which is also a right answer. The 12-year-old did ask after if

                    • by Njovich ( 553857 )

Actually, in the context of school questions that we discussed, I disagree. If you did not provide adequate context to determine what 'it' refers to, then the answerer is free to pick whatever (reasonable) guess he has about it. ChatGPT's guess about which context was meant was entirely reasonable. It also specifically mentioned the assumption it made.

ChatGPT's guess about which context was meant was entirely reasonable. It also specifically mentioned the assumption it made.

Most standardized tests like the SAT/ACT/LSAT have questions at the 90%+ mark that are designed to appear vague and ask for the "best" answer. Also, you can't guess "anything": most test takers will see one option that, although it appears possible, can't be right given the context. Mine were Yes/No questions to keep it overly simple. "Blue" is not a possibility.

And for my question, ChatGPT did not give you an answer. It gave you reasons why it could be different answers. It chose a very wordy "maybe", which isn't really an answe

              • LLMs are an encoding of language. They are great at parsing, summarizing, and querying language. Play word games with them if you like. But, they have no deep understanding of the concepts behind the language. They can answer questions *just enough* to seem like they know what they are talking about, and increasing the parameters improves that *just enough* to seem like it's possible to reach a state where they are reliable. But it isn't. Fundamentally they do not work that way. Now, I think there is a lot
    • This should not be marked informative. The electronic calculator actually does not give you the correct answer. So the distinction you are trying to make with AI is only a matter of degree.

      Your last point is quite agreeable though. In all such cases, it is necessary to write the actual answer down first, if you want to assess if the computer answer is within tolerance. The trick is to find a way to write the actual answer which doesn't require writing the actual answer explicitly. That way the compariso

      • The electronic calculator correctly gives you the answer to the question you asked. You have to know how to ask the question in order to correctly get the answer that you actually need.

You don't need to know anything to ask the LLM a question to which it will give a plausible-looking answer. The LLM may or may not correctly give you the answer to the question you asked, the question you needed answered, or in fact any question at all. It will confidently Dunning-Kruger its way to providing what looks like

        • That's not correct about the electronic calculator. There are inherent limitations on floating point calculations which a calculator cannot do anything about. Few people beyond numerical analysts are aware of them.
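For instance, a minimal Python illustration of the kind of limitation meant here (IEEE 754 doubles; a handheld calculator's internal decimal format differs in detail but has analogous limits):

# 0.1 and 0.2 have no exact binary representation, so the sum is slightly off.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False

# Adding a small number to a huge one can lose it entirely (limited precision).
print(1e16 + 1.0 - 1e16)  # 0.0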

          I don't quibble with your statements about LLM, I even agree with them, but the electronic calculator has exactly the same issues in the numerical domain, mutatis mutandis. The answers spat out by a handheld electronic calculator come with very weak guarantees of accuracy which simply don't surv

    • Re:Nope. (Score:4, Interesting)

      by Seclusion ( 411646 ) on Saturday April 06, 2024 @07:34PM (#64375736)

      Posted without Karma bonus, generated by ChatGPT 3.5.
      Me. "Why is AI better than a calculator?"

      "ChatGPT @ openai.com

      AI and calculators serve different purposes and have distinct capabilities. However, AI can often outperform calculators in various tasks due to its ability to learn, adapt, and handle complex scenarios. Here are some reasons why AI can be considered better than a calculator in certain contexts:
              Adaptability: AI systems can adapt to new data and situations, whereas calculators are typically designed for specific functions and can't generalize beyond their programmed capabilities. AI can learn from experience and improve over time, making it more versatile in handling diverse tasks.
              Complexity: AI can handle more complex problems that may involve multiple variables, uncertainties, or nonlinear relationships. Calculators are limited by their predefined algorithms and often struggle with tasks that go beyond basic arithmetic or simple equations.
              Natural Language Processing (NLP): AI can understand and process natural language inputs, allowing users to interact with it in a more intuitive manner. This capability enables AI to perform tasks like answering questions, summarizing information, or interpreting instructions without the need for specialized input formats.
              Pattern Recognition: AI excels at recognizing patterns and extracting insights from large datasets, which is valuable in fields like data analysis, predictive modeling, and image recognition. While calculators can perform calculations based on predefined formulas, AI can identify trends, anomalies, or correlations that may not be apparent through manual calculations alone.
              Decision Making: AI can make autonomous decisions based on predefined criteria or learned patterns, whereas calculators require human input for decision-making processes. AI-powered systems like recommendation engines, autonomous vehicles, or medical diagnosis tools can analyze complex data and provide recommendations or actions without constant human intervention.
              Learning and Improvement: AI can continuously learn from new data and feedback, allowing it to improve its performance over time. This adaptability enables AI to stay relevant and effective in dynamic environments, whereas calculators remain static in their capabilities unless updated or replaced with newer models.
      Overall, while calculators are highly efficient tools for performing specific mathematical calculations, AI offers a broader range of capabilities, including adaptability, complex problem-solving, natural language understanding, pattern recognition, decision-making, and continuous learning, making it better suited for addressing diverse and evolving challenges in various domains."

    • Posted without Karma bonus, generated by ChatGPT 3.5.
      Me. "Why is a calculator better than AI?"

      "ChatGPT @ openai.com

      While AI possesses numerous advantages over calculators, there are situations where calculators may be considered better suited for certain tasks:
      Speed and Precision: Calculators are designed specifically for numerical computations and can perform calculations quickly and accurately. In scenarios where speed and precision are paramount, such as during exams, financi

    • A calculator will give the correct response only if you give it the correct calculation. Having been given 12 or more (I did it by calculator so it must be correct) different answers in the classroom, I preferred (I quit teaching) to try and impart something like number sense and an appreciation of what the answer might be before reaching for a calculator.
    • Sure, current AI often gives bad answers, but it's easy enough to check if they run and produce correct results. There's no reason to claim that you need to know the correct answer in order to be able to see that the result doesn't compile or produces a wrong result.
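For example, a minimal sketch of that kind of check; the generated function here is a hypothetical stand-in for whatever the model actually produced:

def generated_reverse_words(s):
    # Pretend this body came back from the LLM.
    return " ".join(reversed(s.split()))

# A few known input/output pairs catch the obvious failures.
assert generated_reverse_words("hello world") == "world hello"
assert generated_reverse_words("one") == "one"
assert generated_reverse_words("") == ""
print("all checks passed")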

  • by paul_engr ( 6280294 ) on Saturday April 06, 2024 @02:45PM (#64375172)
Maybe if the calculator cost $100,000 and gave you wrong information like 2+2=applesauce most of the time.
Why would someone want to develop coding skills when we are fast approaching the era where systems are programmed pretty much the way the computer on the Starship Enterprise is programmed? Just say what you want and it does the rest. Nobody will need "coding sense."

    Already I rely on GPT (and there are better ones out there) to speed up my projects. Sure I can do Python/C/Java/Typescript/HTML/CSS (or whatever) by hand the same way I can still do long division by hand but this isn't a matter of aestheti

    • by gweihir ( 88907 ) on Saturday April 06, 2024 @03:06PM (#64375220)

      Naa, the simplistic things ChatGPT can do now are already pretty much the maximum possible. For example, if you want to, say, convert Latin-1 to UTF8 in Python, ChatGPT can save you a few minutes of looking it up. But that is about it.
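Roughly the kind of snippet in question (the file names are made up for the example):

# Re-encode a Latin-1 text file as UTF-8.
with open("input_latin1.txt", "r", encoding="latin-1") as src:
    text = src.read()
with open("output_utf8.txt", "w", encoding="utf-8") as dst:
    dst.write(text)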

So it doesn't relieve anyone from knowing what they are doing. Currently.

      Actually, that problem cannot be fixed for LLMs. It is a fundamental property of the approach used. The other thing is that current LLMs are not early prototypes. They are pretty much the end-result of a long and tedious optimization process and all the easy things are already in there. So no easy wins to be had anymore.

      • So no easy wins to be had anymore.

        They could try using petabytes of training data instead of terabytes.

        • by gweihir ( 88907 )

That would not do a lot; it would just somewhat reduce some problems. Increasing training data size is subject to diminishing returns and to problems like overfitting. It would also not be easy: decades or centuries of training time tend to have that issue. But it does not matter, since they cannot get that much genuine training data anyway. And even worse: because of model collapse and the amount of AI-generated content now out there, they cannot even get fresh terabytes anymore.

      • I totally agree. There has been no progress in LLMs. From GPT 2 to GPT 4 there have been no improvements. They give the exact same answers. Also, there will be no progress in the future. EVER!

        And, no one is using LLMs for code generation. Github copilot and others are just a myth. They don't exist in reality and therefore no one uses them and so they will never improve. EVER!

        Sora, Stable Diffusion and DALL-E are just plain lies.

        Slashdot community is wise. We come here to learn the truth about AI.

        • I totally agree. There has been no progress in LLMs. From GPT 2 to GPT 4 there have been no improvements. They give the exact same answers. Also, there will be no progress in the future. EVER!

          And, no one is using LLMs for code generation. Github copilot and others are just a myth. They don't exist in reality and therefore no one uses them and so they will never improve. EVER!

          Sora, Stable Diffusion and DALL-E are just plain lies.

          Slashdot community is wise. We come here to learn the truth about AI.

          Yeah, he is a bit of a one-note symphony on this topic.

          I've tried a few times to explain to him how I use it, as a useful, though (fully acknowledged) fallible tool for programming. Nothing sinks in through his anti-LLM barrier, lol

    • "it doesn't relieve anyone from knowing what they are doing"

I think this is the issue. Let's just start a fight right now. The same people that insist that you can't do "enterprise computing" without systemd (Ubuntu and RedHat come to mind)... Sure, there is some functionality in systemd that is attractive for larger installations, BUT it all comes with the humongous blobs and bugs of systemd, and an insane amount of overhead to get your python program to print "hello world". The cost benefit ratio of system
    • by cowdung ( 702933 )

      The guy that is improving ChatGPT probably has some coding sense.

      • The guy that is improving ChatGPT probably has some coding sense.

        Such comments are wasted on Slashdot. This is a site full of old farts that deny reality and live in a fantasy world where science and technology has frozen. They jerk off each other by modding up!

    • Just say what you want and it does the rest. Nobody will need "coding sense."

      The only problem with that is that English and other human languages are not precise enough to "just say what you want"

      Perhaps we could develop some sort of symbolic language to tell the AI precisely "what we want" ... oh wait.

      (That said, absolutely, we will keep getting higher and higher levels of abstraction, no doubt. And LLMs are currently a useful tool, as long as you know what you are doing and know how and where they are fallible.)

  • Pro-tip: Doing calculations is not math. It is just _applying_ some math.

  • 1. Train AI from the human-built internet.
    2. PROFIT.
    3. AI destroys the human-built internet.
    4. ???

  • Hey Slashdot... (Score:5, Insightful)

    by 26199 ( 577806 ) on Saturday April 06, 2024 @03:28PM (#64375266) Homepage

    ...how about we fight the enshittification of the English language a bit here?

    Writing code has very little to do with "computer science".

    You can call it "software engineering" if you like but there's no such discipline, I prefer "software development".

    Does AI help you develop software or prevent you from learning to develop software? Ehm ... not really?

    • Does AI help you develop software or prevent you from learning to develop software?

      Seems to be both. Regardless, very well put.

    • by Anonymous Coward

      Does Rust help you develop software or prevent you from learning to manage pointers correctly?

      • I don't get the fuss about Rust. Java was designed to solve those problems and more 30 years ago. We have two generations of programmers who are way beyond understanding and using pointers, because they simply never had to worry about such things in Java.

        There are way more interesting and advanced safety questions out there. Even Perl had the concept of tainted variables.

        Maybe Rust is a case of NIH syndrome?

Huh? Writing code is just as much computer science as knowing how to synthesize chemicals is chemistry. If someone tells you they're a chemist and you ask them "what happens if you put zinc and hydrochloric acid together" and they have no clue... that's going to be a bit ridiculous, isn't it?

Also, AI does help you write software; not only that, it also provides detailed instructions and a breakdown of the code.

    • by Coius ( 743781 )

I gotta admit, I read this entire statement with a Boston accent. It fits so well, and I laughed out loud after reading it that way. Thank you for the joy of causing me to read it in Bostonian. The way you wrote it just somehow triggered the artistic way to pronounce it.

Engineering is applying scientific theory to practical problems. Programming is that; it applies computer science to practical problems.

Engineering is not science, but it's not true that it has nothing to do with science.

I know that Dijkstra said that "software engineering" makes programming seem as simple as mechanical engineering or architecture. But Dijkstra underestimated the shit that engineers in other disciplines have to deal with, which is very much the same as in software development.

      Also,

    • Writing code has very little to do with "computer science".
      You can call it "software engineering" if you like but there's no such discipline, I prefer "software development".

      Is there truly no such thing as software engineering?

      Some early and critical code used to be mathematically proven before use, which was feasible because it was very small. Was that not engineered? Science, math, best practices, cross-checking...

      Certainly software engineering is not so universally taken seriously as other forms of engineering, but it seems as though it still exists, even though it also seems like most software development should not be described as such. But if a formal methodology and scie

Lots of engineering, if not perhaps most of it, deals with the limitations of shaping the physical world to our needs (imperfect materials, imperfect measurements, safety factors, etc.) that simply don't exist in mathematical objects like trees or graphs. Hence the similarity between "software engineering" and traditional engineering will necessarily be very limited.
  • Cut the crap! (Score:4, Insightful)

    by Anonymous Coward on Saturday April 06, 2024 @03:32PM (#64375280)

    Unless there is a floating point error, calculators do not hallucinate answers.

    • by PPH ( 736903 )

      calculators do not hallucinate answers

      Try some complex expressions given as PEMDAS puzzles. Even different versions of the same brand and model calculator give different answers on occasion.
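A minimal Python illustration of the ambiguity behind those puzzles, using the notorious 8÷2(2+2) form; Python has no implied multiplication, so each reading has to be spelled out:

print(8 / 2 * (2 + 2))    # 16.0 -- division and multiplication taken left to right
print(8 / (2 * (2 + 2)))  # 1.0  -- treating 2(2+2) as a single factor

# Calculator models differ in which reading they apply to the keyed-in form,
# which is why the same keystrokes can produce different answers.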

  • But we need 'experience' before we can architect anything, right? Or was that all a lie this whole time? Just a way to haze the younger people into bleeding their life for the company?

Is it possible to create a good system without ALSO creating a lot of bad systems first?

    Or is it better to think of this like 'designing for manufacture' versus blacksmiths? Where there are different ways of making the same result (at different scales, with very different tools).

  • Or maybe I've just been watching too much British TV. I read that as cack'em (kill them).

  • People keep talking about how wonderful AI generated code is, and how it's revolutionary, but I still haven't seen any real life examples of working AI-generated code, or any AI-generated code in a language that's not Python.

    Can anyone point me to some examples of AI generated code actually existing or doing something?

    • Deepmind's AlphaCode can write short programmes in C++.

But it's for toy problems from code competitions. Nothing actually useful.

The problem is that ever since Charles Babbage sought funding for his Analytical Engine, people have asked if the machine will correct garbage input. There are FORTRAN compilers that tolerate syntax errors, but that's basically defining an informal programming language. With AI, people began to dream of using computers as depicted in Star Trek: using natural language voice input

  • by cowdung ( 702933 ) on Saturday April 06, 2024 @05:35PM (#64375562)

As someone who taught programming for 15 years, I find LLMs a bit problematic, since a common strategy was to give our students small problems to solve, like:

- write a method that prints hello 10 times
    - write a method that finds the even numbers in an array
    - etc..

These sorts of little exercises helped them understand things like the for loop and other basic concepts. The problem is that today you can feed that into ChatGPT and it would probably spit out the solution. Now, you could say that the student shouldn't do that. And cheaters never learn. But cheating has just become much easier, and the temptation is great.
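For example, the kind of solution it will typically spit out for those first two exercises (a sketch, not actual ChatGPT output):

# Print "hello" 10 times.
def print_hello():
    for _ in range(10):
        print("hello")

# Return the even numbers in an array.
def find_evens(numbers):
    return [n for n in numbers if n % 2 == 0]

print_hello()
print(find_evens([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]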

The same goes for problems where you complete the missing code, fix the bug in the code, or do a small project like writing a calculator like the one in Windows, or Minesweeper.

    Teachers will need new teaching methods, since ChatGPT can lift the weight of the simple problems. But if you can't solve simple problems, how do you develop the skills to solve the harder ones?

    • ... since ChatGPT can lift the weight ...

The problem with lexical guessing (LLMs) isn't GPTs citing and re-using their own garbage: it's that every (accurate) answer will be fed into a neural network, so an LLM will end up knowing the answers to all the training problems. So, a student can cheat their way through the first 2 or 3 semesters of every subject. That's a bigger problem because only the core competency (e.g. software development, accounting, sonograph analysis) extends through 4 semesters. Out-of-classroom assessment will not be possible because

      • An LLM is not a file system. It will not have perfect recall of all the garbage it was fed.

        Any answers you feed it will only adjust the weight of the coefficients of letter combinations. It has no way of assessing if the answer it produces is correct or if it fits the problem, it will only produce something that is (probably) syntactically correct. In the case of a transformer, it will also produce something that matches the context of the prompt, but that is about style and tone, not about semantics.

    • Teachers will need new teaching methods, since ChatGPT can lift the weight of the simple problems. But if you can't solve simple problems, how do you develop the skills to solve the harder ones?

      They could do what one of my professors did when he got two identical answers to a test question. Called us in separately and asked us how we’d approach solving a similar problem. When I got it right it was clear I was not the one that copied an answer.

      Have them explain the logic behind their answer. Sure a student could ask Chat-GPT the same question, but then again so could the professor.

I think teachers can continue using the same methods. The onus is on the students; if they don't learn, surely they'll fail the exam.

It is the evaluation of assignments and projects that needs to change. You can't prevent them from using all available resources, and you shouldn't, because that's what they will be doing in real life. Oral and in-person grilling on the nitty-gritty should be the basis. Sure, you're allowed to copy code from wherever you so desire, but then you should be able to explain it in d

    • I hate for-loops. Recursion is much more elegant. Not only is it simpler, it is also safer, and more powerful, and can compile to simpler machine code.

      For example:

FOR i := 1 TO 10 STEP 1 DO
BEGIN
  PRINT "hello"
END

      mapM_ print $ take 10 $ repeat "hello"

      Ok, not a good example. Maybe the point is better made by comparing "for i in {1..10}; do echo hello; done" with "yes hello | head -n10".

      FOR i IN a DO
      BEGIN
      GOTO AND(i,1)
      0 PRINT i
      1
      END

      mapM_ print $ [ i | i <- a, (mod i 2) == 0 ]

      Also not a good example.

      My point

      • >But recursion is fundamental to good code,

        If you think that having a data dependent stack depth is ok.
        The rest of us have to write secure and reliable code.
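A concrete Python illustration of the data-dependent stack depth concern (Python does no tail-call elimination, so the recursive version falls over on large inputs while the loop does not):

import sys

def sum_recursive(xs):
    # Call-stack depth grows with len(xs).
    if not xs:
        return 0
    return xs[0] + sum_recursive(xs[1:])

def sum_iterative(xs):
    total = 0
    for x in xs:
        total += x
    return total

data = list(range(100_000))
print(sum_iterative(data))               # 4999950000
try:
    print(sum_recursive(data))
except RecursionError:
    print("recursion limit hit at depth", sys.getrecursionlimit())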

        • If you think that recursion always requires a stack, you don't know how to program.

          • If all you do is tail recursion that gets optimized away, why bother?

Why bother avoiding it? Did you not read what I wrote, or do you just not understand it?

              You said something about having to write secure and reliable code. Recursion provides the guarantees that for-loops cannot. It is also easier to read and easier to write. If you don't understand recursion, you are not able to write code that is secure or reliable.

              Besides, tail calls are not the only form of recursion that can be optimised. (Also, it is not "optimised away". It is possible to write recursive code

              • >If you think that stack size depends on the algorithm

                I said stack size depends on the data. Data dependent timing and memory use are gifts to the side channel people.
                Not always, but you know that.

                >Besides, tail calls are not the only form of recursion that can be optimised.
Yeah yeah, I did the algorithms course 35 years ago on recursion optimization in functional languages (pre-Haskell; we had Standard ML and similar stuff back then).
                You are arguing for correctness of code over side channel cleanlin

                • I said stack size depends on the data.

The data doesn't care if you use recursion or for-loops.

                  Data-runtime independence is not achieved with a recursion everywhere approach.

                  It can be. Usually you don't want that, you want things to be fast and efficient, which means lazy evaluation and dynamic programming. Side channel attacks are only relevant for cryptanalysis.

                  If you want deterministic runtime (outside of real-time applications), matrix multiplication is the way to go. Which can be defined recursively. A time-independent case evaluation is really just a 2x2 matrix, whether you make it explicit or not.

                  Is your compiler drunk?

                  You really didn

                  • Do you try to teach your grandmother to suck eggs also?

                    >Have you ever coded anything in assembly?
                    I design the silicon on which the machine code runs.

                    You make a lot of assumptions.

                    • You make a lot of assumptions.

                      Not really. You make a lot of statements that reveal how little you know about programming.

                      So I address your misconceptions. I don't need to make assumptions for that.

                      Do you try to teach your grandmother to suck eggs also?

                      I don't know what that means. If it is supposed to be an insult, you couldn't be embarrassing yourself more.

  • > "Johnson is caught by surprise when asked whether there will even be a need for Blockly at all in the AI-driven world as described — and the Google VP concedes there may not be."

Hahaha. Not many people imagine that AI will replace what *they* do, only what *other people* do.

  • ... understanding and assessing ...

That comes from learning what breaks the machine: learning when human intellect ignores the rules controlling a machine. A calculator didn't eliminate the need for recognizing, filtering and organizing the elements of a problem. Now the student doesn't have to make the machine fail; a (trained) LLM can regurgitate the answer, with no filtering, organizing, or thinking involved.

  • It leaves the student as clueless as they started. AI is completely void of the process of learning that humankind has found essential. It has as an implicit goal separating humankind from understanding.
We had 4GLs and other things that made code easy to create and impossible to maintain. We have decades of code written that is not readable. Suddenly an AI that learns on our code base will somehow produce code that is easy to read and maintain? Doubt it.
    • by ghoul ( 157158 )
To train AI to code, you have to give it examples of code to read and train on. You have to also tell it what that code does. When selecting such training data, AI engineers choose code that is readable to them. AI models are biased in that they are trained on readable code rather than all the code out there. Ergo, the code they generate will be more readable on average than all the code out there.
    • What you want is something that reads code and translates it into readable code. That requires a compiler, not an AI.

      Although, maybe AI can help with writing a front-end for that. Not GPT though, wrong type of AI.

    • AI seems to be pretty good at translating things from one thing to another. Unfortunately, if it fails subtly the person dependent on it will not know unless there is already a test in place and the test is sufficiently well-developed. You certainly can't trust the LLM to create the test. So, just like now, avoiding subtle failures depends on someone who actually understands the problem creating tests that determine whether the task was performed correctly.

      On the other hand, if the only thing we have to do

  • Lots of people here comment about AI returning wrong answers. While true, the quotes did say:

    "One can imagine a future where these generative coding systems become so reliable, so capable, and so secure that the amount of time doing low-level coding really decreases for both students and for professionals."

    Either those people didn't read this, couldn't imagine this, or they just ignored it, but it's annoying to just read this again and again.

    At least in terms of AI for programming, I'm sure that people usin

  • Despite the hype train about OpenAI's large language model (and it is large, really impressive), an LLM is not in itself an AI. It is a model. The AI is the thing that built that model. The model is just a model, in this case a purely empirical model of language.

    Yes, Deepmind's first LSTM network that was trained on movie subtitles was a surprisingly usable chatbot. But it would be a stretch to call a chatbot an AI. ELIZA is a chatbot, and it has a memory it can refer to. Does that make it an AI? It

  • ... a useful servant, a terrible master.
  • Right now AI is more like a compiler which turns natural language into machine code.

    Soon, once the excitement dies down, people will realize that natural language is a bad way to specify the precise behavior of very large systems.

    This will (if we are smart) lead to the creation of very high level languages which will still be a type of code with precise meaning.

If we are dumb, we will end up with mountains of inefficient high level code which will make each web page into gigabytes of JavaScript and one tera

  • I'm old enough to have seen what calculators, especially scientific calculators, did to education. They didn't make you smart. You still had to understand the underlying theory. The device just made the plug-and-chug part easier. They saved time. I'm also old enough to have been able to use early word processors only to have teachers refuse to accept a paper if it was printed on a dot-matrix printer. The content was irrelevant to these people. Only appearance mattered. A valuable life lesson but never tell

  • The full impact of calculators on the U.S. education system was amply documented in this short skit on Saturday Night Live [youtube.com]...
