Education AI

Code.org Launches AI Teaching Assistant For Grades 6-10 In Stanford Partnership (illinois.edu)

theodp writes: From a Wednesday press release: "Code.org, in collaboration with The Piech Lab at Stanford University, launched today its AI Teaching Assistant, ushering in a new era of computer science instruction to support teachers in preparing students with the foundational skills necessary to work, live and thrive in an AI world. [...] Launching as a part of Code.org's leading Computer Science Discoveries (CSD) curriculum [for grades 6-10], the tool is designed to bolster teacher confidence in teaching computer science." EdWeek reports that in a limited pilot project involving twenty teachers nationwide, the AI computer science grading tool cut one middle school teacher's grading time in half. Code.org is now inviting an additional 300 teachers to give the tool a try. "Many teachers who lead computer science courses," EdWeek notes, "don't have a degree in the subject -- or even much training on how to teach it -- and might be the only educator in their school leading a computer science course."

Stanford's Piech Lab is headed by assistant professor of CS Chris Piech, who also runs the wildly successful free Code in Place MOOC (30,000+ learners and counting), which teaches fundamentals from Stanford's flagship introductory Python course. Prior to coming up with the new AI teaching assistant, which automatically assesses Code.org students' JavaScript game code, Piech worked on a Stanford research team that partnered with Code.org nearly a decade ago to create algorithms that generate hints for K-12 students trying to solve Code.org's Hour of Code block-based programming puzzles (2015 paper [PDF]). And several years ago, Piech's lab again teamed with Code.org on Play-to-Grade, which sought to "provide scalable automated grading on all types of coding assignments" by analyzing the game play of Code.org students' projects. Play-to-Grade, a 2022 paper (PDF) noted, was "supported in part by a Stanford Hoffman-Yee Human Centered AI grant" for AI tutors to help prepare students for the 21st century workforce. That project also aimed to develop a "Super Teaching Assistant" for Piech's Code in Place MOOC. LinkedIn co-founder Reid Hoffman, who was present for the presentation of the 'AI Tutors' work he and his wife funded, is a Code.org Diamond Supporter ($1+ million).
In other AI grading news, Texas will use computers to grade written answers on this year's STAAR tests. The state will save more than $15 million by using technology similar to ChatGPT to give initial scores, reducing the number of human graders needed.


  • Whatever happened to making sure the experiments are a success before rolling it out to the public? A product appearing this quickly has no chance of being thoroughly tested.

    • by Bite The Pillow ( 3087109 ) on Thursday April 11, 2024 @06:58PM (#64387874)

      "in a limited pilot project involving twenty teachers nationwide"

      It isn't production until everyone is allowed to use it. They started with 20 people. And that was probably after internal testing. They are making sure the experiments are a success before moving on. Next is 300 teachers, and that's nothing. 6 per state?

      Do you have a more specific question or concern?

    • The footnote on grading essays is far more questionable than using AI to help teach CS, if you ask me. CS is relatively objective, and the functionality of code is testable in a way that logic errors in an essay are not. In my experience, GPT-4 is very, very good at explaining concepts in CS and AI.
      • Is it explaining them, or just copying? Pointing to Wikipedia gives really good answers as well.

        • Static text (Wikipedia or a textbook) can't address specific follow-up questions the way a 1-on-1 tutor or ChatGPT can. You can say to ChatGPT, "...based on my understanding it seems like this should be that, but it isn't, why not?" and it can help debug your assumptions or logic.

          Wikipedia does make a little stab at this by sometimes starting with a summary and then explaining things in more detail. But that's still a far cry from being individually tailored to your needs the way a back-and-forth discussion is.

          • by narcc ( 412956 )

            Static text (wikipedia or textbook) can't address specific followup questions

            Neither can ChatGPT. Not reliably, anyway.

            I use wikipedia. But if I find parts of it hard to understand, I'll ask ChatGPT.

            I'm so sorry...

            • The fact that ChatGPT gives bad answers is a plus when using it to debug your knowledge. I often bring it down from GPT-4 to GPT-3 when I'm using it to help with my reasoning, because when it gives me something that doesn't sound right, if I can then reason out why it doesn't sound right, it means I've reached the understanding I'm looking for. It's not about using it as a source. It's a better version of explaining something to a rubber duck.
          • Except that ChatGPT does not know the answers. ChatGPT is essentially just searching the internet for you and then summarizing it in different words. ChatGPT just does CHAT. None of this is "intelligence". It did not get training data from CS professors; it got it from the internet, and that means the low-quality stuff you find on Stack Overflow, or dubious answers to students asking someone to solve their coding assignment.

            What ChatGPT, or other LLM models, current *or* advanced, cannot give is real experience

    • From the outset, Code.org established itself as an evidence-free organisation. They just want to expand & get people into their programme, regardless of its quality or utility. They also have a marketing department that is divorced from the evidence-informed realities of learning & teaching.

      Simply put, they just don't care but they spend $millions in PR & marketing telling us that they do. I bet they spend more on this than on curriculum development.
  • Their whole mission is bogus. Making it more bogus by using AI does not change much.

  • It'll be old news by the time they get out of school.
  • Every education bureaucrat in the country is hungrily rubbing their hands as they make plans to fire every teacher and replace them with a screen.

    Hey wait, I think we've seen that somewhere before, haven't we?

    https://www.youtube.com/watch?... [youtube.com]

  • So fewer people will know less about kids' abilities, and less in general about the kids in their classes. And as a bonus, kids will know less about it all than ever before.
