Code.org Launches AI Teaching Assistant For Grades 6-10 In Stanford Partnership (illinois.edu)
theodp writes: From a Wednesday press release: "Code.org, in collaboration with The Piech Lab at Stanford University, launched today its AI Teaching Assistant, ushering in a new era of computer science instruction to support teachers in preparing students with the foundational skills necessary to work, live and thrive in an AI world. [...] Launching as a part of Code.org's leading Computer Science Discoveries (CSD) curriculum [for grades 6-10], the tool is designed to bolster teacher confidence in teaching computer science." EdWeek reports that in a limited pilot project involving twenty teachers nationwide, the AI computer science grading tool cut one middle school teacher's grading time in half. Code.org is now inviting an additional 300 teachers to give the tool a try. "Many teachers who lead computer science courses," EdWeek notes, "don't have a degree in the subject -- or even much training on how to teach it -- and might be the only educator in their school leading a computer science course."
Stanford's Piech Lab is headed by assistant professor of CS Chris Piech, who also runs the wildly successful free Code in Place MOOC (30,000+ learners and counting), which teaches fundamentals from Stanford's flagship introductory Python course. Prior to coming up with the new AI teaching assistant, which automatically assesses Code.org students' JavaScript game code, Piech worked on a Stanford research team that partnered with Code.org nearly a decade ago to create algorithms that generate hints for K-12 students trying to solve Code.org's Hour of Code block-based programming puzzles (2015 paper [PDF]). And several years ago, Piech's lab again teamed with Code.org on Play-to-Grade, which sought to "provide scalable automated grading on all types of coding assignments" by analyzing the game play of Code.org students' projects. Play-to-Grade, a 2022 paper (PDF) noted, was "supported in part by a Stanford Hoffman-Yee Human Centered AI grant" for AI tutors to help prepare students for the 21st-century workforce. That project also aimed to develop a "Super Teaching Assistant" for Piech's Code in Place MOOC. LinkedIn co-founder Reid Hoffman, who was present for the presentation of the 'AI Tutors' work he and his wife funded, is a Code.org Diamond Supporter ($1+ million).

In other AI grading news, Texas will use computers to grade written answers on this year's STAAR tests. The state will save more than $15 million by using technology similar to ChatGPT to give initial scores, reducing the number of human graders needed.
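For illustration, here's a minimal sketch of what rubric-based LLM grading of a student's JavaScript game project could look like. The rubric items, prompt wording, and call_llm() stub below are hypothetical stand-ins, not Code.org's actual tool or API.

# Hypothetical grading sketch (Python); call_llm() is a placeholder for
# whatever model endpoint a production tool would actually use.
import json

RUBRIC = [
    "Uses at least one sprite with a custom behavior.",
    "Tracks the score in a variable and displays it on screen.",
    "Ends the game (win or lose) under a clearly defined condition.",
]

def build_prompt(student_js: str) -> str:
    criteria = "\n".join(f"{i + 1}. {item}" for i, item in enumerate(RUBRIC))
    return (
        "You are a middle-school CS teaching assistant. For each rubric item, "
        "say whether it is met and give one student-friendly sentence of feedback.\n"
        f"Rubric:\n{criteria}\n\nStudent JavaScript:\n{student_js}\n\n"
        'Reply as JSON: [{"item": 1, "met": true, "feedback": "..."}]'
    )

def call_llm(prompt: str) -> str:
    # Swap in a real model call (plus error handling and retries) here.
    raise NotImplementedError

def grade(student_js: str) -> list[dict]:
    # One {item, met, feedback} dict per rubric entry.
    return json.loads(call_llm(build_prompt(student_js)))

A teacher-facing assistant would presumably keep the human in the loop, surfacing per-item feedback for review rather than assigning final grades, which fits the "cut grading time in half" framing above.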
Experimental becomes production (Score:2)
Whatever happened to making sure an experiment is a success before rolling it out to the public? A product appearing this quickly has no chance of having been thoroughly tested.
Re:Experimental becomes production (Score:4, Insightful)
"in a limited pilot project involving twenty teachers nationwide"
It isn't production until everyone is allowed to use it. They started with 20 people. And that was probably after internal testing. They are making sure the experiments are a success before moving on. Next is 300 teachers, and that's nothing. 6 per state?
Do you have a more specific question or concern?
Re: (Score:3)
Is it explaining them, or just copying? Pointing to Wikipedia gives really good answers as well.
Re: (Score:2)
Wikipedia does make a little stab at this by sometimes starting with a summary and then explaining things in more detail. But that's still a far cry from being individually tailored to your needs the way a back-and-forth discussion is.
Re: (Score:2)
Static text (wikipedia or textbook) can't address specific followup questions
Neither can ChatGPT. Not reliably, anyway.
I use Wikipedia. But if I find parts of it hard to understand, I'll ask ChatGPT.
I'm so sorry...
Re: (Score:2)
Those mental gymnastics could win an Olympic gold medal.
Re: (Score:2)
ChatGPT does not know the answers. It is essentially just looking things up on the internet for you and then summarizing them in different words. ChatGPT just does chat. None of this is "intelligence". It did not get training data from CS professors; it got it from the internet, which means the low-quality stuff you find on Stack Overflow, maybe, and dubious answers to students asking others to solve their coding assignments.
What ChatGPT, or other LLM models, current *or* advanced, can give is that it lacks real experi
Re: (Score:2)
Simply put, they just don't care, but they spend millions of dollars on PR & marketing telling us that they do. I bet they spend more on that than on curriculum development.
Does not matter much (Score:2)
Their whole mission is bogus. Making it more bogus by using AI does not change much.
Might as well teach them punch cards (Score:2)
Re: (Score:2)
If you think AI is going to replace developers, you're in for quite a surprise.
Just Imagine (Score:2)
Every education bureaucrat in the country hungrily kneading their hands as they make plans to fire every teacher and replace them with a screen.
Hey wait, I think we've seen that somewhere before, haven't we?
https://www.youtube.com/watch?... [youtube.com]
Ah ok (Score:2)