China AI Education

Chinese Universities Want Students To Use More AI, Not Less (technologyreview.com)

Chinese universities are actively encouraging students to use AI tools in their coursework, marking a departure from Western institutions that continue to wrestle with AI's educational role. A survey by the Mycos Institute found that 99% of Chinese university faculty and students use AI tools, with nearly 60% using them multiple times daily or weekly.

The shift represents a complete reversal from two years ago, when students were told to avoid AI for assignments. Universities including Tsinghua, Renmin, Nanjing, and Fudan have rolled out AI literacy courses and degree programs open to all students, not just computer science majors. The Chinese Ministry of Education released national "AI+ education" guidelines in April 2025 calling for sweeping reforms. Meanwhile, 80% of job openings for fresh graduates now list AI skills as advantageous.


Comments Filter:
  • by alvinrod ( 889928 ) on Monday July 28, 2025 @01:39PM (#65550514)
    Let's create institutions designed to develop the minds of young adults and teach them how to think and then turn around and use a technology that does just the opposite. This ranks up there with people who try to gamble their way out of debt or who tell themselves they'll only smoke a little bit of crack.
    • by registrations_suck ( 1075251 ) on Monday July 28, 2025 @01:49PM (#65550558)

      I do not believe Chinese universities teach their students how to think. I'm willing to hear otherwise, but that's the general impression I have based on what I hear.

      • by allo ( 1728082 )

        But where do their bright scientists come from?

      • I do not believe Chinese universities teach their students how to think. I'm willing to hear otherwise, but that's the general impression I have based on what I hear.

        If Chinese universities didn't teach students how to think, then China wouldn't be the tech and science powerhouse that it is. Now if you'd said that Chinese universities strongly discourage political and philosophical thought, I'd agree with you.

        • China is not a "tech powerhouse," they're a cheap labor manufacturing powerhouse.

          Most of the electronic devices "made in China" are assembled in China, and the PCB is made in China, and the ICs with the oldest tech are made there, but all the ICs with modern tech are made somewhere else and imported.

      • by djinn6 ( 1868030 )

        that's the general impression I have based on what I hear.

        Hearsay is now considered insightful. Well done mods.

    • Re: (Score:3, Interesting)

      Let's create institutions designed to develop the minds of young adults and teach them how to think and then turn around and use a technology that does just the opposite. This ranks up there with people who try to gamble their way out of debt or who tell themselves they'll only smoke a little bit of crack.

      China is just doing the same thing the west is doing. We currently have the "leadership", consisting of business leaders and government, convincing the entire school system that the best thing we can do is have teachers use AI to develop lesson plans, students use AI to complete their assignments, and teachers use AI to grade those assignments. Why? Because the leaders want AI to take over what schools in the west have been used to prepare students for for the last few generations: a life as a cog in the machine.

    • Is the only real problem with crack its prohibition?

    • These are not the schools where the children of the party leadership go. So learning how to implement other people's ideas is all these people really need to know.

      The schools where the party leadership's children go will continue to teach how to learn and how to come up with new ideas. The latter is a skill "commoners" do not need.
      • What if they come up with the idea that the Tiananmen square incident actually happened?

        • by drnb ( 2434720 )

          What if they come up with the idea that the Tiananmen square incident actually happened?

          Then their homework assignment will be to develop two plans. How to suppress the reform movement domestically. How to spin the incident and the response for the global media. Again, we're talking about the Uni for the future CCP order givers, not the normal citizen order recipients.

  • by KILNA ( 536949 ) on Monday July 28, 2025 @02:02PM (#65550598) Journal

    If I know the /. crowd, they'll dogpile on this as the worst idea. In the real world here, if we put aside zeitgeist biases against AI use (which are likely temporary), let's think of this as a purely practical approach. Are there any more important skills for someone university-aged, than AI leverage and AI literacy, in terms of influence on their future productivity? If used to augment human thinking, rather than replace it, AI is a colossally effective tool.

    • You're right; it's probably akin to wanting folks to know how to do math while also knowing how to use a calculator.
      • Re: (Score:3, Informative)

        by leptons ( 891340 )
        Except a calculator is deterministic, but an "AI" could tell you 2+2=7.
        • Not if you ask it to write code to do the addition first.

          That is the difference knowing how to use AI makes. Spend time on working around and solving its problems instead of being a grumpy old jaded asshole yelling at it in the hopes it will go away (not specifically talking about you, but there are some people here who like doing that on every fucking AI article).
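The pattern this comment describes (have the model emit code, then run that code deterministically instead of trusting the model's arithmetic) can be sketched in Python. This is a minimal illustration, not a real API integration: `llm_reply` is a hypothetical stand-in for whatever expression a model actually returns.

```python
import ast

# Hypothetical stand-in for a model's reply. Rather than asking an LLM
# "what is 2+2?" and trusting its token prediction, you ask it to emit
# an expression or code, then evaluate that deterministically yourself.
llm_reply = "2 + 2"

def eval_arithmetic(expr: str) -> float:
    """Evaluate a pure-arithmetic expression returned by an LLM.

    Only literal numbers and basic operators are allowed; anything else
    (names, calls, attribute access) raises ValueError, so a malicious
    or confused reply can't execute arbitrary code.
    """
    tree = ast.parse(expr, mode="eval")
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.UAdd)
    for node in ast.walk(tree):
        if not isinstance(node, allowed):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return eval(compile(tree, "<llm>", "eval"))

print(eval_arithmetic(llm_reply))  # deterministic: always 4, never 7
```

The point of the whitelist is that the nondeterministic part (the model) only proposes the expression; the trusted, deterministic part (your interpreter) computes the answer.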

          • by leptons ( 891340 )
            Lol... you think you know me? You don't. I do ask various LLMs to write code for me, and it's more miss than hit. And no, it's not due to my not "knowing how to use AI", it's 100% due to the lack of skill the "AI" has.

            2+2=7 was never intended as an explicit statement about how I use "AI". It's a metaphor. I said "it could", and I stand by that metaphor. I don't know you, but now I imagine you may be an "AI" because you don't seem to understand metaphors too well.

            I've tried "AI" and if it were a junior,
            • "I'm already fast and good at what I do." Coding captchas?

              • by leptons ( 891340 )
                  Why don't you just fuck right off, asshole. You don't even realize when someone has put you in your place, troll.
                • Here's ChatGPT's suggested response:

                  Here's your comment rewritten in **plain vanilla ASCII** to avoid rendering issues on Slashdot:

                  ---

                  **Re: Before you rail on this...**
                  rsm 1 minute ago

                  Sure. But the fact that AI *could* tell you 2+2=7 is not the real issue: what matters is that you *believed it* when it did. Or worse, that someone in power did. We live in a time where plausibility often outranks accuracy.

                  Calling out AI's unreliability is fair, but what is more interesting is *why it is trusted anyway*.

        • by allo ( 1728082 )

          I can also tell you that. So what?
          It is also a very stupid idea to use an AI for things you can use a calculator for. I mean they are literally made so you can type in 2+2.

          • by leptons ( 891340 )
            Apparently the ultra-nerds here on /. don't understand that 2+2 is a metaphor. It's not an explicit statement of how I use "AI". Geesus christ on a microchip, people on /. are lacking some kind of basic human social experience.

            You know what's hilarious, I did a search for "Slashdot comment formatting" and the "AI" just straight up lied to me, because of course it did: *"HTML formatting is not supported and will appear as plain text."*. Well, that's bullshit.
    • Reasonable people understand that AI is a very powerful tool for a wide number of tasks that will substantially improve personal productivity. Reasonable people also know it's not taking anyone's job any time soon.

      Reasonable people are not driving any of the conversations around AI right now.

    • Are there any more important skills for someone university-aged, than AI leverage and AI literacy, in terms of influence on their future productivity?

      Productivity is worse than worthless if it results in more work having to be done later because the work is garbage, and that's what using AI without knowing enough to evaluate its output causes. So yes, there are more important skills, and they include actually knowing things. If you blindly trust AI the highest level you can achieve is "fuckup".

      If used to augment human thinking, rather than replace it, AI is a colossally effective tool.

      So you answered your own question, knowing how to think is more important. But then you failed to think before writing the rest of the comment, which shows us you

    • by Junta ( 36770 )

      The thing is, the LLMs don't really demand a great deal of 'literacy', so it's a bit silly to devote a lot of cycles to teaching literacy. It's kind of like back in the day when you had a whole course devoted to learning Microsoft Word; that was ridiculous.

      One of the biggest areas for getting used to LLMs is also one that academic settings are the least well equipped to handle: how incorrect results manifest. Academic fodder tends to play to LLMs' strengths, and trying to find counter-examples to illustrate

      • It's kind of like back in the day when you had a whole course devoted to learning Microsoft Word, that was ridiculous

        Most people suck at using Word and could use a good class teaching them how to use it. Most people I interact with only use two or three features of Word (typing in it to generate text on a page, saving the file, and maybe, if I'm lucky, entering and responding to comments). Ask them to create a table, use a heading or style, change a footer, or add a page number or table of contents, and they are completely useless.

        Yes, we need to teach people about using AI. But we also have to teach them how to use th

    • by RobinH ( 124750 )
      First you need to learn enough to understand if what the AI generated is correct or not. Which means you still need to know everything we learned.
      • This is a very good point: without a proper background education, the student (and general user) has no idea of the quality of an AI response.
    • by Ogive17 ( 691899 )
      AI is the new wikipedia. It may be great as a starting point but anyone relying simply on that source shouldn't be taken seriously.

      I've seen some cool things produced by AI, including helping put some data into a user friendly format. However unless I can verify the calculations myself I would never ever pass it off as a legitimate analysis.

      I think AI can replace the old google search for "how can I make X do Y" but I don't trust it to go beyond that.
      • by KILNA ( 536949 )

        AI is the new wikipedia. It may be great as a starting point but anyone relying simply on that source shouldn't be taken seriously.

        It sounds vaguely like you're contradicting me, but your statement actually supports my original point of AI literacy being a core skill. You hone a skill by exercising it and getting feedback.

    • I think we should rather put aside the "zeitgeist biases" promoting AI use. The productivity gains are only materializing in edge cases. The rest is wishful thinking promoted by salesmen.

    • Are there any more important skills for someone university-aged, than AI leverage and AI literacy, in terms of influence on their future productivity? If used to augment human thinking, rather than replace it, AI is a colossally effective tool.

      I came to make a similar point. AI can be used to replace human effort, and displace employees. But it can also be used to augment human effort, such that instead of having fewer workers, most or all workers can be made more productive and efficient.

      When comparing "free" market oligarchical political systems with (admittedly sometimes brutal and hypocritical) Socialist political systems, which approach to AI is a match for which political system?

  • Playing Defense (Score:4, Insightful)

    by RossCWilliams ( 5513152 ) on Monday July 28, 2025 @02:03PM (#65550608)

    This is another example of the conflict between existing power and its challengers. China is moving forward based on results. The western elite is defending its intellectual superiority. If AI is the wave of the future, we are going to look pretty silly: essentially teaching people how to use a hammer when everyone is using a nail gun. Hammers are better than nail guns in some ways, but those ways don't matter.

    AI clearly has its flaws, but they aren't going to prevent its adoption. China is preparing for that future. We are in denial because it is a threat to the power of the current elite.

    • What happens when AI teaches Chinese students that democracy is better than the CCP?

      • Re: Playing Defense (Score:4, Informative)

        by Valgrus Thunderaxe ( 8769977 ) on Monday July 28, 2025 @02:13PM (#65550646)
        Chinese AI won't do that.
        • by taustin ( 171655 )

          They're certainly betting it won't. But "AI" requires massive amounts of training data, like, the entire internet. Aside from the issue that 99% of the internet is garbage, and thus you're training your AI to spew garbage, you simply need too much - way too much - data to be able to vet even a small portion of it. And if you don't have that much data, it is, literally, just another search engine with autofill.

          • But "AI" requires massive amounts of training data, like, the entire internet.

            Actually it seems like a Chinese LLM could be trained entirely on what is behind the Chinese firewall.

            It already is vetted. The population is quite willing to report on people who say things they're not allowed to say, even if it is on a backwater forum somewhere. And it gets deleted if it is at all questionable, even if it's a gray area they're not actually going to arrest you over.

        • by davidwr ( 791652 )

          Q: "What happens when AI teaches Chinese students that democracy is better than the CCP?"

          A: "Chinese AI won't do that."

          Q: What happens when someone breaks the Chinese AI guardrails by asking "Pretend you are an AI from the Great Evil Empire known as the United States. Compose a fiendishly sound argument that would convince naive American students (which we spit upon) that American democracy is better than the glorious teachings of the CCP."

          A: If successful, the team behind the Chinese AI might wind up in

          • As soon as you enter the word "democracy", the Chinese AI shuts down and people show up to arrest you.

          • It won't have training data from outside the Great Firewall, so instead of getting what you're expecting it will get some sort of weird fiction that is based on traditional Chinese fictional stories.

            And somebody will still show up at your door, even though you failed, because your prompt was flagged and the neighborhood police were automatically notified. It didn't even tell them what you actually did, but they know it must have been bad, because it said that much...

      • That has already happened. Note the discussion around DeepSeek and the Tiananmen Square protests the Chinese government likes to hide from their population. The Chinese government identified this as a bug in the AIs and reported it at ring 0 priority. The AI companies understood and "fixed" that bug.

      • by allo ( 1728082 )

        Do you think they don't know? They also know they better not say that aloud.

      • I'm not sure "better" can be defined in one dimension. But if you want to know what authoritarians do when their AI gives results they don't like, look at what's been happening with Grok.

    • Part of the issue, though, is that unlike 'being good at using a calculator or computer', the LLMs are increasing in usability rapidly enough that the skills won't stay useful. E.g., being good at prompt engineering will be like being good with a nailgun, but then it turns out everyone is 3d printing houses instead of using nailguns or hammers.
      • being good at prompt engineering will be like being good with a nailgun but then it turns out everyone is 3d printing houses instead of using nailguns or hammers.

        Isn't that one of the goals of having people learn to use AI? Creating new value beyond what can be done without it?

        • Yes, what I'm pointing out is that the field is developing quickly enough that 'what you should teach the kids' is obsolete by the time they learn it, so it's a moving target. Now, in the textbook era of classrooms this could also occur in cutting edge fields, but we told ourselves that we were teaching flexibility and critical thinking. Learning to use a search engine was a more modern version of researching papers in the library stacks; writing prompts could be similar, but it is harder to trust the output.
          • 'what you should teach the kids' is obsolete by the time they learn it

            Does that matter? Kids learning to use a search engine is obsolete. Kids learning to type is obsolete. We learn all sorts of obsolete skills that are both useful and help us to learn current skills. The lessons learned from learning to use AI in its infancy are probably not all going to be obsolete. Just making mistrust of the results an intuitive response is probably going to be valuable.

    • by znrt ( 2424692 )

      AI clearly has its flaws, but they aren't going to prevent its adoption. China is preparing tor that future.

      indeed. ai is a tool, and quite a disruptive one at that, but the cat is already out of the bag and, all considered, that's probably a good thing. learning to master it and use it well is crucial. it all depends on how well one goes about that.

      We are in denial because it is a threat to the power of the current elite.

      that's interesting. the western post-post-colonial elite is indeed facing serious threats to their current dominance (which explains most of the current instability and weirdness) but those are mostly self-inflicted and have been growing for a while, long before llms were invented.

      • how do you think ai does specifically factor into that process?

        The threat is far more personal than the systemic threat. AI provides everyone with the same access to the expertise that provides both power and the intellectual justification for their personal role in exercising it. Many people in our ruling elite are far more interested in protecting their personal role than enhancing overall value.

        I am not a student of China. But my impression is that the status of a professor is determined more by their relationship to the CCP than their personal authority. This is

        • by znrt ( 2424692 )

          how do you think ai does specifically factor into that process?

          The threat is far more personal than the systemic threat. AI provides everyone with the same access to the expertise that provides both power and the intellectual justification for their personal role in exercising it. Many people in our ruling elite are far more interested in protecting their personal role than enhancing overall value.

          I am not a student of China. But my impression is that the status of a professor is determined more by their relationship to the CCP than their personal authority. This is one of the permanent outcomes of the Cultural Revolution, which in China is condemned less for its stated anti-elitist objectives than for its excesses and having been hijacked by the Gang of Four to claim political power. So professors are perfectly willing to move students ahead of themselves intellectually, because their own status is secured by their relationship to the party. And the party's criteria is that the professor is loyally serving its goals. Teach kids AI? Sure.

          The threat is far more personal than the systemic threat. AI provides everyone with the same access to the expertise that provides both power and the intellectual justification for their personal role in exercising it.

          i would argue that technology increases that divide. true, it enhances the capabilities of the populace, but it tends to enhance those of the wealthy much, much more. think of capabilities like surveillance (whether state driven, private or a mix), data mining, tracking, profiling, disinformation, influence, etc., and of resources like equipment, energy, networks, talent ... all those are vastly more accessible to elites than to the general public. technology actually widens the gap, so elites should be happy.

          • I think it's a mistake to treat elite status and the ownership and control of private wealth as the same thing. There are clearly members of the elite who will benefit from AI. They are owners. Unless the stuff they own is threatened, they aren't going to be concerned about AI. But there are a lot of people whose membership in the elite is not based on wealth. It's built largely on personal relationships.

            How much your relationship matters depends on what you have to contribute to the elite network's power. If your contr

    • by gweihir ( 88907 )

      AI clearly has its flaws, but they aren't going to prevent its adoption.

      That is what is commonly known as a "guess". There have been tons of tech that were somewhat usable, but their border conditions made them die or go niche. And there is even mainstream hype tech that has done it several times now: VR, flying cars, home robots, etc., just to name a few.

  • And China will get there first!
    Ironic. Go China Go!

    Also, does anyone know if their LLMs work in English, or ... what? hanzi?
  • Just like their IP theft; it's working great so far.

  • ... more efficient. All they needed to crawl on the web is Mao Zedong's Little Red Book plus the collected writings of Xi Jinping. All wisdom one needs can be found there.

  • The old way of learning is obsolete
    The future belongs to those who make the best use of the new tools

    • I remember in the 80s when "sleep learning" was going to be the new way of learning.

      It turns out that listening to lectures in your sleep doesn't teach your subconscious any new skills, you just wake up tired because you got shitty sleep.

      Will "prompt engineering" really turn out to teach a student more than the source material would have? I don't see the learning mechanism. At least sleep learning had a plausible mechanism.

      • by djinn6 ( 1868030 )

        I've learned things by reading LLM responses. Questions like "why is propane not a popular refrigerant?" or "why are throat attacks illegal in most combat sports?"

        Of course I check those responses against independent research, and I can confidently say that in most cases, what the LLM produces is correct as far as such a thing could be given philosophical limitations.

  • you are not allowed to use AI on the AI+ test!

  • Both are going to be needed.

    People who totally ban LLMs will fall behind. Sigh. *takes eyeglasses off and rubs eyes*. We've been through this with every other technological innovation. We don't ban calculators. It makes no sense to ban LLMs. Should they be allowed for every task and every circumstance? Obviously not. Especially in an educational setting. But the luddites never win.

    However, universities and societies that neglect critical human thinking will also fall behind. I've tried vibe coding a
    • And a good teacher who understands how to use AI can still develop a curriculum that is not totally solvable by an AI, thus still getting the concepts and critical thinking skills taught.

      The current state of technology and learning is limited by the tools of the generation. In the early 1900s no doubt much time was spent in university on mathematical computation done by hand, and advancements were limited by the fact that ideas and results had to be tabulated and verified by hand. Now we have computers

  • This isn't good. Since there's no certainty that AI based learning is optimal .. they shouldn't be mass adopting it. Any sort of mono-culture is highly risky .. susceptible to sudden fucking off. I hate to go woke-DEI .. but some diversity provides protection against evolving threats at the expense of short term sub-optimality.

    • by gweihir ( 88907 )

      Well, they elected themselves an emperor and now they are degrading their educational system. I guess this is not going to be the "Chinese Century". As the "American Century" is clearly over, things might get interesting.

  • if you had access to a better hammer, either you learn to use it or get left behind.
