AI Education

Stanford Releases 386-Page Report On the State of AI (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: Writing a report on the state of AI must feel a lot like building on shifting sands: by the time you hit publish, the whole industry has changed under your feet. But there are still important trends and takeaways in Stanford's 386-page bid to summarize this complex and fast-moving domain. The AI Index, from the Institute for Human-Centered Artificial Intelligence, worked with experts from academia and private industry to collect information and predictions on the matter. As a yearly effort (and by the size of it, you can bet they're already hard at work laying out the next one), this may not be the freshest take on AI, but these periodic broad surveys are important to keep one's finger on the pulse of industry.

This year's report includes "new analysis on foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education, and public opinion trends in AI," plus a look at policy in a hundred new countries. But the report goes into detail on many topics and sub-topics, and is quite readable and non-technical. Only the dedicated will read all 300-odd pages of analysis, but really, just about any motivated body could.

For the highest-level takeaways, let us just bullet them here:

- AI development has flipped over the last decade from academia-led to industry-led, by a large margin, and this shows no sign of changing.
- It's becoming difficult to test models on traditional benchmarks and a new paradigm may be needed here.
- The energy footprint of AI training and use is becoming considerable, but we have yet to see how it may add efficiencies elsewhere.
- The number of "AI incidents and controversies" has increased by a factor of 26 since 2012, which actually seems a bit low.
- AI-related skills and job postings are increasing, but not as fast as you'd think.
- Policymakers, however, are falling over themselves trying to write a definitive AI bill, a fool's errand if there ever was one.
- Investment has temporarily stalled, but that's after an astronomic increase over the last decade.
- More than 70% of Chinese, Saudi, and Indian respondents felt AI had more benefits than drawbacks. Americans? 35%.
The full report can be found here.

Comments:
  • I'll skip (Score:5, Funny)

    by test321 ( 8891681 ) on Tuesday April 04, 2023 @08:07PM (#63426414)

    I'll wait for the 486 page report.

    • I checked it anyway: https://aiindex.stanford.edu/r... [stanford.edu] I find it interesting that, after China and Saudi Arabia, the highest positive scores came from a long list of Central/South American countries, starting with Peru, Mexico, Colombia, Chile, Brazil and, for some reason, Spain. The least positive were the Europeans, in line with the USA.

      • I'm guessing nothing negative is ever reported from those countries. I'm guessing Fairness and Bias Tradeoff don't factor highly in their programs, or if they do, they try to remove Fairness and ramp up Bias to the max. Whereas the US has lots of negative stories because it publicly announces programs and has critiques written on them.
    • by eonwing ( 934274 )

      I'll wait for the 486 page report.

      I'll wait for the Pentium version of the report.

    • by antdude ( 79039 )

      I'll wait for 586. No, 686!

    • Just buy the FPU appendix and move on.

    • We all knew this wasn’t about 386 computers. We still came for the nostalgic comfort of posting about 386s.

  • But it's actually just a long graphic novel featuring Kizuna AI [wikipedia.org].

  • They think they know how this will pan out. They're betting the US companies' AI will destroy the US first. That's the "more benefits than drawbacks" disconnect.

  • Did they ... (Score:4, Insightful)

    by PPH ( 736903 ) on Tuesday April 04, 2023 @08:35PM (#63426446)

    ... get ChatGPT to write it for them?

  • by thesjaakspoiler ( 4782965 ) on Tuesday April 04, 2023 @08:36PM (#63426452)

    because scientists, too, would rather watch Netflix than write 486-page reports.

  • They should have asked ChatGPT to write a shorter report.

  • -- Sent from my Tandy 386SX running Windows 3.1

  • by ctilsie242 ( 4841247 ) on Wednesday April 05, 2023 @12:30AM (#63426680)

    Not sure about other nations, but AI has always had a dark element in the US. Everything from the movie "Alien" to "I Have No Mouth and I Must Scream", to Halo and rampant AIs... to the point where rogue, man-destroying robots are on par with zombies. Some books have been the opposite of that, e.g. Asimov's writings, but in general, you can bet that zero AI platforms have anything to do with the Three Laws of Robotics, much less the Zeroth Law.

    Because of this, coupled with the fact that any advance tends to be used to fire workers, AI advances are met with reluctance. However, things like ChatGPT should not be discarded entirely. Instead, they should be treated as advances. For example, Google's search bar was a great advance compared to what Lycos, HotBot, and Yahoo had. In some ways, ChatGPT is just an enhanced Google search bar that can gather its own results and present something. It might also be considered something like a virtual intern: dumb in some ways, but able to present work that at least gives you a start, so you are not looking at a blank screen but have something to go on, even if that "something" is 100% useless or even wrong.

    • Agreed, the overwhelming majority of articles reporting on advances in AI cast it in a largely negative light. Either the focus is on jobs it will eliminate, or on how 'experts' predict it will go rogue and 'destroy humanity'; very few (actually none that I personally have seen) clearly articulate what actual benefits AI produces. There are vague aspirational allusions, such as
      "may revolutionize drug development"
      "may increase productivity in industry X"
  • - Policymakers, however, are falling over themselves trying to write a definitive AI bill, a fool's errand if there ever was one.

    Just in case anyone is in doubt: neither the word "fool" nor "errand", nor the phrase "falling over themselves", appears in the Stanford report. As you might expect, it's a measured, clinical write-up of the state of the art, backed by lots of references. The clickbaity take we are given here is almost entirely TechCrunch's invention, with only the spiciest bits of Stanford's report cherry-picked to lend a feel of authenticity.

  • so I asked chatgpt to write car analogies:
    _ AI writing articles is like a car that's convinced it's a poet, but all it can come up with is haikus about cats and rainbows.
    _ AI writing articles is like a car that thinks it's a comedian, but all its jokes are about binary code and algorithms.
    _ AI writing articles is like a car with a faulty GPS system, sending you on a wild goose chase for information you don't need.
    _ AI writing articles is like a car that's been possessed by a thesaurus, spewing out synonyms

  • Since it isn't, we're still just talking expert systems here, not real Artificial Intelligence. If they had to refer to these as ES instead of AI, all the floof-lah would pretty much go poof. Or have they finally completed the full relational database and developed self-modifying code and that's what's being deployed instead of just finely tuned neural networks (math equations processing data)? Just looking at two definitions for neural networks, you can tell which is hyping and which is not. From Amazo
