Wikipedia Pauses AI-Generated Summaries After Editor Backlash (404media.co)

The Wikimedia Foundation halted an experiment that would have displayed AI-generated summaries atop Wikipedia articles after the platform's volunteer editor community delivered an overwhelmingly negative response to the proposal. The foundation announced the two-week mobile trial on June 2 and suspended it just one day later following dozens of critical comments from editors.

The experiment, called "Simple Article Summaries," would have used Cohere's open-weight Aya model to generate simplified versions of complex Wikipedia articles. The AI-generated summaries would have appeared at the top of articles with a yellow "unverified" label, requiring users to click to expand and read them. Editors responded with comments including "very bad idea," "strongest possible oppose," and simply "Yuck."

  • The Cebuano Wikipedia is a known embarrassment for Wikipedia, and then there is the "Wikifunctions" project that wants to turn glorified templates and SQL queries into "articles". Wikipedia got a lot of its start from "RamBot" all the way back in 2002, when US census data was dumped into the wiki. Wikipedia is kind of a bad word these days, but it needs to redeem itself and become an example of human-edited content in the age of AI slop.
  • by NaCh0 ( 6124 ) on Wednesday June 11, 2025 @01:43PM (#65442873) Homepage

    The last thing we would want is inaccuracies in wikipedia. lol

  • Has there ever been a single time that someone rolled out an AI product and the entire community, customer base, and company employees had a positive reaction to it? I just resorted to using AI to solve a REALLY annoying PowerShell problem this morning, and all it gave me was made-up, stitched-together bullshit that didn't function because the commands mixed versions and didn't match the module it said to use.
    • Sure, basically every new LLM / Diffusion model is better than the one that came before it, so yeah. Positive reactions from community, customers, and employees. It sounds like you're butt-hurt about something in particular though.
  • AI has its uses (Score:5, Insightful)

    by FudRucker ( 866063 ) on Wednesday June 11, 2025 @02:03PM (#65442925)
    But it is too primitive and sloppy for complex data generation, try again next year
    • AI is what it is, so I agree it is too primitive now for complex data generation. It is... like a child, not a human child. It does simple things great, and complex things poorly... in my humble opinion. What I would like to see as a programmer is for an AI company to attach micro-controller simulators to the AIs, and before they confidently give me a solution, to run it past a simulator. If it does not work, the AI says: this is my best solution but it does not work. Instead it says: "This works."
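The simulator-in-the-loop idea in the comment above can be sketched as a tiny generate-and-verify loop. Everything below is hypothetical: `generate_candidate` stands in for an LLM and `verify` stands in for the proposed simulator; no real model or simulator API is assumed.

```python
# Toy sketch of "run it past a simulator before answering": a generated
# candidate is only reported as working if an automated check passes.

def generate_candidate(n: int) -> str:
    """Stand-in for an LLM: returns a (possibly wrong) expression."""
    # Deliberately buggy for odd n, to exercise the honest-failure path.
    return "n * 2" if n % 2 == 0 else "n * 3"

def verify(candidate: str, n: int) -> bool:
    """Stand-in for a simulator: run the candidate and check the result."""
    return eval(candidate, {"n": n}) == n * 2

def answer(n: int) -> str:
    candidate = generate_candidate(n)
    if verify(candidate, n):
        return f"This works: {candidate}"
    return f"This is my best solution, but it does not pass the check: {candidate}"

print(answer(4))  # verified path: "This works: n * 2"
print(answer(3))  # honest-failure path
```

The point is the control flow, not the toy check: the model's confidence is gated on an external test rather than on the model itself.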
  • I like the Google AI summaries. I only go to google for stupid trivia questions. The AI gives me what I want. For Wikipedia, I'm like.... why not? A simple summary that is only available if I click a "+" button... why not? For any personal searches, I use Startpage, or DuckDuckgo, and I use the DuckDuckgo anon AI's at work to help me with coding, and it is excellent at simple structures and syntax.
    • Check out kagi.com; you get the best of both worlds.

      Kagi's search is an aggregated engine of about a dozen big-to-small search engines (Google included) without the ads & SEO.
      It also has a weight system that you can use to remove or highlight sites yourself.
      Its AI functions are better than any of the free stuff out there.

      Its only downside is that it's a paid service, since there are no ads and they do not save or sell your data.

      • "It's only downside is it's a paid service since there's no ads and they do not save/sell your data." I think that is a feature, and not a downside. The downside of the current internet, in my humble opinion, is that people do not realize that you pay for everything one way or another, with your soul or with $$, and way too many people sold their soul.
    • I like the Google AI summaries.

      I detest Google AI summaries. They don't tell you where the AI got the information, so I don't have any way to know whether the information is reliable.

      • I should have that opinion too, as they do not give sources. I was "trained" to not give information without citing the source. However, Google does give me answers as to why the sky is blue, for example. I do not use Google for anything serious.
  • by Tony Isaac ( 1301187 ) on Wednesday June 11, 2025 @02:13PM (#65442955) Homepage

    Google already generates them, and people will just not bother clicking through to Wikipedia.

    I personally would find AI summaries useful, for a lot of Wikipedia articles that are very, very long-winded.

    • I like the detail in Wikipedia. For example, for some reason, I want to know how the chemistry works when wood is heated up, and gas is extracted for clean burners for cooking, or for running cars (yes, at one time, there were cars that gasified wood and fed it into engines, bang bang bang). At times, I like just reading a quick summary of a definition of a word. I think Wikipedia is the "bomb", love it.
    • by RobinH ( 124750 )
      If they're accurate, right? Like... you actually care if the technology can generate accurate summaries? Because at this point there's lots of evidence that AI-generated summaries contain lots of mistakes.
      • Yes of course. Hallucinations will happen whether Wikipedia's AI generates the summaries, or Google generates the summaries. That fact will not stop Google from generating the summaries, and it won't stop users from assuming the summary is accurate.

    • If you find inaccurate summaries useful, you don't need Wikipedia. You can just make up your own answer and save even more time instead of spending it on learning.

      • Remember when Wikipedia first came out? There were howls of protest from every corner of academia that, because *anybody* could update a Wikipedia article, it *must be* riddled with errors. But what people learned over time was that the result was articles that were *more* accurate and thorough than traditional encyclopedias.

        AI summaries are now the bogeyman because of hallucinations. Yes, hallucinations happen all the time. But in general, AI summaries are on target. Where they typically mess up, is in ci

        • I never howled like that about Wikipedia. I never held the view which you invoke. I know no one who did. You're erecting a strawman. But everyone who knows anything about LLMs holds the view I hold on the slop.

          AI summaries are not "the bogeyman". They are slop generated by statistical proximity of tokens. They are inherently incapable of being anything but slop. That is a fundamental part of the technology.

          And the fact that they "in general" are "on target", while always coming off as cocksure, is exact

          • Your experience with Wikipedia is not representative. Wikipedia itself notes that schools have often prohibited use of Wikipedia for classwork. https://en.wikipedia.org/wiki/... [wikipedia.org].

            There is a place in this world for people like you, who skip past the quick-and-dirty summary and go straight to the source. There is also a place in this world for people who don't want to spend the extra time. For these people, the summaries are useful, despite their errors.

            • Of course schools often prohibit use of an unvetted information collection site for routine tasks. That should be the rule, instead of only an exception. I don't see how this in any way does anything but support MY point.

              There is no place whatsoever in the world for anyone who gets spoon-fed slop. They will grow up malformed, and correcting that misgrowth will be painful. Nobody should be subjected to the gross violation of their mental understanding that slop inflicts.

              In short, you're simply wrong. A

              • There is no place what so ever in the world for anyone who gets spoon fed slop

                We definitely don't live in the same world, if that's what you see. Some examples of places where people get spoon-fed slop:
                - Fox News
                - CNN (yes, they are just as guilty as Fox News when it comes to one-sided knee-jerk reporting)
                - Facebook news

                In the world I see, sloppy summaries are everywhere. When it comes to major news sources, NPR is the highest quality, but even their news is heavily opinionated in certain areas.

                So the current generation of AI has a hallucination problem. This is not an unsolvable prob

    • by jonadab ( 583620 )
      Eh. The people who actually like Google's horrible AI-generated garbage, were already using Google anyway. No change there.

      Whereas, I have entirely *stopped* using Google's main site, and my usage of Wikipedia has increased, because I'm now using it (plus the browser's in-page search feature) for quick lookup of things that, six months ago, I could more quickly find on Google, but now I can't. Other things I find using ddg or startpage, and still others I now have to resort to older, pre-internet methods
  • If I wanted to read a summary of a lengthy and complex wikipedia article, I would just read the first few paragraphs. The summary is right there.

    In fact, any use of AI for text summary is a bit suspect. Why wasn't the summary included by the authors in the first place? Any decent news article or scientific paper has one.

    • One possible answer to that question is that AIs have this ability to digest information and summarize it in a different way than a human can. In fact, from what I read, college students are feeding spreadsheets and long, dull blah blah blah stuff into AIs to get a summary, and asking it different questions to get a unique perspective on the data or information. That is what I am understanding about this "revolution", and am interested in hearing about other perspectives.
  • It's not a surprise that editors are against a computer replacing their human-written summaries with AI-generated summaries. The question might as well have been worded as "Would you prefer that a computer puts you out of a job?" So, no surprise there.

    What's more interesting is how readers would consider AI-generated summaries. Instead of an AI-generated summary from a website, it would be interesting for browsers to have this summarization capability that could be enabled or disabled by the reader. My

  • Why not use the AI to verify the data on the page?

    I looked at a page the other day for Red Alert 2: Yuri's Revenge. It was developed by Westwood Studios, which is what the paragraph said, but the table had EA Games, something completely different. I think we are being gaslit by it.

    Example: Reddit post about a game from 1995 here: https://www.reddit.com/r/comma... [reddit.com]

    People have all different years for when the logo originated. While Google AI says it was purchased in 1998, others
    • (A vector database can be used to store the changes and update the model's "approval" memory after it has been verified.)

      It could also be used to identify inconsistencies or commonly changed parts and allow some kind of voting or something, or take parts and offer different versions within the document, because gaslighting is real.

      My point is, they need to innovate or risk dying right now. The old-school original editors aren't going to like it, but it's not a choice.
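The parent's verification idea can be illustrated without any AI at all: a toy consistency check that flags infobox fields contradicted by the article prose, like the Westwood Studios vs. EA Games mismatch above. The article text, infobox dict, and `find_inconsistencies` helper are all made-up examples; real Wikipedia pages would need proper wikitext parsing (e.g. via the MediaWiki API) rather than substring matching.

```python
# Toy consistency check: flag infobox fields whose value never appears
# anywhere in the article prose. Purely illustrative data and logic.

def find_inconsistencies(infobox: dict, prose: str) -> list:
    """Return (field, value) pairs not supported by the prose text."""
    lowered = prose.lower()
    issues = []
    for field, value in infobox.items():
        if value.lower() not in lowered:  # naive case-insensitive match
            issues.append((field, value))
    return issues

article_prose = (
    "Yuri's Revenge was developed by Westwood Studios and released in 2001."
)
infobox = {"developer": "EA Games", "release year": "2001"}

print(find_inconsistencies(infobox, article_prose))
# flags ("developer", "EA Games") because the prose says Westwood Studios
```

A real version would compare normalized entities rather than raw strings (publisher vs. developer, renamed studios, dates in different formats), which is exactly where the "offer different versions and let people vote" idea would come in.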
  • but not when it comes to summaries. It's the kind of C vs Rust futile resistance...
