
Bloomberg's AI-Generated News Summaries Had At Least 36 Errors Since January (nytimes.com)

The giant financial news site Bloomberg "has been experimenting with using AI to help produce its journalism," reports the New York Times. But "It hasn't always gone smoothly."

While Bloomberg announced on January 15 that it would add three AI-generated bullet points at the top of articles as a summary, "The news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year." (This Wednesday it published a "hallucinated" date for the start of U.S. auto tariffs, and earlier in March claimed President Trump had imposed tariffs on Canada in 2024; other errors have included incorrect figures and incorrect attribution.)

Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called "Ask the Post" that generates answers to questions from published Post articles. And problems have popped up elsewhere. Earlier this month, The Los Angeles Times removed its A.I. tool from an opinion article after the technology described the Ku Klux Klan as something other than a racist organization.

Bloomberg News said in a statement that it publishes thousands of articles each day, and "currently 99 percent of A.I. summaries meet our editorial standards...." The A.I. summaries are "meant to complement our journalism, not replace it," the statement added....

John Micklethwait, Bloomberg's editor in chief, laid out the thinking about the A.I. summaries in a January 10 essay, which was an excerpt from a lecture he had given at City St. George's, University of London. "Customers like it — they can quickly see what any story is about. Journalists are more suspicious," he wrote. "Reporters worry that people will just read the summary rather than their story." But, he acknowledged, "an A.I. summary is only as good as the story it is based on. And getting the stories is where the humans still matter."

A Bloomberg spokeswoman told the Times that the feedback they'd received on the summaries had generally been positive — "and we continue to refine the experience."

Comments Filter:
  • by Anonymous Coward

    I mean, maybe it's the AI making errors when it summarizes, or maybe the articles are crap and were summarized correctly.

  • Only 36? (Score:5, Interesting)

    by Gravis Zero ( 934156 ) on Sunday March 30, 2025 @04:48PM (#65270229)

    The real question: did the summary AI make only 36 errors, or did only 36 errors get published? The AI could be making many more errors, with a human editor accepting or rejecting its output and incorrectly accepting 36 summaries that contained errors.

    • Re:Only 36? (Score:4, Informative)

      by ewibble ( 1655195 ) on Sunday March 30, 2025 @06:19PM (#65270407)

      The question is: out of how many summaries? Was it 36 out of 1,000, or 36 out of 37? And is the error rate higher or lower than a human's?

      • Right, this is the real question that I want answered.
        36 doesn't sound great... but I also know humans don't typically have great performance in this particular role either.
        I'd like an honest accounting. (A rough back-of-the-envelope with Bloomberg's own numbers is sketched below.)
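
        For what it's worth, the numbers quoted in the story bound it. A minimal back-of-the-envelope in Python, where the volume figures are my assumptions (Bloomberg only says "thousands of articles each day" and that "99 percent of A.I. summaries meet our editorial standards"):

            # Back-of-the-envelope using figures quoted in the article.
            # ASSUMED: 1,000 summaries/day is a low-end reading of
            # "thousands of articles each day"; 75 days covers roughly
            # the January 15 launch through the March 30 story.
            summaries_per_day = 1_000
            days = 75
            total = summaries_per_day * days                  # 75,000

            corrected = 36            # errors corrected so far
            meets_standards = 0.99    # Bloomberg's own claim

            print(f"corrected error rate: {corrected / total:.3%}")              # 0.048%
            print(f"implied sub-standard: {total * (1 - meets_standards):,.0f}") # 750

        Even at the low end, "99 percent" implies far more than 36 sub-standard summaries, which is the parent's point: 36 is how many were caught and corrected, not the model's error rate.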
  • Just curious...

    • Well, humans at least have editors or fact checkers (remember those people?). The whole point of AI writing stories is not having to pay any staff. Errors are minor details when you're saving the company money.

      • It's not like the general public actually fact-checks the news it consumes anyway. My guess is thousands of people consumed this garbage without batting an eye until someone really dug into it.
      • by Hadlock ( 143607 )

        Very little fact checking gets done these days. Journalism used to be a solidly middle-class job, but now it's something people do to pay the rent between jobs. The number of people doing investigative reporting as a career is vanishingly small. Trusting your news article to be correct and fact-checked is no longer realistic outside a handful of places like the WSJ and NYT.

        • Gell-Mann amnesia is the phenomenon of being able to easily spot bullshit in a news story into which you have some visibility while blithely assuming the news is correct in reporting on things into which you have no visibility.

          Human written news and human written summaries are also full of errors.

      • Fact-checking is possible to do, even with AI.

        Code quality analysis tools that make use of AI are already widely available. Are they perfect? No. But they do catch a lot of issues. Kind of like human code reviewers.

        Are they doing fact checking on these article summaries? I have no idea. But it's possible; even a dumb consistency check like the sketch below would catch some of these slips.
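
        To make "possible" concrete, here is a minimal sketch in Python of the cheapest kind of automated check: verify that every figure in a generated summary actually appears in the source article. This is a toy heuristic of my own, not anything Bloomberg or the Post has described, and it would miss paraphrased or attribution errors entirely:

            import re

            def suspicious_figures(article: str, summary: str) -> list[str]:
                """Return numbers/years in the summary that the article never mentions.

                A figure absent from the source is a cheap hallucination signal;
                this can't catch wrong attributions or reworded claims.
                """
                number = re.compile(r"\d+(?:[.,]\d+)*")
                article_figures = set(number.findall(article))
                return [n for n in number.findall(summary) if n not in article_figures]

            article = "Trump said tariffs on Canada would begin in 2025."
            summary = "Trump imposed tariffs on Canada in 2024."
            print(suspicious_figures(article, summary))  # ['2024'] -> flag for an editor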

  • Just a couple of days ago my tablet gave me a notification about a famous singer retiring and cancelling the rest of his tour dates - but of course it left out the name in the notification. (Standard clickbait.) I clicked through and saw it was an 89-year-old guy whose name I did recognize but who wasn't really that famous (and I have since forgotten who it was).

    But I'm sure you're familiar with Google searches - beneath the main results they have "People also asked ...", one of which was when did the singer die. As

    • Remembered the name ... Johnny Mathis.

      • LOLLERSKATES @ someone saying Johnny Mathis "wasn't really that famous".

        But I've no doubt that's what someone in 2065 will say about Usher and Lady Gaga.

        • Their description was "Legendary"; I didn't see him as quite that famous. I wouldn't consider either one you named "legendary" either - yet. Maybe Gaga will get there, eventually. I don't consider rap to be singing (just rhythmic poetry) - or do you consider William Shatner a singer for all those times he did songs?

          Their description was "Legendary"; I didn't see him as quite that famous. I wouldn't consider either one you named "legendary" either - yet. Maybe Gaga will get there, eventually. I don't consider rap to be singing (just rhythmic poetry) - or do you consider William Shatner a singer for all those times he did songs?

            I was going strictly by your use of the word "famous". Fame has exactly one metric, which has little to do with artistic/performance merit and is inescapably generational and cultural.

            "Legendary" did not appear in your original comment and has different criteria which are generationally/culturally dependent in different ways than "famous". You can be either one without being the other. For example, Lizzo is famous but not legendary; Philip Glass is legendary but not famous.

            Your personal opinion on whether r

  • by Morromist ( 1207276 ) on Sunday March 30, 2025 @05:32PM (#65270287)

    If the AI is getting its up-to-date facts from the major news outlets and the major news outlets are using AI, I foresee a problem.

  • by RossCWilliams ( 5513152 ) on Sunday March 30, 2025 @06:47PM (#65270431)
    Bloomberg is unreliable. Who knew? You can apply that to any news source on the internet, whether they use AI or not.
  • How is this hard? Most business news summaries are Mad-Libs (a toy generator is sketched below):
    The price of [Business Name Shares | Business Name Bonds | Commodity] is/was [up|down|unchanged] on [News (mentioning the Business | Commodity)].
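
    A toy version of that Mad-Lib in Python, purely illustrative (the company, word lists, and wording are all made up here, not anyone's production system):

        import random

        # All invented examples; plural subjects so "were" always agrees.
        SUBJECTS = ["Acme Corp shares", "Acme Corp bonds", "Oil futures"]
        DIRECTIONS = ["up", "down", "unchanged"]
        NEWS = ["earnings news", "tariff headlines", "a broker upgrade"]

        def business_summary() -> str:
            """Fill in the parent comment's Mad-Lib with random picks."""
            return (f"{random.choice(SUBJECTS)} were "
                    f"{random.choice(DIRECTIONS)} on {random.choice(NEWS)}.")

        print(business_summary())  # e.g. "Oil futures were down on tariff headlines."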

  • Slashdot's Uninterested Human-Generated News Summaries Had At Least 36 Dupes Since Tuesday.
