

Bloomberg's AI-Generated News Summaries Had At Least 36 Errors Since January (nytimes.com)
The giant financial news site Bloomberg "has been experimenting with using AI to help produce its journalism," reports the New York Times. But "It hasn't always gone smoothly."
While Bloomberg announced on January 15 that it would add three AI-generated bullet points at the top of articles as a summary, "The news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year." (This Wednesday they published a "hallucinated" date for the start of U.S. auto tariffs, and earlier in March claimed President Trump had imposed tariffs on Canada in 2024, while other errors have included incorrect figures and incorrect attribution.) Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called "Ask the Post" that generates answers to questions from published Post articles. And problems have popped up elsewhere. Earlier this month, The Los Angeles Times removed its A.I. tool from an opinion article after the technology described the Ku Klux Klan as something other than a racist organization.
Bloomberg News said in a statement that it publishes thousands of articles each day, and "currently 99 percent of A.I. summaries meet our editorial standards...." The A.I. summaries are "meant to complement our journalism, not replace it," the statement added....
John Micklethwait, Bloomberg's editor in chief, laid out the thinking about the A.I. summaries in a January 10 essay, which was an excerpt from a lecture he had given at City St. George's, University of London. "Customers like it — they can quickly see what any story is about. Journalists are more suspicious," he wrote. "Reporters worry that people will just read the summary rather than their story." But, he acknowledged, "an A.I. summary is only as good as the story it is based on. And getting the stories is where the humans still matter."
A Bloomberg spokeswoman told the Times that the feedback they'd received on the summaries had generally been positive — "and we continue to refine the experience."
And how many of the articles had errors? (Score:2, Interesting)
I mean, maybe it's the AI making errors in the summaries, or maybe it's crap articles being correctly summarized.
Only 36? (Score:5, Interesting)
The real question: did the summary AI make only 36 errors, or did only 36 errors get published? The difference matters because the AI could be making far more errors, with a human editor accepting or rejecting each generated summary and incorrectly accepting 36 that contained errors.
Re:Only 36? (Score:4, Informative)
The question is out of how many summaries, was it 36 out of 1000 or 36 out of 37? Is the error rate higher or lower than humans?
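Bloomberg's own statement actually lets you put rough bounds on this, if you're willing to assume a volume figure. A back-of-the-envelope sketch in Python (the per-day summary count is hypothetical; Bloomberg says only that it publishes "thousands of articles each day"):

```python
# Rough bounds on Bloomberg's numbers. The volume figure is an
# assumption: the company says only "thousands of articles each day",
# not how many of them carry AI summaries.
SUMMARIES_PER_DAY = 1_000   # hypothetical, not from the article
DAYS = 75                   # roughly January 15 through the end of March
PASS_RATE = 0.99            # Bloomberg's "99 percent meet our editorial standards"
CORRECTED = 36              # corrections actually issued

total = SUMMARIES_PER_DAY * DAYS
implied_substandard = total * (1 - PASS_RATE)

print(f"summaries published:        {total:,}")
print(f"implied sub-standard (1%):  {implied_substandard:,.0f}")
print(f"corrected error rate:       {CORRECTED / total:.3%}")
```

Under those assumptions the 36 corrections work out to roughly a 0.05 percent published error rate, while the stated 99 percent pass rate implies on the order of 750 sub-standard summaries. The gap between those two numbers is exactly the human-filtering question: either editors are catching most of the bad ones before publication, or most errors simply go uncorrected.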
Re: (Score:2)
36 doesn't sound great... but I also know humans don't typically have great performance in this particular role either.
I'd like an honest accounting.
How does this compare to human error rate? (Score:3)
Just curious...
Re: (Score:3)
Well, humans at least have editors and fact checkers (remember those people?). The whole point of AI writing stories is not having to pay any staff. Errors are minor details when you're saving the company money.
Re: (Score:2)
Very little fact checking gets done these days. Journalism used to be a solidly middle-class job, but now it's something people do to pay the rent between jobs. The number of people actually doing investigative reporting as a career is vanishingly small. Trusting your news article to be correct and fact-checked is no longer a reality outside of a handful of places like the WSJ and NYT.
Re: How does this compare to human error rate? (Score:1)
Gell-Mann amnesia is the phenomenon of being able to easily spot bullshit in a news story into which you have some visibility while blithely assuming the news is correct in reporting on things into which you have no visibility.
Human written news and human written summaries are also full of errors.
Re: (Score:2)
Fact-checking is possible to do, even with AI.
Code quality analysis tools that make use of AI are already widely available. Are they perfect? No. But they do catch a lot of issues. Kind of like human code reviewers.
Are they doing fact checking on these article summaries? I have no idea. But it's possible.
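For figures and dates specifically, which are the error types the Times describes, even a crude consistency check catches something. Here's a minimal sketch, assuming you just want to flag numbers in a summary that never appear in the article body (an illustration, not anyone's actual pipeline):

```python
import re

NUMBER = re.compile(r"\d+(?:[.,]\d+)*%?")

def unsupported_figures(summary: str, article: str) -> list[str]:
    """Flag numbers and dates in the summary that never appear in the
    article. A crude consistency check, not real fact-checking: it can
    catch a hallucinated figure or date, but not a wrong attribution."""
    article_tokens = set(NUMBER.findall(article))
    return [t for t in NUMBER.findall(summary) if t not in article_tokens]

# Hypothetical example: a summary that invents a tariff start date.
article = "The president said auto tariffs would take effect on April 3."
summary = "Auto tariffs take effect April 2; markets fell 1.5%."
print(unsupported_figures(summary, article))  # ['2', '1.5%']
```

A check even this simple would likely have flagged the two errors the Times singled out (a hallucinated tariff date and a wrong tariff year), though it says nothing about attribution or inverted meaning.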
Google (Score:2)
Just a couple of days ago my tablet gave me a notification about a famous singer retiring and cancelling the rest of his tour dates - but of course it left out the name in the notification. (Standard clickbait.) I clicked through and saw it was an 89-year-old guy whose name I did recognize but who wasn't really that famous (and I have since forgotten who it was).
But I'm sure you're familiar with Google searches - beneath the main results they have "People also asked ..." one of which was when did the singer die.
Re: (Score:2)
Remembered the name ... Johnny Mathis.
Re: (Score:2)
LOLLERSKATES @ someone saying Johnny Mathis "wasn't really that famous".
But I've no doubt that's what someone in 2065 will say about Usher and Lady Gaga.
Re: (Score:2)
Their description was "Legendary"; I didn't see him as quite that famous. I wouldn't consider either one you named "legendary" either - yet. Maybe Gaga will get there, eventually. I don't consider rap to be singing (just rhythmic poetry) - or do you consider William Shatner a singer for all those times he did songs?
Where does the ai get their info from? (Score:5, Insightful)
If the AI is getting its up-to-date facts from the major news outlets and the major news outlets are using AI - I foresee a problem.
Business News (Score:2)
How is this hard? Most business news summaries are Mad-Libs:
The price of [Business Name shares | Business Name bonds | Commodity] is/was [up | down | unchanged] on [news mentioning the Business | Commodity].
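It really is about that mechanical. A toy filler for the template, where every name, direction, and "reason" is invented for illustration:

```python
import random

# Toy filler for the Mad-Libs business-news template above.
# All subjects and "reasons" here are made up for illustration.
SUBJECTS = ["Acme Corp shares", "Acme Corp bonds", "Crude oil futures"]
MOVES = ["were up", "were down", "were unchanged"]
REASONS = ["earnings guidance", "tariff headlines", "a ratings downgrade"]

def business_headline() -> str:
    return (f"{random.choice(SUBJECTS)} {random.choice(MOVES)} "
            f"on {random.choice(REASONS)}.")

print(business_headline())  # e.g. "Crude oil futures were down on tariff headlines."
```

Which is also why the summary errors are more interesting than the summaries themselves: the template is easy, but the facts that fill it are not.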