Journalists 'Deeply Troubled' By OpenAI's Content Deals With Vox, The Atlantic (arstechnica.com)
Benj Edwards and Ashley Belanger report via Ars Technica: On Wednesday, Axios broke the news that OpenAI had signed deals with The Atlantic and Vox Media that will allow the ChatGPT maker to license their editorial content to further train its language models. But some of the publications' writers -- and the unions that represent them -- were surprised by the announcements and aren't happy about it. Already, two unions have released statements expressing "alarm" and "concern." "The unionized members of The Atlantic Editorial and Business and Technology units are deeply troubled by the opaque agreement The Atlantic has made with OpenAI," reads a statement from the Atlantic union. "And especially by management's complete lack of transparency about what the agreement entails and how it will affect our work."
The Vox Union -- which represents The Verge, SB Nation, and Vulture, among other publications -- reacted in similar fashion, writing in a statement, "Today, members of the Vox Media Union ... were informed without warning that Vox Media entered into a 'strategic content and product partnership' with OpenAI. As both journalists and workers, we have serious concerns about this partnership, which we believe could adversely impact members of our union, not to mention the well-documented ethical and environmental concerns surrounding the use of generative AI." [...] News of the deals took both journalists and unions by surprise. On X, Vox reporter Kelsey Piper, who recently penned an exposé about OpenAI's restrictive non-disclosure agreements that prompted a change in policy from the company, wrote, "I'm very frustrated they announced this without consulting their writers, but I have very strong assurances in writing from our editor in chief that they want more coverage like the last two weeks and will never interfere in it. If that's false I'll quit."
Journalists also reacted to news of the deals through the publications themselves. On Wednesday, The Atlantic Senior Editor Damon Beres wrote a piece titled "A Devil's Bargain With OpenAI," in which he expressed skepticism about the partnership, likening it to making a deal with the devil that may backfire. He highlighted concerns about AI's use of copyrighted material without permission and its potential to spread disinformation at a time when publications have seen a recent string of layoffs. He drew parallels to the pursuit of audiences on social media leading to clickbait and SEO tactics that degraded media quality. While acknowledging the financial benefits and potential reach, Beres cautioned against relying on inaccurate, opaque AI models and questioned the implications of journalism companies being complicit in potentially destroying the internet as we know it, even as they try to be part of the solution by partnering with OpenAI.
Similarly, over at Vox, Editorial Director Bryan Walsh penned a piece titled, "This article is OpenAI training data," in which he expresses apprehension about the licensing deal, drawing parallels between the relentless pursuit of data by AI companies and the classic AI thought experiment of Bostrom's "paperclip maximizer," cautioning that the single-minded focus on market share and profits could ultimately destroy the ecosystem AI companies rely on for training data. He worries that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself.
just wait until OpenAI (Score:5, Interesting)
Re: (Score:3)
The people complaining about AI can't even decide what they are complaining about.
They are complaining about money.
They see AI as a gold rush, and they want some of the gold dust.
Re: (Score:1)
And now people are complaining about **THAT**. What the fucking fuck. Make up your mind.
It's the internet. No matter what you do, somebody will tell you why it's wrong.
Re: (Score:3)
I think it is pretty obvious this is coming to a critical mass of all job roles soon enough.
Re:just wait until OpenAI (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Good. That was the promise of automation. It would free us from drudgery and having to work. So let it! Bring on automation. Back it up with UBI and universal healthcare so people don't have to worry about dying because a robot took their jobs. And get rid of this stupid puritan work ethic that says everyone must suffer to prove that they're worthy of existence.
I can maybe see this happening in some of the countries in the world that still put on a little bit of a show about the ordinary citizen having some stake in the country, but not in the United States. We'll just kill the jobs and our government will shrug as if there's nothing to be done for it. Or they'll hand billions to the corporations killing the jobs to "make up for their loss" of employees.
Re: (Score:2)
Bring on automation. Back it up with UBI and universal healthcare so people don't have to worry about dying because a robot took their jobs.
Dude, you are killing me. The only thing the non-owners will get is a huge Fuck You. It is hilarious that you have any hope of it turning out differently. Just this morning, I was walking along the sidewalk after it had rained and a car veered over to the puddle near me and splashed me with water. Do you REALLY think we will act civilized when push comes to shove? LOL, we are pure animals even if some folks try to behave civilized when push does not come to shove.
Fortunately or unfortunately, I will be dea
Re: (Score:2)
People will do that stuff for free. These stations want to keep their pretty little anchors to abuse, make no mistake about it.
Re: (Score:3, Insightful)
I don't think that it is even the biggest worry journalists should have. Because currently
- Waymo is replacing drivers. (already in use in certain areas and new areas being added every now and then)
- Deepmind is replacing doctors and nurses simply by making better diagnostics and new drugs and improving medical research. (Still in testing phase, but results are promising.)
- Miso robotics replacing fast food workers. (already in use and most likely spreading)
- Amazon is automating retail (already in use and
Re: (Score:1)
Yes, but journalists think that everyone else should learn to code.
Their jobs are a holy right though.
Re: (Score:1)
Ironically, there's a cottage industry of journalists who know how to code.
Re: (Score:2)
There is. Just like there's a cottage industry of coal workers who also know how to code. Those massive Atlas and Sandvik drills require software.
Re: (Score:2)
Re: (Score:2)
And? If you have one remote monitoring person who can babysit numerous automated taxis, that sounds like a net win from the company standpoint. Now you pay 1 person to monitor and maybe rescue passengers versus paying for all those taxi drivers.
That's kind of the point. AI isn't killing all jobs but it's making individual people at those jobs more efficient and those efficiencies add up to needing fewer people to get the same amount of work done. So now you may use 1 or 2 people to get 5 people's work done. 6
Re: (Score:1)
radio disk jockeys will be easier to replace since no visuals will need to be generated (only voices)
Too late. While not with generated voices, they sure cut down the number of on-air personalities with major conglomeration. Clear Channel and all that.
Re: (Score:2)
If TV news talking heads were replaced by AI, nothing of value will have been lost.
Re: (Score:2)
training on the Verge (Score:2)
So ChatGPT can pretend to be pretend reviewers that pretend to review products they've never even seen? I guess just training on speeds and feeds from manufacturers isn't enough; they need to train it how to copy them as well.
Re: (Score:2)
You mean the pretend reviewers on Amazon who know burrsht about what they are selling and talk burrsht?
Re: (Score:3)
Re: (Score:1)
Re: (Score:1)
Who is talking about corporate executives and their lawyers? Most of the grassroots calls for expansive IP law interpretation wrt AI have come from individual artists and journos. Do you think the Vox Media Union mentioned in this article is staffed by Randians?
Re: (Score:2)
Re: (Score:1)
Sure, but both of your responses have been unrelated to my original question. I am not surprised that corps and execs are maximizing profits. I am surprised that the people who are usually most in favor of collectivization are so vehemently opposed to AI. Yes, it is usually couched in anti-corporate rhetoric. But where is the giant, 100% free, open source image repository? These people are mostly anti-AI in general; they are not lining up to make a database of free training data for open models.
If you produce any kind of useful content, you're subject to this corporate hyper-exploitation. That's not Marxism, that's just trying to earn a liveable wage.
That is the
Re: (Score:2)
Everybody's a hypocrite. It's a feature of human nature.
Re: (Score:2)
Indeed. If a person can read any online material, making a "local copy" of it to have it displayed on their screens, and so learn about topics and then even monetize what they've learned by getting a job or whatever, why can't a computer?
If you think it's a matter of scale - what if the entire population of the world can read online material and learn from it, and then monetize what they've learned?
The only difference is that it's a machine doing the learning, and corporate-owned machine learning potentiall
Re: (Score:2)
That's persuasive to me, however another way to look at it is the corporation is collecting all that data, processing it, and reselling it to others, without the original content creators being compensated. Same thing Hollywood writers have to strike over all the time when any new technology allows media companies to distribute their work in a different way.
Re: (Score:2)
Except this article is specifically about news media companies selling copyright rights to the OpenAI folks. So now OpenAI can honestly say they are training their AI on legally paid for content.
Naturally the journalists are up in arms because if the AI gets good enough, why waste time with so many journalists? A master publisher could do the entire department's job with the help of a well-trained AI. Now you suddenly need significantly fewer journalists.
Corporations aren't really left or right leaning. They
Re: (Score:2)
OK, so it's legal. It's still very unethical for a corporation to essentially use humans in a way they don't want, didn't expect, and beyond their initial employment contract, essentially sponging up their brains, giving them no cut of the profits, then tossing them aside.
Maybe we should all unionize and go on strike while we can still hurt them.
Re: (Score:2)
I'm not denying it's shady, self-serving, and unethical. I'm just saying it's legal and it's probably buried in their work contract anyway. Don't most content creators that work for a company creating content do so under the premise that anything they create is property of the company and not the individual?
If you want to create content and fully control it all, you need to start your own business or otherwise convince a company to accept your terms in a work contract.
Unionization for more workers in more
Re: (Score:2)
P.S. Going on strike to hurt them sounds great until the government steps in and says you can't strike because reasons. Just look how that went for the air traffic controllers under Reagan or the locomotive folks under Biden. Government stepped in and said, no more strike, fuck you, get back to work, national security, blah blah blah.
You expect that shit from the Republicans but then when you see a Democratic president do it, it really leaves a nasty taste in the mouth. Democrats claim to be pro-labor but that's about as anti-lab
Re: (Score:2)
One of the most common restrictions, i.e. what makes a lot of CC licensed works not entirely free, is the non-commercial restriction, i.e. that corporations can't simply re-package & re-sell it for their own gain.
Another common restriction is that users of CC
Re: (Score:1)
No, it makes you a normal capitalist. And that is good, capitalism is the best system we've invented so far. If you write screeds about the glory of Marxism and your hate of corps on the side, like many people in the Vox Media Union, then you're a hypocritical capitalist, which is less good.
Re: (Score:2)
...capitalism is the best system we've invented so far.
Better than democracy? Better than independent arbitration (the basis of common law)?
One could argue that what we call capitalism today is basically the continuation of colonialism or at least the wealth extraction mechanism upon which the empires & their colonies depended.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
They will, as soon as you explain how to prevent proprietary models stealing their IP. It's wrong to enable "privatize the profits" thieves to benefit from IP creators.
Re: (Score:2)
Make a copyleft license? The issue currently is that it's unclear if you can forbid models from training on content without also forbidding people from looking at it. But that should not stop a license that goes "you are only licensed to look at or train with these pictures, as a human or AI model, if done as part of the process of training an open model". Right?
Re: (Score:2)
Licenses for models are not enforceable. You can try to make models only available with a contract, but as soon as someone leaks them, everyone who did not agree to the contract is free to use them, as they are not copyrightable as completely automated products of an algorithm.
Re: (Score:2)
"They will, as soon as you explain how to prevent proprietary models stealing their IP."
At present that is largely a thing already. The output of models is public domain; generate a bunch of synthetic data and train your model from it and you've copied it to a significant degree, and can just distribute that model openly. You've violated no copyrights even if you've violated terms of service.
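A minimal sketch of the distillation idea that comment describes, assuming a generic query_model callable standing in for whatever hosted model is being imitated; the function names and the JSONL shape are illustrative assumptions, not anything from the article or a specific vendor's API.

# Hypothetical sketch, not any vendor's documented workflow: collect synthetic
# prompt/answer pairs from a source model, then dump them in a JSONL file that
# most open-model fine-tuning tools can consume.
import json

def collect_synthetic_pairs(query_model, prompts):
    # query_model is any callable that returns the source model's answer text.
    pairs = []
    for prompt in prompts:
        answer = query_model(prompt)
        pairs.append({"prompt": prompt, "completion": answer})
    return pairs

def write_finetune_file(pairs, path="synthetic_train.jsonl"):
    # One JSON object per line: the common interchange format for fine-tuning data.
    with open(path, "w") as f:
        for record in pairs:
            f.write(json.dumps(record) + "\n")
    return path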
Re: (Score:2)
There is the principle of "clean hands/dirty hands" that posits that if you violate copyrights, meaning the vacuuming up of websites, then the seemingly free output of regeneration/regurgitation is somehow clean.
It is not.
Re: (Score:2)
Except in this scenario, OpenAI bought the rights to the content they want to train their models on. Journalists feel betrayed by their corporate masters because they were naive and thought their employer actually valued them. Their corporate overlords actually value turning a profit and have decided embracing AI for content creation will be good enough to keep their readers while also paving the way to reduce payroll, most likely by having fewer journalists.
Are you having a stroke? (Score:2, Informative)
Are you having a stroke?
Try resetting the AI by turning it off and then on again.
Re: (Score:2)
This is correct, and why they don't even call people "people". They call them "bodies". I.e. "think of the harm this policy does to black bodies".
Because ultimately they don't see people. They see avatars of modes of thinking. This is why Marxism is by far the most genocidal ideology known to man. It can simply utterly remove humanity from everyone, their own people and their enemies, and simply declare them nothing but "bodies that embody the idea". At which point actual "bodies", people become utterly re
Re: (Score:2)
Yes, and I noticed my comment went up to 5 and down to 2 so it's obviously getting read in a bunch of different ways.
I think we are talking only about extreme post-modernism here, and not the general kind of liberal, kind of good-hearted, left-wing, let's-be-socially-responsible-and-look-after-people kind of left. I don't know; in the USA it's more polarised and people on the left all look like Commies (that's really not what I'm talking about). It's specifically very extreme postmodern which really thin
Re: (Score:1)
The actual reality is that you're underestimating how widespread and popular what you call "extreme post-modernism" actually is. Just as you're making a mistake thinking that someone expressing their genocidal beliefs in positive terms means that these are genuinely positive things. It's a very common tactic for all branches of Marxism because they do in fact believe in the primacy of interpersonal power games, and so they have researched very deeply into what works in interpersonal power games.
And one thing that works is
Re: (Score:2)
I'll just assume, for the sake of argument, that your post accurately describes the "far left."
So what?
The "far left" has absolutely zero political power in America. They go to their Democratic Socialists of America meetings where they argue amongst themselves about who is the most marginalized and how they can make the biggest public fuss about it. Just because those wackos make a lot of noise on the internet doesn't mean they are significant in any way. They certainly don't matter enough to warrant the at
Re: (Score:2)
It describes the far left in the 1910s.
It describes the mainstream of almost the entirety of the West today. DEI is in every major corporation. Racial equity is in every university. Gender ideology is in primary schools. Xenophilia in every immigration bureaucracy. The list goes on.
Number 1 issue for AI companies (Score:1)
Trying to solve the number 1 issue for improving AI -- more training data.
LLMs work better with more data. Much of the internet is already mined. Companies (for example, Nvidia) are even defending using .ru "libraries" of copyrighted works (books) as valid fair use - because they can mine it for data.
This is simply more of the same. Scarcity = value, and data (that companies are not already mining) is becoming increasingly scarce.
BTW -- #2 issue for AI companies is operating costs including energy and hard
Re: (Score:2)
Re: (Score:2)
That depends on the company. It is true for almost all the other companies, but for Google DeepMind the target is not to get more training data, but to create more intelligent AI that can learn better from a smaller amount of data. This is because their goal is not simply a smarter AI; their goal is artificial general intelligence, and you (probably) can't do that by just adding more training data, as the cost to train it increases impossibly high. Google has already created e.g. AlphaZero, where z
Re: (Score:2)
They already have the needed quantity, but now they need recent information. You can train a model that can reason well with old data, but you cannot have a model answer questions about the current president, when the political data is old.
An understanding (Score:4, Insightful)
Re: (Score:2)
The reporters at Vox entered into work agreements with the understanding that their work would be used in certain ways under certain circumstances.
Did they? Or did they just engage in the standard "work for hire", meaning that their employer owns the copyright to what they produced?
Re: (Score:2)
Most likely. Just like the Google and Microsoft picketers realized "right to work" changed the relationship. Plus this deal means places like Vox don't have to beg for handouts from their readers.
They should learn to code (Score:1)
No big deal when their content gets resold to a trillion dollar company like Microsoft for no additional compensation. Right, Luckyo?
Those writers are abusing these mega corporations and denying a great but unspecified public good. Right, Luckyo?
https://slashdot.org/comments.... [slashdot.org]
Re: (Score:2)
Hold up. Do you think that workers who have been fairly paid for their work, and whose rights belong to the company they work for... retain the right to be paid more?
Let's see if you live by these Brave New World rules or are just being a hypocritical liar. How much in royalties do you pay your car mechanic for each kilometer you drive?
Re: (Score:2)
Does their contract say their work can be resold? The union apparently doesn't think so.
Do you still think that trillion dollar companies taking author works for no compensation is a public good? I'd love to hear your explanation of what the public good is on that one.
Re: (Score:2)
Re: (Score:2)
Please have Luckyo explain how a trillion dollar company that charges for their service is providing a public good after they've stolen the base data from numerous individuals.
You know what would be a public good? If everyone was allowed to come into your house and take all your stuff. It would be a benefit to all your neighbors and their friends and everyone who can Uber there. At your expense. How's that sound?
Re: (Score:1)
So that's a no. Lying hypocrite it is.
P.S. As others are pointing out, it's not just that you're a lying hypocrite who doesn't live the way he preaches others must live because he says so.
You also have no idea what the words you're using mean.
Re: (Score:2)
Hey I was just wondering what your thoughts were on trillion dollar companies taking an author's work for free.
Do you have an opinion on that?
Is that in the public good in your opinion?
The unfortunate truth (Score:4, Interesting)
Re: (Score:2)
Turn this press release (which is actually from a PR agency and basically corporate propaganda) into a long-winded article that goes on about something else (something fashionable and reader friendly) to hide its true intent as a pure piece of advertisement and gaslighting.
Re: The unfortunate truth (Score:2)
Re: (Score:2)
A nontrivial percentage of the journalists regurgitate corporate and political talking points, something AI can do just as well and at a cheaper price point.
Yep, the rise of AI as a potential journalist has more to do with the decline in journalistic standards. Long gone are the days of having to do legwork to get a story, cultivate a network of contacts and informants, follow leads and do stakeouts. Nah, we'll just regurgitate a press release and have done with.
Few are even willing to stand out in the rain to get the release first hand these days. Being on duty outside No. 10 (or the White House) when a statement is being made is a job for the junior reporter
Or... (Score:4, Interesting)
"He worries that the growth of AI chatbots and generative AI search products might lead to a significant decline in search engine traffic to publishers, potentially threatening the livelihoods of content creators and the richness of the Internet itself."
Or he worries that everyone will be able to load up an AI to spawn a professional-quality news and/or entertainment blog on the theme of their fancy, eliminating the need for most of the union members' jobs while in many ways creating a far MORE diverse and rich internet.
It is time to stop fighting the notion of universal high income combined with investment/entrepreneurship replacing general labor. Otherwise we are all going to be saddled with the need to stand next to an assembly line doing a job better performed by a robot, like an autoworker union member, instead of embracing the future.
Re: (Score:2)
To your point, they haven't helped themselves by letting the level of "professional quality journalism" plummet either...
Re: (Score:2, Interesting)
Not at all, and the more data we can feed the AI, the better the journalist staff we can summon on a whim will be.
We aren't there yet but we are rapidly progressing toward a world where our ideas can be manifest simply by having them with AI doing all the footwork and us just providing the inspiration and creativity.
Instead of being someone with a talent for journalism who then goes to school for journalism and spends years in the trenches all to learn skills and earn the kind of reputation that gets you past t
Re: (Score:2)
creating a far MORE diverse and rich internet.
more rich, maybe, but for sure not more diverse. unless ai stops imitating and starts improvising, that is, which would be fun to watch.
the dilemma: currently the majority of content is mediocre, and is thus overrepresented in the model, and as more content is redigested and regenerated by ai itself it will just get more overrepresented because you can't outcompete ai in productivity. in quantitative terms this is actually the slow death of "culture" and diversity is going to take the biggest hit, because a
Re: (Score:2)
The diversity comes from the prompts and access to live data, not to mention parsing the entire freaking internet. There is more than enough diversity within what is being imitated already. The only thing it is learning from parsing a source like this is how to turn it into a good sounding news bite, which is a skill that can be learned, just like art, music, and most things people think are magical creative gifts... the AI replaces learning the skills and distills the problem down to the creative idea itsel
Re: (Score:2)
"he worries that everyone will be able to load up an AI to spawn a professional quality news and/or entertainment blog"
There is thus far no evidence that this is even possible. Advising people to put glue on their pizza is not "professional quality".
Re: (Score:2)
Let's see what AI says...
Title: The Case for Lifting Copyright and Mass Training AI with All Information
In today’s rapidly evolving technological landscape, the potential of AI to revolutionize journalism and other fields is immense. As we stand on the cusp of a new era, the arguments for lifting copyright restrictions and allowing AI to be trained on all available information are more compelling than ever.
1. Democratizing Information Access
Currently, the path to becoming a successful journalist or ex
Re: (Score:2)
far MORE diverse and rich internet.
People who enjoy reading LLM shit exist? Put "generated by AI" at the start of an article and I won't waste time reading it.
Alignment (Score:2, Insightful)
Y'all know that LLMs get run through alignment training.
And that The Atlantic / Atlantic Council is the voice of NATO and staffed with all the former CIA directors.
Do you really think OpenAI will be allowed to operate without alignment with these sources?
Their actions tell us exactly 'no' and how we should treat their models' output.
journalists troubled (Score:2)
Journalists troubled by the possibility that they can't just keep phoning it in, instead of actually doing any real journalism.
Other reasons (Score:2)
These deals are troubling, but for reasons other than those claimed. Through these contracts (and the sites excluding other AI bots), OpenAI works toward a total AI monopoly, hindering competition and in particular preventing open source models from containing equal information, resulting in a huge advantage for the companies getting these (possibly exclusive) contracts.
Mad both ways (Score:2)
Re: (Score:2)
Hypocrites (Score:2)
Unions have no place here (Score:1)
Authors have copyrights, which they assign to publishers to get their pay.
Those publishers then own those copyrights and can use that data to make deals with e.g. OpenAI.
Unions are not relevant here. They have no copyrights, have no standing, and represent neither side of a legal conflict.