YouTube Now Requires Creators To Label AI-Generated Content (cnn.com)
Starting Monday, YouTube creators will be required to label when realistic-looking videos were made using artificial intelligence, part of a broader effort by the company to be transparent about content that could otherwise confuse or mislead users. From a report: When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn't do, alters footage of a real place or event, or depicts a realistic-looking scene that didn't actually occur. The disclosure is meant to help prevent users from being confused by synthetic content amid a proliferation of new, consumer-facing generative AI tools that make it quick and easy to create compelling text, images, video and audio that can often be hard to distinguish from the real thing.
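As a rough sketch of the rule the summary describes - hypothetical, since YouTube hasn't published its implementation, and all names below are invented for illustration - the three checklist questions reduce to a simple any-of-three check:

from dataclasses import dataclass

# Toy model of the upload-time disclosure checklist described above.
# The three questions come straight from the summary; the type and
# function names are made up for illustration only.
@dataclass
class DisclosureChecklist:
    real_person_altered: bool    # real person made to say/do something they didn't
    real_footage_altered: bool   # footage of a real place or event altered
    realistic_synthetic: bool    # realistic-looking scene that never occurred

def needs_ai_label(c: DisclosureChecklist) -> bool:
    # Any single "yes" answer is enough to trigger the viewer-facing label.
    return (c.real_person_altered
            or c.real_footage_altered
            or c.realistic_synthetic)

print(needs_ai_label(DisclosureChecklist(False, False, True)))   # True
print(needs_ai_label(DisclosureChecklist(False, False, False)))  # False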
Online safety experts have raised alarms that the proliferation of AI-generated content could confuse and mislead users across the internet, especially ahead of elections in the United States and elsewhere in 2024. YouTube creators will be required to identify when their videos contain AI-generated or otherwise manipulated content that appears realistic -- so that YouTube can attach a label for viewers -- and could face consequences if they repeatedly fail to add the disclosure.
The disclosure leads to better future training (Score:3)
Re: (Score:1)
Of course it's far better to train future models on data that are marked as AI-generated or not. Bonus points if it makes users happy, but I don't think that's the main drive behind it.
The main and only drive behind it is YouTube trying to make people stop complaining about it.
It's the same reason uploaders have to tag sponsored videos, despite the videos opening with "Brought to you by today's sponsor, $blah"
Re: (Score:2)
Until AI videos can maximize engagement, they are shareholder enemy #1.
Tagging someone as AI sounds like a task for AI (Score:2)
As eXistenZ, The Matrix and Inception explored, how many layers deep do you need to be down to lose touch with what is real and what is simulated?
Judge not lest you be judged. :)
Re: (Score:2)
Make no mistake about it, you can definitely lose touch with reality just fine from layer 0, but I hate to break it to you: we're already at least 12 layers down from right here.
Should be required for all content! (Score:2)
Re: (Score:2)
No way around that, then (Score:2)
When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn't do, alters footage of a real place or event, or depicts a realistic-looking scene that didn't actually occur.
Voluntary compliance always hits 100%, so that's fixed then!
Besides the point (Score:1)
This only works in two cases - for those who want to be transparent about their own content, and for those whose AI is so poor that it gets caught without a disclaimer.
In the first case, creators can already disclose their content as AI in the title, description, or as an in-video message.
In the second case, by identifying videos that are "caught", you can then develop a training database of bad videos which I'm sure could be
Re: (Score:2)
But they're not trying to avoid detection, they're trying to get money. If they get detected after the fact in violation of the TOS, they risk losing money by getting their account demonetized and banned. They can make throwaway accounts, but those will need to wait a while before any payments, which leaves that much longer for them to be detected.
Political, propaganda, or advertising uses, however, would only care about evading initial detection.
Grey areas (Score:1)
Re: (Score:3)
Label it as AI content, then inform your viewers that, because you're trying to educate them about spotting AI, YouTube's guidelines mean you have to label the video as AI content.
Re: (Score:2)
"This video about how to recognize AI content contains AI content." Well f'ing duh.
Re: (Score:2)
There is no such thing as "incidental" use when it comes to AI. You have to actively seek it out. It doesn't magically appear in your work.
Re: (Score:1)
So based on the summary (Score:2)
What about non-A intelligence? (Score:3)
As someone who enjoys using VFX in some of my videos, where will I stand?
Some of those videos portray events that never actually happened or create an illusion by compositing hand-crafted images or models into real-world scenes.
This is *NOT* AI but for all intents and purposes it has the same effect.
So will VFX artists have an exemption, or do they risk being falsely accused under YT's new rules if it's alleged that their videos involve AI rather than just good old hard work and VFX?
Re: (Score:2)
I think you're good as long as you are not trying to fool people. Recent AI tools have considerably lowered the bar for posting fake videos of current events, so there is now a risk of "AI kiddies" posting falsely incriminating videos of political opponents, hugely enlarged political rallies, fabricated violent events, someone's skin colour changed, or anything else that alters the meaning of news. Asking which tool they used is an easy way to address the issue for now.
Also, you can set the AI flag and clarify to your viewers that it's traditional VFX rather than AI.
Requiring creators? I believe in the Easter Bunny (Score:1)
Requiring creators to label their clickbait and spam videos. How is that going to work in practice?
AI-generated scripts? (Score:1)
But who will tell ... (Score:2)
Re:But who will tell ... (Score:4, Interesting)
Actually you raise a good point.
YT swears that humans are involved in the "review" process associated with demonetization or community guideline strikes, but there is an overwhelming mountain of evidence that a "manual review" consists of running the content past another AI system (which invariably produces the same result).
Surely, if users of YT are now required to honestly disclose when they use AI, then YouTube itself should do likewise and stop lying about it.
How do you define AI? (Score:2)
From reading their description, it appears that things like Toy Story would be caught by that, and, well, any movie that has special effects, or any time anyone uses a green screen.
Still think the opposite makes more sense. (Score:3)
Make all sensors create proof of where the content came from: that "this file came from a Sony ... sensor" or whatever.
I swear there was a company doing something like that, adding metadata to files to prove they're not AI-generated. We just need to stop assuming "I saw it, so it must be true!" - or in other words, label the things that are real, not the fakes. Come on, Google and Apple!
Taking that a step further... attribute the "real" things to an identity. Today that's for copyright, but I'm hoping later it'll be "the source who shared this" (work that we might value, reward them for, or even just acknowledge).
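For what it's worth, the effort the parent is probably remembering is the C2PA / Content Authenticity Initiative work on signed provenance metadata. Below is a minimal, hypothetical sketch of the idea - the manifest layout and function names are invented, and real systems embed standardized, certificate-chained manifests in the file - showing a device signing a hash of its sensor output and a verifier checking it, using the third-party Python "cryptography" package:

import json
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sign_capture(pixels: bytes, device_id: str,
                 key: ed25519.Ed25519PrivateKey) -> dict:
    # Camera-side: hash the raw sensor output and sign hash + device ID.
    digest = hashlib.sha256(pixels).hexdigest()
    payload = json.dumps({"sha256": digest, "device": device_id}).encode()
    return {"payload": payload, "signature": key.sign(payload)}

def verify_capture(pixels: bytes, manifest: dict,
                   pub: ed25519.Ed25519PublicKey) -> bool:
    # Platform-side: check the signature, then check the pixels match.
    try:
        pub.verify(manifest["signature"], manifest["payload"])
    except InvalidSignature:
        return False  # manifest forged or altered
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(pixels).hexdigest()

# Demo with a throwaway key pair standing in for a vendor-issued device key.
key = ed25519.Ed25519PrivateKey.generate()
frame = b"\x00" * 1024  # stand-in for raw sensor data
m = sign_capture(frame, "sony-sensor-1234", key)
print(verify_capture(frame, m, key.public_key()))         # True
print(verify_capture(frame + b"x", m, key.public_key()))  # False: edited

The point of signing a hash rather than trusting metadata is that any later edit, AI or otherwise, breaks verification - which is exactly the "label things that are real" property the parent is asking for.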
Question (Score:2)
Does it actually require AI content to be marked as AI, or does it require content that looks real to be marked as not real?
I.e., if I made a realistic scene in Blender, does that need marking, versus a fantasy-looking AI generation?