YouTube Adapts Its Policies For the Coming Surge of AI Videos (techcrunch.com)
Sarah Perez reports via TechCrunch: YouTube today announced how it will approach handling AI-created content on its platform with a range of new policies surrounding responsible disclosure as well as new tools for requesting the removal of deepfakes, among other things. The company says that, although it already has policies that prohibit manipulated media, AI necessitated the creation of new policies because of its potential to mislead viewers if they don't know the video has been "altered or synthetically created." One of the changes that will roll out involves the creation of new disclosure requirements for YouTube creators. Now, they'll have to disclose when they've created altered or synthetic content that appears realistic, including videos made with AI tools. For instance, this disclosure would be used if a creator uploads a video that appears to depict a real-world event that never happened, or shows someone saying something they never said or doing something they never did.
It's worth pointing out that this disclosure is limited to content that "appears realistic," and is not a blanket disclosure requirement on all synthetic video made via AI. "We want viewers to have context when they're viewing realistic content, including when AI tools or other synthetic alterations have been used to generate it," YouTube spokesperson Jack Malon told TechCrunch. "This is especially important when content discusses sensitive topics, like elections or ongoing conflicts," he noted. [...] The company also warns that creators who repeatedly fail to properly disclose their use of AI will be subject to "content removal, suspension from the YouTube Partner Program, or other penalties." YouTube says it will work with creators to make sure they understand the requirements before they go live. But it notes that some AI content, even if labeled, may be removed if it's used to show "realistic violence" if the goal is to shock or disgust viewers. [...]
Other changes include the ability for any YouTube user to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual -- aka a deepfake -- including their face or voice. But, the company clarifies that not all flagged content will be removed, making room for parody or satire. It also says that it will consider whether or not the person requesting the removal can be uniquely identified or whether the video features a public official or other well-known individual, in which case "there may be a higher bar," YouTube says. Alongside the deepfake request removal tool, the company is introducing a new ability that will allow music partners to request the removal of AI-generated music that mimics an artist's singing or rapping voice.
what about policies on ads (Score:2)
what about policies on ads
Re:what about policies on ads (Score:5, Informative)
Deepfake or News? YouTube Can't Tell Either (Score:2)
Plus, music partners can now request removal of AI-generated songs. Can't wait for the first AI-creat
Re: (Score:2)
The problem with "AI" is that it's a bit too vague a term.
Does it use ASR? Does that need to be disclosed? Which model/engine?
Does it use TTS? Does that need to be disclosed? Which model/engine?
Does it use an LLM? Does that need to be disclosed? Which model/engine?
If you create an interactive chatbot that basically has a realistic face and reacts to your input, does it need to be disclosed that it is an AI? What if it starts making sweeping generalizations, or goes rogue like the Tay chatbot?
People do not yet understa
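To make the granularity question concrete, here is a minimal sketch of what a per-video disclosure manifest might look like if YouTube required component-level declarations. All field names and the class itself are my own assumptions for illustration, not part of YouTube's actual policy or any real API:

```python
# Hypothetical AI-disclosure manifest for a single video. Illustrates the
# question above: which components (ASR, TTS, LLM) would need declaring?
# Everything here is an assumption, not YouTube's real schema.

from dataclasses import dataclass, field


@dataclass
class AIDisclosure:
    uses_asr: bool = False          # automatic speech recognition (e.g. captions)
    uses_tts: bool = False          # text-to-speech narration
    uses_llm: bool = False          # LLM-generated script
    models: list[str] = field(default_factory=list)  # which models, if declared

    def any_ai_used(self) -> bool:
        """True if any AI component was used anywhere in the pipeline."""
        return self.uses_asr or self.uses_tts or self.uses_llm


d = AIDisclosure(uses_tts=True, models=["hypothetical-tts-model"])
print(d.any_ai_used())  # True
```

Even this toy version shows the policy problem: a video with AI-generated captions and one with a fully synthetic deepfake both return True, yet only the latter "appears realistic" in the sense the policy targets.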
Further indication YouTube is not a 'pipe' (Score:4, Insightful)
Re:Further indication YouTube is not a 'pipe' (Score:5, Interesting)
Re: (Score:2)
Re: (Score:1)
Re:Further indication YouTube is not a 'pipe' (Score:4, Insightful)
1) All social media companies (aka forums) and search companies (aka search engines, including AI search engines) perform implicit copyright infringement on a massive scale all the time. They would not be able to exist without breaking the law constantly. For example, when a comment, song or video is posted on a forum, the forum owner is republishing and modifying a document (aka comment) which doesn't belong to them. Copyright violation! When a search engine collects and processes vast amounts of documents (aka web pages) which don't belong to them, that's a copyright violation too!
2) To allow these companies to *exist*, the DMCA protects them from liability for the clear implicit copyright violations they do every single day, but only as long as they follow the rules of the DMCA. The rules are, roughly: do not edit or censor the documents, and promptly stop publishing any document where the real copyright holder contacts them asking them to stop. This is inspired by the rules for common carriers (aka phone companies), and also exists because users often publish things that don't belong to them, and the tech companies often hoover up things they shouldn't.
3) Youtube is playing a balancing act but is probably not making editorial decisions like a newspaper. If you can prove that they do, then the DMCA protections would fall away for Youtube due to not following the rules, and we could all sue them for the massive implicit copyright violations they commit constantly. Payday! More likely, they would just be told by a judge to stop doing it or else.
Essentially Spam Filtering (Score:2)
This is a monumental task.
YouTube needs to find an effective and *efficient* method to filter out what essentially amounts to spam videos. Otherwise, their expenses for data storage will skyrocket. I've noticed an increasing number of videos featuring robotic voices reciting snippets from Wikipedia, combined with stock photos and videos.
I wonder if they'll eventually implement something like a "view-count deposit".
Like, people pay something like $1 to upload a video, and if that video reaches a set number o
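A minimal sketch of how such a "view-count deposit" might settle. The $1 figure comes from the comment above; the view threshold and all names are made-up assumptions:

```python
# Hypothetical "view-count deposit" mechanism as described above.
# The $1 deposit comes from the comment; the threshold is an assumption.

DEPOSIT_CENTS = 100        # $1 paid when the video is uploaded
VIEW_THRESHOLD = 10_000    # assumed view count needed to earn it back


def settle_deposit(view_count: int) -> int:
    """Cents refunded to the uploader when the deposit period ends."""
    return DEPOSIT_CENTS if view_count >= VIEW_THRESHOLD else 0


print(settle_deposit(25_000))  # 100 (deposit returned)
print(settle_deposit(300))     # 0 (deposit forfeited, deterring bulk spam)
```

The design idea is that low-effort spam uploads rarely clear the threshold, so spamming at scale costs real money, while legitimate creators get their deposit back.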
Copyright (Score:2)
AFAIK, at this time AI-generated art is not eligible for copyright protection. So, no take-down notices or ownership disputes.
This, by itself, might be enough to put a damper on AI-generated content. Difficult to monetize means little motivation to put it out there in the first place.
Re: (Score:2)
They are not difficult to monetize; they just have a different customer. A deepfake of a politician saying false things is not expected to bring in ad money, but to help an adversary.
So does this include movie trailers that use AI? (Score:3)
Re: (Score:1)
Japan has lots of "vtubers", or virtual YouTubers. Animated CG or drawn characters, often with a computer generated voice.
Tools are available to take webcam footage of your face and recreate the expressions on the animated avatar, and now we have AI voice changers that repeat what you say, complete with inflections, in an anime girl voice.
Japanese people like privacy so vtubers are popular, as they allow the creator to make videos without revealing their identity. There is some controversy though, particula
Nothing but lip service (Score:3)
Auto-tune is AI manipulation (Score:2)
I guess they'll have to flag half the pop videos on YouTube.
Deceptive corporation... (Score:3)
I bet malicious/deceptive actors' videos are more likely to be taken down by an automated erroneous DMCA notice than because someone has failed to responsibly disclose that they've used AI.