
YouTube Adapts Its Policies For the Coming Surge of AI Videos (techcrunch.com)

Sarah Perez reports via TechCrunch: YouTube today announced how it will approach handling AI-created content on its platform with a range of new policies surrounding responsible disclosure as well as new tools for requesting the removal of deepfakes, among other things. The company says that, although it already has policies that prohibit manipulated media, AI necessitated the creation of new policies because of its potential to mislead viewers if they don't know the video has been "altered or synthetically created." One of the changes that will roll out involves the creation of new disclosure requirements for YouTube creators. Now, they'll have to disclose when they've created altered or synthetic content that appears realistic, including videos made with AI tools. For instance, this disclosure would be used if a creator uploads a video that appears to depict a real-world event that never happened, or shows someone saying something they never said or doing something they never did.

It's worth pointing out that this disclosure is limited to content that "appears realistic," and is not a blanket disclosure requirement on all synthetic video made via AI. "We want viewers to have context when they're viewing realistic content, including when AI tools or other synthetic alterations have been used to generate it," YouTube spokesperson Jack Malon told TechCrunch. "This is especially important when content discusses sensitive topics, like elections or ongoing conflicts," he noted. [...] The company also warns that creators who consistently fail to disclose their use of AI properly will be subject to "content removal, suspension from the YouTube Partner Program, or other penalties." YouTube says it will work with creators to make sure they understand the requirements before they go live. But it notes that some AI content, even if labeled, may be removed if it depicts "realistic violence" intended to shock or disgust viewers. [...]

Other changes include the ability for any YouTube user to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual -- aka a deepfake -- including their face or voice. But the company clarifies that not all flagged content will be removed, making room for parody or satire. It also says that it will consider whether the person requesting the removal can be uniquely identified, or whether the video features a public official or other well-known individual, in which case "there may be a higher bar," YouTube says. Alongside the deepfake removal request tool, the company is introducing a new ability that will allow music partners to request the removal of AI-generated music that mimics an artist's singing or rapping voice.
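Read as a decision procedure, the rules described above compose roughly as follows. This is a minimal sketch in Python; every name and rule boundary here is a paraphrase of the announcement, not anything YouTube has published as code or an API:

```python
# Hypothetical sketch of the disclosure and removal rules described in
# the summary. All names are invented for illustration; YouTube has not
# published an actual algorithm or API for these decisions.

from dataclasses import dataclass

@dataclass
class Video:
    is_synthetic: bool        # altered or created with AI tools
    appears_realistic: bool   # could be mistaken for real footage
    creator_disclosed: bool   # carries the new disclosure label
    shock_violence: bool      # realistic violence meant to shock or disgust

def needs_disclosure(v: Video) -> bool:
    # Disclosure applies only to realistic synthetic content,
    # not to all AI-generated video.
    return v.is_synthetic and v.appears_realistic

def violates_policy(v: Video) -> bool:
    # Labeled or not, shock-violence content can still be removed;
    # otherwise the violation is a missing disclosure.
    if v.shock_violence:
        return True
    return needs_disclosure(v) and not v.creator_disclosed

def deepfake_removal_bar(is_parody: bool, is_public_figure: bool) -> str:
    # Per the summary: parody/satire gets leeway, and public officials
    # or well-known individuals face "a higher bar" for removal.
    if is_parody:
        return "may stay up"
    return "higher bar" if is_public_figure else "standard review"
```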

  • What about policies on ads?

  • Ah, YouTube, where cat videos meet cutting-edge AI policy. So, they're introducing a "Spot the Bot" disclosure for videos now? That's like asking HAL 9000 to kindly announce, "I'm sorry, Dave, I'm about to fake this." And let's not forget the deepfake removal tool - because nothing screams "efficiency" like a system built by the same folks who recommend conspiracy theories after watching one baking video.

    Plus, music partners can now request removal of AI-generated songs. Can't wait for the first AI-creat
    • by Kisai ( 213879 )

      The problem with AI is that the term is a bit too vague.

      Does it use ASR? Does that need to be disclosed? Which model/engine?
      Does it use TTS? Does that need to be disclosed? Which model/engine?
      Does it use an LLM? Does that need to be disclosed? Which model/engine?

      If you create an interactive chatbot that has a realistic face and reacts to your input, does it need to be disclosed that it is an AI? What if it starts making sweeping generalizations, or goes rogue like the Tay chatbot?

      People do not yet understa
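One way to make the parent's questions tractable would be a per-component manifest attached to each upload. The sketch below is purely hypothetical; nothing in YouTube's announcement describes such a format, and all field names and the disclosure rule are invented:

```python
# Hypothetical per-component AI-usage manifest for an upload, answering
# the parent's "which model/engine?" questions. Nothing here reflects an
# actual YouTube data format.

from dataclasses import dataclass, field

@dataclass
class AIComponent:
    kind: str   # e.g. "ASR", "TTS", "LLM", "avatar"
    model: str  # engine identifier, e.g. "some-tts-v2" (illustrative)

@dataclass
class UploadManifest:
    components: list[AIComponent] = field(default_factory=list)

    def requires_disclosure(self) -> bool:
        # One plausible rule: transcription (ASR) is invisible to viewers,
        # while generative components produce audience-facing output.
        generative = {"TTS", "LLM", "avatar"}
        return any(c.kind in generative for c in self.components)

manifest = UploadManifest([AIComponent("ASR", "some-asr-v3"),
                           AIComponent("TTS", "some-tts-v2")])
assert manifest.requires_disclosure()  # the TTS track triggers disclosure
```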

  • by SmaryJerry ( 2759091 ) on Tuesday November 14, 2023 @05:51PM (#64005981)
    YouTube continues to make editorial decisions. These large tech platforms have been compared to telephone companies, which are not liable for their users' content or postings; yet the more editorial and moderation decisions they make like this, the closer they move to being a publication like a newspaper. Newspapers pay writers to produce content that aligns with their managerial decisions, and YouTube pays content creators to do the same. A phone company would not listen in on your call and tell you that you can't imitate someone. Why does YouTube want to take on the job of the government and the police? I understand they've had to do this for copyright, largely because movie and music label lobbyists tried to take them down, but now it seems they are forcing the job of the police and the public onto themselves.
    • by SmaryJerry ( 2759091 ) on Tuesday November 14, 2023 @05:59PM (#64006007)
      YouTube and all tech companies should, at minimum, have to disclose every moderation action they take, especially when it involves taking money away from content creators, aka their workers.
      • Because, why? It's their platform. If you don't like it, don't use it. If enough people stopped using it, the free market would decide its fate.
        • It is to protect the public from being misled. Imagine a website that bans every person who says red is a good color. Everyone who visits the website would be misled into thinking people don't like red because of YouTube's manipulation. If they have to let the public know who was banned or demonetized, and why, they will be much more accountable to the public, who can then know the situation. Now imagine it's not red but factors like race, gender, or political belief. You can see why this would be impor
    • by martin-boundary ( 547041 ) on Tuesday November 14, 2023 @07:38PM (#64006185)
      The reason the tech companies have been doing this goes back to the DMCA. The DMCA protection argument goes something like this:

      1) All social media companies (aka forums) and search companies (aka search engines, including AI search engines) perform implicit copyright infringement on a massive scale all the time. They would not be able to exist without breaking the law constantly. For example, when a comment, song, or video is posted on a forum, the forum owner is republishing and modifying a document (aka the comment) which doesn't belong to them. Copyright violation! A search engine collects and processes vast numbers of documents (aka web pages) which don't belong to it. Copyright violation!

      2) To allow these companies to *exist*, the DMCA protects them from liability for the clear implicit copyright violations they commit every single day, but only as long as they follow the rules of the DMCA. The rules are, roughly: do not edit or censor the documents, and promptly stop publishing any document when the real copyright holder contacts them and asks them to stop. This is inspired by the rules for common carriers (aka phone companies), and also by the fact that users often publish things that don't belong to them, and the tech companies often hoover up things they shouldn't.

      3) Youtube is playing a balancing act but is probably not making editorial decisions like a newspaper. If you can prove that they do, then the DMCA protections would fall away for Youtube due to not following the rules, and we could all sue them for the massive implicit copyright violations they commit constantly. Payday! More likely, they would just be told by a judge to stop doing it or else.

  • This is a monumental task.
    YouTube needs to find an effective and *efficient* method to filter out what essentially amounts to spam videos. Otherwise, their expenses for data storage will skyrocket. I've noticed an increasing number of videos featuring robotic voices reciting snippets from Wikipedia, combined with stock photos and videos.

    I wonder if they'll eventually implement something like a "view-count deposit".
    Like, people pay something like $1 to upload a video, and if that video reaches a set number o
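The truncated proposal is clear enough to sketch: a small upload deposit, refunded once the video clears a view threshold, so spam uploads cost their authors money. Only the $1 figure comes from the comment; the threshold and all names below are assumptions:

```python
# Hypothetical "view-count deposit" from the parent comment: pay to
# upload, get the deposit back if the video earns enough views.
# The refund threshold is an assumed value.

DEPOSIT_CENTS = 100               # the comment's suggested $1
REFUND_THRESHOLD_VIEWS = 10_000   # assumed "set number of views"

def settle_deposit(view_count: int) -> int:
    """Return the cents refunded to the uploader."""
    return DEPOSIT_CENTS if view_count >= REFUND_THRESHOLD_VIEWS else 0

print(settle_deposit(25_000))  # 100 -- refunded
print(settle_deposit(120))     # 0   -- spam video forfeits the deposit
```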

  • AFAIK, at this time AI-generated art is not eligible for copyright protection. So, no take-down notices or ownership disputes.

    This, by itself, might be enough to put a damper on AI-generated content. Difficult to monetize means little motivation to put it out there in the first place.

    • They are not difficult to monetize; they have a different customer. A deepfake of a politician saying false things is not expected to bring in money from commercials but to help an adversary.

  • by rapjr ( 732628 ) on Tuesday November 14, 2023 @08:02PM (#64006225)
    Movies have been created using fake and modified content with real/fake/modified people for a long time. The content in them would certainly seem to fit the description of Youtube's new policies. Also fan films, independent films, advertisements, CGI training videos, ...
    • by AmiMoJo ( 196126 )

      Japan has lots of "vtubers", or virtual YouTubers. Animated CG or drawn characters, often with a computer generated voice.

      Tools are available to take webcam footage of your face and recreate the expressions on the animated avatar, and now we have AI voice changers that repeat what you say, complete with inflections, in an anime girl voice.

      Japanese people like privacy so vtubers are popular, as they allow the creator to make videos without revealing their identity. There is some controversy though, particula

  • by luminate ( 318382 ) on Tuesday November 14, 2023 @08:10PM (#64006239)
    YouTube doesn't even seem to care about all the blatantly fake clickbait thumbnails that have infested the site for years as it is. I'm sure they'll get right on properly moderating AI-generated content. /s
  • I guess they'll have to flag half the pop videos on YouTube.

  • by VeryFluffyBunny ( 5037285 ) on Wednesday November 15, 2023 @02:51AM (#64006681)
    ...requires people to "responsibly disclose" that they're being deceptive when they're being deceptive, but only sometimes.

    I bet malicious/deceptive actors' videos are more likely to be taken down by an automated erroneous DMCA notice than because someone has failed to responsibly disclose that they've used AI.

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...