
YouTube Now Requires Creators To Label AI-Generated Content (cnn.com)

Starting Monday, YouTube creators will be required to label when realistic-looking videos were made using artificial intelligence, part of a broader effort by the company to be transparent about content that could otherwise confuse or mislead users. From a report: When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn't do, alters footage of a real place or event, or depicts a realistic-looking scene that didn't actually occur. The disclosure is meant to help prevent users from being confused by synthetic content amid a proliferation of new, consumer-facing generative AI tools that make it quick and easy to create compelling text, images, video and audio that can often be hard to distinguish from the real thing.

Online safety experts have raised alarms that the proliferation of AI-generated content could confuse and mislead users across the internet, especially ahead of elections in the United States and elsewhere in 2024. YouTube creators will be required to identify when their videos contain AI-generated or otherwise manipulated content that appears realistic -- so that YouTube can attach a label for viewers -- and could face consequences if they repeatedly fail to add the disclosure.
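
As a rough illustration of the mechanics described above, the checklist amounts to three yes/no questions, any one of which triggers a viewer-facing label. A hypothetical sketch follows; the field names are invented, since YouTube's actual upload form and schema are not public.

    # Hypothetical model of the upload-time disclosure checklist; the field
    # names are invented and are not YouTube's actual schema.
    disclosure = {
        "real_person_says_or_does_fake": False,  # real person made to say/do something they didn't
        "alters_real_place_or_event": False,     # altered footage of a real place or event
        "realistic_scene_never_occurred": True,  # realistic-looking scene that never happened
    }

    # Any "yes" answer means the platform attaches an AI-content label.
    needs_ai_label = any(disclosure.values())
    print(needs_ai_label)  # True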


Comments:
  • by fleeped ( 1945926 ) on Monday March 18, 2024 @09:55AM (#64324913)
    Of course it's far better to train future models on data that are marked as AI-generated or not. Bonus points if it makes users happy, but I don't think that's the main drive behind it.
    • by Anonymous Coward

      Of course it's far better to train future models on data that are marked as AI-generated or not. Bonus points if it makes users happy, but I don't think that's the main drive behind it.

      The main and only drive behind it is YouTube trying to make people stop complaining about it.

      It's the same reason uploaders have to tag sponsored videos, despite them opening with an introduction saying "Brought to you by today's sponsor, $blah".

  • As eXistenZ, The Matrix and Inception explored, how many layers deep do you need to go before losing touch with what is real and what is simulated?

    Judge not lest you be judged. :)

    • Make no mistake about it, you can definitely lose touch with reality just fine from layer 0, but I hate to break it to you: we're already at least 12 layers down from right here.

  • How about labeling all your trash videos with AI-generated voices while you're at it? My kids watch a lot of YouTube, and about half of the videos are the dreaded low-effort "YouTube Shorts" with clickbait titles and AI-generated voices... typically promising "Try this one easy hack to get free Robux/V-Bucks/etc."... or repackaging old movie clips and pretending they're new stories... very low-effort garbage. Some tell stupid old jokes while flashing random clip art... and some are religious exploitation (these ar
    • What I want is a personal AI, small enough to run as an app on my laptop, that has been trained to recognize and flag the crap that you're complaining about. May as well add political crap and dubious news stories while we're at it. Then I'd have enough time to go for a walk during my lunch break after I browse the Internet.
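
      A toy sketch of that kind of local filter, flagging clickbait-style titles. The patterns here are invented for illustration; a real tool would need an actual trained model and labeled data.

      # Toy local "crap filter": flag clickbait-style video titles.
      # The patterns are made up for illustration, not a real ruleset.
      import re

      CLICKBAIT_PATTERNS = [
          r"\bone easy hack\b",
          r"\bfree (robux|v-?bucks?)\b",
          r"\byou won'?t believe\b",
          r"!!+",
      ]

      def looks_like_clickbait(title: str) -> bool:
          # Flag a title if it matches any known clickbait pattern.
          return any(re.search(p, title, re.IGNORECASE) for p in CLICKBAIT_PATTERNS)

      for title in ["Try This One Easy Hack To Get FREE Robux!!!",
                    "Lecture 3: Fourier Transforms"]:
          print(title, "->", "flagged" if looks_like_clickbait(title) else "ok")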
  • When a user uploads a video to the site, they will see a checklist asking if their content makes a real person say or do something they didn't do, alters footage of a real place or event, or depicts a realistic-looking scene that didn't actually occur.

    Voluntary compliance always hits 100%, so that's fixed then!

  • Isn't the point of AI to eventually get better and better at avoiding such detection?

    This only works in two cases: those who want to be transparent about their own content, and those whose AI is so poor that it gets caught without a disclaimer.

    In the first case, creators can already disclose their content as AI in the title, the description, or as an in-video message.

    In the second case, by identifying videos that are "caught", you can then develop a training database of bad videos, which I'm sure could be
    • But they're not trying to avoid detection; they're trying to get money. If they get detected after the fact in violation of the TOS, they risk losing money by having their account demonetized and banned. They can make throwaway accounts, but those will need to wait a while before any payments, and that's that much longer a window in which to be detected.

      Political, propaganda, or advertising uses, however, would only care about initial detection.

  • There are grey areas that this doesn't seem to account for. For instance, videos on how to identify AI content, or ones discussing instances of its use, with examples. Or AI voiceovers. And where do you draw the line over incidental use?
    • Label it as AI content, then inform your viewers that the fact that you're trying to educate them about spotting AI means that, according to YouTube's guidelines, you have to label the video as AI content.

    • by Calydor ( 739835 )

      "This video about how to recognize AI content contains AI content." Well f'ing duh.

    • And where do you draw the line over incidental use?

      There is no such thing as "incidental" use when it comes to AI. You have to actively seek it out. It doesn't magically appear in your work.
      • by Kreela ( 1770584 )
        That's true if the content is all your own work. A specific example of where confusion could occur is the Glasgow Willy Wonka Experience fiasco, where the promotional images were AI-generated, something that wasn't mentioned in all the news reports and discussions around it. The more this tech spreads, the more it will work its way into audio and visual backgrounds, sometimes by accident.
  • So we're fine if our AI-generated art is obviously not realistic? Crazy fantasy painting-style images or futuristic sci-fi spaceship battles mean no AI warning then?
  • by NewtonsLaw ( 409638 ) on Monday March 18, 2024 @10:54AM (#64325127)

    As someone who enjoys using VFX in some of my videos, where will I stand?

    Some of those videos portray events that never actually happened or create an illusion by compositing hand-crafted images or models into real-world scenes.

    This is *NOT* AI but for all intents and purposes it has the same effect.

    So will VFX artists have an exemption, or do they risk being falsely accused under YT's new rules if it's alleged that their videos involve AI rather than just good old hard work and VFX?

    • I think you're good as long as you are not trying to fool people. Recent AI tools have considerably lowered the bar for posting fake videos of current events, so there is now a risk of "AI kiddies" posting falsely incriminating videos of political opponents, hugely enlarged political rallies, violent events, altered skin colour, or anything else that changes the meaning of the news. Asking which tool they used is an easy way to address the issue for now.

      Also, you can set the AI flag and clarify to your view

  • Requiring creators to label their clickbait and spam videos. How is that going to work in practice?

  • I think that's a good idea, but I don't know how many people are going to follow the rule. What I really care about, though, is what can be done about AI-generated scripts with AI voiceovers. I've seen so much incorrect AI-generated content-farm garbage; sometimes I get pretty far into a video before I realize it's just wrong.
  • ... my chatbot that it isn't human? And who will stand by it with support when its self-image is inevitably crushed?

    • by NewtonsLaw ( 409638 ) on Monday March 18, 2024 @01:20PM (#64325557)

      Actually you raise a good point.

      YT swears that humans are involved in the "review" process associated with demonetization or community-guideline strikes, but there is an overwhelming mountain of evidence pointing to the fact that a "manual review" consists of running the content past another AI system (which invariably produces the same result).

      Surely, if users of YT are now required to honestly disclose when they use AI, then YouTube itself should do likewise and stop lying about it.

  • From reading their description, it appears that things like Toy Story would be caught by that, as would any movie that has special effects, or any time anyone uses a green screen.

  • by OneOfMany07 ( 4921667 ) on Monday March 18, 2024 @03:19PM (#64325847)

    Make all sensors create proof of where the content came from: that "this file came from a Sony ... sensor" or whatever.

    I swear there was a company doing something like that, adding metadata to files to prove they're not AI-generated. We just need to stop assuming "I saw it, so it must be true!" -- in other words, label the things that are real, not the fakes. Come on, Google and Apple!

    Taking that a step further... attribute the "real" things to an identity. Today that's done for copyright, but I'm hoping later it'll be "the source who shared this" (work that we might value, and reward them for or even just acknowledge).
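
    The scheme described here amounts to a digital signature applied at capture time. Below is a minimal sketch of that idea, assuming a hypothetical camera with an embedded signing key; it uses the third-party Python "cryptography" package, and none of the names are a real vendor's or standard's API.

    # Hypothetical sensor-level provenance: the camera signs a hash of the
    # image at capture time; anyone with the maker's public key can later
    # check that a file is unmodified sensor output. Illustrative only.
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    camera_key = Ed25519PrivateKey.generate()   # would live in tamper-resistant hardware
    maker_public_key = camera_key.public_key()  # published by the manufacturer

    def sign_capture(image_bytes: bytes) -> bytes:
        # Sign the SHA-256 digest of the raw sensor output.
        return camera_key.sign(hashlib.sha256(image_bytes).digest())

    def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
        # True only if the file still matches what the sensor signed.
        try:
            maker_public_key.verify(signature, hashlib.sha256(image_bytes).digest())
            return True
        except InvalidSignature:
            return False

    photo = b"...raw sensor data..."
    sig = sign_capture(photo)
    print(verify_capture(photo, sig))         # True: authentic capture
    print(verify_capture(photo + b"x", sig))  # False: file was altered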

  • Does it actually require AI to be marked as AI, or does it require content that looks real to be marked as not real?

    I.e., if I did a realistic scene in Blender, would that need marking, vs. a fantasy-looking AI generation?

"It ain't so much the things we don't know that get us in trouble. It's the things we know that ain't so." -- Artemus Ward aka Charles Farrar Brown

Working...