

YouTube's Sneaky AI 'Experiment': Is Social Media Embracing AI-Generated Content? (yahoo.com)
The Atlantic reports some YouTube users noticed their uploaded videos have since "been subtly augmented, their appearance changing without their creators doing anything..."
"For creators who want to differentiate themselves from the new synthetic content, YouTube seems interested in making the job harder." When I asked Google, YouTube's parent company, about what's happening to these videos, the spokesperson Allison Toh wrote, "We're running an experiment on select YouTube Shorts that uses image enhancement technology to sharpen content. These enhancements are not done with generative AI." But this is a tricky statement: "Generative AI" has no strict technical definition, and "image enhancement technology" could be anything. I asked for more detail about which technologies are being employed, and to what end. Toh said YouTube is "using traditional machine learning to unblur, denoise, and improve clarity in videos." (It's unknown whether the modified videos are being shown to all users or just some; tech companies will sometimes run limited tests of new features.)
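For what it's worth, "traditional machine learning to unblur, denoise, and improve clarity" describes territory classical signal processing has covered for decades. A minimal sketch of that kind of non-generative enhancement, using Gaussian denoising followed by unsharp masking (the function and its parameters are illustrative, not anything YouTube has disclosed):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_frame(frame: np.ndarray, noise_sigma: float = 1.0,
                  amount: float = 1.5) -> np.ndarray:
    """Denoise, then sharpen one grayscale frame (values in [0, 1]) classically."""
    denoised = gaussian_filter(frame, sigma=noise_sigma)   # suppress sensor noise
    blurred = gaussian_filter(denoised, sigma=2.0)         # low-frequency copy
    detail = denoised - blurred                            # high-frequency detail
    return np.clip(denoised + amount * detail, 0.0, 1.0)   # unsharp mask

# A noisy step edge: the flat regions get smoother, the transition stays crisp.
rng = np.random.default_rng(0)
frame = np.zeros((32, 32))
frame[:, 16:] = 1.0
noisy = np.clip(frame + rng.normal(0.0, 0.1, frame.shape), 0.0, 1.0)
out = enhance_frame(noisy)
```

Nothing generative happens here: no pixels are invented, only re-weighted, which is presumably what "not done with generative AI" is meant to convey.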
While running this experiment, YouTube has also been encouraging people to create and post AI-generated short videos using a recently launched suite of tools that allow users to animate still photos and add effects "like swimming underwater, twinning with a lookalike sibling, and more." YouTube didn't tell me what motivated its experiment, but some people suspect that it has to do with creating a more uniform aesthetic across the platform. As one YouTube commenter wrote: "They're training us, the audience, to get used to the AI look and eventually view it as normal."
Google isn't the only company rushing to mix AI-generated content into its platforms. Meta encourages users to create and publish their own AI chatbots on Facebook and Instagram using the company's "AI Studio" tool. Last December, Meta's vice president of product for generative AI told the Financial Times that "we expect these AIs to actually, over time, exist on our platforms, kind of in the same way that [human] accounts do...."
This is an odd turn for "social" media to take. Platforms that are supposedly based on the idea of connecting people with one another, or at least sharing experiences and performances — YouTube's slogan until 2013 was "Broadcast Yourself" — now seem focused on getting us to consume impersonal, algorithmic gruel.
Feh (Score:2)
Best interpretation: They've finally run out of space and are forcing harsher compression.
My interpretation: a sneaky way to individually watermark every single video stream, so that archival copies degrade a little more with each generation. It attacks copying by making it impossible to truly copy.
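If that theory were true, one crude way to do it would be deriving a deterministic, near-invisible noise pattern from each stream's identity, so that two downloads of the "same" video differ pixel-for-pixel. A hypothetical sketch (the stream IDs and +/-1 strength are made up; this is not anything YouTube has described):

```python
import hashlib
import numpy as np

def watermark_frame(frame: np.ndarray, stream_id: str, strength: int = 1) -> np.ndarray:
    """Add a deterministic, near-invisible noise pattern derived from the stream ID."""
    seed = int.from_bytes(hashlib.sha256(stream_id.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    noise = rng.integers(-1, 2, size=frame.shape)   # -1, 0, or +1 per pixel
    return np.clip(frame.astype(int) + strength * noise, 0, 255).astype(np.uint8)

frame = np.full((8, 8), 128, dtype=np.uint8)        # a flat gray test frame
a = watermark_frame(frame, "viewer-a")              # hypothetical stream IDs
b = watermark_frame(frame, "viewer-b")
```

The same stream ID always reproduces the same pattern, so the uploader could later identify which stream a leaked copy came from, while two different viewers' copies never match exactly.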
Re:Feh (Score:5, Interesting)
Don't be so generous.
I would not be surprised if this move was designed to deliberately blur the line and visual difference between creator-generated and AI-generated content.
"Why?" you might ask?
Well, right now YouTube has to hand over more than 50% of its ad revenue to the pesky creators who make the videos people watch on the platform. If they can replace those creators with AI, then they get to keep *all* the money -- and that is a very, very large chunk of change -- more than enough to incentivize such a move.
YouTube (and its parent Google) long ago lost any interest in not being evil and now *everything* they do is all about hiking revenues and boosting that bottom line. If you think otherwise then you're sadly deluded.
training data set (Score:2)
Divide short videos into two buckets based on the percent chance they are viewed when 'suggested' in a user's feed.
The high view-percentage ones will be used as examples of good content to generate/mimic with AI.
The others will be used as examples of poor-quality content.
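The parent's bucketing scheme could be sketched like this (the records, field names, and the 0.5 threshold are invented for illustration; none of this is a real API):

```python
# Hypothetical: split Shorts into "good" and "poor" training buckets by the
# fraction of feed suggestions that turned into views.
videos = [
    {"id": "a", "suggested": 1000, "viewed": 700},
    {"id": "b", "suggested": 1000, "viewed": 120},
    {"id": "c", "suggested": 500, "viewed": 400},
]

def bucket_by_view_rate(videos, threshold=0.5):
    good, poor = [], []
    for v in videos:
        rate = v["viewed"] / v["suggested"]   # chance a suggestion is viewed
        (good if rate >= threshold else poor).append(v["id"])
    return good, poor

good_examples, poor_examples = bucket_by_view_rate(videos)
# good_examples → ["a", "c"], poor_examples → ["b"]
```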
Re: (Score:2)
YouTube (and its parent Google) long ago lost any interest in not being evil and now *everything* they do is all about hiking revenues and boosting that bottom line. If you think otherwise then you're sadly deluded.
So true.
Porn (Score:2)
A whole crop of "influencers" and porn "stars" will soon be out of the business.
Re: (Score:2)
Within a couple of years 80% of porn will be AI.
I agree, but for porn, it usually looks better when softened with a slight blur. Who wants to see pimples?
YouTube is sharpening images.
A whole crop of "influencers" and porn "stars" will soon be out of the business.
Fine with me.
Re: (Score:1)
You're thinking about it wrong. The pimples are proof it's not fake or AI generated. Anything too airbrushed triggers my uncanny valley response. Similar to how fake boobs are super off-putting.
Re: (Score:1)
This is where big tech is going with their $4,000 augmented reality goggles.
Normalize AI porn, attach electrodes to your willy and all of a sudden you're trapped in cyberspace and unwilling to reenter real life. All you need is a catheter and an intravenous drip and you have the Matrix.
Re: (Score:3)
I agree, but for porn, it usually looks better when softened with a slight blur. Who wants to see pimples?
This is why HD video failed to attract any of the porn market; viewers didn't want to see every blemish on a performer's body. Now it seems as if having the blemishes visible will become a marker of shot-live porn, rather than computer-generated. At which point someone will start 'enhancing' the AI-generated 'performers' to add cosmetic blemishes, at which point the industry will have to find another means of distinguishing real actors from AI-generated ones.
Maybe they are trying to use "AI" to compress video (Score:5, Informative)
So maybe they are applying aggressive compression and using a complex upscaling system.
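That hypothesis can be sketched as a toy pipeline: transmit an aggressively downscaled frame, then reconstruct it on playback. Bilinear interpolation stands in here for whatever learned upscaler a real system would use; nothing below reflects YouTube's actual stack:

```python
import numpy as np
from scipy.ndimage import zoom

def compress_then_upscale(frame: np.ndarray, factor: int = 4) -> np.ndarray:
    """Keep 1 pixel in factor^2, then bilinearly upscale back to full size."""
    small = frame[::factor, ::factor]                    # crude stand-in for compression
    return zoom(small, factor, order=1, mode="nearest")  # stand-in for a learned upscaler

frame = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))     # smooth horizontal gradient
restored = compress_then_upscale(frame)
```

On smooth content the reconstruction is close to the original, which is exactly why such a scheme would be tempting: a 4x downscale cuts the pixel count sixteen-fold, and the "enhancement" artifacts people are noticing would be the upscaler papering over the missing detail.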
AI like ads - pay extra to avoid it... (Score:2)
I rather expect an approach like the one with ads...
You don't like AI and want original content only? Pay for a Premium account, just like with ads...
Re: (Score:2)
I rather expect an approach like the one with ads...
You don't like AI and want original content only? Pay for a Premium account, just like with ads...
And then, pay for a Premium account to get fewer ads that can be skipped. Then, the "fewer" and/or the "skippable" will go away. The only thing remaining will be a paid Premium account with lots of non-skippable ads. Free accounts will no longer exist.
Re: (Score:2)
Sure, these tools can use AI. And in fact, before the whole "chatbot" thing, the term AI was known to reference a pretty large scientific field into which one could even fit video compression, to some extent.
Still, that's not YouTube's place to use them. Compression artifacts are a thing, but "enhancing" does not so
Re: (Score:2)
People just want the option to turn the crap off. A toggle would be trivially easy to program. Filters change the aesthetic of the video, oftentimes neither creators nor viewers want that new aesthetic.
Maybe you're used to looking at nothing but airbrushed Instagram photos all day, but some of us still have a foot in the real world. There's certain content we don't want to unnecessarily "stylize".
Enshittification (Score:2)
I saw one creator doing a comparison (in a short, these seem to not yet get "enhanced") and side by side, this "improvement" is pretty bad.
Re: (Score:2)
I have seen several YouTube creators do that and complain about it... and those complaints inspire others to do the same.
Not new (Score:4, Informative)
This is an odd turn for "social" media to take. Platforms that are supposedly based on the idea of connecting people with one another, or at least sharing experiences and performances — YouTube's slogan until 2013 was "Broadcast Yourself" — now seem focused on getting us to consume impersonal, algorithmic gruel.
This part isn't new, anyway.
YouTube has been UsTube since at least 2020, when censorship became celebrated. (And let's be honest, was it ever really about home movies more than about copyright violations and monetization?)
Facebook hasn't been about seeing mostly friends and family posts for a long, long time.
impersonal, algorithmic gruel (Score:1)
The correct term is "prolefeed"
Some people (Score:1)
>some people suspect that it has to do with creating a more uniform aesthetic across the platform. As one YouTube commenter wrote: "They're training us, the audience, to get used to the AI look and eventually view it as normal."
And some people believe the earth is flat, some people believe that the lunar landings were fake, some people believe that Trump is smart and cares about other things than himself...
Some people need to get off drugs
No, we hate it! (Score:2)
Re: (Score:2)
We are tired of social media telling us what we want.
That "We" must be the royal one. If the societal "we" was truly tired of being told what they want, social media would be either reformed or dead.
Re: (Score:2)
Network effects prevent that. It's possible, but unlikely.
Youtube isn't social media (Score:2)
I don't understand this recent trend of describing YouTube as social media. Television isn't social media; you aren't interacting with anyone, you are passively consuming.
To be frank, I don't believe sites like Twitter or Instagram, where people passively follow public figures, are really social media either.
Re: (Score:2)
Yes, but so are many forums, including Slashdot.
It seems pretty rare nowadays to have forums where there is any multi-turn conversation and people actually listening to each other.
People basically just want a TV set that they can yell at.
Re: (Score:2)
I don't understand this recent trend to describe Youtube as social media. Television isn't social media, you aren't interacting with anyone, you are passively consuming.
YouTube seems social in the sense that it fosters an illusion of community and connection. The comments make the illusion even more compelling.
If old-fashioned TV had had a built-in way of connecting viewers with each other - and of allowing some of those viewers to create TV shows of their own - then it too would have been social media.
I seen lots of AI slop on youtube (Score:2)
Re: (Score:2)
There's also now a whole series of "AI-chick interviews X" videos.
So far I've noticed X = historical man on-the-street (roman, etc), and X = sweary middle-aged Brit on vacation.
They are mildly amusing for the first 5 min, but god help us if this is the future.
Three notes about all of this (Score:3)
One: "image enhancement technology to sharpen content".
Umm... no. I've seen the results of this, and while some elements of a frame are sharpened to the point of being obtrusive, other elements are blurred in a manner strongly reminiscent of the obviously AI-generated videos I've seen.
Two: "They're training us, the audience, to get used to the AI look and eventually view it as normal."
This was exactly my thought when I first experienced one of these videos, and I see the point as being pretty much non-debatable. They want to 'blur the lines' - pun intended - between real and virtual. The important thing to remember is that while they definitely want AI to train us, they also want us to train the AI. Every time we complain in a specific manner about a piece of AI slop, we're training the AI to be harder and harder to detect. Maybe we should just STFU and stop helping our overlords. But we won't.
Three: "YouTube has also been encouraging people to create and post AI-generated short videos using a recently launched suite of tools that allow users to animate still photos and add effects like swimming underwater, twinning with a lookalike sibling, and more".
Of course they're encouraging people to train the AI. They want AI to be as undetectable as possible, as soon as possible. The ultimate goal is to replace humans with computers in ALL creative endeavours in ALL fields. We are, purely and simply, creating and training our replacements.
Perception is more than the gateway to our brains - it can re-wire them. People who wear image-inverting glasses for a few days will suddenly see the world "right-side-up" when wearing them. When they remove the glasses, the world is inverted for a while. But if they keep wearing the glasses for too long, will they lose the ability to go back to normal, natural vision? I don't know the answer to that question. But I strongly suspect that if we keep looking at AI slop for too long, we will lose even more of our ability to differentiate between the real and the virtual.
I believe that our brains are being rewired deliberately, with the intent of controlling us. That's been happening for about a century, perhaps more. But in the past decade the pace has accelerated. The control has become more granular, more powerful, and more intimate. They're starting to control what we perceive on a fundamental, neurological level.
I always thought The Matrix was mere entertainment. Now, I'm beginning to think it might be prophetic.
Re: (Score:2)
Once that level is achieved, YouTube and other platforms will start promoting mostly their own AI-slop over 3rd party content producers. This way they'll keep 100% of the ad and subscription revenues generated. No need to share revenue with creators if there are no creators.
Presumably, this transition will be slow enough that most people, viewers as well as creators (whose revenues will first slowly stagnate, then decrease), won't notice.
Censorship like "1984" (Score:3)
We've seen children, nudity, and sex disappear from art. Soon, everything that isn't white, adult-male, Christian, American-speaking AI will disappear from YouTube.
\o/ (Score:1)
Surely the idea of YouTube is to motivate (using any means necessary) people to produce content which will capture others' attention then BLAM, show them an ad at peak interest and ...profit.
I suppose someone has now considered that th