YouTube's Recommender AI Still a Horror Show, Finds Major Crowdsourced Study (techcrunch.com) 81
An anonymous reader shares a report: For years YouTube's video-recommending algorithm has stood accused of fuelling a grab bag of societal ills by feeding users an AI-amplified diet of hate speech, political extremism and/or conspiracy junk/disinformation for the profiteering motive of trying to keep billions of eyeballs stuck to its ad inventory. And while YouTube's tech giant parent Google has, sporadically, responded to negative publicity flaring up around the algorithm's antisocial recommendations -- announcing a few policy tweaks or limiting/purging the odd hateful account -- it's not clear how far the platform's penchant for promoting horribly unhealthy clickbait has actually been rebooted. The suspicion remains it's nowhere near far enough.
New research published today by Mozilla backs that notion up, suggesting YouTube's AI continues to puff up piles of "bottom-feeding"/low-grade/divisive/disinforming content -- stuff that tries to grab eyeballs by triggering people's sense of outrage, sowing division/polarization or spreading baseless/harmful disinformation -- which in turn implies that YouTube's problem with recommending terrible stuff is indeed systemic; a side effect of the platform's rapacious appetite to harvest views to serve ads. That YouTube's AI is still -- per Mozilla's study -- behaving so badly also suggests Google has been pretty successful at fuzzing criticism with superficial claims of reform. The mainstay of its deflective success here is likely the primary protection mechanism of keeping the recommender engine's algorithmic workings (and associated data) hidden from public view and external oversight -- via the convenient shield of "commercial secrecy." But regulation that could help crack open proprietary AI blackboxes is now on the cards -- at least in Europe.
Quelle coincidence (Score:4, Insightful)
Are people raging about this? Or are politicians raging about this, facetiously feigning concern about disinformation, so they can hope to slide their opposing politicians' posts into the disinformation, ergo banned, bag?
Keep an eye on those dirtballs.
Re: (Score:2)
More importantly, is it really "AI" that we are talking about, or is it simply a pattern matching algorithm?
Re:Quelle coincidence (Score:5, Insightful)
It's an algorithm, not particularly AI - page ranking is pretty well documented in its base mechanics, though of course by now who knows how much clever math is piled up on top of it. The challenge is that the page ranking mechanism promotes things that are widely referenced and clicked on, and the right-wing, in particular, has aggressively gamed the system to promote their content. So they create huge pools of sites that cross-link to each other, for example, creating artificial page rank boosts, and whenever Google adjusts the algorithms to block out one form of manipulation they adapt their tactics, in an ongoing cat-and-mouse game. And since they're not going to stop creating propaganda and promoting it however they can, it's not going to go away completely - every time they come up with a new tactic, Google has to figure out how to detect and block that tactic without also blocking legitimate content, which is a pretty challenging task. And since the trolls are well funded and then supported by well-meaning idiots, it's a never-ending battle. A lot like email spammers, but worse.
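The cross-linking trick the parent describes is easy to demonstrate with textbook PageRank. Here's a minimal sketch (power iteration on a toy graph, nothing like Google's production system): a small farm of pages that all link to each other and to a target page "D" ends up ranking D above a page "C" that earns its links honestly.

```python
# Toy PageRank by power iteration. "links" maps each page to the pages
# it links to. All page names here are invented for the example.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # dangling page: spread its rank evenly over everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Honest links: independent pages A and B both link to C.
# Farm: F1-F3 cross-link each other AND all point at target D.
farm = {"A": ["C"], "B": ["C"], "C": [], "D": [],
        "F1": ["F2", "D"], "F2": ["F3", "D"], "F3": ["F1", "D"]}

ranks = pagerank(farm)
assert ranks["D"] > ranks["C"]  # the farm's target outranks the honest page
```

The cross-links matter: they keep rank circulating inside the farm instead of leaking away, so each farm page has more rank to pass on to the target. That's exactly the cat-and-mouse problem - the farm's link structure is the manipulation, and detecting it without penalizing legitimate tightly-linked communities is hard.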
Re: Quelle coincidence (Score:2)
No, that's not the important question. It doesn't matter at all what word you use to describe it.
Re: (Score:1)
What is the difference between the two?
Re: (Score:3, Interesting)
What really bugs me is when I get off-the-wall recommendations based on things I have said aloud. Which sometimes hit a bit too close to home. When I have *never* used nor authorized Google Voice. All that does is make me paranoid. Enough to tape over the selfie cam and super glue the mics on my Android tablet.
Re: (Score:2)
I hear this complaint a lot but, despite having 3 Google Home devices and 2 Android devices in my home, I have never experienced this phenomenon. The closest thing is a search on Amazon will then continue to roll ads for that thing on Amazon itself and maybe a few other sites that display Amazon ads.
I don't go out of my way to block ads but I do block tracking sites, 3rd party cookies and javascript. I also don't install many apps and don't allow apps permission to devices that they don't need.
Re: (Score:2)
Are people raging about this? Or are politicians raging about this
So, I take it from your question that you didn't read the article [slashdot.org].
Re: (Score:1)
Actually, I remembered that bit as being about YouTube, not FaceBook, but the principle applies.
Re: (Score:2)
Then ask yourself why if the GOP is so great at gaming the algorithms, videos from the right are the ones so quickly removed and right wing users so frequently banned? Maybe the truth is that the Left both games and runs the algorithms and actively distorts what you see so that you obey.
Re: (Score:2)
I don't think it requires any nefarious operatives, I think it's just a hot mess. I browse mostly science and history lectures, trance concerts, archeological excavations, handpan music, and the occasional robotics vid. What recommendations do I get? Trump videos, ancient aliens, and newly released rap videos. My guess is that they've tried to make so many tweaks to the algorithm that now it's just a huge pile of spaghetti code that can fork off in any random direction.
Re: (Score:2)
I've found that archeology is one of the jumping on points for conspiracy theory videos, I think its stems from "ancient aliens" type silliness.
GIGO (Score:2)
get rid of "hate crimes" (Re:GIGO) (Score:1, Flamebait)
Hate speech should not be a crime. We should not even have the concept of a "hate crime". If I say I hate murderers then is that hate speech or a hate crime? No. There's a general agreement that murder is bad so hating people that commit that act isn't a bad thing. Okay, what if I were to express hate for people from some nationality, religion, or ethnic group? That's bad, right? Something worthy of a crime? Why? Speech should not be a crime. Not unless it falls into very specific categories. Calling
Hate grabs eyeballs [Re:get rid of "hate crimes"] (Score:2)
Hate speech should not be a crime.
No, but it shouldn't be pushed on people either.
The story is not saying hate speech should be a "crime". The story is about an algorithm that, when you view one video that you ask for, follows that up with another video that is conspiracy theories and detestable garbage... because vile garbage provokes outrage, and outrage glues eyeballs to the video screen.
But I don't think you even read the article [slashdot.org] you're commenting on (e.g., https://www.theguardian.com/te... [theguardian.com])
Re: (Score:2)
The story is not saying hate speech should be a "crime".
That's right, but the comment I replied to suggested using hate speech laws to get people posting hate speech prosecuted and punished. I believe such laws are a bad idea because hate crimes are too subjective and nearly impossible to enforce consistently.
But I don't think you even read the article you're commenting on
I doubt you read the post I replied to. Are you filtering at "+3"?
Re: get rid of "hate crimes" (Re:GIGO) (Score:2)
A hate crime is a crime which has as a goal to suppress a group's basic rights. It has to be a crime in itself first, before it can be a hate crime.
Yes, murder is already illegal, and obviously bad in itself. But murder for the purpose of intimidating other people from exercising their rights, for instance voting, is extra bad.
The state has an additional, important legitimate interest in preventing this, besides the interest it has in preventing murder in general.
Don't conflate this with "hate speech". Hate
Re: (Score:2)
Yep. Same way terrorism is ethically and legally worse than the basic crimes or acts of war employed.
People try to use the term for basically anything they don't like these days, in the hope of capitalising from the stigma associated with it, but the core of the concept is violence aimed at striking fear into the civilian population, as opposed to reducing military capability, which can be more readily defended as "defense".
Re: (Score:2)
Re: (Score:2)
>Hate speech should not be a crime.
Hate speech has the purpose of terrorizing, or suppressing people in and of itself. Or of causing additional harmful action to come to someone or group.
"Hate Crime" is crime which has the additional purposes noted above. So, since the extent of the crime is greater, so should the punishment be. Pretty simple concept.
Re: (Score:2)
Hate speech has the purpose of terrorizing, or suppressing people in and of itself. Or of causing additional harmful action to come to someone or group.
You just defined assault, and we already have laws against that.
https://www.merriam-webster.co... [merriam-webster.com]
Hate speech can be a verbal assault but merely expressing hate is not always assault. The problem has become that hate speech laws are not enforced consistently, and hate crime laws have the same problem of inconsistent enforcement.
"Hate Crime" is crime which has the additional purposes noted above. So, since the extent of the crime is greater, so should the punishment be. Pretty simple concept.
Again, we already have laws against that. To use the example of assault there is simple assault, aggravated assault, and felony assault. Perhaps different jurisdictions define them slightly
Re: (Score:2)
Re:More Fake News (Score:4, Interesting)
On the rare occasion that I visit YouTube I never see any ads, thanks to aggressive adblocking, and I rarely see anything that I would classify as "hate speech, political extremism and/or conspiracy junk/disinformation" because I'm not an idiot and I don't go looking for that sort of thing.
This is just another manufactured controversy.
It's not just a bad algorithm that feeds hate to hateful people; it's also just bad at picking out targeted ads. I was watching my usual news vlogs/podcasts when the topic that day was the problems of trans athletes beating women in competition, and the problems of high rates of illegal immigrants. The next advert I saw was in Spanish for AIDS/HIV testing. Because I guess only gay men want to know about transgendered athletes and if I'm concerned about illegal immigrants then I must speak Spanish. After that was an advert for a hardware store, again in Spanish. Because gay Spanish men like to get cheap wood. Or something. When the topic in the news vlog was Navy pilots seeing UFOs then I would have all kinds of conspiracy theory videos suggested to me. When the topic was Texas passing their law protecting conceal-n-carry without needing a permit then I'd see adverts for classes to get a permit to carry a concealed weapon, NRA membership, and self defense lawyers.
The algorithm is hypersensitive. They pick up on the topic under discussion in the videos and then blast that at you, both in the adverts and in the video picks. Sometimes that is great because I'm killing time on watching videos on military history, and I'll get plenty of content to choose from. It's not so great when I just want to see what's in the news. Just because I watched a news vlog which happened to mention Texas laws on conceal-n-carry and firearm suppressors doesn't mean I'm interested in buying a legally questionable suppressor for a .50 caliber pistol. I prefer .45 caliber.
Re: (Score:2)
Re: (Score:2)
The article wasn't talking about targetted ads. The article was talking about how, when you watch one video on youtoob, it automatically loads another video to play next. If you're watching anything with political content, the series of videos it will choose for you will spiral down from "reasonable" to "questionable" to "fringe" to "complete conspiracy nonsense."
Re: (Score:2)
YouTube is building a profile on what interests the viewer to get them to keep watching videos and therefore keep watching adverts. They will use the same algorithm for both. It's easy to see how the algorithm can take this extreme path by seeing what it thinks you are by seeing what it picks for advertisers.
Re: (Score:2)
At least you're getting something peripherally related to what you're viewing, I can watch a video about pruning tomato plants and get recommendations for UFO cultists. I turned autoplay off a long time ago, and keep turning it off every time they enable it again.
Re: (Score:2)
It's also still regularly banning (Score:4, Interesting)
Re: (Score:2)
Re: (Score:2)
I'll tell you what I think the answer is. You don't want to accept that people really believe the things YouTube is trying to suppress, and really don't want to consider the possibility that they're right. So, easier to dismiss them as paid liars saying things nobody really believes. And YouTube is more than willing to help you bel
I agree (Score:3)
If they won't open up their algorithms for review (Score:2)
then hold the Google execs personally responsible when shit hits the fan.
Youtube Serves Gonna Get Your Guns Ads (Score:2)
Re: (Score:2)
So what is the problem here? Recommended videos aren't just based on the previous video watched. It is based on a profile of you. If you are interested in software rights, you might be interested in gun rights, or human rights, or something similar.
That sounds like pretty dysfunctional profiling. If you search for singer Art Garfunkel, you want to watch some wack conspiracy theories about the presidential debates?
Re: (Score:2)
>"some wack conspiracy theories about the presidential debates?"
It wasn't a "wack conspiracy theory", it was actually a thing. Just because you might not like or agree with something, doesn't make it a conspiracy theory nor "wack".
This is the video in question: https://www.youtube.com/watch?... [youtube.com]
And the article cited by the video was named "Kristen Welker, upcoming presidential debate moderator, has deep Democrat ties"
https://nypost.com/2020/10/17/... [nypost.com]
It was, and still is, quite obvious that the overwhelm
Re: (Score:2)
Just because something is true doesn't necessarily make it 'divisive'. Most of the "deplorables" proudly wave their bigotry flag (often literally). I personally believe in calling a racist a racist, or a misogynist a misogynist, YMMV.
By design. (Score:2)
New research published today by Mozilla backs that notion up, suggesting YouTube's AI continues to puff up piles of "bottom-feeding", low-grade, divisive, disinforming content -- stuff that tries to grab eyeballs by triggering people's sense of outrage,
I'm pretty sure it has no idea about outrage or eyeballs but I'm sure it's prioritizing videos that people spend time on and comment on. I don't think the up-votes or down-votes are a significant factor, only how engaged people were with the video. This is Facebook's strategy to a 't' and it looks like YouTube uses it as well.
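The engagement-first ranking described above has a simple mechanical consequence: a downvote is worth as much as an upvote, because both are engagement. A toy score function makes the point (every weight here is invented for illustration; YouTube's real signals and weights are not public):

```python
# Toy engagement score: watch time and comments dominate, and votes of
# either polarity count the same small amount. All weights are made up.
def engagement_score(video):
    return (1.0 * video["watch_minutes"]
            + 5.0 * video["comments"]
            + 0.1 * (video["upvotes"] + video["downvotes"]))

calm = {"watch_minutes": 200, "comments": 3,
        "upvotes": 50, "downvotes": 2}
outrage = {"watch_minutes": 300, "comments": 120,
           "upvotes": 40, "downvotes": 400}

# The divisive video wins despite ten times more downvotes than upvotes,
# because arguments in the comments ARE engagement.
assert engagement_score(outrage) > engagement_score(calm)
```

Under any scoring of this shape, content that provokes long comment fights outranks content people quietly approve of, which is exactly the outrage-amplification pattern the Mozilla study describes.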
Re: (Score:2)
Mozilla is irrelevant (Score:2, Insightful)
Used to be a fan, but they have become irrelevant and woke. We should be focusing on the censorship of the tech giants, not being recommended things we disagree with. People should be educated to use their own brains and think for themselves and not be spoon-fed "this is true, we swear" government sanctioned and tech monopoly parroted junk.
Re: (Score:2, Interesting)
If I had points I would mod you up.
First, Mozilla needs to focus on browsers and the like, not website content on Youtube. And second, the three SPECIFIC examples they sited in the article of inappropriate video recommendations were not "hate speech" (even if something like that could be defined, which it can't), were not misinformation, and were at least somewhat relevant (although far from great recommendations). What did the three have in common? They were more conservative rather than leftist. OMG H
Maybe people are not interested in (Score:1)
So called authoritative sources they want entertainment
Forget Recommendations ... (Score:2)
I'm just trying to relocate youtube videos from 7 years ago. They still exist. I have relied on Google's search engine for ages. Search criteria are quite specific. Yet it cannot relocate those videos. Move over to MS Bing - same search criteria. Bingo! No problem.
Who'd a thunk it? I might make Bing my default search engine when it comes to searching for youtube videos.
Anti-vax vids are literally slaughtering elderly (Score:1)
people like my mom and her boyfriend.
Some elderly people don't have the wits left to tell nonsense from truth, and the emotional fear mongering of the nuts can make it impossible to get life saving medicine to YouTube's victims.
All the libertarians and god knows what else in this comment section need to get a sense of perspective. Youtube is literally killing my family!
If you want the real horror show .... (Score:1)
go into https://www.youtube.com/feed/t... [youtube.com]
On the other hand: "Squirrel!" (Score:2)
2009: A squirrel video is posted.
2010: Nothing
etc.
2020: Nothing
2021: Youtube AI recommends squirrel video to all users resulting in 3.7 million views.
Very shallow algorithm in my experience (Score:2)
It does four things.
It recommends random old videos.
It recommends videos trending popular with everyone.
It recommends several things in line with one of the last 10 videos you viewed.
It recommends youtube products to you.
If you watch a covid video, suddenly you have a dozen covid videos.
If you watch about 10 videos on other subjects, it forgets you ever watched a covid video.
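The four behaviors listed above amount to a very shallow recommender, which can be mocked up in a few lines (every name, weight, and data structure here is invented for illustration, not a claim about YouTube's actual code):

```python
import random

# Invented sample data: a tiny catalog tagged by topic, plus what's
# trending and what the platform wants to promote.
CATALOG = [{"id": i, "topic": t} for i, t in enumerate(
    ["covid", "history", "music", "covid", "robotics", "covid", "news"])]
TRENDING = [CATALOG[6], CATALOG[1]]
OWN_PRODUCTS = [{"id": 100, "topic": "youtube-premium"}]

def recommend(history, seed=42):
    rng = random.Random(seed)
    picks = []
    picks += rng.sample(CATALOG, 2)      # 1. random old videos
    picks += TRENDING                    # 2. trending with everyone
    recent = history[-10:]               # only the last 10 views count,
    anchor = rng.choice(recent)          # so older interests are forgotten
    picks += [v for v in CATALOG
              if v["topic"] == anchor["topic"]]  # 3. several like one recent view
    picks += OWN_PRODUCTS                # 4. the platform's own products
    return picks

# Watch one covid video and the recommendations flood with covid content.
recs = recommend(history=[{"id": 0, "topic": "covid"}])
assert sum(1 for v in recs if v["topic"] == "covid") >= 3
```

The 10-item history window reproduces the parent's observation directly: one covid video fills the feed with covid, and ten videos on anything else push it out of the window entirely.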
I found the Netflix algorithm was similarly shallow. I have a *wide* range of tastes. It couldn't keep track of what I liked. So I'
Re: (Score:2)
You're right, that's what it does.
But most of all - it focuses on content that can relate to the most profitable ads.
Netflix however, was a total chaos (at least for me) in the beginning, after 6 years with Netflix, it's actually starting to "get me".
I tend to watch a lot of 80s movies and shows, and a certain genre of stand up comedy.
There are also certain actors I like and some I just don't like at all.
This is where youtube and netflix go their separate ways, netflix actually succeeds after a few years le
Re: (Score:1)
for me, when I could give 1-5 ratings, Netflix got me. Once it went to "thumbs up/thumbs down", it became painfully unable to get me any more.
the funny thing about youtube is that I literally don't see or remember them even without an adblocker.
And *forced* videos that won't let me skip, just make me leave the video.
I'm probably thinking of getting an adblocker tho because the wasted time (even if I don't recall the ads) is getting to be too much.
Hmm (Score:2)
Re: (Score:2)
Search is broken (Score:2)
I am more concerned about the search mechanism. It is considerably shorter than it used to be and it is no longer possible to find anything in the long tail. I know the approximate names of videos from ten years ago and I can no longer find them using search. You also cannot find any low subscriber content that is being promoted by high subscriber channels. The whole thing has become an amplifier for a very limited set of content creators. You can no longer sort your subscription list so it is even tiresome
Re: (Score:2)
Re: (Score:2)
How? It's profitable, and profit is the true religion of the 21st century.
Re: (Score:2)
Not in my feed (Score:2)
How's that different from TV? (Score:2)
And let's not forget that the system bases its recommendations largely on prior viewing history, so mine isn't anything like what's described. Yesterday, it was all British comedy, music I listen to and scientists. Unless Monty Python, Neil deGrasse Tyson and Mr. Bungle count as ""bottom-feeding" /low-grade /divisive /disinforming content" [quote edited because the ascii art filters are complete shit], I've no idea
Re: (Score:2)
You're lucky, I have no idea why it spits out the recommendations that I get. I have **never** clicked on an 'ancient aliens' or Illuminati video, but there they are! Was watching videos on rooting mulberry plants from cuttings and the next recommendations were rap videos and anti-GMO foods.
Re: (Score:2)
No, it's perfect. (Score:2)
Actually, YouTube's recommender is perfect. Pretty much every single item it recommends is one that I have watched. Previously.
I actually noticed this myself (Score:2)
Truly useless. (Score:1)
Re: (Score:1)