AI Can't Protect Us From Deepfakes, Argues New Report (theverge.com)
A new report from Data & Society raises doubts about automated solutions to deceptively altered videos, including machine learning-altered videos called deepfakes. Authors Britt Paris and Joan Donovan argue that deepfakes, while new, are part of a long history of media manipulation -- one that requires both a social and a technical fix. Relying on AI could actually make things worse by concentrating more data and power in the hands of private corporations. The Verge reports: As Paris and Donovan see it, deepfakes are unlikely to be fixed by technology alone. "The relationship between media and truth has never been stable," the report reads. In the 1850s, when judges began allowing photographic evidence in court, people mistrusted the new technology and preferred witness testimony and written records. By the 1990s, media companies were complicit in misrepresenting events by selectively editing images out of evening broadcasts. In the Gulf War, reporters constructed a conflict between evenly matched opponents by failing to show the starkly uneven death toll between U.S. and Iraqi forces. "These images were real images," the report says. "What was manipulative was how they were contextualized, interpreted, and broadcast around the clock on cable television."
Today, deepfakes have taken manipulation even further by allowing people to manipulate videos and images using machine learning, with results that are almost impossible to detect with the human eye. Now, the report says, "anyone with a public social media profile is fair game to be faked." Once the fakes exist, they can go viral on social media in a matter of seconds. [...] Paris worries AI-driven content filters and other technical fixes could cause real harm. "They make things better for some but could make things worse for others," she says. "Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life."
Why is it weak AI or nothing?!? (Score:2, Interesting)
Re: Why is it weak AI or nothing?!? (Score:1)
Re: Why is it weak AI or nothing?!? (Score:4, Funny)
Re: (Score:2)
Well, the BEST FAKES are still done with actual lookalike actors.
Anyhow, AI-based detection is just going to be used for training the AI-based face swapping, duh. It's kinda obvious.
and people don't believe real to be real even (Score:2)
And just replying to myself, but people don't believe real to be real either. Just look at flat earthers showing a photo of a mountain partially beyond the horizon and claiming it proves the earth is flat because they can see part of the mountain.
Or the thing from the last USA presidential election, where they claimed the people (and/or Hillary) at Hillary's rally were CGI because they couldn't understand how perspective works.
Re: Why is it weak AI or nothing?!? (Score:2)
The state of the art here is moving incredibly quickly. For example, I've recently played with StyleGAN, and even though the 1024x1024 heads it generates don't look real as photos, they'll easily pass as talking heads in a movie when scaled down. If they had better textures you'd soon have photoswaps. It's not so much CGI as style vampires combining real looks into a new real look.
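The "style vampire" combining the parent describes corresponds, very loosely, to StyleGAN's style mixing: coarse layers of the synthesis network take latent codes from one source face, fine layers from another. The real model operates on learned per-layer `w` vectors; this toy sketch (the `mix_styles` helper and list-of-floats latents are illustrative, not the actual StyleGAN API) just shows the per-layer selection idea:

```python
def mix_styles(w_a, w_b, crossover_layer):
    """Toy style mixing: take coarse-layer codes (pose, face shape)
    from source A and fine-layer codes (texture, color) from source B.

    w_a, w_b: per-layer latent codes, one entry per synthesis layer.
    crossover_layer: layers below this index come from A, the rest from B.
    """
    assert len(w_a) == len(w_b)
    return [w_a[i] if i < crossover_layer else w_b[i]
            for i in range(len(w_a))]


# Two fake 4-layer latent stacks: A contributes structure, B contributes detail.
identity_a = [0.1, 0.2, 0.3, 0.4]
identity_b = [0.9, 0.8, 0.7, 0.6]
mixed = mix_styles(identity_a, identity_b, crossover_layer=2)
print(mixed)  # first two layers from A, last two from B
```

In the actual generator, each entry would be a 512-dimensional vector modulating one convolution block, which is why the mixed output reads as "A's face with B's skin."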
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Authentication will not help. Or do you somehow believe people will only watch videos authenticated by the people in them?
Re: (Score:2)
Re: (Score:2)
Well, maybe. The kicker is identity. For example, banned cheaters in multiplayer games often just get a new copy and continue their behavior under a different name.
Epstein leaks are coming (Score:1)
Re: (Score:2)
Maybe that's what he was trying to pay MIT to develop: plausible deniability for any record of his actions, by way of destroying any faith in such a record.
Re: (Score:2)
Soon to lose video evidence. (Score:3)
Re: (Score:2)
The new standard will probably be multiple angles taken by multiple people who would be unlikely to collude to fake things.
Re: (Score:2)
The new standard will probably be multiple angles taken by multiple people who would be unlikely to collude to fake things.
A) Way too much effort. B) How do you ensure "unlikely to collude"?
Re: (Score:2)
People with no meaningful connection to each other, aside from being in the same place at the same time, are unlikely to collude. People that don't like each other but who both take video of the same event -- and it matches -- are unlikely to collude. The more such pairings you can come up with, the more unlikely "they're all deepfakes" becomes. Even if they all take video, and deepfake their own with the same faces, the alterations made from the various angles will be inconsistent with each other even though
Re: (Score:2)
These are characteristics you cannot realistically ensure and that are easy to fake. For example, "people that don't like each other" are very simple to generate artificially. People with no meaningful connection are a bit harder, but all it takes is to make this a service and the connection is just not accessible to the verifier.
Sounds good in theory, unlikely to work if an attacker invests a bit of effort.
Re: (Score:2)
Then we're going to get video pushed into cryptographically signed containers, and anything that can't have its chain of custody verified will be invalid in court -- just like evidence the government already collects. Alas, there is no solution to the "people will believe what they want to believe" problem. People will continue to see firefighters and angels in clouds when really it's a Space Marine gunning down a harpy.
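The signed-container idea above boils down to binding a digest of the footage to a key at capture time, then refusing anything whose tag doesn't verify. A real chain-of-custody scheme would use asymmetric signatures (e.g. Ed25519) so verifiers never hold the signing key; this stdlib-only sketch substitutes a symmetric HMAC purely to show the verify-or-reject flow:

```python
import hashlib
import hmac


def sign_video(video_bytes: bytes, key: bytes) -> str:
    """Tag the footage: HMAC over the SHA-256 digest of the raw bytes."""
    digest = hashlib.sha256(video_bytes).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()


def verify_video(video_bytes: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_video(video_bytes, key), tag)


camera_key = b"burned-into-secure-element"   # hypothetical per-device key
footage = b"...raw video bytes..."
tag = sign_video(footage, camera_key)

print(verify_video(footage, camera_key, tag))        # untouched footage verifies
print(verify_video(footage + b"x", camera_key, tag)) # one flipped byte fails
```

The hard part isn't the math, it's key management: the signing key has to live in tamper-resistant hardware on the camera, or the whole chain of custody is only as trustworthy as whoever holds the key.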
Re: (Score:1)
Re: (Score:2)
The law is not about truth. The law is about sticking it to small people and to those that angered those with real power. Hence it's no problem if some people that an honest person would call "innocent" get sent to prison. Happens all the time.
Qualifications? (Score:2)
Well, if a Sociology and Science Studies person and a Media Studies/Information Studies person write a report about AI and deepfakes, it must be true. I will wait for someone who actually develops AI and/or deepfakes to write these alarmist articles.
Re: (Score:2)
In five years they'll have established themselves as some sort of expert just like all the law enforcement and compliance people who have taken over information security.
I'd like you to welcome my guest tonight, he's an expert in artificial intelligence, had articles in 5 national magazines, he's just published a new book entitled "AI and Purple".
Duh, it's obvious (Score:2)
1) Train an AI to generate deep fakes.
2) Train an AI to detect deep fakes.
3) Re-train the first AI until it can also pass the second test.
Trying to build a deep fake detector will just enable better deep fakes.
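The three-step loop above is essentially adversarial training, the same dynamic inside a GAN: publish a detector and you've published a training signal for the forger. This toy sketch (the scalar "artifact level" and the threshold detector are stand-ins for real classifiers) shows why the generator always catches up to a fixed detector:

```python
def detector(artifact_level: float, threshold: float = 0.3) -> bool:
    """Step 2: a fixed detector flags anything whose telltale
    artifacts exceed its learned threshold. True == 'fake detected'."""
    return artifact_level > threshold


def retrain_generator(artifact_level: float, step: float = 0.05) -> float:
    """Step 3: keep refining the generator until the detector no
    longer fires -- each round shaves off some detectable artifacts."""
    rounds = 0
    while detector(artifact_level):
        artifact_level -= step
        rounds += 1
    print(f"passed the detector after {rounds} retraining rounds")
    return artifact_level


# Step 1: an early deepfake generator with obvious artifacts...
final_level = retrain_generator(0.9)
print(detector(final_level))  # the retrained fakes now sail through
```

Against a static detector this always terminates with the forger winning; in practice both sides keep retraining, which is the parent's point: shipping a better detector mostly subsidizes better fakes.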
Oh NOES (Score:2)
Re: (Score:2)
When the surveillance system is controlled by the state, you cannot claim that it is a deepfake.
Likely, judges will assume that the state's surveillance system is reliable.
How is that news? (Score:2)
What idiot thought it could?
Re: (Score:1)
Some very smart people sell AI as a universal snake oil, I mean cure, to solve all the problems.
Buy our AI and stay ahead of the curve
Re: (Score:2)
The idiots at those companies that did, I call idiots.
Next question?
Re: (Score:2)
Well, the proper term for these people is not "idiot", because they know they cannot deliver. The term for these people is "liars".
Re: (Score:2)
What idiot thought it could?
Those idiots that think we already have strong AI and it is actually much smarter than humans.
Of course it can't, AI itself is a deep fake. (Score:2, Insightful)
There is no "AI", it is all lies, damned lies and statistics.
Re: (Score:2)
There is no "AI", it is all lies, damned lies and statistics.
Not true. I know several guys named Al. There are even some famous Als: Al Gore, Al B Sure, Al Bundy, Al Pacino...
ignore (Score:1)