
AI Can't Protect Us From Deepfakes, Argues New Report (theverge.com)

A new report from Data & Society raises doubts about automated solutions to deceptively altered videos, including machine-learning-altered videos called deepfakes. Authors Britt Paris and Joan Donovan argue that deepfakes, while new, are part of a long history of media manipulation -- one that requires both a social and a technical fix. Relying on AI could actually make things worse by concentrating more data and power in the hands of private corporations. The Verge reports: As Paris and Donovan see it, deepfakes are unlikely to be fixed by technology alone. "The relationship between media and truth has never been stable," the report reads. In the 1850s, when judges began allowing photographic evidence in court, people mistrusted the new technology and preferred witness testimony and written records. By the 1990s, media companies were complicit in misrepresenting events by selectively editing images out of evening broadcasts. In the Gulf War, reporters constructed a conflict between evenly matched opponents by failing to show the starkly uneven death toll between U.S. and Iraqi forces. "These images were real images," the report says. "What was manipulative was how they were contextualized, interpreted, and broadcast around the clock on cable television."

Today, deepfakes have taken manipulation even further by allowing people to manipulate videos and images using machine learning, with results that are almost impossible to detect with the human eye. Now, the report says, "anyone with a public social media profile is fair game to be faked." Once the fakes exist, they can go viral on social media in a matter of seconds. [...] Paris worries AI-driven content filters and other technical fixes could cause real harm. "They make things better for some but could make things worse for others," she says. "Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life."



  • Don't use AI. There are tons of software and hardware authentication methods [airccse.org]. We need to move toward trusted sources, because soon it will be possible to do deepfakes in near real time. Pasting over a single face is actually kinda easy, but the day will come when you won't need a real body or scene at all; it can all be created artificially and autonomously in a realistic way.
    • I'm going to call BS on this claim. The best fakes out there still hit the uncanny valley hard... for example, what we saw recently in Star Wars. And that was done by a team of experienced experts with millions of dollars of rendering hardware, tuning every single frame, and it took months. The run-of-the-mill fakes are no better, and often much worse, than the average photoshopped hackjob.
      • by burtosis ( 1124179 ) on Wednesday September 18, 2019 @10:08PM (#59211396)
        And 30 years ago even the best teams with expensive computers couldn't do what a single armchair-educated person on an old laptop, downloading a grad student's work off GitHub, can do. This is less about today and more about the future.
      • by gl4ss ( 559668 )

        Well, the BEST FAKES are still done with actual lookalike actors...

        Anyhow, AI-based detection is just going to be used for training the AI-based face swapping, duh. It's kinda obvious.

        • And just replying to myself, but people don't believe real to be real either. Just look at flat-earthers showing a photo where a mountain is partially beyond the horizon and claiming it proves the earth is flat, because they can see part of the mountain.

          Or the thing from the last U.S. presidential election, where they claimed the people (and/or Hillary) at Hillary's rally were CGI because they couldn't understand how perspective works.

      • The state of the art here is moving incredibly quickly; for example, I've recently played with StyleGAN, and even though the 1024x1024 heads it generates don't look real as photos, they'll easily pass as talking heads in a movie when scaled down. If they had better textures you'd soon have photoswaps. It's not so much CGI as style vampires combining real looks into a new real look.
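
        Purely as illustration, here is a minimal PyTorch sketch of that generate-then-downscale trick. The Generator class below is a hypothetical stand-in, not the real StyleGAN architecture; the point is only that shrinking the output hides texture artifacts.

            import torch
            import torch.nn as nn
            import torch.nn.functional as F

            class Generator(nn.Module):
                """Stand-in for a StyleGAN-like generator: latent -> 1024x1024 RGB."""
                def __init__(self, latent_dim: int = 512):
                    super().__init__()
                    self.latent_dim = latent_dim
                    # Project the latent to a small feature map, then upsample.
                    self.project = nn.Linear(latent_dim, 64 * 16 * 16)
                    self.to_rgb = nn.Conv2d(64, 3, kernel_size=3, padding=1)

                def forward(self, z: torch.Tensor) -> torch.Tensor:
                    x = self.project(z).view(-1, 64, 16, 16)
                    x = F.interpolate(x, size=(1024, 1024), mode="bilinear",
                                      align_corners=False)
                    return torch.tanh(self.to_rgb(x))

            G = Generator()
            z = torch.randn(1, G.latent_dim)   # random latent code
            full = G(z)                        # (1, 3, 1024, 1024) fake head
            # Downscaling hides fine-texture flaws, which is why a shrunken
            # fake can pass as a "talking head" when the full image cannot.
            small = F.interpolate(full, size=(256, 256), mode="area")
            print(full.shape, small.shape)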

      • Yes, but that was cinema "quality" -- when you fake average mobile-camera footage, the quality requirements are a lot lower. Also, that Star Wars film is ancient (= older than a few months) and the technology improves fast...
    • by Bomazi ( 1875554 )
      If you haven't seen it, you might want to check out the movie Eyeborgs [imdb.com]. It takes place in a world in which this technology is available, and explores the consequences. It is not a masterpiece, but it is quite fascinating.
    • by gweihir ( 88907 )

      Authentication will not help. Or do you somehow believe people will only watch videos authenticated by the people in them?

        • I'm assuming it will revert back to the days of people and reputations. People who don't lie and are proven again and again to tell the truth are more believable, while those with no reputation tend to be only half believed, if at all, and those with trashed reputations are assumed to be liars. You could use a form of hardware-based encryption to show, for example, that video off a cellphone hadn't been altered, or you could attach a reputation to a studio and make the studio's work verifiable.
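
          A minimal sketch of that signing idea, using the `cryptography` package's Ed25519 primitives. The in-memory key here stands in for one a real phone would keep in a hardware secure element; key distribution and tying keys to identities are the hard parts this skips.

              import hashlib
              from cryptography.exceptions import InvalidSignature
              from cryptography.hazmat.primitives.asymmetric.ed25519 import (
                  Ed25519PrivateKey,
              )

              private_key = Ed25519PrivateKey.generate()  # would live in secure hardware
              public_key = private_key.public_key()       # published, tied to the device

              video_bytes = b"...raw video file contents..."
              digest = hashlib.sha256(video_bytes).digest()
              signature = private_key.sign(digest)        # produced at capture time

              # Later, a verifier recomputes the hash and checks the signature.
              try:
                  public_key.verify(signature, hashlib.sha256(video_bytes).digest())
                  print("clip matches what the device signed")
              except InvalidSignature:
                  print("clip was altered after signing")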
        • by gweihir ( 88907 )

          Well, maybe. The kicker is identity. For example, banned cheaters in multiplayer games often just get a new copy and continue their behavior under a different name.

  • Must prepare an excuse related to the formidable deepfakes.
    • by Mal-2 ( 675116 )

      Maybe that's what he was trying to pay MIT to develop: plausible deniability for any record of his actions, by way of destroying any faith in such a record.

      • I would think that would be the last thing he'd want. It certainly appears that Epstein was blackmailing some very rich and/or powerful people, and plausible deniability would greatly reduce his bargaining position.
  • by bjwest ( 14070 ) on Wednesday September 18, 2019 @11:53PM (#59211526)
    As soon as deepfakes are undetectable, video evidence is gone. All someone has to do is deny it's them in the video, and it's right back to whoever has the most convincing argument.
    • by Mal-2 ( 675116 )

      The new standard will probably be multiple angles taken by multiple people who would be unlikely to collude to fake things.

      • by gweihir ( 88907 )

        The new standard will probably be multiple angles taken by multiple people who would be unlikely to collude to fake things.

        A) Way too much effort. B) How do you ensure "unlikely to collude"?

        • by Mal-2 ( 675116 )

          People with no meaningful connection to each other, aside from being in the same place at the same time, are unlikely to collude. People that don't like each other but who both take video of the same event -- and it matches -- are unlikely to collude. The more such pairings you can come up with, the more unlikely "they're all deepfakes" becomes. Even if they all take video, and deepfake their own with the same faces, the alterations made from the various angles will be inconsistent with each other even though each clip looks plausible on its own.

          • by gweihir ( 88907 )

            These are characteristics you cannot realistically ensure and that are easy to fake. For example, "people that don't like each other" are very simple to generate artificially. People with no meaningful connection are a bit harder, but all it takes is to make this a service and the connection is just not accessible to the verifier.

            Sounds good in theory, unlikely to work if an attacker invests a bit of effort.

            • by Mal-2 ( 675116 )

              Then we're going to get video pushed into cryptographically signed containers, and anything that can't have its chain of custody verified will be invalid in court -- just like evidence the government already collects. Alas, there is no solution to the "people will believe what they want to believe" problem. People will continue to see firefighters and angels in clouds when really it's a Space Marine gunning down a harpy.
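
              For illustration, a stdlib-only sketch of what a hash-chained chain-of-custody record could look like. The field names and helpers (add_entry, verify) are made up for the example, not any real evidence-container format.

                  import hashlib
                  import json

                  def add_entry(chain, handler, evidence_hash):
                      # Each entry's hash covers the previous entry's hash, so
                      # editing any earlier entry invalidates everything after it.
                      prev = chain[-1]["entry_hash"] if chain else "genesis"
                      body = {"handler": handler, "evidence": evidence_hash,
                              "prev": prev}
                      digest = hashlib.sha256(
                          json.dumps(body, sort_keys=True).encode()).hexdigest()
                      chain.append({**body, "entry_hash": digest})

                  def verify(chain):
                      prev = "genesis"
                      for e in chain:
                          body = {"handler": e["handler"],
                                  "evidence": e["evidence"], "prev": prev}
                          digest = hashlib.sha256(
                              json.dumps(body, sort_keys=True).encode()).hexdigest()
                          if digest != e["entry_hash"]:
                              return False
                          prev = e["entry_hash"]
                      return True

                  chain = []
                  video_hash = hashlib.sha256(b"...video bytes...").hexdigest()
                  add_entry(chain, "officer_on_scene", video_hash)
                  add_entry(chain, "evidence_locker", video_hash)
                  print(verify(chain))              # True
                  chain[0]["handler"] = "tampered"  # rewrite history...
                  print(verify(chain))              # ...and the chain fails: False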

    • It's not all that black and white: https://www.crime-scene-invest... [crime-scen...igator.net] Read about the many cases in the past where they had video evidence of some crime, 'enhanced'/'sharpened' the screenshots with "photoshop"... and it did count as evidence. Sounds sketchy and open to misinterpretation to me.
      • by gweihir ( 88907 )

        The law is not about truth. The law is about sticking it to small people and to those who angered those with real power. Hence it's no problem if some people whom an honest person would call "innocent" get sent to prison. Happens all the time.

  • Well, if a Sociology and Science Studies person and a Media Studies/Information Studies person write a report about AI and deepfakes, it must be true. I will wait for prognostications from someone who actually develops AI and/or deepfakes before taking these alarmist articles seriously.

    • In five years they'll have established themselves as some sort of experts, just like all the law enforcement and compliance people who have taken over information security.

      I'd like you to welcome my guest tonight: he's an expert in artificial intelligence, he's had articles in 5 national magazines, and he's just published a new book entitled "AI and Purple".

  • 1) Train an AI to generate deep fakes.

    2) Train an AI to detect deep fakes.

    3) Re-train the first AI until it can also pass the second test.

    Trying to build a deepfake detector will just enable better deepfakes; the toy sketch below illustrates the loop.
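
    A toy PyTorch sketch of that three-step arms race, standing in for real video models with tiny networks over made-up feature vectors; every name and size here is a placeholder, but the feedback loop is the same one GANs use.

        import torch
        import torch.nn as nn

        # 4-D "features" stand in for video; real models would be
        # convolutional networks over frames.
        faker = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
        detector = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
        opt_f = torch.optim.Adam(faker.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
        bce = nn.BCEWithLogitsLoss()

        def real_batch(n=32):
            return torch.randn(n, 4) + 2.0  # stand-in for real footage features

        for step in range(1000):
            # Step 2: train the detector to tell real from fake.
            fakes = faker(torch.randn(32, 8)).detach()
            d_loss = (bce(detector(real_batch()), torch.ones(32, 1))
                      + bce(detector(fakes), torch.zeros(32, 1)))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # Step 3: retrain the faker until the detector calls its output "real".
            fakes = faker(torch.randn(32, 8))
            f_loss = bce(detector(fakes), torch.ones(32, 1))
            opt_f.zero_grad(); f_loss.backward(); opt_f.step()

        # Any detector you publish becomes a training signal for a better faker.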

  • We'll have to go back to 30-50 years ago, when we didn't have reliable omnipresent surveillance. Whatever will we do????
    • by Baki ( 72515 )

      When the surveillance system is controlled by the state, you cannot claim that it is a deepfake.
      Likely, judges will assume that the state's surveillance system is reliable.

  • What idiot thought it could?

  • There is no "AI", it is all lies, damned lies and statistics.

    • There is no "AI", it is all lies, damned lies and statistics.

      Not true. I know several guys named Al. There are even some famous Als: Al Gore, Al B. Sure, Al Bundy, Al Pacino...

  • Ultimately, the only way out of the rapidly approaching dystopian hell, I think, is an educated and critically thinking population. The trend, though, is toward dumbing down the masses; I guess it has to get worse before it starts to get better...

"I am, therefore I am." -- Akira

Working...