Google Offers Its AI Watermarking Tech As Free Open Source Toolkit (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Back in May, Google augmented its Gemini AI model with SynthID, a toolkit that embeds AI-generated content with watermarks it says are "imperceptible to humans" but can be easily and reliably detected via an algorithm. Today, Google took that SynthID system open source, offering the same basic watermarking toolkit for free to developers and businesses. The move gives the entire AI industry an easy, seemingly robust way to silently mark content as artificially generated, which could be useful for detecting deepfakes and other damaging AI content before it goes out in the wild. But there are still some important limitations that may prevent AI watermarking from becoming a de facto standard across the AI industry any time soon.

Google uses a version of SynthID to watermark audio, video, and images generated by its multimodal AI systems, with differing techniques that are explained briefly in this video. But in a new paper published in Nature, Google researchers go into detail on how the SynthID process embeds an unseen watermark in the text-based output of its Gemini model. The core of the text watermarking process is a sampling algorithm inserted into an LLM's usual token-generation loop (the loop picks the next word in a sequence based on the model's complex set of weighted links to the words that came before it). Using a random seed generated from a key provided by Google, that sampling algorithm increases the correlational likelihood that certain tokens will be chosen in the generative process. A scoring function can then measure that average correlation across any text to determine the likelihood that the text was generated by the watermarked LLM (a threshold value can be used to give a binary yes/no answer).
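The Nature paper details Google's "Tournament sampling" procedure; as a much-simplified sketch of the general idea (closer to a keyed green-list watermark than to SynthID's actual tournament scheme; the key, helper names, and bias value below are all invented for illustration), the mechanism might look like this:

```python
import hashlib
import math
import random

# Hypothetical stand-in for the provider-held watermarking key.
KEY = b"demo-watermark-key"

def favored_half(context, vocab_size):
    """Derive a reproducible pseudorandom half of the vocabulary from the
    key plus recent context, so a detector can rebuild the same set later."""
    seed = hashlib.sha256(KEY + ",".join(map(str, context[-4:])).encode()).digest()
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), vocab_size // 2))

def sample_token(logits, context, bias=2.0):
    """Sample the next token after nudging favored tokens' logits upward:
    a small bias preserves fluency while skewing the long-run statistics."""
    favored = favored_half(context, len(logits))
    adjusted = [x + bias if i in favored else x for i, x in enumerate(logits)]
    peak = max(adjusted)
    weights = [math.exp(x - peak) for x in adjusted]  # softmax, unnormalized
    return random.choices(range(len(logits)), weights=weights)[0]

def detect(tokens, vocab_size, z_threshold=4.0):
    """Score text by how often each token lands in its context's favored
    half; unwatermarked text sits near 0.5, watermarked text drifts higher."""
    hits = sum(t in favored_half(tokens[:i], vocab_size)
               for i, t in enumerate(tokens) if i > 0)
    n = len(tokens) - 1  # assumes at least two tokens
    score = hits / n
    z = (score - 0.5) / math.sqrt(0.25 / n)  # z-test against the 0.5 null
    return score, z > z_threshold            # threshold gives the yes/no answer
```

Over a few hundred tokens the bias becomes statistically unmistakable to anyone holding the key, while any individual word choice remains unremarkable.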


Comments:
  • by alvinrod ( 889928 ) on Thursday October 24, 2024 @04:13PM (#64891929)
    This prevents bad actors from using Google's tools to generate their fake content, but it's not going to stop actors at the nation-state level, who will have their own. The absence of the watermark will lend their fakes an air of authenticity: if they were fake, they would obviously carry the watermark that all the good companies are using. Everyone here knows people who think that way.

    The only solution to this problem is to train people to be incredibly skeptical of anything that isn't accompanied by one, or preferably multiple, human sources who will attest to the veracity of what is being presented. Everyone here also knows humans will never work that way. The next one hundred years are going to be an interesting time for humanity.
    • This is easy: just require all AI tools to set the equivalent of the IP header Evil Bit [ietf.org] in the media metadata.

      • by allo ( 1728082 )

        An AI Evil Bit is sometimes 0.5 and sometimes 1.5 bits because, you know, neural networks are only approximations.

    • Oh, it doesn't even need to be a nation state. You can download some pretty malicious models straight off popular hosting sites like civitai, designed for creepy deepfakes with no rails against generating pretty fucked-up content, and host them locally. These tools offer no protection against those sorts of uses, because folks just won't include them.

    • This prevents bad actors from using Google's tools to generate their fake content, but it's not going to stop actors at the nation-state level, who will have their own. The absence of the watermark will lend their fakes an air of authenticity: if they were fake, they would obviously carry the watermark that all the good companies are using. Everyone here knows people who think that way.

      I'm also wondering what prevents people from using this to mark genuine content as AI-generated.

  • These image "AI watermarks" are already here, and many people do not even notice them, taking AI images as real ;)

    The text-based ones do seem more subtle; however, with a known algorithm it may be possible for a human to deliberately incorporate the watermark into their own writing, producing false positives.
    The Nature paper covers spoofing and the other limitations of this watermarking approach (stealing, scrubbing, paraphrasing) to some extent.
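To make the spoofing concern concrete, here is a sketch reusing the hypothetical favored_half() helper from the example above: a writer who knows the algorithm and key could steer interchangeable word choices into the favored set, so that detect() flags entirely human-written text.

```python
def spoof_choice(synonyms, context, vocab_size):
    """Among interchangeable candidate tokens, prefer one from the favored
    half so human-written text accumulates a watermark-like bias."""
    favored = favored_half(context, vocab_size)
    for token in synonyms:
        if token in favored:
            return token
    return synonyms[0]  # no candidate favored; fall back to the first
```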

  • by i kan reed ( 749298 ) on Thursday October 24, 2024 @04:14PM (#64891933) Homepage Journal

    Google shuts down its popular "AI watermarking tool" with no explanation.

  • It is a trap! (Score:4, Insightful)

    by gweihir ( 88907 ) on Thursday October 24, 2024 @04:14PM (#64891937)

    If I understand this right, this is not actually easy for anybody to verify: it seems verification requires the model (and key) that generated the output. That way, Google gets data on whoever tries to verify a watermark. That does not sound good.

  • The fun ain't going to last.
    Everything dumped in the public domain will be sunset within a few years.
    Just ask Gemini!
