Google Offers Its AI Watermarking Tech As Free Open Source Toolkit (arstechnica.com) 8

An anonymous reader quotes a report from Ars Technica: Back in May, Google augmented its Gemini AI model with SynthID, a toolkit that embeds AI-generated content with watermarks it says are "imperceptible to humans" but can be easily and reliably detected via an algorithm. Today, Google took that SynthID system open source, offering the same basic watermarking toolkit for free to developers and businesses. The move gives the entire AI industry an easy, seemingly robust way to silently mark content as artificially generated, which could be useful for detecting deepfakes and other damaging AI content before it goes out in the wild. But there are still some important limitations that may prevent AI watermarking from becoming a de facto standard across the AI industry any time soon.

Google uses a version of SynthID to watermark audio, video, and images generated by its multimodal AI systems, with differing techniques that are explained briefly in this video. But in a new paper published in Nature, Google researchers go into detail on how the SynthID process embeds an unseen watermark in the text-based output of its Gemini model. The core of the text watermarking process is a sampling algorithm inserted into an LLM's usual token-generation loop (the loop picks the next word in a sequence based on the model's complex set of weighted links to the words that came before it). Using a random seed generated from a key provided by Google, that sampling algorithm increases the correlational likelihood that certain tokens will be chosen in the generative process. A scoring function can then measure that average correlation across any text to determine the likelihood that the text was generated by the watermarked LLM (a threshold value can be used to give a binary yes/no answer).
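The general shape of that sampling-and-scoring scheme can be sketched in a few lines of Python. This is a deliberately simplified illustration, not Google's actual SynthID algorithm (the Nature paper describes a more sophisticated tournament-sampling procedure): it derives a key-seeded "favored" half of the vocabulary at each step, nudges sampling toward it, and later scores text by how often tokens land in that favored set. The names `VOCAB`, `green_set`, and the `bias` parameter are all illustrative assumptions.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary


def green_set(key: str, context: tuple, vocab_size: int) -> set:
    # Derive a pseudo-random "favored" half of the vocabulary from the
    # secret key and the preceding tokens -- this is the watermark seed.
    seed = hashlib.sha256((key + "|".join(context)).encode()).digest()
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), vocab_size // 2))


def sample_token(probs: list, key: str, context: tuple, bias: float = 2.0) -> int:
    # Boost the probability of tokens in the key-derived favored set,
    # then renormalize implicitly by sampling against the total weight.
    favored = green_set(key, context, len(probs))
    weights = [p * (bias if i in favored else 1.0) for i, p in enumerate(probs)]
    r = random.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1


def score_text(token_ids: list, key: str) -> float:
    # Fraction of tokens falling in the favored set for their context.
    # Watermarked text scores well above the ~0.5 baseline of plain text;
    # a threshold on this score gives the binary yes/no answer.
    hits = 0
    for t in range(1, len(token_ids)):
        context = tuple(VOCAB[i] for i in token_ids[:t])
        if token_ids[t] in green_set(key, context, len(VOCAB)):
            hits += 1
    return hits / max(1, len(token_ids) - 1)
```

With a uniform next-token distribution and `bias=2.0`, watermarked output lands in the favored set about two-thirds of the time, while unwatermarked text hovers near one-half, which is what the threshold test exploits.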

Comments Filter:
  • This prevents bad actors from using Google's tools to generate their fake content, but it's not going to stop actors at the nation-state level, who will have their own. The absence of a watermark on their fakes will even lend them credibility: if they were fake, they would obviously carry the watermark that all the reputable companies use. Everyone here knows people who think that way.

    The only solution to this problem is to train people to be incredibly skeptical of anything that i
  • These image "AI watermarks" are already here and many people do not even notice them, taking AI images as real ;)

    The text-based ones are indeed more subtle; however, with a known algorithm, a human might deliberately incorporate them into their own writing to produce false positives.
    The paper in Nature covers, to some extent, spoofing and other limitations of this watermarking approach (stealing, scrubbing, paraphrasing).

  • by i kan reed ( 749298 ) on Thursday October 24, 2024 @05:14PM (#64891933) Homepage Journal

    Google shuts its popular "AI watermarking tool" with no explanation.

  • If I understand this right, this is not actually easy for anybody to verify. Verification seems to require the model that generated the output, which means Google gets data on who tries to verify a watermark. That does not sound good.

  • The fun ain't going to last.
    Everything dumped in the public domain will be sunset within a few years.
    Just ask Gemini!
