
California Lawmakers Push for Watermarks on AI-Made Photo, Video (bloomberglaw.com) 116

California lawmakers are drawing up multiple plans to require watermarks on content created by AI, aiming to curb abuses of the emerging technology, which has already touched sectors from political races to the stock market. From a report: At least five lawmakers have promised or are considering proposals that would require AI companies to implement some type of verification that a video, photo, or written work was made by the technology. The activity comes as advanced AI has rapidly evolved to create realistic images and audio on an unprecedented level. Advocates worry the technology is ripe for abuse and could lead to a wider proliferation of deepfakes, in which a person's likeness is digitally manipulated, typically to misrepresent them -- a tactic already being used in the presidential race. But such measures are likely to face scrutiny from the tech sector.

Amid a pivotal election year and an online world full of disinformation, the ability to know what's real is crucial, said Drew Liebert, director of the California Initiative for Technology and Democracy. The harm from AI is already happening, Liebert noted, pointing to an AI-generated photo that went viral in May of last year and falsely depicted a terrorist attack in the US. "The famous photograph now that was put on the internet that alleged that the Pentagon was attacked, that actually caused momentarily a [$500 billion] dollar loss in the stock market," he said. The loss would not have been as severe, he said, "if people would have been able to instantly determine that it was not a real image at all."

Comments Filter:
  • Politicians... (Score:5, Insightful)

    by bradley13 ( 1118935 ) on Friday January 26, 2024 @04:04PM (#64190530) Homepage

    ...attempting to regulate that which they do not understand. Probably based on which lobbyist wrote the biggest check.

    Pardon my cynicism...

    • by rsilvergun ( 571051 ) on Friday January 26, 2024 @05:06PM (#64190812)
      Seriously, I mean, it feels like you're fishing for karma here with a phrase like that, because any time you attack politicians you can pretty much be guaranteed you're going to get a few mod points thrown your way. But seriously, what is it about this that they don't understand?

      From my standpoint it seems pretty cut and dried to me that AI image generation makes it super easy to spread misinformation and outright lies. And while we all focus on the obvious political issues, I keep getting YouTube ads of Joe Rogan telling me to buy all sorts of things, which are utterly nonsensical and obvious scams.

      You can make the argument that the scams are going to run the ads anyway, but if it's not legal for them to run the ads without a watermark, that makes it much easier to go after them. That's usually what these kinds of laws are for: to create something where there's a much more clear-cut crime being committed. Sort of like how we got Al Capone on tax evasion.

      That said, I don't think this is a perfect solution. I'm not even convinced it's a good idea. But I'm equally not convinced it's a bad idea.
      • by pete6677 ( 681676 ) on Friday January 26, 2024 @05:38PM (#64190912)

        We need criminal penalties for disinformation spreaders, amirite?

        • by Archangel Michael ( 180766 ) on Friday January 26, 2024 @05:50PM (#64190942) Journal

          It depends on what "disinformation" is being spread. amirite?

          My team can spread it.
          Your team can't.

          • Re: (Score:2, Insightful)

            by taustin ( 171655 )

            No, it depends on who gets to define what is and isn't disinformation.

            California Democrats (and I've lived here a long time) remind me, more and more, of Stalin editing photographs as all his "old buddies" fell out of favor, one by one.

            • Re: (Score:2, Insightful)

              I live in CA as well. I understand. Trust me.

              • Re: (Score:2, Funny)

                by taustin ( 171655 )

                If you live in California, I would never trust you.

                And you would never trust me.

                Because we both know better.

                • Does it help that I don't want to live here, but my wife won't move because her family is here?

            • No, it depends on who gets to define what is and isn't disinformation.

              That'd be whichever of our two major parties is presently in power. That's why it's not a great idea to grant the government any new powers that you wouldn't feel comfortable seeing wielded by both parties.

        • Re: (Score:2, Informative)

          by Anonymous Coward

          We need criminal penalties for disinformation spreaders, amirite?

          "i like pancakes"

          "oh so you hate waffles?"

          "no bitch, that's a whole new sentence, wtf are you talking about?"

        • Yes, and we already do, for the most part. It's called fraud when you impersonate someone else, and making it a crime to do it in this specific way is a win for us all. If you don't think it should be a crime, give me your likeness so I can run ads in your area pretending to be you.

          • Disinformation isn't necessarily fraud.

            • Right, but we're specifically talking about disinformation distributed by AI fakes trying to pass as the real person.

              • But what if you made an AI fake and were open about the fact that it is a fake? What if you made an AI fake of Elvis eating your ice cream, and said that "Fake Elvis" loves our ice cream? I wonder. It could be considered impersonation. I really don't know how all of this is going to shake out.
                • That's literally what this article is about. If you're going to deepfake someone, you must be open about it instead of committing fraud. Even Howard Stern makes up goofy names for the impersonated characters that get on air, and that's why I've never confused David Letterman with Evil Dave Letterman.

        • We have those (Score:4, Insightful)

          by rsilvergun ( 571051 ) on Friday January 26, 2024 @06:30PM (#64191058)
          There's a wide variety of laws against false advertising. There are also laws against libel.

          When you start lying about news events it does get kind of dicey, though. But I do think, for example, that we should crack down on the anti-vax crowd using existing laws about making false medical claims. The same should be brought to bear against homeopaths and other scam artists.
          • If you did that, then the word "superfood" would be illegal, in addition to any claim that may be true but hasn't yet been proven, or medical claims that have been "proven" but aren't actually true.

            For example, under that regime it would be a crime for me to claim that you have the mental capacity to use a spell checker.

      • by aldousd666 ( 640240 ) on Friday January 26, 2024 @07:16PM (#64191156) Journal
        No, the part they don't understand is how watermarking works (or doesn't). You can add a watermark, store a local copy of it, digitally sign it, and embed it in the image. And, 2 seconds later, someone can run it through a virtual meat slicer and remove all traces of all of that, and the image won't look any different whatsoever. It'll just not have a watermark. Besides, any digital signature wouldn't match anyway if you change a single pixel or save it as a JPEG, or whatever. Just uploading an image to Facebook gets it compressed, so the next person to share it has already destroyed any crypto trail it may have had. They could try to require some central registry of all generated images or something, but that isn't actually practical. So yeah, they don't understand. This isn't a new issue. People are already responsible for the shit they post online, and if they share copyrighted materials today, they get in trouble. Not Adobe, for making Photoshop.
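
        The fragility described above is easy to demonstrate. Here is a minimal sketch in Python using Pillow (the library choice and file names are assumptions for illustration): a hash over the pixel data, which is ultimately what any signature covers, changes after a one-bit edit or a routine JPEG re-save.

            # Minimal sketch: why a signature over image bytes is fragile.
            # Assumes Pillow is installed; "generated.png" is a hypothetical file.
            import hashlib
            from PIL import Image

            img = Image.open("generated.png").convert("RGB")
            original = hashlib.sha256(img.tobytes()).hexdigest()

            # Flip one bit of one pixel: the hash (and any signature over it) breaks.
            r, g, b = img.getpixel((0, 0))
            img.putpixel((0, 0), (r ^ 1, g, b))
            print(hashlib.sha256(img.tobytes()).hexdigest() == original)  # False

            # A routine lossy re-save, as upload pipelines do, breaks it just as well.
            img.save("reuploaded.jpg", quality=85)
            reloaded = Image.open("reuploaded.jpg").convert("RGB")
            print(hashlib.sha256(reloaded.tobytes()).hexdigest() == original)  # False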
      • From my standpoint it seems pretty cut and dry to me that AI image generation makes it super easy to spread misinformation and outright lies and while we all focus on the obvious political issues

        So the fix is to give people a false sense of security? That is, if the watermark isn't there, it's ok to turn your brain off and just take for granted that it isn't a deepfake? If not, then what the fuck is the point?

        Oh wait, you always turn your brain off anyways, just like you're doing right now.

        • That's exactly what it is! By requiring the watermark, the politicians appear to be doing "something". Who cares that it's insanely easy to undo the watermark and will hardly, if ever, be enforced.

          We got lots of laws like that. This will just be another one.

      • Misinformation is free expression, comrade.

      • ...but if it's not legal for them to run the ads without a watermark, that makes it much easier to go after them.

        Not if they are doing it from outside California, or even the US, from a country where it is perfectly legal to use AI images that do not have watermarks. Good luck to California if they think they can enforce their laws on, say, an EU website run by a company with no physical presence in California or even the US. And unless they plan to wall themselves off from the internet, those images will still be visible to Californians.

    • And good luck with watermarking; those who want to create questionable things will of course circumvent it.

  • by cayenne8 ( 626475 ) on Friday January 26, 2024 @04:06PM (#64190540) Homepage Journal
    It seems this is only NOW getting high attention, with the recent release of some rather "compromising" shots someone generated of Taylor Swift. That seems to have gotten a lot of attention the last couple of days.

    These laws are all good and fine for companies that are commercial....BUT how could they possibly try to enforce this on the open source stuff like StableDiffusion?

    And....laws like this seem, on their face, to really grind against the 1st Amendment right off the bat.

    • I don't think Taylor Swift's complaint about AI-generated nudes is about proving they aren't real, so if this was prompted by the Taylor Swift controversy it would be quite a knee-jerk reaction. Certainly not outside the realm of possibility for legislators, though.
      • by taustin ( 171655 )

        This is California legislators. There are only two reactions they're capable of, both involving jerking. One involves jerking the knees; the other involves jerking something else.

    • That's just in the news because it's salacious, but politicians have been considering these kinds of laws for some time now. It's also entirely possible that none of these proposals are meant to be put into law and are just a warning signal to the AI companies to get their act together and self-regulate, similar to what we did with the video game industry back in the day.
    • by Anonymous Coward

      The thing is... this is an election year. Other countries with extensive psy-ops operations going on in the US would be falling over themselves if they got the US Congress to ban AI development, while theirs continued. Scoring regulations that are all but useless would be a victory for them. Billions are being spent, and it may not be Swifties, but many other people who want to tear apart the US development of AI, so their AI, which doesn't have any guardrails or IP protection can develop in peace. Reme

      • I wish we had a mass exodus here. Instead, what, maybe a million or two? Out of 39 million? Big deal. For real change, we need 10 million to GTFO. That would actually make a difference on housing, which is the single biggest part of our affordability problem.

        P.S. I'm working on trying to be one of the leavers, so really, I'll be contributing to that positive trend. I just can't do it today or even this year, but hopefully within two. California, you are most welcome!

    • These laws are all good and fine for companies that are commercial....BUT how could they possibly try to enforce this on the open source stuff like StableDiffusion?

      In exactly the same way the rules apply to commercial software. I don't see why commercial vs. open source should make any difference as far as the law is concerned here.

  • like watermarks can't be removed, spoofed, etc
    • by Somervillain ( 4719341 ) on Friday January 26, 2024 @04:14PM (#64190578)

      like watermarks can't be removed, spoofed, etc

      Nothing can stop abuse, but you can make it less convenient. At least commercial companies will be more mindful of how their product is being used (assuming laws like this actually get passed). We can't stop AI abuse...but we should force anyone making money off AI-related content to label it as AI-generated. That will deter those who want to run a respectable business from enabling the worst AI will bring.

      You want to make some deepfake speeches of Biden saying the polls are closed?...well...at least you'll have to set up your own hardware now....or find some unregulated (KGB) clusters if this were a law in respectable nations. We can't stop Iran or Russia from doing shitty things with generative AI, but you can at least stop my anti-vaxxer conspiracy theory loving dumbass cousin.

      • by Linux Torvalds ( 647197 ) on Friday January 26, 2024 @04:29PM (#64190648)

        but you can at least stop my anti-vaxxer conspiracy theory loving dumbass cousin.

        By validating his conspiracy theories?

      • Nothing can stop abuse, but you can make it less convenient.

        How does it stop the abuse, though? If having the watermark becomes the standard for fake video, then people will just add it to real video and claim something real was actually faked, while the fakes just run through watermark-removing software.

        If anything, this will be easier to do than making a fake video, and we end up with the same problem - nobody is sure what's true.

        • Nothing can stop abuse, but you can make it less convenient.

          How does it stop the abuse, though? If having the watermark becomes the standard for fake video, then people will just add it to real video and claim something real was actually faked, while the fakes just run through watermark-removing software. If anything, this will be easier to do than making a fake video, and we end up with the same problem - nobody is sure what's true.

          There are 2 categories of these actors...legit companies trying to do "good" and people creating tools for criminals. If these laws were broadly passed, OpenAI, MS, Google, Meta, etc. would comply. They want generative AI to be the next technological revolution for productivity...like the jump from VBasic client/server apps to webapps in the 90s. There's little money in fake news and election fraud...compared to what they can make automating workflows for big-spender companies. They don't want to be

          • Generative AI is presently pretty costly to run, so anyone who wants to violate rules has to set up their own cluster...sure, the KGB and Iran will do that...but I don't think the average 4chan loser has access to those resources....so you want to make a deepfake video of Nancy Pelosi pegging her husband?...you need to set up your own cluster...the average Slashdot user could probably do that, if they wanted to swallow the expense...but my dumbass cousin couldn't.

            AI image models are far more accessible than even small 7B LLMs due to relatively low VRAM requirements. These models are absolutely tiny in the 2 to 6 GB range. Any kid with a mid-range gaming PC can run this shit quite comfortably. Until recently I was using a 7 year old GPU without any tensor cores. Modern kits are quite extensive with workflows for training up LoRAs, graph editors, controlnets...etc.

            but my dumbass cousin couldn't.

            Anyone with a gaming PC very much could.

      • We can't stop AI abuse...but we should force anyone making money off AI-related content to label it as AI-generated. That will deter those who want to run a respectable business from enabling the worst AI will bring.

        You want to make some deepfake speeches of Biden saying the polls are closed?...well...at least you'll have to set up your own hardware now....or find some unregulated (KGB) clusters if this were a law in respectable nations. We can't stop Iran or Russia from doing shitty things with generative AI, but you can at least stop my anti-vaxxer conspiracy theory loving dumbass cousin.

        I strongly disagree. The presence of what is effectively an evil bit may be viewed as a legitimate indicator of something meaningful, making this plan far worse than the status quo.

    • by taustin ( 171655 )

      Or that state laws can't be enforced anywhere else.

  • I didn't see that one coming! There's nothing we can do to stop or prevent the horrors AI will bring. However, we can make it illegal to push AI content without labeling it. It won't stop the worst actors, obviously, but it will reduce the harm caused by respectable businesses actually making money on their product.
    • I think I'm gonna go puke.

    • by cirby ( 2599 )

      ...and the response will be "Let's all buy software from places that don't give a crap about California law!"

      • ...and the response will be "Let's all buy software from places that don't give a crap about California law!"

        Yeah...reminds me of the generative AI porn...it suuucks!!!!...not remotely convincing....honestly horrifying. You pick the parameters for your perfect woman...it gets them all wrong...pick Latina Milf and she ends up Black with Japanese facial features and looking 20 years too young....and she has 6 fingers going in directions no hand can go into...because generative AI suuuucks...and it sucks even more when you do it poorly, like the generative AI porn sites. With today's tech, Generative AI is very exp

    • by DarkOx ( 621550 ) on Friday January 26, 2024 @04:50PM (#64190750) Journal

      Wrong - it will provide cover for the very worst actors to do whatever they please.

      There is a choice here. That choice is between a world

      where educated people know not to believe their lying eyes when it comes to what they see online (just like what they read online today, really), because there is a sea of crap out there, and they wait until some reputable news agency vets it,

      and one where everyone continues to think that because it's video it has to be real. I mean, if it wasn't, it would have the AI watermark, right? And nobody can easily make AI videos without a watermark; therefore the YouTube video of Biden doing coke with Hunter must be real!

      Actual threat actors, you know, like Chinese and Russian intelligence and probably organized crime, will be free to pump out whatever misinformation they like, while Joe Public, who just wants to make a video about his powerwashing company, will be tarred with some 'Possibly Fake Video' disclaimer. This is not a good strategy!

    • by chas.williams ( 6256556 ) on Friday January 26, 2024 @06:07PM (#64190998)
      I am hoping that this is sarcasm. You do know that the assumption will be that if something isn't watermarked, it isn't AI. That's not the contrapositive, but good luck explaining that to the masses.
    • by Logger ( 9214 )

      Nerd card revoked. This only serves to make rubes more trusting of content without watermarks, but does not increase the trustworthiness of non-watermarked material.

    • but it will reduce the harm caused by respectable businesses actually making money on their product.

      If a watermark is needed to distinguish whether it's a fake or not, then these businesses were not respectable to begin with.

  • I see the attraction of the argument, but suspect it wouldn't pass first amendment scrutiny, all things being "equal" anyway.
    It's too late, anyway. If anyone with a few thousand bucks can run one of these things (albeit slowly) in their server closet, and it's open source to get started, regulation would be an unfunny joke.

    • but suspect it wouldn't pass first amendment scrutiny,

      In what way? No one is preventing you from making the stuff. All that's being said is it must be marked as not real/original/whatever. Watermarking, just like performing a fact check on someone's lies, does not violate free speech.
      • Re:too late (Score:4, Informative)

        by The Cat ( 19816 ) on Friday January 26, 2024 @05:06PM (#64190814)

        Compelled speech is a violation of the First Amendment. Not only does it infringe on free speech it also infringes on the freedom of the press.

        • Compelled speech is a violation of the First Amendment. Not only does it infringe on free speech it also infringes on the freedom of the press.

          No speech is being compelled. As the OP stated, fact checking a lie is not a violation of the First Amendment. The lie is still there for everyone to see. All that has happened is someone pointed out that lie.

          The same here. No one is preventing you from posting an AI picture of someone or something. All that is being done is notifying people it's a computer generated picture.

          • by taustin ( 171655 )

            This isn't equivalent to removing the lie; it's equivalent to forcing the liar to label it a lie.

            If you don't see the difference, you're part of the problem.

            (Not that it would be enforceable anyway.)

            • This isn't equivalent to removing the lie; it's equivalent to forcing the liar to label it a lie.

              And? How is that a bad thing? What you're suggesting is companies shouldn't have to put labels on their products which say, "Not life size" or "Simulated color".

              If you don't want lies to be called out then a video of the orange criminal showing him kissing Putin's hand would be fair game. Granted, that is highly believable to begin with, but you get the point.

        • Compelled speech is a violation of the First Amendment. Not only does it infringe on free speech it also infringes on the freedom of the press.

          I'm as close to a free-speech absolutist as you're likely to find (check my posting history), but I don't agree at all with this interpretation. In fact, this law reminds me of the famous quote by Louis Brandeis: "the remedy [for problematic speech] is more speech, not enforced silence".

          I've suggested many times that this is the kind of solution social media should use to cope with "problematic" speech (posts by Russian bots, so-called "fake" news stories, etc). Don't hide the posts or ban the posts-- jus

  • technical solution (Score:5, Insightful)

    by Local ID10T ( 790134 ) <ID10T.L.USER@gmail.com> on Friday January 26, 2024 @04:20PM (#64190612) Homepage

    This is a technical solution to a social issue. It will not solve the problem.

    • But good luck getting away with that. First and foremost, there's a sizable number of politicians who don't want critical thinking skills, and the ability to evaluate claims, taught to students.

      Moreover, parents are often extremely upset when those skills are taught to their kids, because most parents have a whole bunch of sacred cows they don't want to see criticized. And if you give a kid the rhetorical and intellectual equivalent of a wrecking ball, they're going to turn it on pretty much everything.
    • The problem with this isn't the technology, but the fact that journalists are willing to lie in order to sell advertising. A truth-in-journalism act would result not in deepfakes being published, but rather in their being ignored by the press at large.

      The problem is not the technology, but the fact that journalists are willing to lie with whatever means are at their disposal.

  • About as useful as (Score:5, Insightful)

    by xanthos ( 73578 ) <[xanthos] [at] [toke.com]> on Friday January 26, 2024 @04:26PM (#64190632)
    the Evil Bit [wikipedia.org]
  • I think the cat is out of the bag now. Not only have commercial companies been selling the product for years now, but there are open source models you can use. Not to mention watermarks are alarmingly easy to replace.

    • I think the cat is out of the bag now. Not only have commercial companies been selling the product for years now, but there are open source models you can use.

      No doubt. This reminds me of the gun control debate: making laws about this only ensures law-abiding people add watermarks. Criminals and people with nefarious purposes will just work around the law.

      Not to mention watermarks are alarmingly easy to replace.

      I'm assuming the watermarks aren't visual marks, like traditional photographers might add. The only thing which makes sense is some sort of steganographic, cryptographically-signed watermark. That you can't remove by, say, resampling with the GIMP.

      • by Anonymous Coward

        I'm assuming the watermarks aren't visual marks, like traditional photographers might add. The only thing which makes sense is some sort of steganographic, cryptographically-signed watermark. That you can't remove by, say, resampling with the GIMP.

        That's assuming a lot. It's certainly possible to steganographically watermark an image, but it's impossible to maintain an accurate, cryptographic watermark signature through any number of potential transformations that an adversary could use. This watermark proposal would have almost zero practical outcome and be practically unenforceable.

        • That you can't remove by, say, resampling with the GIMP.

          That's assuming a lot. It's certainly possible to steganographically watermark an image, but it's impossible to maintain an accurate, cryptographic watermark signature through any number of potential transformations that an adversary could use. This watermark proposal would have almost zero practical outcome and be practically unenforceable.

          Sorry, I wasn't clear. That was my point about GIMP, that removing a cryptographic watermark would almost certainly be easy. But I'm not a cryptographer so maybe there's some way to embed a hidden watermark that you couldn't remove with a simple smudge filter.

          • by Anonymous Coward

            so maybe there's some way to embed a hidden watermark that you couldn't remove with a simple smudge filter.

            Not really, no.
            The law proposes that the AI creator tools provide a way to verify if some image was AI generated. No matter how clever or secret the watermarking scheme, a verification tool provides the necessary feedback to an adversary to keep applying filters to an image until the watermark is defeated.
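
            To make that feedback loop concrete, here is a hypothetical sketch of the weakest sort of scheme, a least-significant-bit watermark, in Python with Pillow. The scheme, names, and files are illustrative only; real proposals are more sophisticated, but a public verifier hands an adversary the same keep-filtering-until-it-fails signal.

                # Illustrative only: a naive LSB watermark and the verifier-as-oracle loop.
                # Assumes Pillow and an image at least 64 pixels wide; file names are made up.
                from PIL import Image

                MARK = [1, 0, 1, 1, 0, 1, 0, 1] * 8  # hypothetical 64-bit tag

                def embed(img):
                    px = img.load()
                    for i, bit in enumerate(MARK):
                        r, g, b = px[i, 0]
                        px[i, 0] = ((r & ~1) | bit, g, b)  # hide one bit in red's low bit
                    return img

                def verify(img):  # the public verification tool the law would mandate
                    px = img.load()
                    return all((px[i, 0][0] & 1) == bit for i, bit in enumerate(MARK))

                img = embed(Image.open("generated.png").convert("RGB"))
                print(verify(img))  # True: watermark detected

                # One lossy save scrambles the low bits; the mark is almost surely gone.
                img.save("laundered.jpg", quality=90)
                print(verify(Image.open("laundered.jpg").convert("RGB")))  # almost certainly False

                # An adversary needs no knowledge of the scheme: apply mild filters in a
                # loop and stop as soon as verify() returns False.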

  • this law will go nowhere. forced speech isn’t a thing here in America.

    • Re: (Score:2, Interesting)

      by arbiter1 ( 1204146 )
      1A doesn't apply, as it's not forced speech. The law is worthless anyway, since it only applies in California and all the AI work will likely be done outside it.
      • 1A doesn't apply, as it's not forced speech.

        It's not? In what way is forcing a creator to state that an image is AI-generated, under threat of fines, imprisonment, or other state-imposed punishment, NOT compelled speech? Please, be specific.

  • Really? Their state is on fire and they have nothing better to do than to fight something that is literally unfightable. Go CA! This will definitely help your street poo problem.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      This will definitely help your street poo problem.

      Nice way to admit you're living in a conservative news bubble. Why didn't you mention woke or trans?

  • Rather than requiring AI content to be watermarked, which is impossible to enforce, why can't original content providers just regularly watermark their own legitimate content?
    • by taustin ( 171655 )

      Because that will not facilitate the powers that be creating their own fake stuff without the watermark and expecting everyone to believe it's real.

      And I'll bet these politicians have had exactly that conversation. They certainly don't give a damn about their constituents.

    • Bingo! It's like requiring all fake money to have a watermark, but not real money. But we are talking about California politicians, so you cannot expect logic or pragmatism. I heard they saw a study once which concluded that people with bachelor's degrees make more money on average than people without, so some politicians suggested giving away a bachelor's degree with every high school diploma, solving the state's poverty problem. I sure hope that is an urban legend, but having lived there for a while, I
  • ..... There is precisely zero way to actually enforce it.

    While, sure, you can make the widely used tools that generate such works put watermarks in, there is no way you can actually force someone to use one specific tool to do so, unless you also outlaw open source.

  • by The Cat ( 19816 ) on Friday January 26, 2024 @05:08PM (#64190828)

    This speech is approved. This is not.

    Gee, what could go wrong?

  • by Tablizer ( 95088 ) on Friday January 26, 2024 @05:20PM (#64190872) Journal

    So Photoshopping the hell out of something is fine, but use AI and it needs an e-sticker?

    That's like arresting left-handed pick-pockets but not right-handed ones because left pockets are trendy.

    • Sure you could hire someone to Photoshop a nude Taylor Swift, or impersonate Joe Biden telling people not to vote, but that will cost you money or time. Right now you can do those things with no skill, quickly, for free--subsidized by venture capital.

      When I read the headline, I thought this was an "Evil Bit"-style stupid move. But I think it will buy time for society to figure out what it takes to adjust before the required computing power is common in people's lives.

    • So Photoshopping the hell out of something is fine, but use AI and it needs an e-sticker?

      That's like arresting left-handed pick-pockets but not right-handed ones because left pockets are trendy.

      Photoshop requires skill. AI requires little skill or effort. A talented archer can be very lethal, but any idiot with a gun can cause mass casualties...hence why we regulate guns more carefully than bows and arrows. I am much more worried about AI-generated deepfakes than talented photoshoppers.

  • Enforcement of setting the Evil Bit [ietf.org] ensured perfect security on the internet, no packets with unlawful intent have been seen ever since.

    Has anyone ever told these lawmakers that nobody gives a fuck about their laws outside of their jurisdiction? Or inside, for that matter?

    The likelihood of a law being observed sinks dramatically as the chance of getting caught breaking it drops near zero.

    • For those who cannot obey the law, a bit of vigilante action is advised. Mark-them-up ... track-them-down ... punch-them-out.  EOF.
      • Good luck finding the guy in Generistan. The police there will laugh in your face and tell you to fuck off, they have real crime to take care of.

  • by ctilsie242 ( 4841247 ) on Friday January 26, 2024 @05:52PM (#64190950)

    We have seen this before. Ages ago, when the RIAA wasn't able to stop Diamond from making their Rio MP3 player (although the victory for Diamond was Pyrrhic), a whole think tank, SDMI, was created to push watermarking. The issue? Watermarking could be removed, and it could be removed while still passing the "golden ears" test.

    What is to keep someone from using some AI based program to un-watermark?

  • Somewhere between difficult and impossible in practice

  • I'll take the extra steps to use generative AI that doesn't watermark the output.
  • by CrappySnackPlane ( 7852536 ) on Friday January 26, 2024 @06:53PM (#64191116)

    Watermarks only lull people into a false sense of security; it would be trivial to find and remove any watermarks (even simply re-compressing the JPG or MKV would likely work).

    What would be useful is AI that can tell you why it's doing what it's doing. And not just "sorry, but I'm protecting you from problematic ideas and possibly-offensive language". Let us see the AI piecing together the answer step by step, so we can see the word "scunthorpe" pop up, and then "answer contains string '*cunt*' --> [100 - vulgar/offensive] terminating request". We shouldn't have to fight tooth and nail for this right.
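
    For context, "scunthorpe" refers to the classic failure mode of naive substring filters, which block the town's name because of the string inside it. A toy sketch in Python (the blocklist and message format are made up, echoing the comment's example) shows the false positive:

        # Toy sketch of the naive substring filter behind the Scunthorpe problem.
        # The blocklist and output format are hypothetical.
        BLOCKLIST = ["cunt"]

        def moderate(text):
            for term in BLOCKLIST:
                if term in text.lower():
                    return f"[100 - vulgar/offensive] terminating request: matched '*{term}*'"
            return "ok"

        print(moderate("Tell me about Scunthorpe United"))
        # [100 - vulgar/offensive] terminating request: matched '*cunt*'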

    • it would be trivial to find and remove any watermarks (even simply re-compressing the JPG or MKV would likely work).

      I may be giving them too much credit and assuming they're requesting "invisible" watermarks. If what they want is along the lines of a logo bug, then not only will that never, ever fly, but it'd be just as trivial to remove - for instance, request the AI generate a video in letterboxed 4:3, then just crop off the top and bottom (and logo bug) when reencoding to proper widescreen.

  • There are major problems with this kind of law.

    First, all those entities who can independently create AI content aren't going to abide by such a law. This includes creators in other countries and other subversive and propagandist elements. Why on earth would they decide to make content with watermarks when their intent is to propagate disinformation and sow distrust? If anything, a law demanding that AI art creators add a watermark will actually make it even easier to stoke the disruption of authenticity.
  • The proper way to do this is to have signed certs in photo equipment that can sign the metadata along with the image, to prevent people from claiming something is misinformation. At this point it should be assumed all images are manipulated or AI generated.
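
    As a rough sketch of what that device-side signing could look like, here is a hedged Python example using Ed25519 signatures from the cryptography package. The key handling, metadata format, and file names are hypothetical stand-ins; the real-world analogue is the C2PA/Content Credentials effort, not this exact scheme.

        # Hypothetical sketch of device-side provenance signing.
        # In a real camera the key would live in a secure element, not in RAM.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        device_key = Ed25519PrivateKey.generate()

        with open("photo.jpg", "rb") as f:
            image_bytes = f.read()
        metadata = b'{"device":"camera-1234","time":"2024-01-26T16:04:00Z"}'

        # The signature covers image plus metadata and ships alongside the file.
        signature = device_key.sign(image_bytes + metadata)

        # Anyone holding the maker's public key can confirm nothing changed since
        # capture; verify() raises InvalidSignature on any tampering.
        device_key.public_key().verify(signature, image_bytes + metadata)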

  • By forcing watermarks on AI generations, you are by default declaring anything with a watermark fake and everything without a watermark genuine. Gee, I wonder what kind of nightmare that will lead to.
    • Whatever JapeChat / AI makes is fake ... by definition.  Watermark those falsies to-the-gills.  Those who pimp-the-ride for AI fakes also ought to be ... branded so-2-speak.  BRANDED .. haha.   Are you the kind of joyboi who like making fakes ?
