California Lawmakers Push for Watermarks on AI-Made Photo, Video (bloomberglaw.com)
California lawmakers are drawing up multiple plans to require watermarks on content created by AI, to curb abuses of the emerging technology that have affected sectors from political races to the stock market. From a report: At least five lawmakers have promised or are considering different proposals that would require AI companies to implement some type of verification that a video, photo, or written work was made by the technology. The activity comes as advanced AI has rapidly evolved to create realistic images and audio on an unprecedented level. Advocates worry the technology is ripe for abuse and could lead to a wider proliferation of deepfakes, in which a person's likeness is digitally manipulated, typically to misrepresent them -- it is already being used in the presidential race. But such measures are likely to face scrutiny from the tech sector.
Amid a pivotal election year and an online world full of disinformation, the ability to know what's real or not is crucial, said Drew Liebert, director of the California Initiative for Technology and Democracy. The harm from AI is already happening, with Liebert noting the aftermath of an AI-generated photo that went viral in May of last year and falsely portrayed another terrorist attack in the US. "The famous photograph now that was put on the internet that alleged that the Pentagon was attacked, that actually caused momentarily a [$500 billion] dollar loss in the stock market," he said. The loss would not have been as severe, he said, "if people would have been able to instantly determine that it was not a real image at all." Ask Slashdot: Could a Form of Watermarking Prevent AI Deep Faking?
Politicians... (Score:5, Insightful)
...attempting to regulate that which they do not understand. Probably based on which lobbyist wrote the biggest check.
Pardon my cynicism...
What's not to understand? (Score:5, Informative)
From my standpoint it seems pretty cut and dried: AI image generation makes it super easy to spread misinformation and outright lies. And while we all focus on the obvious political issues, I keep getting YouTube ads of Joe Rogan telling me to buy all sorts of things, which are utterly nonsensical and obvious scams.
You can make the argument that the scammers are going to run the ads anyway, but if it's not legal for them to run the ads without a watermark, that makes it much easier to go after them. That's usually what these kinds of laws are for: to create something where there's a much more clear-cut crime being committed. Sort of like how we got Al Capone on tax evasion.
That said, I don't think this is a perfect solution; I'm not even convinced it's a good idea. But I'm equally not convinced it's a bad idea.
Re:What's not to understand? (Score:5, Funny)
We need criminal penalties for disinformation spreaders, amirite?
Re:What's not to understand? (Score:5, Insightful)
It depends on what "disinformation" is being spread. amirite?
My team can spread it.
Your team can't.
Re: (Score:2, Insightful)
No, it depends on who gets to define what is and isn't disinformation.
California Democrats (and I've lived here a long time) remind me, more and more, of Stalin editing photographs as all his "old buddies" fell out of favor, one by one.
Re: (Score:2, Insightful)
I live in CA as well. I understand. Trust me.
Re: (Score:2, Funny)
If you live in California, I would never trust you.
And you would never trust me.
Because we both know better.
Re: (Score:2)
Does it help that I don't want to live here, but wife won't move as her family is here.
Re: What's not to understand? (Score:2)
He'd be correct if he qualified that to include only people that Californians elect into office.
Re: (Score:2, Insightful)
California politicians are elected by an overwhelming majority of California voters. CA is one state where there's no reason to suspect any kind of voter fraud because it simply isn't needed.
So no, I don't trust CA politicians, and I equally don't trust the people who elect them.
Re: (Score:2)
It's really weird though because in all of my time being here, I haven't yet met anybody who is anywhere near as insane as the politicians here are. I guess there was this one time where some fat guy dressed in pink with painted nails and a mohawk accosted my mom at the grocery store because he didn't like her shirt or something, which I guess is normal in some neighborhoods? Though technically I didn't really meet him.
Re: (Score:2)
How people behave in public compared to how they behave on the Internet, or how they vote, is well documented. Just like on the Internet, people tend to hang out with more like-minded people and want group-think.
Most people do not like to be questioned about their beliefs (politics is a belief) because most of them haven't REALLY thought out why they feel the way they do about something. So unless they are surrounded by "safe" friends they aren't going to be spouting their more crazy ideas in general public where someone
Re: (Score:2)
What is it you're so afraid of that you feel the need to post anonymously?
Re: (Score:2)
No, it depends on who gets to define what is and isn't disinformation.
That'd be whichever of our two major parties is presently in power. That's why it's not a great idea to grant the government any new powers that you wouldn't feel comfortable being wielded by both parties.
Re: (Score:2, Informative)
We need criminal penalties for disinformation spreaders, amirite?
"i like pancakes"
"oh so you hate waffles?"
"no bitch, that's a whole new sentence, wtf are you talking about?"
Re: (Score:2)
Yes, and we already do, for the most part. It's called fraud when you impersonate someone else, and making it a crime to do it in this specific way is a win for us all. If you don't think it should be a crime, give me your likeness so I can run ads in your area pretending to be you.
Re: What's not to understand? (Score:2)
Disinformation isn't necessarily fraud.
Re: (Score:2)
Right, but we're specifically talking about disinformation distributed by AI fakes trying to pass as the real person.
Re: (Score:2)
Re: (Score:2)
That's literally what this article is about. If you're going to deepfake someone, you must be open about it instead of committing fraud. Even Howard Stern makes up goofy names for the impersonated characters that get on air, and that's why I've never confused David Letterman with Evil Dave Letterman.
We have those (Score:4, Insightful)
When you start lying about news events it does get kind of dicey, though. But I do think, for example, that we should crack down on the anti-vax crowd using existing laws about making false medical claims. The same should be brought to bear against homeopaths and other scam artists.
Re: We have those (Score:2)
If you did that, then the word "superfood" would be illegal, in addition to any claim that may be true but hasn't yet been proven, or medical claims that have been "proven" but aren't actually true.
For example, under that regime it would be a crime for me to claim that you have the mental capacity to use a spell checker.
Re:What's not to understand? (Score:5, Insightful)
Re: (Score:2)
Finally a clear analysis.
Re: What's not to understand? (Score:2)
From my standpoint it seems pretty cut and dried to me that AI image generation makes it super easy to spread misinformation and outright lies and while we all focus on the obvious political issues
So the fix is to give people a false sense of security? That is, if the watermark isn't there, it's ok to turn your brain off and just take for granted that it isn't a deepfake? If not, then what the fuck is the point?
Oh wait, you always turn your brain off anyways, just like you're doing right now.
Re: (Score:2)
That's exactly what it is! By requiring the watermark, the politicians appear to be doing "something". Who cares that it's insanely easy to undo the watermark and will hardly, if ever, be enforced.
We got lots of laws like that. This will just be another one.
Re: What's not to understand? (Score:1)
Misinformation is free expression, comrade.
Geographic Boundaries (Score:3)
...but if it's not legal for them to run the ads without a watermark that makes it much easier to go after them.
Not if they are doing it from outside California, or even outside the US, from a country where it is perfectly legal to use AI images that do not have watermarks. Good luck to California if they think they can enforce their laws on, say, an EU website run by a company with no physical presence in California or even the US. And unless they plan to wall themselves off from the internet, those images will still be visible to Californians.
Re: Politicians... (Score:2)
And good luck with watermarking, those that want to create questionable things will of course circumvent it.
Swifties tipped the scales? (Score:5, Informative)
These laws are all good and fine for companies that are commercial....BUT how could they possibly try to enforce this on the open source stuff like StableDiffusion?
And....laws like this seem, on their face, to really grind against the 1st Amendment right off the bat.
Re: Swifties tipped the scales? (Score:2)
Re: (Score:2)
This is California legislators. There are only two reactions they're capable of, both involving jerking. One involves jerking the knees; the other involves jerking something else.
Re: (Score:3)
Re: (Score:1)
The thing is... this is an election year. Other countries with extensive psy-ops operations going on in the US would be falling over themselves if they got the US Congress to ban AI development, while theirs continued. Scoring regulations that are all but useless would be a victory for them. Billions are being spent, and it may not be Swifties, but many other people who want to tear apart the US development of AI, so their AI, which doesn't have any guardrails or IP protection can develop in peace. Reme
Re: (Score:2)
I wish we had a mass exodus here. Instead, what, maybe a million or two? Out of 39 million? Big deal. For real change, we need 10 million to GTFO. That would actually make a difference on housing, which is the single biggest part of our affordability problem.
P.S. I'm working on trying to be one of the leavers, so really, I'll be contributing to that positive trend. I just can't do it today or even this year, but hopefully within two. California, you are most welcome!
Re: (Score:2)
These laws are all good and fine for companies that are commercial....BUT how could they possibly try to enforce this on the open source stuff like StableDiffusion?
In exactly the same way the rules apply to commercial software. I don't see why commercial vs. open source should make any difference as far as the law is concerned here.
They never learn (Score:1)
Yeah, but at least they just made it harder... (Score:5, Insightful)
like watermarks can't be removed, spoofed, etc
Nothing can stop abuse, but you can make it less convenient. At least commercial companies will be more mindful of how their product is being used (assuming laws like this actually get passed). We can't stop AI abuse...but we should force anyone making money off AI-related content to label it as AI-generated. That will deter those who want to run a respectable business from enabling the worst AI will bring.
You want to make some deepfake speeches of Biden saying the polls are closed?...well...at least you'll have to set up your own hardware now....or find some unregulated (KGB) clusters, if this were a law in respectable nations. We can't stop Iran or Russia from doing shitty things with generative AI, but you can at least stop my anti-vaxxer conspiracy theory loving dumbass cousin.
Re:Yeah, but at least they just made it harder... (Score:5, Insightful)
but you can at least stop my anti-vaxxer conspiracy theory loving dumbass cousin.
By validating his conspiracy theories?
Re: (Score:3)
"safe and effective"
million plus fake deaths? (Score:2)
"safe and effective"
Because all those million plus deaths before the vaccine were totally faked...just a big conspiracy for those money-loving scientists to get grant money??...or are you saying the death toll never fell and has remained consistent and it's a big George Soros-funded conspiracy to hide them all?
Re: (Score:2)
"It was the wet market, not lab leak."
Make the Real Seem Fake (Score:2)
Nothing can stop abuse, but you can make it less convenient.
How does it stop the abuse, though? If the watermark becomes the standard marker for fake video, then people will just add it to real video and claim something real was actually faked, while the fakers just run watermark-removing software on their output.
If anything this will be easier to do than making a fake video, and we end up with the same problem - nobody is sure what's true.
Because respectable tools will force a watermark (Score:2)
Nothing can stop abuse, but you can make it less convenient.
How does it stop the abuse, though? If the watermark becomes the standard marker for fake video, then people will just add it to real video and claim something real was actually faked, while the fakers just run watermark-removing software on their output. If anything this will be easier to do than making a fake video, and we end up with the same problem - nobody is sure what's true.
There are 2 categories of these actors...legit companies trying to do "good" and people creating tools for criminals. If these laws were broadly passed, Open AI, MS, Google, Meta, etc would comply. They want Generative AI to be the next technological revolution for productivity...like the jump to webapp from VBasic Client/Server Apps in the 90s. There's little money in Fake News and Election Fraud...compared to what they can make with automating workflows for big-spender companies. They don't want to be
Re: (Score:2)
Generative AI is presently pretty costly to run, so anyone who wants to violate rules has to set up their own cluster...sure, the KGB and Iran will do that...but I don't think the average 4chan loser has access to those resources....so you want to make a deepfake video of Nancy Pelosi pegging her husband?...you need to set up your own cluster...the average Slashdot user could probably do that, if they wanted to swallow the expense...but my dumbass cousin couldn't.
AI image models are far more accessible than even small 7B LLMs due to relatively low VRAM requirements. These models are absolutely tiny in the 2 to 6 GB range. Any kid with a mid-range gaming PC can run this shit quite comfortably. Until recently I was using a 7 year old GPU without any tensor cores. Modern kits are quite extensive with workflows for training up LoRAs, graph editors, controlnets...etc.
but my dumbass cousin couldn't.
Anyone with a gaming PC very much could.
Re: (Score:2)
We can't stop AI abuse...but we should force anyone making money off AI-related content to label it as AI-generated. That will deter those who want to run a respectable business from enabling the worst AI will bring.
You want to make some deepfake speeches of Biden saying the polls are closed?...well...at least you'll have to set up your own hardware now....or find some unregulated (KGB) clusters, if this were a law in respectable nations. We can't stop Iran or Russia from doing shitty things with generative AI, but you can at least stop my anti-vaxxer conspiracy theory loving dumbass cousin.
I strongly disagree. The presence of what is effectively an evil bit may be viewed as a legitimate indicator of something meaningful making this plan far worse than the status quo.
Re: (Score:3)
Or that state laws can't be enforced anywhere else.
Re: (Score:1)
Over 99.99% of the population is incapable of removing a watermark from an image.
Maybe they could use AI to remove the watermark.
Re: (Score:2)
Unless the watermark is imposed over most of the image, dead center, you just crop it out. Anyone with even a little bit of ability to use Photoshop could accomplish this. Far more than your 10% idea.
Holy Shit, technology legislation that makes sense (Score:2, Interesting)
Re: (Score:1)
I think I'm gonna go puke.
Re: (Score:2)
...and the response will be "Let's all buy software from places that don't give a crap about California law!"
If you like 6 mangled fingers on your image (Score:2)
...and the response will be "Let's all buy software from places that don't give a crap about California law!"
Yeah...reminds me of the generative AI porn...it suuucks!!!!...not remotely convincing....honestly horrifying. You pick the parameters for your perfect woman...it gets them all wrong...pick Latina Milf and she ends up Black with Japanese facial features and looking 20 years too young....and she has 6 fingers going in directions no hand can go into...because generative AI suuuucks...and it sucks even more when you do it poorly, like the generative AI porn sites. With today's tech, Generative AI is very exp
Re:Holy Shit, technology legislation that makes se (Score:5, Insightful)
Wrong - it will provide cover to the very worst actors to do whatever they please.
There is a choice here. That choice is one
where educated people know not to believe their lying eyes when it comes to what they see online (just like what they read online today, really) because there is a sea of crap out there, and they wait until some reputable news agency vets it.
where everyone continues to think that because it's video it has to be real. I mean, if it wasn't, it would have an AI watermark, right? And nobody can easily make AI videos without a watermark; therefore the YouTube video of Biden doing coke with Hunter must be real!
Actual threat actors, you know, like Chinese and Russian intelligence, and probably organized crime, will be free to pump out whatever misinformation they like, while Joe Public, who just wants to make a video about his powerwashing company, will be tarred with some 'Possibly Fake Video' disclaimer. This is not a good strategy!
Re:Holy Shit, technology legislation that makes se (Score:4, Insightful)
Re: (Score:1)
Nerd card revoked. This only serves to make rubes more trusting of content without watermarks, but does not increase the trustworthiness of non-watermarked material.
Re: Holy Shit, technology legislation that makes s (Score:2)
but it will reduce harm caused from respectable businesses actually making money on their product.
If a watermark is needed to distinguish whether it's a fake or not, then these businesses were not respectable to begin with.
too late (Score:2)
I see the attraction of the argument, but suspect it wouldn't pass first amendment scrutiny, all things being "equal" anyway.
It's too late, anyway. If anyone with a few thousand bucks can run one of these things (albeit slowly) in their server closet, and it's open source to get started, regulation would be an unfunny joke.
Re: (Score:1)
In what way? No one is preventing you from making the stuff. All that's being said is it must be marked as not real/original/whatever. Watermarking, just like performing a fact check on someone's lies, does not violate free speech.
Re:too late (Score:4, Informative)
Compelled speech is a violation of the First Amendment. Not only does it infringe on free speech it also infringes on the freedom of the press.
Re: (Score:3)
Compelled speech is a violation of the First Amendment. Not only does it infringe on free speech it also infringes on the freedom of the press.
No speech is being compelled. As the OP stated, fact checking a lie is not a violation of the First Amendment. The lie is still there for everyone to see. All that has happened is someone pointed out that lie.
The same here. No one is preventing you from posting an AI picture of someone or something. All that is being done is notifying people it's a computer generated picture.
Re: (Score:3)
This isn't equivalent to removing the lie, it's the equivalent to forcing the liar to label it a lie.
If you don't see the difference, you're part of the problem.
(Not that it would be enforceable anyway.)
Re: (Score:2)
This isn't equivalent to removing the lie, it's the equivalent to forcing the liar to label it a lie.
And? How is that a bad thing? What you're suggesting is companies shouldn't have to put labels on their products which say, "Not life size" or "Simulated color".
If you don't want lies to be called out then a video of the orange criminal showing him kissing Putin's hand would be fair game. Granted, that is highly believable to begin with, but you get the point.
Re: (Score:2)
Compelled speech is a violation of the First Amendment. Not only does it infringe on free speech it also infringes on the freedom of the press.
I'm as close to a free-speech absolutist as you're likely to find (check my posting history), but I don't agree at all with this interpretation. In fact, this law reminds me of the famous quote by Louis Brandeis: "the remedy [for problematic speech] is more speech, not enforced silence".
I've suggested many times that this is the kind of solution social media should use to cope with "problematic" speech (posts by Russian bots, so-called "fake" news stories, etc). Don't hide the posts or ban the posts-- jus
technical solution (Score:5, Insightful)
This is a technical solution to a social issue. It will not solve the problem.
the social solution is to teach critical thinking (Score:1)
Moreover parents are often extremely upset when those skills are taught to their kids because most parents have a whole bunch of sacred cows they don't want to see criticized and if you give a kid what is the rhetorical and intellectual equivalent of a wrecking ball they're going to turn it on pretty much everything
Re: (Score:3)
The problem with this isn't the technology, but the fact that journalists are willing to lie in order to sell advertising. A truth-in-journalism act would result not in deepfakes being published, but rather in their being ignored by the press at large.
The problem is not the technology, but the fact that journalists are willing to lie with whatever means are at their disposal.
About as useful as (Score:5, Insightful)
Missed the bus by almost 18 months (Score:2)
I think the cat is out of the bag now. Not only have commercial companies been selling the product for years now, but there are open source models you can use. Not to mention watermarks are alarmingly easy to replace.
Re: (Score:2)
I think the cat is out of the bag now. Not only have commercial companies been selling the product for years now, but there are open source models you can use.
No doubt. This reminds me of the gun control debate: making laws about this only ensures law-abiding people add watermarks. Criminals and people with nefarious purposes will just work around the law.
Not to mention watermarks are alarmingly easy to replace.
I'm assuming the watermarks aren't visual marks, like traditional photographers might add. The only thing which makes sense is some sort of steganographic, cryptographically-signed watermark that you can't remove by, say, resampling with the GIMP.
Re: (Score:1)
I'm assuming the watermarks aren't visual marks, like traditional photographers might add. The only thing which makes sense is some sort of steganographic, cryptographically-signed watermark that you can't remove by, say, resampling with the GIMP.
That's assuming a lot. It's certainly possible to steganographically watermark an image, but it's impossible to maintain an accurate, cryptographic watermark signature through any number of potential transformations that an adversary could use. This watermark proposal would have almost zero practical outcome and be practically unenforceable.
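The fragility being argued about here is easy to demonstrate. Below is a toy sketch, in pure Python with invented names, of a least-significant-bit watermark on a grayscale pixel row; a single crude box-blur pass, standing in for any smudge filter or recompression, already scrambles it:

```python
# Toy LSB watermark: hide one bit per pixel of a grayscale row.
# Illustrative only -- not a scheme any vendor actually uses.

def embed(pixels, bits):
    """Set the least-significant bit of each pixel to the watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract(pixels):
    """Read the watermark back out of the LSBs."""
    return [p & 1 for p in pixels]

def box_blur(pixels):
    """Crude 1-D box blur -- stands in for any mild lossy transform."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1):i + 2]
        out.append(sum(window) // len(window))
    return out

image = [120, 57, 200, 33, 180, 90, 14, 250]
mark  = [1, 0, 1, 1, 0, 0, 1, 0]

stamped = embed(image, mark)
assert extract(stamped) == mark        # the mark survives a clean copy

blurred = box_blur(stamped)
print(extract(blurred) == mark)        # prints False: one blur pass destroys the mark
```

Real schemes spread the mark redundantly across frequency coefficients rather than raw pixel bits, but the arms race is the same shape: the mark has to survive every benign transform while somehow not surviving the adversarial ones.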
Re: (Score:2)
That you can't remove by, say, resampling with the GIMP.
That's assuming a lot. It's certainly possible to steganographically watermark an image, but it's impossible to maintain an accurate, cryptographic watermark signature through any number of potential transformations that an adversary could use. This watermark proposal would have almost zero practical outcome and be practically unenforceable.
Sorry, I wasn't clear. That was my point about GIMP, that removing a cryptographic watermark would almost certainly be easy. But I'm not a cryptographer so maybe there's some way to embed a hidden watermark that you couldn't remove with a simple smudge filter.
Re: (Score:1)
so maybe there's some way to embed a hidden watermark that you couldn't remove with a simple smudge filter.
Not really, no.
The law proposes that the AI creator tools provide a way to verify if some image was AI generated. No matter how clever or secret the watermarking scheme, a verification tool provides the necessary feedback to an adversary to keep applying filters to an image until the watermark is defeated.
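That oracle attack can be sketched in a few lines. Both functions below are hypothetical stand-ins (a toy parity-based verifier and a mild pixel-nudge transform); the point is the feedback loop, not the particulars of either function:

```python
import random

def is_watermarked(pixels, mark_bit=1):
    """Toy stand-in for a vendor verification tool:
    'watermarked' if most pixels carry the mark in their LSB."""
    hits = sum(1 for p in pixels if (p & 1) == mark_bit)
    return hits > len(pixels) * 0.75

def degrade(pixels, rng):
    """Mild lossy transform: nudge each pixel by at most one level."""
    return [max(0, min(255, p + rng.choice((-1, 0, 1)))) for p in pixels]

def strip_watermark(pixels, verifier, max_rounds=50):
    """Keep applying small transforms until the verifier stops flagging the image."""
    rng = random.Random(0)
    for rounds in range(1, max_rounds + 1):
        pixels = degrade(pixels, rng)
        if not verifier(pixels):
            return pixels, rounds
    return pixels, max_rounds

watermarked = [p | 1 for p in range(100, 200)]  # every LSB set: clearly flagged
assert is_watermarked(watermarked)

cleaned, rounds = strip_watermark(watermarked, is_watermarked)
print(f"verifier defeated after {rounds} round(s)")
```

The stronger the vendor makes the verifier available (as the law would require), the better the feedback signal the adversary gets, which is exactly the objection being raised.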
1st amendment violation (Score:2)
this law will go nowhere. forced speech isn’t a thing here in America.
Re: (Score:2, Interesting)
Re: (Score:2)
1a doesn't apply as its not forced speech.
It's not? In what way is forcing a creator to state that an image is AI-generated, under threat of fines, imprisonment, or other state-imposed punishment, NOT compelled speech? Please, be specific.
Worry about your Zombie Apocalypse Instead CA (Score:2, Insightful)
Really? Their state is on fire and they have nothing better to do than to fight something that is literally unfightable. Go CA! This will definitely help your street poo problem.
Re: (Score:3, Funny)
This will definitely help your street poo problem.
Nice way to admit you're living in a conservative news bubble. Why didn't you mention woke or trans?
Re: (Score:1)
Perhaps they have it backwards? (Score:1)
Re: (Score:3)
Because that will not facilitate the powers that be creating their own fake stuff without the watermark, and expect everyone to believe it's real.
And I'll bet these politicians have had exactly that conversation. They certainly don't give a damn about their constituents.
Re: (Score:2)
While I appreciate the intent behind this law... (Score:4, Funny)
While, sure, you can make the widely used tools that generate such works put watermarks in, there is no way that you can actually force someone to use one specific tool to do so, unless you also outlaw open source.
Obvious (Score:3)
This speech is approved. This is not.
Gee, what could go wrong?
OK to lie with Photoshop but not AI? (Score:3)
So Photoshopping the hell out of something is fine, but use AI and it needs an e-sticker?
That's like arresting left-handed pick-pockets but not right-handed ones because left pockets are trendy.
Re: OK to lie with Photoshop but not AI? (Score:3)
Sure you could hire someone to Photoshop a nude Taylor Swift, or impersonate Joe Biden telling people not to vote, but that will cost you money or time. Right now you can do those things with no skill, quickly, for free--subsidized by venture capital.
When I read the headline, I thought this was an "Evil Bit"-style stupid move. But I think it will buy time for society to figure out what it takes to adjust before the required computing power is common in people's hands.
Guns are controlled, bows and arrows aren't (Score:3)
So Photoshopping the hell out of something is fine, but use AI and it needs an e-sticker?
That's like arresting left-handed pick-pockets but not right-handed ones because left pockets are trendy.
Photoshop requires skill. AI requires little skill or effort. A talented archer can be very lethal, but any idiot with a gun can cause mass casualties...hence why we regulate guns more carefully than bows and arrows. I am much more worried about AI-generated deepfakes than talented photoshoppers.
Also in the news (Score:2)
Enforcement of setting the Evil Bit [ietf.org] ensured perfect security on the internet, no packets with unlawful intent have been seen ever since.
Has anyone ever told these lawmakers that nobody gives a fuck about their laws outside of their jurisdiction? Or inside, for that matter?
The likelihood of a law being observed sinks dramatically as the chance of getting caught breaking it drops near zero.
Re: (Score:1)
Re: (Score:2)
Good luck finding the guy in Generistan. The police there will laugh in your face and tell you to fuck off, they have real crime to take care of.
SDMI and watermarks... (Score:3)
We have seen this before. Ages ago, when the RIAA wasn't able to stop Diamond from making their Rio MP3 player (although the victory for Diamond was Pyrrhic), a whole think tank, SDMI, was created to push watermarking. The issue? Watermarking could be removed, and it could be removed while still passing the "golden ears" test.
What is to keep someone from using some AI based program to un-watermark?
Great idea in theory (Score:2)
Somewhere between difficult and impossible in practice
Won't use it. (Score:2)
Watermarks are pointless (Score:3)
Watermarks only lull people into a false sense of security; it would be trivial to find and remove any watermark (even simply re-compressing the JPG or MKV would likely work).
What would be useful is AI that can tell you why it's doing what it's doing. And not just "sorry, but I'm protecting you from problematic ideas and possibly-offensive language". Let us see the AI piecing together the answer step by step, so we can see the word "scunthorpe" pop up, and then "answer contains string '*cunt*' --> [100 - vulgar/offensive] terminating request". We shouldn't have to fight tooth and nail for this right.
Just to add (Score:2)
it would be trivial to find and remove any watermarks (even simply re-compressing the JPG or MKV would likely work).
I may be giving them too much credit and assuming they're requesting "invisible" watermarks. If what they want is along the lines of a logo bug, then not only will that never, ever fly, but it'd be just as trivial to remove - for instance, request the AI generate a video in letterboxed 4:3, then just crop off the top and bottom (and logo bug) when reencoding to proper widescreen.
This seems unenforceable and dangerous (Score:2)
First, all those entities who can independently create AI content aren't going to abide by such a law. This includes creators in other countries and other subversive and propagandist elements. Why on earth would they decide to make content with watermarks when their intent is to propagate disinformation and sow distrust? If anything, a law demanding that AI art creators add a watermark will actually make it even easier to stoke the disruption of authenticity.
This is the opposite of what they should be pushin (Score:1)
The proper way to do this is to have signed certs in photo equipment that can sign the metadata along with the image, to prevent people from claiming something real is misinformation. At this point it should be assumed that all images are manipulated or AI generated.
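A minimal sketch of that signing flow, using only Python's standard library, with an HMAC as a stand-in for the asymmetric per-device certificate a real camera would hold in secure hardware (all names here are invented for illustration):

```python
import hashlib
import hmac
import json

# Stand-in for a key provisioned into the camera's secure element at manufacture.
# A real design would use an asymmetric key pair so anyone can verify without the secret.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign_capture(image_bytes, metadata):
    """Bind the image and its metadata together under the device key."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify_capture(image_bytes, metadata, signature):
    """Check that neither the image nor its metadata was altered after capture."""
    expected = sign_capture(image_bytes, metadata)
    return hmac.compare_digest(expected, signature)

photo = b"\x89PNG...raw sensor data..."
meta = {"device": "camera-1234", "timestamp": "2024-02-05T12:00:00Z"}

sig = sign_capture(photo, meta)
assert verify_capture(photo, meta, sig)                 # untouched capture verifies
assert not verify_capture(photo + b"edit", meta, sig)   # any manipulation breaks the signature
```

This flips the burden of proof the way the parent suggests: instead of fakes having to self-label, authentic captures carry provenance, and anything unsigned is treated as untrusted by default.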
Terrible idea (Score:2)
Re: (Score:1)
Re: (Score:2)
Drives the Internet Karens nuts.
"There ought to be a law"
No, there doesn't, and it wouldn't stop anything if there were.