The EU's AI Act Could Have a Chilling Effect On Open Source Efforts (techcrunch.com) 45
An anonymous reader quotes a report from TechCrunch: Proposed EU rules could limit the type of research that produces cutting-edge AI tools like GPT-3, experts warn in a new study. The nonpartisan think tank Brookings this week published a piece decrying the bloc's regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the EU's draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.
If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it's not inconceivable the company could attempt to deflect responsibility by suing the open source developers on whose work they built their product. "This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public's understanding of AI," Alex Engler, the analyst at Brookings who published the piece, wrote. "In the end, the [E.U.'s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI."
In 2021, the European Commission -- the EU's politically independent executive arm -- released the text of the AI Act, which aims to promote "trustworthy AI" deployment in the EU. As they solicit input from industry ahead of a vote this fall, EU institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems. The legislation contains carve-outs for some categories of open source AI, like those exclusively used for research and with controls to prevent misuse. But as Engler notes, it'd be difficult -- if not impossible -- to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors. "The road to regulation hell is paved with the EU's good intentions," said Oren Etzioni, founding CEO of the Allen Institute for AI. "Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided 'as is' -- consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results."
Instead, Etzioni argues that EU regulators should focus on specific applications of AI. "There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective. Instead, AI applications such as autonomous vehicles, bots, or toys should be the subject of regulation."
Jeezeeebusssfucckkk! (Score:5, Insightful)
Re:Jeezeeebusssfucckkk! (Score:5, Funny)
Don't worry, it's on the list. Right after they get done with the Unicode support
Re: (Score:1)
Now as for Europe and AI, what does it really matter? Europe isn't going to develop world-changing tech. They have not been relevant since the 80s in tech, and that was limi
Re: (Score:2)
If you use Firefox on mobile, you can use plugins like uBlock Origin to block those ads. It even blocks ads on YouTube.
Going mobile doesn't need to mean giving up control. Use mobile websites instead of apps with a browser like Firefox and that power is yours again. You can even write your own plugins. It's great.
Re: Jeezeeebusssfucckkk! (Score:2)
Re: Jeezeeebusssfucckkk! (Score:1)
Re: Jeezeeebusssfucckkk! (Score:2)
Re: Jeezeeebusssfucckkk! (Score:1)
Re: (Score:3)
Dude! It's like only 10 stories apart [slashdot.org]!!!!!! You need some de-duplicator software here.
They tried, they implemented an AI algorithm called BeauHD but due to EU rules they weren't allowed to train it with a dataset related to editing text or looking for dupes.
Re: (Score:2)
If only they could invent an AI for detecting dupes... Or alternatively just read the front page now and then.
Re: (Score:2)
What they need is editors that actually read and enjoy the content of this site, so they'd recognize dupes. It's been suggested before. I still like this Slashdot, but man, do I ever miss OG Slashdot sometimes.
Host it in the US then... (Score:1)
Not hard to fix. Host the project in the US on GitHub, or in China, on Gitee. If the EU decides to do a pogrom on open source AI, use a handle, and perhaps check code in through TOR or a VPN. The EU has so many surveillance laws (iPredator in Sweden, for example) that many users use VPNs anyway.
Re: (Score:2)
It's not a bad idea, but I think it misses the reason people develop open-source software. They do it for status in the community, and doing it under a handle that does not connect back to a person will not provide the motivating status.
I may be reading this wrong, but it looks like an open-source developer would be liable if they contributed a few lines that were incorporated into an AI program via a makefile. I hope this is not retroactive.
Adding "Not for use in the EU" as a proposed modification to the
Define AI (Score:3)
Re: (Score:2)
Where do you see room for ambiguity? The term "AI" covers a pretty broad set of topics, but so does "mathematics". The line is pretty clear in either case. What do you find confusing?
The line is as clear as mud (Score:2)
Define AI? Is a conventional expert system AI? What about something that uses linear regression, is that AI? Or do you think it just applies to ANNs? What about perceptrons?
Re: (Score:2)
Define AI? Is a conventional expert system AI? What about something that uses linear regression, is that AI? Or do you think it just applies to ANNs? What about perceptrons?
Obviously anything that makes decisions. So any computer program should be considered AI and regulated as such. (Me, sarcastic? Really?)
Re: (Score:2, Informative)
Is a conventional expert system AI?
Yes. Expert systems have always been considered AI. Not only were they the subject of much of early AI research, they revitalized the field in the 1980s.
What about something that uses linear regression, is that AI?
Yes. Machine learning is considered to be a subset of AI.
Or do you think it just applies to ANNs?
Just because some people don't know anything about the subject doesn't mean that its boundaries aren't well-understood.
What about perceptrons?
Obviously.
Like I said, there isn't any ambiguity here.
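(Aside, to make concrete how little machinery some of these classical "AI" techniques involve: below is a minimal perceptron sketch in Python. The AND-gate training data, the learning rate, and the epoch count are invented purely for illustration and come from neither the article nor this thread.)

# A perceptron -- the 1950s-era linear classifier that has always sat
# squarely inside the field of AI -- in a couple dozen lines.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature tuples; labels: +1 or -1."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict with the current weights; update only on a mistake.
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Learn the logical AND function (linearly separable, so training converges).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [-1, -1, -1, 1]
w, b = train_perceptron(X, Y)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X])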
Re: (Score:2)
So basically you define AI by how the software is written, not what it does. Got it.
Re: (Score:1)
What are you babbling about?
This isn't my opinion. As I've already explained, what is and is not AI is well-understood. After all, "what is AI?" is not a philosophical question! Everything that is considered AI was invented by AI researchers. The text of the bill in question, which you didn't bother to read, actually does a pretty good job of outlining the boundaries.
There is absolutely no need for laypersons to make up their own pretend definitions based on what they believe from science fiction.
Re: (Score:2)
"Everything that is considered AI was invented by AI researchers"
You need to stop babbling and start thinking. Tic-tac-toe was considered AI back in the 50s despite the fact that you can write the algorithm in 50 lines of code or less. I doubt anyone would consider it AI now.
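(For what it's worth, the "50 lines of code or less" claim holds up: a perfect-play tic-tac-toe player via plain minimax fits comfortably in that budget. A rough Python sketch, illustrative only:)

# Perfect-play tic-tac-toe via plain minimax -- the kind of program that
# counted as "AI" in the 1950s. Board is a list of 9 cells: 'X', 'O' or ' '.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from the point of view of 'X'."""
    win = winner(board)
    if win:
        return (1 if win == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    best = None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

# 'X' to move on an empty board: perfect play on both sides ends in a draw (score 0).
print(minimax([' '] * 9, 'X'))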
Re: (Score:2)
You're confusing the field of AI and AI technology with the science fiction version of AI.
Tic-tac-toe, chess, go ... those were problems that were once thought (by some people) to require intelligence. That turned out not to be the case (it didn't take long), but the same is true for every problem to which we apply AI today. None of them require intelligence. Not a single one. If you're looking for human-like or superhuman intelligence, you won't find it in the field of AI. You won't find intelligence o
Re: (Score:2)
The English text of the proposal [europa.eu] includes:
Re: (Score:2)
I'm pretty sure the definition is "You have money. Give it to us or we'll take it."
Groundhog day (Score:2)
Are you drunk or something?
"nonpartisan think tank" (Score:2)
A "nonpartisan think tank" is like calling your country a "democratic republic" [wikipedia.org], isn't it?
Re: (Score:2)
A "nonpartisan think tank" is like calling your country a "democratic republic" [wikipedia.org], isn't it?
You mean like the USA is a democratic republic because it is a republic that practices representative democracy?
https://en.wikipedia.org/wiki/... [wikipedia.org]
Or were you just hoping that nobody would scroll up and read the whole entry from the beginning?
Bureaucrats without a clue (Score:2)
Why are they singling out AI software other than because it's currently a "thing"? There's plenty of conventional open source software that could be used in a critical system (e.g. Linux, GNU tools), but I don't see the EU calling for regulation and safety cases for those.
Sounds like the technoluddites have heard the scare stories about AI and have decided the "better regulate than sorry" approach is the way to go. Idiots.
Software has no quality guarantee and never did (Score:2)
Re: (Score:2)
Taking something and giving control to software does not absolve you of responsibility. The EU is worried about AI making hiring, firing, promotion, prison sentencing, and other decisions that will directly impact your life in a completely unaccountable way. It is a very real problem; AI sentencing is already being used in the USA, and so far the AIs look pretty racist because of how they are trained.
This is a real problem that needs to be solved so that we can integrate AI into our societies in an accoun
Re: (Score:2)
Re: (Score:2)
There is another post in response to this article which quotes what the EU defined as AI, and the definition they used seems pretty good. You should look at that to see how the EU defined it.
My point is that I agree with what the EU is actually trying to do, and it is a very real problem. I remember reading a research paper about an AI that Amazon built to make hiring decisions and it turned out to be REALLY racist in hiring because of the training set they used. However, because it was an AI making the decisions they
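(Aside: one standard way that kind of bias gets surfaced is a disparate-impact check -- compare selection rates across groups and flag ratios below the rough "four-fifths" rule of thumb. The sketch below is hypothetical; the data is invented and has nothing to do with the Amazon case the commenter recalls.)

# Hedged sketch of a disparate-impact audit over model hiring decisions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs; returns selection rate per group."""
    totals, hired = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        totals[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / totals[g] for g in totals}

# Invented example data: group_a is selected far more often than group_b.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
             ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0)]
rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, "four-fifths ratio:", round(ratio, 2))  # ratio < 0.8 suggests disparate impact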
Back to front (Score:2)
That seems to be completely back to front. If there's "too much ... rapid change in AI for the slow-moving regulatory process to be effective" then it won't be able to keep up with new applications, so the cor
Re: (Score:1)
the correct thing to do is to regulate the core technology and automatically cover any new applications that come along.
Alright, call me when you have your draft regulation of fire, metallurgy, electromagnetism, and writing. It will basically cover any harm someone can do to someone else!
Comment removed (Score:4, Insightful)
Thank goodness ... (Score:2)
It's all just fancy search and pattern matching. Or theoretically these regulations should apply to deployments of grep [slashdot.org] as well.