EU Open Source AI Technology

The EU's AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn (techcrunch.com) 68

Proposed E.U. rules could limit the type of research that produces cutting-edge AI tools like GPT-3, experts warn in a new study. From a report: The nonpartisan think tank Brookings this week published a piece decrying the bloc's regulation of open source AI, arguing it would create legal liability for general-purpose AI systems while simultaneously undermining their development. Under the E.U.'s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity. If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it's not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.

"This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public's understanding of AI," Alex Engler, the analyst at Brookings who published the piece, wrote. "In the end, the [E.U.'s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI." In 2021, the European Commission -- the E.U.'s politically independent executive arm -- released the text of the AI Act, which aims to promote "trustworthy AI" deployment in the E.U. As they solicit input from industry ahead of a vote this fall, E.U. institutions are seeking to make amendments to the regulations that attempt to balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems. The legislation contains carve-outs for some categories of open source AI, like those exclusively used for research and with controls to prevent misuse. But as Engler notes, it'd be difficult -- if not impossible -- to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.

This discussion has been archived. No new comments can be posted.

  • Seems unlikely (Score:4, Interesting)

    by AmiMoJo ( 196126 ) on Tuesday September 06, 2022 @10:31AM (#62856846) Homepage Journal

    We had all this panicking when GDPR was new, but the EU generally designs these things to avoid these kinds of problems.

    The open source developers might be responsible for the data they use to train it. Well, they already are. Copyright and GDPR cover training data sets. Can't just go scrape the web for photos to train your AI.

    If it ends up biased, you aren't going to be liable when some company deploys it, unless you advertised it as production-ready rather than a work in progress. Responsible AI developers will include warnings with their code anyway.

    Basically what the EU wants is for AI to be used responsibly, and where it interacts with citizens it should be accountable too.

    • Re:Seems unlikely (Score:5, Interesting)

      by gweihir ( 88907 ) on Tuesday September 06, 2022 @10:47AM (#62856908)

      Exactly. The only reason to incite some fake panic here is if you are a commercial enterprise that does not want to comply with these minimal requirements and hence pushes some fantasies about catastrophic damage.

      • Re: (Score:2, Interesting)

        by codebase7 ( 9682010 )

        The only reason to incite some fake panic here is if you are a commercial enterprise that does not want to comply with these minimal requirements and hence pushes some fantasies about catastrophic damage.

        The same can be said of the EU themselves. Current "AI" is nowhere near the level of intelligence needed to bring about the world ending / human slaves / dystopia that the plebs are screaming from the rooftops about. Yet, that's exactly the excuse these politicians are using to regulate. Why should the regulation be allowed when its cause is false? The field is too underdeveloped for the people working on it to even fathom the requirements for building and maintaining a "safe" and "trustworthy" AI. Much

        • Re:Seems unlikely (Score:5, Insightful)

          by HiThere ( 15173 ) <[ten.knilhtrae] [ta] [nsxihselrahc]> on Tuesday September 06, 2022 @12:33PM (#62857256)

          You are correct that "current AI is nowhere near the level of intelligence needed to bring about ...".

          Unfortunately, this is a research area. We don't have any idea whether some minimal change could cause a simple loop containing an AI and a person to be able to do as you suggest.

          To pick a plausible example, there are AI voice mimics and AI image editors. If the voice mimics improve just a bit, they might become hard to distinguish from a particular person's real voice, and if the AI image editors improve a bit, they might be able to produce realistic videos and incorporate the output from the voice mimic. This could allow one person to impersonate any other person for whom sufficient sample material exists, over any remote communication channel.
          I presume that the military would develop ways to ensure that their lines were secure, but I don't assume the same about financial institutions or political institutions. Or international relations. Or news organizations.

          We've already got problems deciding which sources are trustworthy; if it became "Which of these contradictory communications apparently from the same source is trustworthy?", I don't think anybody could manage. I'd expect a huge rash of wars, financial crises, divorces, legal battles, etc. It might well be that just this minor improvement of works already in progress would cause civilization to collapse. It would certainly make existing problems a lot worse.

          Of course, it would have its benefits. Movie studios wouldn't need to hire actors, and directors wouldn't need much in the way of screenwriters. And it would become really easy to build targeted ads.

          And this is just one development off the top of my head, and only extrapolating items which have been in the news within the last couple of days. (And not extrapolating them all that far.)

          • As far as trusting video/audio goes, hardware manufacturers need to start embedding cryptographic signatures into the video frames that can be verified through a Certificate Authority. Forging a false video would require hacking the hardware.

            Furthermore: The user of the camera could insert their own chip card, so that their data can be combined with the manufacturer's internal data to record the photographer's identity as well as verify that the video hasn't been edited after leaving the hardware.
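            A minimal sketch of that scheme, assuming an Ed25519 device key whose public half would be certified by a manufacturer CA (the certificate chain, hardware key storage, and the chip-card step are all elided; every name here is illustrative, not a real camera API):

```python
# Hypothetical sketch of the per-frame signing idea described above.
# Assumes the 'cryptography' package. Not a real camera firmware design.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real device this key would live in tamper-resistant hardware and
# its public half would be certified by the manufacturer's CA.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def sign_frame(frame_bytes: bytes, frame_index: int) -> bytes:
    # Bind the signature to the frame position so frames can't be
    # reordered or dropped without detection.
    return device_key.sign(frame_index.to_bytes(8, "big") + frame_bytes)

def verify_frame(frame_bytes: bytes, frame_index: int, sig: bytes) -> bool:
    try:
        device_pub.verify(sig, frame_index.to_bytes(8, "big") + frame_bytes)
        return True
    except InvalidSignature:
        return False

frames = [b"frame-0-pixels", b"frame-1-pixels"]
sigs = [sign_frame(f, i) for i, f in enumerate(frames)]
assert all(verify_frame(f, i, s) for i, (f, s) in enumerate(zip(frames, sigs)))
assert not verify_frame(b"tampered", 0, sigs[0])
```

            Binding the frame index into the signed payload is one way to keep frames from being reordered or dropped undetected; a real design would also need timestamps and a certificate chain back to the CA.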
            • verified through a Certificate Authority

              Which will be compromised the second the practice becomes widespread, if it isn't already, just like the CAs used for HTTPS. It will also drive a bunch of manufacturers out of business. After the CAs are in place, the government types will mandate that only footage from certain cameras with keys endorsed by a specific set of CAs can be used as evidence in court. That legislation will also just so happen to reject any footage made by the average citizen with their phone. As most devices won't have a valid key, and those that

          • Unfortunately, this is a research area. We don't have any idea whether some minimal change could cause a simple loop containing an AI and a person to be able to do as you suggest.

            "Because something might happen, we should ban it and do nothing." Not exactly a compelling argument for all of civilization. You're more than welcome to retreat into your panic cave if it makes you feel better. Just beware of Ug's fire, it's hot.

            We've already got problems deciding which sources are trustworthy; if it became "Which of these contradictory communications apparently from the same source is trustworthy?", I don't think anybody could manage. I'd expect a huge rash of wars, financial crises, divorces, legal battles, etc. It might well be that just this minor improvement of works already in progress would cause civilization to collapse. It would certainly make existing problems a lot worse.

            If that's the case then, clearly our civilizations have lost so much trust between their individual members that such things need to happen to alleviate the fear that grips everyone so tightly. That or people need to stop trying to swindle / hustle each other con

        • by gweihir ( 88907 )

          Sure. But artificial stupidity can always be used as a component in a process that also uses people, and then all these concerns become valid again. As to "safe" and "trustworthy" overall, that is a research question and, if things go wrong, a question for the courts. And quite a bit of it will be in a grey area, sure. But product safety in a new area needs a while to evolve and there is no morally acceptable reason to not start that process as soon as the potential for serious problems has become clear enoug

        • > Current "AI" is nowhere near the level of intelligence needed to bring about the world ending / human slaves / dystopia that the plebs are screaming from the rooftops about.

          Musk and Hawking come to mind, not screaming but with a wide reach. Who's screaming?

          "Will AI Take Your Job? Fear of AI and AI Trends for 2022" https://www.tidio.com/blog/ai-... [tidio.com]

          • Figure of speech. Although if you want actual screaming, you can probably find it in some YouTube clip. I, Robot might be a good place to start looking. :)
        • by noodler ( 724788 )

          Current "AI" is nowhere near the level of intelligence needed to bring about the world ending / human slaves / dystopia that the plebs are screaming from the rooftops about. Yet, that's exactly the excuse these politicians are using to regulate.

          Please show where the actual proposal discusses the 'world ending / human slaves / dystopia'.
          Also, you're referring to 'these politicians'. Which politicians are they? Do you know the name of even a single EU politician without looking it up?

          To me, your post reads like a brainless, US-biased rant about a topic you know nothing about.

          • Please show where the actual proposal discusses the 'world ending / human slaves / dystopia'.

            If you had bothered to finish reading that sentence I wrote, you'd see that I was specifically referring to the proletariat with that comment, not the actual proposal itself. In addition, I said that comment was the justification used by politicians for their actions, without my providing evidence of them doing so (an assumption based on the public's mass hysteria about AI); I did not claim that they included such language in the actual proposal.

            To me, your post reads like a brainless, US-biased rant about a topic you know nothing about.

            *Checks site domain name* You do realize this is a US-based tech news websit

      • Exactly. The only reason to incite some fake panic here is if you are a commercial enterprise that does not want to comply with these minimal requirements and hence pushes some fantasies about catastrophic damage.

        You've got it backasswards: regulatory burdens don't favor the small guys, and they are crafted with help from the big guys. Even in the most innocent sense, where I don't mean big corporate lobbyists and back-room dealings, but your lawmakers or regulatory bodies doing the right thing: they reach out to business leaders and solicit opinions from the public, they craft some regulations that everyone agrees on; it will cost something to implement but not too much, and it provides some protection to the public,

        • by gweihir ( 88907 )

          This is Europe. Not everybody is corrupt here and not everybody prays to the god of money.

          And yes, I do audits in a regulated industry and see what is actually going on. The regulators are a lot more tolerant of small guys, and when you grow, they get more and more strict. Sure, on paper everybody has to follow the same rules, but what counts is who gets a regulatory warning (or worse) and for what.

    • Either that or the people pointing out the bugs in the law are helping to make sure that the final release delivers the intended result.

    • 90% of software out there is already in a state of eternal beta. And when it bugs out, morons say, "it's your fault for using beta software". If there's ANY legal onus attached to declaring your AI "production ready", software companies will simply not do so. Welcome to the world of every single AI coming with 600 pages of boilerplate about how, if it destroys the world to make paperclips, you agree to "defend and indemnify" them. And the whole idea of guaranteed-safe AI is silly.

      The only way to know it's going to be s
    • Copyright and GDPR don't really cover these things. There's no law saying "don't train an AI with this", because AI didn't exist when these laws were written. The debate is also rather silly: humans can look at these images and "train" off them, but computers supposedly can't, because asking whether there's any fundamental difference there (there is not) would make us humans uncomfortable.

      What the laws do work perfectly fine for is profiting off copied images. Whether a human or an AI copied someone's art too closel
    • The open source developers might be responsible for the data they use to train it. Well, they already are. Copyright and GDPR cover training data sets. Can't just go scrape the web for photos to train your AI.

      Don't give the MAFIAA et al any ideas. Did you see one of our movies, read one of our books, or listen to our music? All subsequent works are a derivative work as your brain is now inextricably linked to our works.

      It makes no sense to apply this to AI if it doesn't apply to humans. If your training data is ingested and the original works are not actually stored, it should not be considered infringement. It doesn't mean that the AI can't generate infringing works, but it shouldn't be a default assumption

  • by gweihir ( 88907 ) on Tuesday September 06, 2022 @10:34AM (#62856858)

    Let's be serious here: Things like GPT-3 are always paid-for research and development. They are not done by some people on the weekend as a hobbyist project. There is, of course, FOSS that is done that way, and some of it is even excellent and widely used, but not in the AI space. Hence complying with these minimal requirements is not actually a problem.

    As to legal liability, that is pretty much nonsense as well. Sure, commercial enterprises that publish this type of FOSS could indeed become liable, but that is pretty much limited to them giving assurances that their product does something specific and it then turning out that it does not, or messes it up. Again, no projects done by people in their spare time will be affected.

    • >Let's be serious here: Things like GPT-3 are always payed-for research and development. They are not done by some people on the weekend as a hobbyist project.

      OpenAI was trotting out DALL-E 2 as the most exclusive and exciting image generation AI in history, and bragging about how hard it'd be for regular people to even begin to ask permission to pay for access. Stable Diffusion then blew it away. That was made by people on the weekend as a hobbyist project.

      GPT-J 6B was made by people on the weekend as a

  • by The Evil Atheist ( 2484676 ) on Tuesday September 06, 2022 @10:52AM (#62856932)

    it's not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product

    Sue them for what? The millions of dollars they haven't made because you haven't paid them anything to use their free code?

    Any company that sues open source developers would find themselves blacklisted from using other code.

    • Any company that sues open source developers would find themselves blacklisted from using other code.

      That's not how open source works, dumbass. Tell me, did you go fumbling through every OSS license the code you use touches to check if you or your company was "blacklisted"? Where's the OSS global blacklist registry? Oh, that's right, there isn't one, and any developer that claims to be OSS but includes a "The following individuals / groups are not granted any rights under this license" section in their code is rightly denounced as the fraud they are.

      • Except that IS how open source works, dickhead.

        You're SUPPOSED to look at the licence of ANY third party code or software before you include it in your product, retard.

        Anybody, whether a single developer or a large corporation, is in potential legal trouble if they breach the licence.

        What, did you think open source just meant public domain? Are you that retarded?
        • What, did you think open source just meant public domain? Are you that retarded?

          Well, fucker, did you think that your complete lack of nuance meant that I didn't read the license at all? Sorry to disappoint you, fuckwad. Go take your bullshit elsewhere.

      • by chefren ( 17219 )
        Unless you are given permission by the OSS license, you cannot use the code or you will be in breach of copyright unless you can negotiate another license with the copyright holder. If you can't be bothered to read the licenses, then go ahead and write your own code instead.
        • Unless you are given permission by the OSS license, you cannot use the code or you will be in breach of copyright unless you can negotiate another license with the copyright holder.

          Show me an OSS license (endorsed by the FSF) that bans people from using it. I'll wait. Oh, that's right, you can't. Because OSS licenses aren't meant to ban people from using the material they cover. If they do, it's not an OSS license, and I doubt that many would want to use the licensed material in question even if they weren't currently banned under it.

          If you can't be bothered to read the licenses, then go ahead and write your own code instead.

          *Checks own username.* Really? What makes you think I don't?

          • You guys are both right to some extent.
            https://www.gnu.org/licenses/g... [gnu.org]
            There are things in the license that say what you can and can't do; violate them and the license no longer applies, and hence you can't use the code without violating copyright.

            For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

            But also, the license doesn't deny the original author's rights:

            Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

            Is there a blacklist? No. But the original author can take the code down, release it under another license, push updates that break your use of it, etc.

    • Any company that sues open source developers would find themselves blacklisted from using other code.

      Literally not possible to do. But thanks for trying.

      • Literally possible to do. There are tons of code with licences that say "this code cannot be used for such and such a purpose". And some licences revoke any rights upon breach of the agreement.

        But thanks for trying. Idiot.
    • by kbg ( 241421 )

      Any company that sues open source developers would find themselves blacklisted from using other code.

      Right, just like SCO vs Linux really cared about pissing off all open source developers and all the big players in the software business. When the suing company is just a zombie shell company for Microsoft, you can drag out that legal case forever with no repercussions.

      • Because it's going so well for SCO.
        • It is for Microsoft, who hasn't ended up on your so called blacklist. You've said a lot of dumb things here on Slashdot, but this one really takes the cake.

          • I don't understand why you cunts can only imagine some global blacklist that all open source developers have subscribed to. I said no such thing.

            Maybe I've said a lot of dumb things. But one thing's clear. Cunts like you on here don't know how to fucking read. So maybe they appear dumb from your perspective because you've read a strawman into them, instead of taking into account things like CONTEXT. Who I was replying to. What point I was replying to. What article I was replying to. etc etc etc.

            Nerds
          • It is for Microsoft, who hasn't ended up on your so called blacklist.

            Microsoft definitely have ended up on some developers' blacklist. Remember, I never said, nor implied some global blacklist.

            Also, Microsoft hasn't sued open source developers for bugs in open source that Microsoft used that affected Microsoft. So they wouldn't end up on a developer's blacklist for that reason I listed.

            If you understood THE FUCKING CONTEXT, which is about commercial AI developers trying to deflect blame by suing the developers of open source THEY THEMSELVES USED. My comment, in THE FUC

            • Jeez, you can never admit when you're wrong, or just silently slip away when you're wrong, can you? Like all of those times you say shit about Rust that's incredibly wrong; rather than research it, you just go repeat it again.

              We all saw the context. We know exactly what you said here. You made another one of your gaffes, just own it.

    • Sue them for what?

      There's a general legal principle that if you cause harm then you can be sued. If you drive carelessly and cause damage or injury, you can be sued for whatever you might happen to have, even if you were not driving in the course of your employment.

      You'd already be potentially liable as an open source developer if your drone control software or robot control software caused harm.

      All that's happening here is that the EU is defining harms that must be avoided when AI systems are deployed. No software developer

  • ...is what exactly?

    People seem to have this odd belief that there's this magical thing called "AI", rather than just "code that does shit we didn't necessarily predict because we figured out how to use an RNG".

    I get it: an infinite number of monkeys on an infinite number of typewriters with an infinite amount of time will eventually create the full works of Shakespeare, but that's not intelligence, it's a statistical artifact.

    • by stwrtpj ( 518864 )
      Pretty much this. The term "AI" is so loosey-goosey that there really is no hard and fast rule as to what is AI and what isn't. We casually refer to game code that controls your opponents as "AI", but is it really, or just a bunch of clever algorithms?
      • by HiThere ( 15173 )

        To be fair, we don't know what "natural intelligence" is either. It may also just be a bunch of clever algorithms. (To draw an analogy from assembler, sitting at the program counter doesn't give you insight into the rest of the code. And it could be that "sitting at the program counter" is what consciousness is. [It might also be something very different, but just TRY to prove it either way.])

    • Most programs don't generalize from limited data. Most programs don't plan. Most programs don't try to optimize some utility function. Most programs don't learn from the past. You put enough of those concepts together, and you get something that can reasonably be studied in a course on AI. (not saying these things "are" AI, just that there's a difference between writing a web-app and a program that can learn).

      • by HiThere ( 15173 )

        Programs that can learn are easy. Look up Samuel's Checkers program http://www.incompleteideas.net... [incompleteideas.net]
        It's trying to learn in a complex environment that's difficult.

        In the 1960's Scientific American carried an article by, I think, Martin Gardner about a "computer" that could learn to play tic-tac-toe. I built one out of matchboxes and jujubes (and glue, paper, and crayons). It worked. IIRC it was a base-3 computer, as each move choice was broken into 3 alternatives. Each matchbox contained 3 colors of
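        For the curious, a rough software simulation of that kind of matchbox learner (MENACE-style) might look like this; the bead counts, reinforcement amounts, and random opponent are arbitrary choices for illustration, not a reconstruction of Gardner's original:

```python
# Toy MENACE-style "matchbox" learner for tic-tac-toe: one "box" of beads
# per board state, moves drawn in proportion to bead counts, beads added
# on a win and removed on a loss. Purely illustrative.
import random
from collections import defaultdict

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

boxes = defaultdict(lambda: defaultdict(lambda: 3))  # state -> move -> beads

def menace_move(board):
    state = "".join(board)
    moves = [i for i, c in enumerate(board) if c == "."]
    weights = [boxes[state][m] for m in moves]
    return state, random.choices(moves, weights=weights)[0]

def play_game():
    board, history = ["."] * 9, []
    player = "X"  # the learner plays X; a random opponent plays O
    while winner(board) is None:
        if player == "X":
            state, move = menace_move(board)
            history.append((state, move))
        else:
            move = random.choice([i for i, c in enumerate(board) if c == "."])
        board[move] = player
        player = "O" if player == "X" else "X"
    result = winner(board)
    for state, move in history:  # reinforcement step
        if result == "X":
            boxes[state][move] += 3          # reward moves that won
        elif result == "O":
            boxes[state][move] = max(1, boxes[state][move] - 1)  # punish

for _ in range(20000):
    play_game()
```

        After enough games, the bead distribution shifts toward moves that historically led to wins, which is the whole trick.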

      • by narcc ( 412956 )

        It's a lot easier than you think. Most approaches to machine learning are conceptually very simple. Writing a program that 'learns' is absolutely within reach of a high school student with basic programming skills and a little free time.

        For example: feed-forward neural networks are incredibly simple to understand and they can be trained using a simple evolutionary algorithm, which is also easy to understand. It might not be as efficient as back propagation, but it is significantly easier to both understand a
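        A toy illustration of that claim: a 2-2-1 feed-forward network trained on XOR with a bare-bones (1+1) evolutionary strategy, mutating the weights and keeping any candidate that is no worse. The architecture, mutation scale, and iteration count here are arbitrary choices:

```python
# Tiny feed-forward network trained on XOR by mutate-and-keep-if-better,
# instead of backpropagation. A toy; no claim of efficiency.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

def unpack(p):  # 2-2-1 network: 9 parameters total
    return p[:4].reshape(2, 2), p[4:6], p[6:8], p[8]

def forward(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)                   # hidden layer
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output

def loss(p):
    return np.mean((forward(p, X) - y) ** 2)

params = rng.normal(size=9)
for step in range(20000):
    candidate = params + rng.normal(scale=0.1, size=9)  # mutate
    if loss(candidate) <= loss(params):                 # keep if no worse
        params = candidate

print(np.round(forward(params, X), 2))  # should approach [0, 1, 1, 0]
```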

      • I think you're anthropomorphizing algorithms.

        I can write a program that gets an average from an existing dataset, and tells me that it predicts the next item will be the average of the existing dataset - "generalize from limited data".

        I can write a program that takes a number, and breaks it down into its factors, to build a "plan" to create that initial number by multiplying the factors.

        I can write a program that times out after 5 seconds, rather than 30 seconds, "optimizing" its utility when a backend goes
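        Taken literally, those examples might look something like this (deliberately trivial, hypothetical code):

```python
# The parent's point, made literal: trivial code that "predicts", "plans",
# and "optimizes" without anyone calling it intelligent.
def predict(history):            # "generalizes from limited data"
    return sum(history) / len(history)

def plan(n, k=2):                # "plans" n as a product of prime factors
    factors = []
    while n > 1:
        while n % k == 0:
            factors.append(k)
            n //= k
        k += 1
    return factors

def choose_timeout(backend_up):  # "optimizes" a utility function
    return 30 if backend_up else 5

print(predict([3, 5, 7]))        # 5.0
print(plan(84))                  # [2, 2, 3, 7]
print(choose_timeout(False))     # 5
```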

  • All these warnings come from people who haven't released an Open Source AI?

  • by Anonymous Coward
    There is no AI, there's only algorithms. Enough with the buzzwords and fearmongering.
  • The article seems to frame preventing open source competition as if it were a side effect. But then it goes on to provide examples where this would be one of the main effects of the proposal.

    in a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

    Currently, any commercial AI can

    • by HiThere ( 15173 )

      If the law says what you say, then the only answer is to move outside of Europe, and stay away from them. That was the solution the encryption world adopted back when the US was being silly.

  • by Huitzil ( 7782388 ) on Tuesday September 06, 2022 @02:02PM (#62857644)
    The general population in the EU strongly favors government intervention in privacy. Major laws have been passed to strictly regulate the collection and use of data, and governments have massive leeway in enforcement due to the specificity of the laws and their massive detachment from real-world applications.

    GDPR, specifically, dramatically raised the bar and codified how companies should categorize data and ensure that people are aware of the data points being collected from them; where a data point is not required for day-to-day operations, people have the right to opt into the data collection. In addition, GDPR requires that people be able to request that data about their online activities be 'forgotten' in cases where there is no need for a record of activity (like a purchase).
    This all sounds like a HUGE WIN for consumers - but HOLY moly did the bar rise for companies. There is now a need to create and implement sophisticated data agents that can respond to individual requests, plus features have to be built into sites and apps to ensure compliance with a maze of requirements. The problem here is that only big companies can truly stay compliant - it's super hard to run a website operation (i.e. a blog) and spend a large amount of time maintaining compliance with GDPR. So what happens? Sites shut down, features are never released, and consumers sign their life away every time they visit a site. It is SO DIFFICULT to stay compliant that only large companies can afford an army of lawyers who can interpret the law and audit the operation for strict adherence.
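    To make the "data agent" idea concrete, here is a deliberately minimal sketch of one handling access and erasure requests over hypothetical in-memory stores; real compliance adds identity verification, retention rules, audit trails, and much more:

```python
# Minimal sketch of a GDPR-style "data agent": a registry of where personal
# data lives, plus handlers for access and erasure requests. Every name and
# data store here is hypothetical.
orders = {"user42": [{"order": 1001, "total": 19.99}]}   # retention required
analytics = {"user42": ["clicked_ad", "viewed_page"]}    # erasable

STORES = {
    "orders": (orders, False),      # (store, erasable_on_request)
    "analytics": (analytics, True),
}

def access_request(user_id):
    """Art. 15-style: return everything held about the user."""
    return {name: store.get(user_id, []) for name, (store, _) in STORES.items()}

def erasure_request(user_id):
    """Art. 17-style: erase what is erasable, report what was retained."""
    report = {}
    for name, (store, erasable) in STORES.items():
        if erasable:
            report[name] = "erased" if store.pop(user_id, None) else "no data"
        else:
            report[name] = "retained (legal basis, e.g. purchase records)"
    return report

print(access_request("user42"))
print(erasure_request("user42"))
print(access_request("user42"))   # analytics now empty, orders retained
```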

    This, my friends, is what I call monopolistic populism.
    • Re: (Score:2, Insightful)

      Sorry, but your post is utter bullshit - it's trivial to maintain compliance with the GDPR; it only becomes onerous when the company's wants (note: wants, not needs) conflict with what it is allowed to do under law. And the point of the GDPR is to force the company to change its wants, not restrict its needs.

      The GDPR should be a non-issue for anyone running a blog, because you shouldn't be collecting most personal information in the first place, and the information that you do collect should be easily ha

      • Remember, data is a comparative advantage. Sure, it is easy to stay compliant by not collecting data. Which is what the small companies will have to do. The big corporations will collect data and gain an edge. The small company cannot compete by operating in an environment that lacks clarity and constrains their ability to understand their customers. So sure, call it FUD - but look how concentration is happening in this space. Impossible to innovate in data without an army of lawyers.
    • by narcc ( 412956 )

      It's super hard to run a website operation (i.e. a blog) and spend a large amount of time maintaining compliance with GDPR. So what happens?

      Users win. It's really easy to run a website without collecting a ton of data tied to individual users.

      Sites shut down, features are never released, and consumers sign their life away every time they visit a site.

      Spreading fear, uncertainty, and doubt I see.

    • by noodler ( 724788 )

      but HOLY moly did the bar rise for companies.

      Companies that have been abusing these new toys unhindered for far too long.

  • What guarantee do you get with the commercial product? A refund!
  • .... you don't have an unambiguous definition of what that thing actually is?
  • PERHAPS I'M READING INTO THIS ... but the first thing that came to mind while reading this article was how open source in AI would bring transparency and accountability to the developer if such an AI did something catastrophic. The code could be examined for ill intent or even shady routines, thereby exposing the developer or the company responsible. Closed source seems to reduce transparency and accountability, or at least slow down any investigation by requiring reverse engineering. I do understand, however
