AI Open Source

Big Tech Isn't Prepared for AI's Next Chapter: Open Source (slate.com) 37

Security guru Bruce Schneier and CS professor Jim Waldo think big tech has underestimated the impact of open source principles on AI research: In February, Meta released its large language model: LLaMA. Unlike OpenAI and its ChatGPT, Meta didn't just give the world a chat window to play with. Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked. Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated. And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out. Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop. The world of A.I. research has dramatically changed.

This development hasn't made the same splash as other corporate announcements, but its effects will be much greater. It will wrest power from the large tech corporations, resulting in both much more innovation and a much more challenging regulatory landscape. The large corporations that had controlled these models warn that this free-for-all will lead to potentially dangerous developments, and problematic uses of the open technology have already been documented. But those who are working on the open models counter that a more democratic research environment is better than having this powerful technology controlled by a small number of corporations...

[B]uilding on public models like Meta's LLaMa, the open-source community has innovated in ways that allow results nearly as good as the huge models — but run on home machines with common data sets. What was once the reserve of the resource-rich has become a playground for anyone with curiosity, coding skills, and a good laptop.

Bigger may be better, but the open-source community is showing that smaller is often good enough. This opens the door to more efficient, accessible, and resource-friendly LLMs.
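As a rough illustration of the "run it on a laptop" claim, here is a minimal sketch using the llama-cpp-python bindings to run a small quantized open model on a CPU. The library choice and the model filename are assumptions for the sake of the example, not anything the article prescribes.

```python
# Minimal sketch: running a small quantized open LLM locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python`; the model path is a placeholder for whatever
# quantized GGUF model you have downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./models/open-7b.Q4_K_M.gguf", n_ctx=2048)  # hypothetical file
out = llm("Q: Why do smaller open models matter? A:", max_tokens=64, stop=["\n"])
print(out["choices"][0]["text"])
```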

Low-cost customization will foster rapid innovation, the article argues, and "takes control away from large companies like Google and OpenAI." Although this may have one unforeseen consequence...

"Now that the open-source community is remixing LLMs, it's no longer possible to regulate the technology by dictating what research and development can be done; there are simply too many researchers doing too many different things in too many different countries."

Thanks to long-time Slashdot reader mrflash818 for submitting the article.

Comments Filter:
  • License (Score:5, Informative)

    by brunes69 ( 86786 ) <slashdot@keirsGI ... minus herbivore> on Sunday June 04, 2023 @10:41AM (#63575307)

    LLaMA and every single thing derived from it and touched by it are tainted by LLaMA's "research only" license.

    No one can use any of this stuff for anything truly useful. It is great for academia and hobbyist developers, but no one can release any commercial products using it.

    • Re:License (Score:4, Interesting)

      by geekmux ( 1040042 ) on Sunday June 04, 2023 @10:58AM (#63575341)

      No one can use any of this stuff for anything truly useful. It is great for academia and hobbyist developers, but no one can release any commercial products using it.

      In a world manipulated and limited by Greed, this should come as no surprise to anyone anymore. They don't call them Patent War Chests for nothing.

      Go ahead, Innovation. Try to do your thing in the 21st century. See how quickly Greed comes for your throat, because they already own your idea in some related way. Or they'll just crush you legally until you can't afford to exist anymore. Even "open source" will come to mean you're too poor to win legally anyway.

      • Yep (Score:4, Insightful)

        by Brain-Fu ( 1274756 ) on Sunday June 04, 2023 @12:02PM (#63575455) Homepage Journal

        The root cause of greed is pack survival. We evolved from pack animals, not ants, so we don't have the natural selflessness of ants. Our inclination to greed sits on a sliding scale of strength, because the strange balances our ancestors needed to strike in order to survive were never fixed.

        So, some people are naturally greedier than others. And all people are subject to environmental influences that push them further in one direction or another. These effects are intrinsic to the human condition. There is no eliminating them.

        We can lament greed until we are blue in the face. And we would be right to do so. But that won't eliminate it. Nor can we legislate it away. The ONLY thing we can do is find a way to survive in a world populated by greedy people. This includes both building a legal framework that pits greed against greed for an overall societal benefit, and individual strategies for thriving within this framework.

        It does mean you must compete to survive. That may be unhappy news, but it has always been true, and will continue to be true for the foreseeable future.

        As always, the bottom line is: adapt or die.

        • by jhoegl ( 638955 )
          Meh, you can use these things to make money, you do it via advertisements and data mining.

          Key point? Linux.
        • > The root cause of greed is pack survival. We evolved from pack animals

          But humans haven't evolved from pack animals:

          Mammal tree(s) - https://research.amnh.org/pale... [amnh.org]

          Pack animals - https://en.wikipedia.org/wiki/... [wikipedia.org]

          • by Anonymous Coward

            > The root cause of greed is pack survival. We evolved from pack animals

            But humans haven't evolved from pack animals:

            Mammal tree(s) - https://research.amnh.org/pale... [amnh.org]

            Pack animals - https://en.wikipedia.org/wiki/... [wikipedia.org]

            I'm not quite sure why you think prehistoric humans running around in loincloths, gathering together to literally hunt and survive against much larger predators, wouldn't be considered "pack" survival.

            We carved up this entire planet into countries. Countries that are carved up into States. States that have counties. All of which have their own sets of laws and rules at every level. Rules and laws that were born from humans surviving over hundreds of years and documenting what works for every "pack" in t

            • > I'm not quite sure why you think prehistoric humans running around in loincloths, gathering together to literally hunt and survive against much larger predators, wouldn't be considered "pack" survival.

              Humans are, of course, a very social primate; I'd even say the most social animal by many measures. But they're not usually called 'pack' animals or described as a species that lives in packs.

              But I get the equivocation; the terms 'social' and 'pack' can cover a lot of common ground.

        • But that won't eliminate it. Nor can we legislate it away. The ONLY thing we can do is find a way to survive in a world populated by greedy people. This includes both building a legal framework that pits greed against greed for an overall societal benefit, and individual strategies for thriving within this framework.

          That "legal framework" you speak of is represented by Morals and Ethics, which Greed N. Corruption wakes up every morning and slaps in the face. We built a legal framework to fight against Greed. Didn't do a damn thing in the end. Mandatory training for morals and ethics in business is now considered THE corporate joke with one hell of a punch line.

          It does mean you must compete to survive. That may be unhappy news, but it has always been true, and will continue to be true for the foreseeable future.

          As always, the bottom line is: adapt or die.

          Watching "modern" humans brag about how they evolved over generations while also validating that not a damn thing has actually changed, is probably the purest

          • The sad reality is, we're living in an age where we have the option of noticing that greed is killing us, but none of us is smart enough, strong enough, or clever enough to figure out how to fight it. When you're fighting for survival, as our ancestors had to every day, it's tough to think of greed as a bad thing. It's much easier when we're sitting at a desk, typing on a keyboard, with plenty of time between passive dangers like driving / riding to and from work, to think about how selfish, greedy, and utterly, annoyingly stupid we are as an entire race now that we've overcome the "kill or be killed" wildness we came from.

            • The sad reality is, we're living in an age where we have the option of noticing that greed is killing us, but none of us is smart enough, strong enough, or clever enough to figure out how to fight it. When you're fighting for survival, as our ancestors had to every day, it's tough to think of greed as a bad thing. It's much easier when we're sitting at a desk, typing on a keyboard, with plenty of time between passive dangers like driving / riding to and from work, to think about how selfish, greedy, and utterly, annoyingly stupid we are as an entire race now that we've overcome the "kill or be killed" wildness we came from.

              We are adolescent as a race. Old enough to know better, too angry and short-sighted to care.

              "...Good times, create soft men. And soft men, create hard times." - G. Michael Hopf

              Couple that with the fact that a 60-year old man in 1950 had the same level of testosterone as a 30-year old man does today (no thanks to "modern" feminism selling masculinity as toxic), and we start to understand why relying on the Silent Majority to stand up and fight back one day, isn't going to happen no matter how bad things get.

              And if you're living in America, it's even more concerning when you consider these problems are very American in nature. Other nuclear-armed countries with a significant capacity to attack and destroy were never that short-sighted or ignorant about the necessary nature of masculinity.

              • "...Good times, create soft men. And soft men, create hard times." - G. Michael Hopf

                Couple that with the fact that a 60-year old man in 1950 had the same level of testosterone as a 30-year old man does today (no thanks to "modern" feminism selling masculinity as toxic), and we start to understand why relying on the Silent Majority to stand up and fight back one day, isn't going to happen no matter how bad things get.

                And if you're living in America, it's even more concerning when you consider these problems are very American in nature. Other nuclear-armed countries with a significant capacity to attack and destroy were never that short-sighted or ignorant about the necessary nature of masculinity.

                We are a very all or nothing type culture. While there certainly have been plenty of examples of toxic masculinity to deal with over the years, the concept that all masculinity was toxic by default has really done a number on us, and looks to continue to do so.

        • > We can lament greed until we are blue in the face. And we would be right to do so. But that won't eliminate it. Nor can we legislate it away. The ONLY thing we can do is **find a way to survive in a world populated by greedy people**. This includes both building a legal framework that pits greed against greed for an overall societal benefit, and individual strategies for thriving within this framework. It does mean you must compete to survive. That may be unhappy news, but it has always been true, and wi

    • by Anonymous Coward
      Who said anything about commercially releasing products? Not that what you said will stop that ANYWAY. The article is about the open-source community using and expanding it, not primarily about any commercial purpose.
    • If you figure out a way to make money, you can afford to negotiate another license.

    • Re:License (Score:5, Interesting)

      by bettodavis ( 1782302 ) on Sunday June 04, 2023 @11:56AM (#63575443)
      There are permissively licensed open-source models that aren't LLaMA-based, some now even exceeding its capabilities, and they can be used for whatever you want right now.

      For example, Falcon-40B ( https://huggingface.co/tiiuae/... [huggingface.co] ); a minimal loading sketch follows this comment.

      The era of homebrewed and locally hosted LLMs is just beginning, and it will be an explosion of creativity akin to the personal computer revolution versus the centralized multiuser mainframes.

      And that's precisely what motivates the self-serving FUD and lobbying of the big players.
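As a concrete follow-up to the Falcon example above, here is a hedged sketch of loading it with Hugging Face transformers. The 7B variant, dtype, and sampling settings are assumptions chosen so the example could fit on a single GPU; they are not recommendations from the comment, and older transformers releases may also require trust_remote_code=True.

```python
# Hedged sketch: loading a permissively licensed Falcon model with Hugging Face transformers.
# Assumes `pip install transformers accelerate torch`; falcon-7b is used so this can run on
# one consumer GPU. Swap in "tiiuae/falcon-40b" if you have the hardware.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Open-source LLMs matter because"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```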
    • Comment removed based on user account deletion
    • by jma05 ( 897351 )

      What the LLaMA leak did was create a free experimenter community. You no longer needed an institutional affiliation. The skills the community developed with it, even if the model itself isn't free for for-profit products, will carry over when a totally unencumbered model is released. You can be sure that a future model trained with public research money will be released from the EU or elsewhere.

      Falcon has a better license, and so will LLaMA, as Yann LeCun indicated. We should reason

  • The tech seems potentially very dangerous, but at least there are signs it will be democratized. From the article: "the open-source community has lapped the major corporations and has an overwhelming lead on them". In a sense this is good because otherwise the big corps would own and control all the tech for their own ends. But I think we can kiss the idea of responsible governance goodbye.

    It is easy to imagine that at some point there will be applications based on LLMs that are aggressively hostile to the i

    • Re: (Score:1, Troll)

      AI's big impact comes when it's hooked into a system of power. Without that it's a thought experiment, with limited impacts on the world at large. Little positive impact will come from "democratizing" it. The main impact of LLM proliferation will be any low-level scammer being able to set up an army of scambots. The little guy isn't capable of wiring his AI into the banking system, the military command structure, the police, the social-media backend, or the educational system. He can only use it on the othe

      • Hackers are already getting into those systems. Sometimes all it takes is a successful phishing attack, or spotting an unpatched vulnerability. I've got a couple of cloud servers; they are being scanned constantly.

        But beyond that, there may very well be all kinds of unforeseen harmful things that a maliciously trained and motivated LLM could unleash. Businesses will be created that can do it for hire, or try to defend you from it.

        Is it better that only big tech and nation states will be able to do it? I'm n

  • by micheas ( 231635 ) on Sunday June 04, 2023 @03:22PM (#63575803) Homepage Journal

    A single training run of an LLM typically costs about $100,000,000 (in US dollars).

    Unless you work at Amazon or Google and have access to unlimited compute, how does open source matter?

    Possibly, if federated learning takes off for training LLMs, we could take something like the SETI project and get hundreds of thousands of people to contribute compute power to the effort (a toy sketch of that idea follows after this comment). But looking at how poorly many LLMs perform even after tens of millions of dollars of compute in a single training run, it starts to look idealistic.

    OpenAI will give you a private model for $100,000 plus additional costs.

    Open source might take over, but it's going to be in a way more like how Stallman thought about it in the 1970s than how people currently think of open source software.
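To make the SETI-style idea above concrete, here is a toy sketch of federated averaging (FedAvg), where volunteers train local copies of a model and a coordinator averages the resulting weights. This illustrates the general technique only; the tiny linear model, synthetic data, and function names are placeholders, not anything proposed in the thread.

```python
# Toy sketch of federated averaging: volunteers train locally, the coordinator averages weights.
import copy
import torch
import torch.nn as nn

def local_train(global_model, data, targets, epochs=1, lr=0.01):
    """Train a private copy of the global model on one volunteer's local data."""
    local = copy.deepcopy(global_model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(local(data), targets).backward()
        opt.step()
    return local.state_dict()

def federated_average(state_dicts):
    """FedAvg with equal weighting: average each parameter across all volunteers."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

# Tiny demo: three "volunteers", each with a handful of private samples.
global_model = nn.Linear(4, 1)
volunteers = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]
for _ in range(5):  # five federated rounds
    updates = [local_train(global_model, x, y) for x, y in volunteers]
    global_model.load_state_dict(federated_average(updates))
```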

    • by cj* ( 149112 )

      $100m for a run, where getting it right took hundreds of runs?

      Do you have a source for that number?

      • by sfcat ( 872532 )
        He said 100k... not 100m. And that's just one run of a really big model. You might need several runs. You need 3 things to make this type of model: AI researchers, data, and lots of compute. These researchers only have 1 of those 3; that's why nothing will happen here.
    • A single training run of an LLM typically costs about $100,000,000 (in US dollars).

      Unless you work at Amazon or Google and have access to unlimited compute, how does open source matter?

      Possibly, if federated learning takes off for training LLMs, we could take something like the SETI project and get hundreds of thousands of people to contribute compute power to the effort. But looking at how poorly many LLMs perform even after tens of millions of dollars of compute in a single training run, it starts to look idealistic.

      Training costs are falling dramatically due to rapid hardware and software advancements. Those with the deepest pockets will always be able to push the state of the art, yet, assuming current trends continue, it won't be long before there are open-source models exceeding the capabilities of GPT-4.

      Open source might take over, but it's going to be in a way more like how Stallman thought about it in the 1970s than how people currently think of open source software.

      I very much doubt it. There is massive global interest in collaborating.

    • by AmiMoJo ( 196126 )

      Can you not take a trained AI and then modify it with additional training? That would rely on the donation of a decent model to start with, but at least it could be tailored to our needs.
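In practice the answer to that question is yes: a released model can be adapted with additional training, and a common low-cost route is parameter-efficient fine-tuning such as LoRA. Below is a minimal, hedged sketch using the peft library; the base model name and hyperparameters are illustrative assumptions, not anything specified in the thread.

```python
# Hedged sketch: adapting an already-trained model with additional training via LoRA (peft).
# Assumes `pip install transformers peft`; the base model and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # stand-in for whatever decent donated model is available
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Freeze the base weights and train only small low-rank adapter matrices.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base model's weights

# From here, a normal training loop (or transformers' Trainer) on your own data
# tailors the model to your needs without retraining it from scratch.
```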

  • Besides being an excellent cheating device, AI honestly does nothing not already done when it comes to cash-extraction engines like Google, Amazon, and anything that takes a cut on a purchase. Now some dumb businesses can get a chance at seeing what their competitors do. But at the end of the day, consumers will have the same amount of cash to spend. I believe AI's wins will be 1) telling folks who is the cheapest, 2) who not to do business with, and 3) tricking people into using a dumbed-down AI because that is bad
  • Licensing problems aside, open-source LLMs are where the innovation will happen, IMO. Corporations are pretty crappy at innovating compared to FOSS.
