Open Source

OpenAI Readies New Open-Source AI Model (reuters.com) 42

OpenAI is preparing to release a new open-source language model to the public, The Information reported on Monday, citing a person with knowledge of the plan. Reuters: OpenAI's ChatGPT, known for producing prose or poetry on command, has gained widespread attention in Silicon Valley as investors see generative AI as the next big growth area for tech companies. In January, Microsoft announced a multi-billion dollar investment in OpenAI, deepening its ties with the startup and setting the stage for more competition with rival Alphabet's Google.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • It is not intelligence, it is a Chinese Room (qv) faking of intelligence.

    But people are calling it AI.

    So ... don't we need a new term for the actual attempts to build something actually intelligent (and where AI research has been failing for decades and is still failing)?
    • Oh, and how is that snippet in any way a SUMMARY of the story?!
      • Did you read the story? It shouldn't even be a story. It's literally just a "yup, this is happening" statement, with no further information.

        I'd be extremely curious how crippled the open source version will be so that they can justify selling the closed source version. I'm assuming it'll be pretty "last generation" until the next iteration of ChatGPT comes along and they can push out the hobbled old version for the open source crowd.

      • The report said, "OpenAI is unlikely to release a model that is competitive with GPT."

        How about you?
    • It is not intelligence, it is a Chinese Room (qv) faking of intelligence. But people are calling it AI. So ... don't we need a new term for the actual attempts to build something actually intelligent (and where AI research has been failing for decades and is still failing)?

      It's too late. AI was grabbed by the marketing departments to label this "somewhat more complicated than a query" LLM. When/if we ever reach anything approaching "real" AI, they'll coin a new term for it. And be as stupid as they always are and call it something extra-idiotic, like "AI+" or "AI 2.0" or something.

      • The industry standard term for this kind of work is "AI", and has been since the 1950s. Eventually the goal is "general-purpose AI", which is the term you're looking for, greytree. Industry standard is that any application that can make even a poor attempt at the Turing Test is AI. It may be substandard IQ, but it is AI.

        • Except that, back in the 50s, it was noted that an obvious disproof of the Turing Test as a test of intelligence was the Chinese Room argument that I mentioned above.
          • by narcc ( 412956 ) on Tuesday May 16, 2023 @04:10PM (#63526661) Journal

            The Chinese Room Argument didn't exist until 1980. It couldn't have been "noted" as "an obvious disproof of the Turing Test" in the 1950s by anyone but a time-traveler. As a "disproof" of the Turing Test, it works well enough, but it's an attack from the side. (Searle is attacking computationalism, not behaviorism.) It also comes way too late. Weizenbaum had put the final nail in the coffin back in the 60s with ELIZA, much to his annoyance.

            The Turing Test was hardly something that needed to be 'disproved' anyway, even in the 50s. I don't know that anyone, even Turing, thought it was unassailable. You'll find no end of contemporaneous criticism about it. Remember, Turing thought the question "can machines think" was meaningless and proposed the imitation game as an alternative.

            I think it's fair to call programs like ChatGPT "Chinese Rooms" though it's worth pointing out that Searle's description affords the Chinese Room more computational power than a transformer model can manage.

            Oh, the term you're looking for is AGI or "Artificial General Intelligence". If you're interested in how we got stuck with such a misleading term in the first place, Pamela McCorduck has an excellent history in her book Machines Who Think. Don't let the provocative title put you off. The blame for that lies with the publisher, not the author.
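
            (Aside: the whole ELIZA trick is surprisingly small. Here is a minimal, purely hypothetical Python sketch of the same keyword-and-template idea; it is nothing like Weizenbaum's actual code, and the rules are made up for illustration.)

                import re

                # Hypothetical, much-reduced rule set: keyword pattern -> canned reply template.
                RULES = [
                    (r"\bI am (.*)", "Why do you say you are {0}?"),
                    (r"\bI feel (.*)", "How long have you felt {0}?"),
                ]

                def respond(line):
                    # Return the first template whose keyword pattern matches the input.
                    for pattern, template in RULES:
                        match = re.search(pattern, line, re.IGNORECASE)
                        if match:
                            return template.format(*match.groups())
                    return "Please go on."  # default deflection when no keyword matches

                print(respond("I am worried about my job"))
                # -> Why do you say you are worried about my job?

            There is no understanding anywhere in that loop, which is exactly what bothered Weizenbaum when people treated ELIZA's output as if there were.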

        • by Shaitan ( 22585 )

          General-purpose AI isn't it either. A general purpose AI is just shifting the problem by a dimension. When people talk about AI they are and always have been talking about an artificial sentient lifeform. Simply increasing the complexity of tasks that a mindless automaton can solve doesn't get you there because it does not and never will have an inner monologue or internal observer. That is also what everyone (outside the last few years when they've been chasing money or cross domain programmers) dream abou

          • > does not and never will have an inner monologue or internal observer
              That is one hypothesis. The other is that humans don't have an inner monologue other than the feedback loop of states. It may very well be that an artificial sentient lifeform is indistinguishable from an advanced chatbot precisely because a human sentient lifeform is nothing more than a biological implementation of an advanced chatbot, and it is from that hypothesis that chatbots are named AI.

            • by Shaitan ( 22585 )

              "That is one hypothesis. The other is that humans don't have an inner monologue other than the feedback loop of states."

              Indeed, something that is difficult to establish to an external observer but instantly and innately debunked when one is sentient. Well, at least that is one guess at what you are saying. Yes, we are actually sentient and have a real inner monologue. But do not mistake me, I'm not saying it is impossible to develop sentience artificially, nor am I saying that the type of networks we are building c

          • Simple solution: we just use the Mass Effect term, Virtual Intelligence.
            • by Shaitan ( 22585 )

              Along these lines I'd be fine with letting them keep Artificial Intelligence... even though they aren't building an Intelligence... and running with Artificially Sentient as the new term for the real thing. As long as the change was very public and very heavily broadcast so essentially EVERYONE understands (including everyone who might cough up funding) that AI systems, including General AI, are not AS and have no potential to become AS.

              No more new terms (or old terms newly popularized) being adopted to capture

      • by neoRUR ( 674398 )

        AI++
        I think that works.

      • by mark-t ( 151149 )
        They don't need a new term. This so-called "fake" AI is still AI in the sense that it does mimic intelligent behavior, even if it does so through entirely unintelligent means.
      • I thought there was already an established distinction (in modern subculture anyway): AI refers to anything that appears "AI"-like, whereas AGI (Artificial General Intelligence) refers to something more akin to sentient-like behaviour.

    • by dmomo ( 256005 ) on Tuesday May 16, 2023 @02:16PM (#63526375)

      So you might say that in a sense, the intelligence is "artificial"?

      • by Shaitan ( 22585 )

        No, something that is artificial is real and made by humans. This intelligence doesn't exist.

        • by tilk ( 637557 )

          And yet it does things we would consider impossible just a few years ago.

          • by Shaitan ( 22585 )

            So does everything invented in the past few years. Or rather, it does things SOME considered impossible a few years ago. The thing I thought impossible for this technology a few years ago is still impossible: sentience, aka intelligence. I have a frog which outperforms the potential of this technology, and that thing is damn near as close to a robot as a lifeform gets.

    • by ranton ( 36917 ) on Tuesday May 16, 2023 @02:25PM (#63526393)

      So ... don't we need a new term for the actual attempts to build something actually intelligent (and where AI research has been failing for decades and is still failing)?

      We already have that term. It is Artificial General Intelligence. The term has been used since the 90's. Another term for what you are describing is Strong AI.

      • by Shaitan ( 22585 )

        AGI just attempts to shift the complexity, the same thing we have but covering more topics. That isn't the same as an attempt to create an actual artificial sentient intelligence.

        • That isn't the same as an attempt to create an actual artificial sentient intelligence.

          Intelligent != Sentient. Ant colonies exhibit intelligent behaviours when viewed as a single entity, yet they don't have sentience as we understand it; it can be the same for automata built on silicon.

          A system that adapts to its environment through complex, context-aware processes is intelligent even if it doesn't have an "inner voice" that translates those processes into words. We would do well to separate both concepts, because they are not the same, and one does not entail the other.

          • by Shaitan ( 22585 )

            "Intelligent != Sentient"

            Yes, it does. An intelligence is a sentient being. It doesn't matter how many terms people within the field use to try to get around the fact that they don't have a clue how to make one. They do not define 'artificial intelligence', as those words already mean something. An artificial intelligence is an artificial sentient being, an intelligence, produced by artificial means. Everything else produced by the field is not AI but rather the failed attempts to produce an AI.

            "A system that ada

          • by ranton ( 36917 )

            Intelligent != Sentient. Ant colonies exhibit intelligent behaviours when viewed as a single entity, yet they don't have sentience as we understand it;

            I think you are confusing sentience with sapience. An ant is a sentient being. When most people use the word sentient they really mean sapient, which includes not only the ability to feel and perceive things but also the ability to think and acquire wisdom. While only a handful of animals could arguably be considered sapient, nearly all (and possibly all) animals are sentient.

            • I'm not confusing those terms, I'm using them deliberately. An ant is a sentient being, but an ant colony is not.

    • by ceoyoyo ( 59147 ) on Tuesday May 16, 2023 @03:26PM (#63526519)

      You don't really need to advertise your ignorance of a major computer science field in every story.

      • by Shaitan ( 22585 )

        AI isn't the property of a major computer science field. It is a general term used by the public, and the disparity in concepts is intentional, in order to aid the funding of failed efforts and the marketing of the products of those failed attempts.

        • by ceoyoyo ( 59147 )

          The term "artificial intelligence" was coined at a specific time, by specific people, with a specific meaning. It is the name of a major computer science field. Natural language processing is a classic AI problem.

          Even if the public's ignorance were generally relevant, this is a tech site.

          • by Shaitan ( 22585 )

            No, the notion of an artificial intelligence dates back to antiquity. Artificial means man-made, and an intelligence is a sentient being. Artificial intelligence is not a 'term' but rather a plain English description of something people have tried to build.

            "It is the name of a major computer science field."

            Yes, a field founded on the pursuit of, and work done by, those who were trying to build an artificial intelligence. It also isn't a closed field; like all technology fields it is a public pursuit entirely po

      • You don't really need to advertise what a hateful cunt you are in an irrelevant and pointless reply to every comment.
    • Nuh-UH! ChatGPT uses the letter "L" all the time.
    • In other news, the internet is just a fax machine, amirite?

    • by mark-t ( 151149 )

      An artificial system that can mimic intelligence is still AI, even if it has no actual intelligence to speak of. Computer chess programs beat grandmasters; these chess programs do not think either, but they are also called AI. This is not a misapplication of the term: AI is a field of ongoing research, and the fact that it doesn't necessarily refer to something that might "actually" be intelligent is irrelevant. Eliza was AI too.

      Clearly, "AI" covers a spectrum that includes what one might also c

  • dump this on Freshmeat!
  • by Pinky's Brain ( 1158667 ) on Tuesday May 16, 2023 @03:35PM (#63526553)

    I doubt ClosedAI is at all impressed by open source efforts, including LLaMA. Claude, however, is a much bigger threat.

    Going half in on being commercial was a temporary solution at best. Sure, they can pay researchers a little more, but the charter prevents them from tying them down the way commercial companies can. Anthropic got billions for walking out the door with their know-how; they can't pay researchers enough to prevent that, and NDAs, trade secrets, and patents are poorly compatible with their charter.

    • by Shaitan ( 22585 )

      They wouldn't need to if they dropped the commercial angle and simply opened up what they are working on. There are plenty of companies that would be happy to pay their researchers just to keep working on the open solution.

  • Guess things are moving along a little quicker than before
  • They realize they have no moat (read the leaked paper). 1. Encourage the government to "regulate" the industry so they can keep their monopoly. 2. Release an "open source" model that implicitly neuters the open source community.
