
Sam Altman: OpenAI Has Been On the 'Wrong Side of History' Concerning Open Source (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: To cap off a day of product releases, OpenAI researchers, engineers, and executives, including OpenAI CEO Sam Altman, answered questions in a wide-ranging Reddit AMA on Friday. OpenAI the company finds itself in a bit of a precarious position. It's battling the perception that it's ceding ground in the AI race to Chinese companies like DeepSeek, which OpenAI alleges might've stolen its IP. The ChatGPT maker has been trying to shore up its relationship with Washington and simultaneously pursue an ambitious data center project, while reportedly laying groundwork for one of the largest financing rounds in history. Altman admitted that DeepSeek has lessened OpenAI's lead in AI, and he also said he believes OpenAI has been "on the wrong side of history" when it comes to open-sourcing its technologies. While OpenAI has open-sourced models in the past, the company has generally favored a proprietary, closed-source development approach.

"[I personally think we need to] figure out a different open source strategy," Altman said. "Not everyone at OpenAI shares this view, and it's also not our current highest priority [] We will produce better models [going forward], but we will maintain less of a lead than we did in previous years." In a follow-up reply, Kevin Weil, OpenAI's chief product officer, said that OpenAI is considering open-sourcing older models that aren't state-of-the-art anymore. "We'll definitely think about doing more of this," he said, without going into greater detail.


Comments Filter:
  • Me too! Me too! (Score:5, Insightful)

    by jenningsthecat ( 1525947 ) on Saturday February 01, 2025 @08:14AM (#65134625)

    The comment subject pretty much says it all. Altman seems to be considering climbing on the open source bandwagon by issuing vague and meaningless ponderings, expressing awareness that OpenAI needs to say something while really having nothing to say.

    It's possible that Altman is serious about some kind of give and take between open-source and closed source that would be beneficial for all, but I'm betting that it's pure PR bullshit.

    • Re:Me too! Me too! (Score:5, Interesting)

      by Rei ( 128717 ) on Saturday February 01, 2025 @08:42AM (#65134657) Homepage

      But the open source community doesn't want "old models". If we can't have top-end frontier models, at least give us a small distill of them. A lot of open development targets models that can run on normal gaming GPUs, so having access to giant models with hundreds of billions of parameters - while great - isn't a priority for most. But big, maximum-capability models are a priority for the commercial model providers like OpenAI. So if they want to support OSS, give us small models. They won't compete with your main offerings.

    • Altman cares about exactly one thing. Cashing in on this, and cashing in big. Everything he has done and said in at least the last year has been designed to keep the hype up around OpenAI until he can change the company so he can get his billions out of it.

      Nothing else matters to him. He doesn't care about open source, or about any actual correctness or utility of the products he pawns, other than as far as it serves him getting his billions.

      This has been obvious for some time now. He does damage control an

  • by jdawgnoonan ( 718294 ) on Saturday February 01, 2025 @08:24AM (#65134631)
    Altman is not a good human being and he runs a company called "OpenAI" that is everything other than open. I honestly hope someone else wins.
    • I don't hope China wins, but other than that, yes.

    • "OpenAI" is open in the sense in which it was used for Unix advertising back in the nineties, meaning documented and interoperable.

      It's not open in the sense of source code, or training data.

      That Altman is paying lip service to open source while trying to secure a new funding round is very interesting. Capital usually sees openness as a weakness, but maybe the open source buzzword is sufficiently popular now that it's become a strength.

      In general I think that it can be for software, but where so much of the

    • by znrt ( 2424692 ) on Saturday February 01, 2025 @09:08AM (#65134707)

      he lost already, we all won. thanks to a small chinese startup and open source, llms as a paid-for commodity are pretty much done. that was 70%-90% of openai's income and the reason billions of investment poured in (with many more expected in the near future). to recoup that they will now have to find another business model, and i would be very amused if they don't. (don't worry, sam will get to keep his share of those billions).

      • by dfghjk ( 711126 )

        I don't believe this is true yet, but I believe it will be. Current events have rattled the AI community but not transformed it. There will be more, though.

        Within the last year we had news that ternary data types were sufficient for many computations during inferencing, and that this could result in radical improvements in computational efficiency. The fact is, technology experts in AI don't really know much about what they are doing and that goes double for frauds like Sam Altman. This latest news
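        For context on the ternary claim above, here is a minimal, illustrative sketch of ternary weight quantization for inference, assuming a BitNet-style absmean scheme; the function names and toy numbers are hypothetical, not taken from the comment or any specific paper:

            import numpy as np

            def ternarize(weights):
                # Quantize a float weight matrix to {-1, 0, +1} plus a single
                # per-tensor scale factor (absmean scheme, assumed here).
                scale = np.abs(weights).mean() + 1e-8
                q = np.clip(np.round(weights / scale), -1, 1).astype(np.int8)
                return q, scale

            def ternary_matmul(x, q, scale):
                # Inference-time matmul: with ternary weights the bulk of the
                # work reduces to additions and subtractions; one float
                # multiply by the scale restores the original magnitude.
                return (x @ q) * scale

            # Toy check of how close the ternary approximation gets.
            rng = np.random.default_rng(0)
            W = rng.normal(size=(8, 4))
            x = rng.normal(size=(2, 8))
            q, s = ternarize(W)
            print(np.abs(x @ W - ternary_matmul(x, q, s)).max())

        Whether this buys the radical efficiency gains alluded to depends on hardware support for low-bit arithmetic; the sketch only illustrates the accuracy side of the trade-off.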

        • by gweihir ( 88907 )

          To be fair, actual meaningful and transformative advances to LLMs are currently far, far out of reach. Far enough that it is not even clear they are possible.

  • Do you think some dumb politicians will now see open source as a danger because of this? Will they blame OSS for the loss of stock market money?
    • I wouldn't be surprised if Altman is saying this in order to pave the way for that. Far more money has already been invested in this than will ever be recovered. While there are uses for these models, and value will be gained from them in the future, the bulk of the money invested so far will need to be written off first. There is a big crash coming. These companies are heading for a serious devaluation in the near to medium term. Probably the near term, as the big players in it seem to be selling off th

    • by dfghjk ( 711126 )

      If brown people did it, then yes. OSS is just DEI, part of deregulating to become great again is regulating the enemy.

  • Is the dictionary definition of the pot calling the kettle black.

  • What he is saying is that he wants M$ to keep all the innovation and profit, and to make the other code available to us so we can provide free bugfixes to the broligarchy.
  • Very few companies have open sourced anything relating to generative machine learning. Many companies, like Facebook, have lied about being open source.

    When you see a claim, look at the license, and yell at them if they're calling something open source when it has restrictions on whether you can train other models with its outputs, or if you need a different license for commercial use or over a certain number of uses, or some social responsibility clause.

    • by dfghjk ( 711126 )

      "When you see a claim, look at the license, and yell at them if they're calling something opensource when it has restrictions on whether you can train other models with its outputs, or if you need a different license for commercial use or over a certain number of uses, or some social responsibility clause."

      Or constraints on the hardware it runs on? Or is that OK because it's a self-serving limitation? Or restrictions on how the user uses the software like the AGPL has?
      It seems your definition of open sour

      • AGPL does not in any way restrict how the user uses the software. On the contrary, it allows the user full access to the source code of the software they use.

      • by allo ( 1728082 )

        AGPL is approved by OSI and FSF.

  • ...is the one that benefits Altman. It's almost funny it's so transparent now.

    • by dfghjk ( 711126 )

      Well, at least it's gratifying that it took less time for bros to see it with Altman than it took with Musk. And it's not clear that the world yet realizes that both are posers.

      • by gweihir ( 88907 )

        Well, at least it's gratifying that it took less time for bros to see it with Altman than it took with Musk. And it's not clear that the world yet realizes that both are posers.

        Indeed. Despite it being blatantly obvious.

  • by geekmux ( 1040042 ) on Saturday February 01, 2025 @09:45AM (#65134747)

    I have no idea how good or bad a product Sam Altman is capable of making. Don’t really care how open, closed, or some half-ass in between the end product ends up being.

    What I do care about is the continuing effort to theme every-fucking-thing about AI as some kind of damn race. The nuclear arms race got us within 30 minutes of a nuclear wasteland for the last few decades of fearmongering, taxpayer-fed profit, with a sitting US President warning us about that shit over half a century ago. No one listened.

    On top of that, it kills me that anyone can convince anyone else that their particular “AI” toddler-mind LLM is winning anything right now. AI is “winning” about as well as Charlie Sheen on a tiger blood bender. And when it finally does “win”, far too many humans will lose a job. Not just unemployed. Unemployable. With Greed marketing to Gen Fucked that UBI isn’t at all like being on welfare. You know, because they’ll pay with $hitcoin and Government cheese that’s extra sharp.

    Greed is not prepared for a 25% unemployment rate. You will have blood before you have profit. We’re racing, and have no fucking idea why we’re even running. Not like this win is for humans. It’s for Greed.

    • by Tony Isaac ( 1301187 ) on Saturday February 01, 2025 @10:31AM (#65134815) Homepage

      Humans are, and always have been, competitive. It's no surprise that AI (or any technology) is a race. As soon as one person is successful in AI (or anything), somebody else will want in, and will one-up the original. That's human nature; it's not going to change.

      There is zero evidence that AI will put massive numbers of people out of work. None. As a case study, Waymo can't even put Uber drivers out of business in a single city, even Phoenix, where they started. AI is to white-collar work what Waymo is to Uber drivers. We have been automating things for many decades, yet we still have an unemployment rate around 4%. Why should we think AI can put massive numbers of people out of work?

      • by gweihir ( 88907 )

        There is zero evidence that AI will put massive numbers of people out of work. None.

        Well, yes, for LLMs, because they will never amount to anything. If any AI tech ever achieves what the false LLM promises claim, though, that massive job loss is inevitable. There really is no way not to see it, except by being deeply caught in a false and completely unfounded belief.

        • So are driverless cars a false promise? Driverless Waymo taxis do exist, and take people for real rides in real cities. Theoretically, each one puts one Uber driver out of work. But there are still plenty of Uber drivers in cities where Waymo operates.

          You haven't explained why LLMs or AI are any different, even if they DO fulfill their promises.

    • by gweihir ( 88907 )

      I agree. Stupid runs very deep in the human race, and so does ignorance. That is why massive scams like this one become possible: people do not get it and believe the promises even after decades of having been lied to.

  • Just yesterday Taiwan announced that they are outlawing DeepSeek due to security risks, even though it is supposedly "open source." Open source in an AI model is meaningless unless the training data set is open sourced too. That allows us to repeat the training and develop competitive models that are better. Without the training set, it's analogous to someone publishing the instruction set of a server, and allowing you to run the code remotely but not giving you the executable. It's meaningless. Yes, you can run t
    • Alas nobody will ever release the training set unless it's Wikipedia or something similar. Most training sets are built upon proprietary data they don't own (and are often using without explicit permission.) Even the "legal" stuff OpenAI does, such as basing their content on Reddit posts, is due to licensing - if they release the training set for you to use, you could only use it illegally.

      That said, Taiwan is probably jumping the gun on the DeepSeek stuff. The training set might tell a Taiwanese person so

  • Sam Altman should be sued for false advertising, because OpenAI is NOT open source. He should be forced to change the name to something other than OpenAI.
  • That's just business.

  • "Wrong side of history," Altman says, after locking down his AI garden and charging rent. Suddenly open source looks good when someone else is eating your lunch. Cry me a river, Sam. By making their model open source and free, DeepSeek out-"OpenAI-ed" Altman.

    The rest of the world isn't shedding tears for OpenAI's lost lead. In fact, a lot of people are probably popping champagne corks. There's a global appetite to see the US tech giants humbled, and DeepSeek is serving up a delicious dish of comeuppance. En
