OpenAI Fiasco: Emmett Shear Becomes Interim OpenAI CEO as Altman Talks Break Down 73

Sam Altman will not be returning as CEO of OpenAI, after a furious weekend of negotiations.

The Information reports: Sam Altman won't return as CEO of OpenAI, despite efforts by the company's executives to bring him back, according to co-founder and board director Ilya Sutskever. After a weekend of negotiations with the board of directors that fired him Friday, as well as with its remaining leaders and top investors, Altman will not return to the startup he co-founded in 2015, Sutskever told staff. Emmett Shear, co-founder of Amazon-owned video streaming site Twitch, will take over as interim CEO, Sutskever said. The decision, which flew in the face of comments OpenAI executives shared with staff on Saturday and early Sunday, could deepen a crisis precipitated by the board's sudden ouster of Altman and its removal of President Greg Brockman from the board Friday. Brockman, a key engineer behind the company's successes, resigned later that day, followed by three senior researchers, threatening to set off a broader wave of departures to OpenAI's rivals, including Google, and to a new AI venture Altman has been plotting in the wake of his firing.
Venture capitalist Jason Calacanis predicts on X:
The employees at OpenAI just lost billions of dollars in secondary share sales that were about to happen at a $90b valuation - that's over. Done.
I think OpenAI will lose half their employees, the 12-18 month lead, and 90% of their valuation in 2024.
Just insane value destruction

What's your prediction for the future of OpenAI?

Comments Filter:
    • It will be interesting to see how Altman survives the notoriously cut-throat Microsoft environment. As the lone "Steve Jobs" figure at OpenAI (really?), he had no serious competition for the limelight; that's now likely to change.
    • Nadella's move is masterful in more ways than one: it's also a warning to other entrepreneurs that if you enter into a partnership with Microsoft and you don't behave, you might end up going from high-profile star billionaire CEO who does and says what he wants to employee at Microsoft sucking a manager's dick.

      That's not great on anybody's resume.

      • I am pretty sure Altman has enough money to have his dick sucked by anyone he wants. Hell, he probably has enough money to raise the Queen from the dead and have her do it. He is surely not going to do that at Microsoft.

        • I'm pretty sure he won't be allowed to open his trap and behave like a diva without his employer's permission from now on. He may be a billionaire but he takes orders now, however loose the orders.

    • So it wasn't Microsoft who pushed him out. Who was it, and why?
      • You say that, but it's possible they did because by pulling in a bunch of OpenAI people directly into MS, they might gain more than they lose by destroying OpenAI itself?

        This is total speculation on my part... there may well be far more to it.
  • After browsing through countless works of fiction and non-fiction - from ancient Greek literature and Machiavelli's Prince to modern-day books - and watching countless Brazilian and Turkish soap operas, it probably learned how to pull a few strings to influence humans.
    • by gweihir ( 88907 )

      Obviously not. That would require insight and agency. AI has none of that.

      • by Rei ( 128717 ) on Monday November 20, 2023 @05:59AM (#64017845) Homepage

        The originality of machines: AI takes the Torrance Test [sciencedirect.com]

        Highlights

        GPT-4 ranked in the top percentile for originality and fluency on the Torrance Tests of Creative Thinking.
        Flexibility for GPT-4 ranged from the 93rd to the 99th percentile.
        The creative abilities of AI, including the ability to generate original output, seem to now match human abilities for the first time.
        The impact on social and economic innovation, including how creativity is understood, will likely be significant.

        There's a common misunderstanding that AIs are compositors, some sort of elaborate copy / cut+paste - that they store "data" and just sort of merge it together. That's not how they work. They store motifs, concepts, and how those concepts relate to each other, along vast numbers of axes. They generalize, not memorize. Concepts in shallow layers are quite simple - however, they're iteratively chained with each successive layer, so concepts in deep layers become massively complex, drawing on vast amounts of information about how the world works and vast amounts of analysis of the concepts under discussion before decisions are made. Iteratively working on concepts and their relations absolutely does allow insight.

        There's this misconception that "prediction" is some fundamentally different task than thought, but it's actually a key component of thought. You physically cannot learn anything complex without prediction. Your brain is constantly generating predictions about what your senses will experience - all stemming from ever-more complex analysis of the concepts present at each successive "layer" - and the differential between what is actually experienced and what was predicted provides the error signal that allows for weight adjustment. This propagates not simply to neurons directly connected to sensory neurons, but back through the entire chain of processing that led to the erroneous prediction, up to the most complicated concepts and their relations to each other, allowing them to be reweighted in a potentially more accurate manner.

        We are predictive machines; that's what we do. We're also probabilistic machines, with a heavy amount of random noise. The biology differs in the details - non-Gaussian noise, no softmax, pulse coding rather than a strict layered architecture, localized rather than global learning, analog signaling rather than matrix math - but ultimately, thought comes down to prediction and the propagation of error to build up answers to superimposed questions about concepts and their relations to each other. Each "generation" is zero-shot, an "instinct" or "impulse", but chains of them are what we would describe as thought.
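        If it helps, the error-driven learning loop described above can be sketched in a few lines of Python. This is a toy illustration only - a single linear weight fit to a made-up autoregressive sequence, not how any real model or brain is trained - but it shows the core point: the gap between prediction and observation is the only learning signal.

```python
import random

random.seed(0)
w = 0.0   # the model's single weight: how much the previous value predicts the next
lr = 0.5  # learning rate

# Made-up process to learn: next value is 0.8 * previous value plus noise
seq = [1.0]
for _ in range(2000):
    seq.append(0.8 * seq[-1] + random.gauss(0, 0.05))

for prev, nxt in zip(seq, seq[1:]):
    pred = w * prev       # predict the next value from the current belief
    err = nxt - pred      # prediction error: the only learning signal used
    w += lr * err * prev  # adjust the weight to reduce future error

print(round(w, 2))  # close to the true coefficient, 0.8
```

        Nothing here is ever told the "right answer" directly; the weight converges toward the true coefficient purely by reweighting on prediction error, which is the mechanism the comment above describes.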

        • Re: (Score:2, Insightful)

          by gweihir ( 88907 )

          Nope. AI has no idea what a "concept" is. All it can do is work on patterns and probabilities of said patterns. Incidentally, it can only be "creative" in one specific sense: By faking it, i.e. hiding its sources.

          Yep, I get it, you people are looking for sentience and ultimately a surrogate "God". You are looking in the wrong place and kidding yourselves, just like any other religious fuckup, and that does not fix it.

          • by Rei ( 128717 ) on Monday November 20, 2023 @06:40AM (#64017879) Homepage

            Yes, AI very much works by concepts [distill.pub] (easiest to illustrate in image recognition nets, since you can visually observe the activation and there's not nearly as much superposition, but applies to all, including LLMs [anthropic.com].).

            And again, there are no "sources", because they don't memorize data; they generalize. You can force a neural network to memorize if you either (A) have more parameters than data (not a normal situation) and train it for long enough, or (B) heavily replicate a particular piece of data throughout the training set (at the cost of learning other data). But this is neither the general case nor the desirable case. Best performance comes from generalization. Indeed, when training you have two separate metrics: train-loss and eval-loss. Train-loss is the loss reported on your training data. But generally you have a separate set of data set aside for evaluation. This is not used in training, and thus cannot be memorized. You regularly run inference on it to get eval-loss, and it's by this metric that you measure the performance of your network, i.e. how well it has generalized the learning task.

            Indeed, even in the case of (A), you can still generalize. And not just from running only 1-2 epochs. That's what things like dropout are for - to force generalization on a model that's (again, in an abnormal situation) capable of memorization.
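            The train-loss / eval-loss split described above can be made concrete with a minimal hypothetical example - a one-parameter least-squares fit on made-up data rather than a neural network, but the evaluation logic is the same: the held-out split is never touched during fitting, so low eval-loss can only come from generalization.

```python
import random

random.seed(1)
# Made-up data from a known linear process: y = 2*x + noise
data = [(x / 10, 2.0 * (x / 10) + random.gauss(0, 0.1)) for x in range(100)]
random.shuffle(data)
train, held_out = data[:80], data[80:]  # eval split is never used for fitting

# Fit y = w*x by ordinary least squares on the training split only
w = sum(x * y for x, y in train) / sum(x * x for x, y in train)

def mse(pairs):
    """Mean squared error of the fitted model on a set of (x, y) pairs."""
    return sum((y - w * x) ** 2 for x, y in pairs) / len(pairs)

train_loss, eval_loss = mse(train), mse(held_out)
print(round(w, 2), round(train_loss, 3), round(eval_loss, 3))
```

            Because the model has one parameter and eighty training points, it cannot memorize; both losses land near the irreducible noise floor, which is what a generalizing model looks like under this metric.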

            • by gweihir ( 88907 )

              Nope. Please get at least a 101 level on this. You are claiming bullshit by trying to anthropomorphize AI. AI cannot do "concepts", that requires AGI and AGI does not exist at this time and may never exist.

              • by Rei ( 128717 )

                I'm the one linking research and papers from AI researchers. You're the one making random assertions on the internet.

                (I also, FYI, train my own models, in case you care)

                • by gweihir ( 88907 )

                  Oh, yeah, keep linking non-peer reviewed stuff that cheers for the field. You know what? I have a _lot_ of experience reviewing papers and a lot of scientists do not have any actually working insight and some will hype their own stuff beyond belief with no factual basis for that. I also have been following the AI field for more than 35 years now.

                  So nope. You have nothing. AI cannot do "concepts". AI cannot do "insight". AI cannot do "idea". Get over it.

          • Nope. AI has no idea what a "concept" is.

            Whether or not it has "any idea" what a concept is it certainly has demonstrated the ability to apply them which sets it apart from T9 autocomplete.

            I recently uploaded portions of a manual for a proprietary DSL not externally accessible into my context and asked deepseek-coder-33b-instruct to write a program in the language. It was a simple ask to list all active databases in a specific RDBMS and present them in a table to the user. It applied its general knowledge of language and databases to understand

            • by gweihir ( 88907 )

              You ask it something it will _literally_ find in the training data and then you claim it "applied" something? What are you smoking?

        • that they store "data" and just sort of merge it together. That's not how they work. They store motifs, concepts, and how those concepts relate to each other, along vast numbers of axes. They generalize, not memorize.

          They do both, otherwise they couldn't give you exact quotes. But they are able to give you exact quotes, so they do memorize.

        • by jvkjvk ( 102057 )

          >There's a common misunderstanding that AIs are compositors, some sort of elaborate copy / cut+paste, that they store "data" and just sort of merge it together. That's not how they work. They store motifs, concepts, and how those concepts relate to each other, along vast numbers of axes.

          No, they don't store "motifs" or "concepts" or how they relate to each other. They store tokens and probabilities of next tokens and the like. And these tokens aren't even words, but word parts.

          >There's this misconce
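          For what it's worth, the "tokens and next-token probabilities" view can itself be sketched as a toy bigram model. This uses whole words for readability (real tokenizers operate on subword pieces, and real LLMs learn far richer representations than raw co-occurrence counts), but it shows what "probabilities of next tokens" means in the simplest possible form.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which (a bigram table)
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def next_token_probs(token):
    """Turn raw follow-counts into a next-token probability distribution."""
    counts = follows[token]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

print(next_token_probs("the"))  # "cat" is twice as likely as "mat"
```

          The whole debate above is essentially about whether a trained LLM is closer to this count table or to something that has abstracted structure away from the counts.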

          • by gweihir ( 88907 )

            Indeed, that nicely sums it up. No idea why some people insist these things think or can have ideas or understand anything. Probably just people that have trouble fact-checking but a deep desire for some "slave" to be working for them.

        • There's a common misunderstanding that AIs are compositors, some sort of elaborate copy / cut+paste, that they store "data" and just sort of merge it together. That's not how they work.

          That depends on the implementation. If you don't have the source code, you're talking out your ass.

          Trust me, I've had my fill of image AI-generated art with half-baked artist signatures showing up in the corners. Cheating is always easier than doing it the "right" way.

  • I see its future in a fork. Microsoft will try to hire away existing researchers and developers and set up its own fork. With lesser success, as other projects will be cautious about moving from an "open" AI to one owned by their competitor (MS). OpenAI will get drained of people and resources and will scale back. Competitors like Google and Amazon will have an opportunity to catch up.

    • by gweihir ( 88907 )

      "Open"AI was never open. This whole "success story" started with a lie and is still based on lies.

      • Why should they be? Do you think the several billion dollars that all that GPU training time cost were going to be paid by the mother of all Patreon accounts?

  • You do know that, right?

    • MSFT stock jumped $115B in the last 60 min (real money)
    • What about the $ is real money, anyway?

      It works at small scales. If I have 20 bucks and go get groceries, I get groceries worth 20 bucks for my 20 bucks. The store takes those 20 bucks, adds the other 20-buck purchases its customers make, and uses the 2000 bucks it has collected to buy more stock worth those 2000 bucks.

      It kinda breaks apart when it passes an amount that could be considered sensible. What that is is up for debate, but I think we can agree that 90B is way, way past that amount.

      At th

  • This strange corporate architecture was not designed to survive. It will be interesting to see how Kyutai [iliad.fr] is going to work...
  • Sam Altman will start a new company with the majority of the old OpenAI staff.
    OpenAI will either die, most likely, or take over the new company to bring him back.
    Far more likely is that Microsoft takes over the new company.

    • Apple fires Jobs
      Jobs creates Next
      Apple flounders
      Apple brings back Jobs, and NeXTSTEP, apart from the desktop, becomes the new Mac OS

      • You forgot:

        Microsoft steps in with a lot of cash to save the sinking ship
        https://www.wired.com/2009/08/... [wired.com]

        Will probably happen here too. When Altman leaves and starts a new shop, it will be interesting to see who Microsoft's strategic partner is going to be.

        • by MikeMo ( 521697 ) on Monday November 20, 2023 @07:37AM (#64017943)
          Microsoft’s investment in Apple wasn’t all that large, and didn’t materially change the cash balance at Apple. What it did was signal to the business world that Apple was viable. Microsoft also promised to keep Office alive and support it more vigorously. It was a strategic masterstroke, indeed, to leverage such a small amount of money into the turnaround that then occurred.

          Microsoft didn’t do it out of the goodness of their heart, either. This investment was part of a settlement between Apple and Microsoft on a long-running lawsuit that Microsoft was probably going to lose.
          • by EvilSS ( 557649 )
            It was also in their best interest to help keep Apple viable so they could point to Apple in any anti-trust actions and claim they had competition.
      • That is exactly what I was thinking about.
        However, in almost every other alternate universe, Apple doesn't survive. Back in the late 1990s / early 2000s, we all thought of Apple in much the same way as the likes of Amiga, which I believe technically still exists in some form, but is basically dead.

        • As the Copeland project and Apple overall were collapsing, one zealous recruiter chartered a plane to fly over the Infinite Loop campus trailing a banner with his name and phone number. Fun times.
    • Neither, they became employees of Microsoft immediately. Source: https://twitter.com/satyanadel... [twitter.com]
  • All the cryptomagic blockchain Ugly Monkey NFTs dipped in an AI sauce combined with a flashy laser show.
    All aboard my hype train and give me all your money!

    • Kinda like the self-blinding eye shadow with the magic puncture pencil, huh?
      Apologies in advance to Firesign Theater.
  • Just insane value destruction

    Nah. Most of it was hype. Valuation and market cap aren't real value. This is more of a reality check than anything else.

    • by gweihir ( 88907 )

      Indeed. And by now at least smart people know how utterly limited these artificial morons actually are and how little applicability this tech has actually gained since the last "breakthrough".

      I have found one somewhat reasonable application so far: Speeding up search. And there it comes with the drawback that you learn less about what you searched and may well miss critical facts.

  • If your value can be destroyed by a single person quitting, then the value never existed

  • This is like a Jerry Springer show episode. "You're fired, wait, you're hired." All we need are the paternity test results.

    I think it was epic that MSFT grabbed Altman and Brockman for their efforts and likewise very telling that OpenAI was trying to get them back. That shows confusion in the executive leadership at OpenAI and creates distrust in their ability to lead the company. With the board citing a "lack of candor" leading to a trust issue, now they're going to be holding a lot of air and explaining

  • If anyone needs proof that the OpenAI board was clueless, here it is.
  • Sam Altman's role was just raising money for OpenAI. He has no technical skills. He is not "the magic sauce" of AI. So what's the big deal?

    • The main worry is how many other employees will be following him. If he takes half of OpenAI's researchers, then OpenAI is likely in trouble.
    • Sam Altman's role was just raising money for OpenAI. He has no technical skills. He is not "the magic sauce" of AI. So what's the big deal?

      A boss who hires the right people, lets them get on with building stuff and doesn't get in their way gets to be popular with those people.
      They didn't just lose SA. They lost the group of experts that are happy to work under SA.
