OpenAI Fiasco: Emmett Shear Becomes Interim OpenAI CEO as Altman Talks Break Down
Sam Altman will not be returning as CEO of OpenAI, after a furious weekend of negotiations.
The Information reports: Sam Altman won't return as CEO of OpenAI, despite efforts by the company's executives to bring him back, according to co-founder and board director Ilya Sutskever. After a weekend of negotiations with the board of directors that fired him Friday, as well as with its remaining leaders and top investors, Altman will not return to the startup he co-founded in 2015, Sutskever told staff. Emmett Shear, co-founder of Amazon-owned video streaming site Twitch, will take over as interim CEO, Sutskever said. The decision, which flew in the face of comments OpenAI executives shared with staff on Saturday and early Sunday, could deepen a crisis precipitated by the board's sudden ouster of Altman and its removal of President Greg Brockman from the board Friday. Brockman, a key engineer behind the company's successes, resigned later that day, followed by three senior researchers, threatening to set off a broader wave of departures to OpenAI's rivals, including Google, and to a new AI venture Altman has been plotting in the wake of his firing.
Venture capitalist Jason Calacanis predicts on X:
The employees at OpenAI just lost billions of dollars in secondary share sales that were about to happen at a $90b valuation - that's over. Done.
I think OpenAI will lose half their employees, the 12-18 month lead, and 90% of their valuation in 2024.
Just insane value destruction
What's your prediction for the future of OpenAI?
Re:It was already snake oil. (Score:5, Interesting)
Re: (Score:2)
I'll take "Things Not Even Remotely Implied By The Article, And Indeed, Precisely The Opposite" for $1000, Ken!
Re:It was already snake oil. (Score:5, Informative)
To be more specific:
OpenAI is a rather unusual company. It actually started out as a nonprofit, designed for "democratizing AI technology" in a safe and responsible manner. In 2019 they transitioned from a nonprofit to a "capped for-profit", albeit with a huge cap of 100x investment. But it's still subject to the nonprofit board, OpenAI, Inc, in all decisions. Board members are even banned from owning stakes in the for-profit in order to prevent their interests from aligning with a "growth at any cost" motive. They see themselves as a safety valve against the hazards of uncontrolled AI.
The company was being drawn in two different directions. Reportedly, one, represented by Ilya Sutskever, wanted to go cautiously and stick with the founding principles while being careful to avoid the risk of uncontrolled AGI; his personal focus, as an AI researcher, is superalignment. The other, represented by Sam Altman, was pushing hard on commercialization and outracing all potential competitors. This was bound to create a collision course. But apparently there were strong interpersonal conflicts as well. Altman is now being accused of the cardinal sin of withholding information from and misleading the board. But he also has a lot of strong backers within the company who think he's right and want to see the company push as hard as it can rather than putting on the brakes.
Altman has a lot of fans, but he's also not been without controversy. Some years back, his younger sister accused him and his brothers of repeatedly raping her as a child, starting when she was four years old.
Re: (Score:1)
Altman has a lot of fans, but he's also not been without controversy. Some years back, his younger sister accused him and his brothers of repeatedly raping her as a child, starting when she was four years old.
It is an interesting twist. Are you suggesting that they fired him for this? That they found out and decided to put in their two cents?
Re: (Score:2)
Why would anyone with a brain think the increased availability changes snake oil into something that isn't snake oil?
Thinking otherwise is like believing that once more hucksters began running the "selling a bridge" scam, it was "maybe not a scam *this time*".
Re: (Score:3, Interesting)
There are a lot of non-smart, impressionable people out there that have no clue how things work. They usually get excited by promises without backing evidence and then actively ignore any evidence to the contrary. Ideal scam victims, and the "AI" scam currently running is probably tailor-made for them.
Re: (Score:2)
> Why would anyone with a brain think...
and
> Thinking otherwise is...
You go on, girl, questioning why everyone thinks the sky is blue and then telling them how to think.
You brought nothing to the discussion except proof that someone who knows nothing about a business shouldn't question the thinking of those who do, let alone presume to judge it.
Sam Altman will be around long after nobody remembers your uneducated opinions on thinking.
Re: (Score:2)
My dude, you have to use angle-bracket tags (like in HTML) to quote; this site doesn't use Markdown. Speaking as someone uneducated to someone educated.
Re: (Score:2)
his quoting style is absolutely spot on: it works on any medium that supports text, it preserves the structure of the content (unlike /.'s lame quote tag) and is actually how quotes were made on the internet before it was swamped with flashy banners and infantile widgets.
Re: (Score:2)
Those '>' quotes were good enough for email or USENET replies on my 80-column greenscreen with dialup BBS. And damnit, they are good enough for slashdot.
Re: (Score:2)
It's also a NON PROFIT. The employees were not set to lose billions. There was a side venture that was for-profit, but with a cap on returns, and even that was controversial. I think there was a battle between the non-profit and for-profit sides; once the lure of potential money is there, ideals are hastily jettisoned.
And now Satya has made a masterful move (Score:5, Informative)
https://x.com/satyanadella/sta... [x.com]
Re: (Score:1)
Re: (Score:2, Insightful)
Microsoft definitely does not have a cut-throat environment.
Re: (Score:3)
Re: (Score:2)
Nadella's move is masterful in more ways than one: it's also a warning to other entrepreneurs that if you enter into a partnership with Microsoft and you don't behave, you might end up going from high-profile star billionaire CEO who does and says what he wants to employee at Microsoft sucking a manager's dick.
That's not great on anybody's resume.
Re: (Score:2)
I am pretty sure Altman has enough money to have his dick sucked by anyone he wants. Hell, he probably has enough money to raise the Queen from the dead and have her do it. He is surely not going to do that at Microsoft.
Re: (Score:2)
I'm pretty sure he won't be allowed to open his trap and behave like a diva without his employer's permission from now on. He may be a billionaire but he takes orders now, however loose the orders.
Re: (Score:2)
Re: (Score:2)
This is total speculation on my part... there may well be far more to it.
The AI is taking over ... (Score:2)
Re: (Score:2)
Obviously not. That would require insight and agency. AI has none of that.
Re:The AI is taking over ... (Score:5, Insightful)
The originality of machines: AI takes the Torrance Test [sciencedirect.com]
There's a common misunderstanding that AIs are compositors, some sort of elaborate copy / cut+paste, that they store "data" and just sort of merge it together. That's not how they work. They store motifs, concepts, and how those concepts relate to each other, along vast numbers of axes. They generalize, not memorize. Concepts in shallow layers are quite simple - however, they're iteratively chained with each successive layer, so concepts in deep layers become massively complex, drawing on vast amounts of information about how the world works and vast amounts of analysis of the concepts under discussion before making decisions. Iteratively working on concepts and their relations absolutely does allow insight.
There's this misconception that "prediction" is some fundamentally different task than thought, but it's actually a key component of thought. You physically cannot learn anything complex without prediction. Your brain is constantly generating predictions about what your senses will experience - all stemming from ever-more complex analysis of the concepts present with each successive "layer" - and the differential between what is actually experienced and what was predicted provides the error that allows for weight adjustment. This propagates not simply to neurons directly connected to sensory neurons, but back through the entire chain of thought that led to the erroneous prediction, to the most complicated concepts and their relations to each other, allowing them to be reweighted in a potentially more accurate manner. We are predictive machines. That's what we do. We're also probabilistic machines, with a heavy amount of random noise. We're non-Gaussian, we don't use softmax, we use pulsed coding and lack a true layering architecture, we use a localized learning process instead of a global one, and analog models instead of matrix math. But ultimately, thought comes down to prediction and propagation of error to build up answers to superimposed questions about concepts and their relations to each other - with every "generation" being zero-shot, an "instinct" or "impulse", and chains of them being what we would describe as thought.
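To make the prediction-error point concrete, here's a toy sketch (my own illustration, not OpenAI's code - a single weight matrix predicting the next character, rather than a transformer). The only learning signal is the gap between what the model predicted and what actually came next:

import numpy as np

# Toy "learn from prediction error": a next-character predictor whose weights
# are adjusted purely by how surprised it was by the character that followed.
text = "the cat sat on the mat. the cat sat on the hat."
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (V, V))   # row i = logits for "what follows character i"

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for epoch in range(200):
    loss = 0.0
    for a, b in zip(text[:-1], text[1:]):
        i, j = idx[a], idx[b]
        p = softmax(W[i])        # prediction of the next character
        loss += -np.log(p[j])    # error = surprise at what actually occurred
        grad = p.copy()
        grad[j] -= 1.0           # gradient of cross-entropy w.r.t. the logits
        W[i] -= lr * grad        # nudge weights to be less surprised next time
    if epoch % 50 == 0:
        print(f"epoch {epoch}: avg loss {loss / (len(text) - 1):.3f}")

# The trained "instinct" for what follows 'c' (in this corpus, almost always 'a'):
print({c: round(float(p), 2) for c, p in zip(chars, softmax(W[idx['c']])) if p > 0.05})

In a real LLM the single matrix is replaced by billions of parameters across many stacked layers, but the signal being propagated is the same kind of prediction-versus-reality differential.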
Re: (Score:2, Insightful)
Nope. AI has no idea what a "concept" is. All it can do is work on patterns and probabilities of said patterns. Incidentally, it can only be "creative" in one specific sense: By faking it, i.e. hiding its sources.
Yep, I get it, you people are looking for sentience and ultimately a surrogate "God". You are looking in the wrong place, and kidding yourselves (just like any other religious fuckup) does not fix that.
Re:The AI is taking over ... (Score:4, Informative)
Yes, AI very much works by concepts [distill.pub] (easiest to illustrate in image recognition nets, since you can visually observe the activations and there's not nearly as much superposition, but it applies to all of them, including LLMs [anthropic.com]).
And again, there are no "sources". Because they don't memorize data; they generalize. You can force a neural network to memorize if you either (A) have more parameters than data (not a normal situation) and train it for long enough, or (B) heavily replicate a particular piece of data throughout (at the cost of learning other data). But this is neither the general case nor the desirable case. Best performance comes from generalization. Indeed, when training you have two separate metrics: train-loss and eval-loss. Train-loss is the loss reported on your training data. But generally you have a separate set of data set aside for evaluation. This is not used in training, and thus cannot be memorized. You regularly run inference on it to get eval-loss, and it's by this metric that you measure the performance of your network, i.e. how well it has generalized the learning task.
Indeed, even in the case of (A), you can still generalize. And not just from running only 1-2 epochs. That's what things like dropout are for - to force generalization on a model that's (again, in an abnormal situation) capable of memorization.
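And for what it's worth, here's what the train-loss / eval-loss split and dropout look like in practice - a minimal PyTorch sketch of my own on a toy regression problem, nothing to do with any particular production setup:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy task: fit y = sin(x) + noise; hold out part of the data for evaluation.
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)
perm = torch.randperm(200)
train_idx, eval_idx = perm[:150], perm[150:]   # the eval split is never trained on

model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Dropout(p=0.2),   # randomly zeroes units during training to discourage memorization
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2001):
    model.train()                               # dropout active
    opt.zero_grad()
    train_loss = loss_fn(model(x[train_idx]), y[train_idx])
    train_loss.backward()
    opt.step()

    if step % 500 == 0:
        model.eval()                            # dropout disabled for measurement
        with torch.no_grad():
            eval_loss = loss_fn(model(x[eval_idx]), y[eval_idx])
        print(f"step {step}: train {train_loss.item():.4f}  eval {eval_loss.item():.4f}")

If eval-loss keeps falling alongside train-loss, the network is generalizing; if train-loss keeps falling while eval-loss climbs, it has started memorizing the training set.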
Re: (Score:2)
Nope. Please get at least a 101 level on this. You are claiming bullshit by trying to anthropomorphize AI. AI cannot do "concepts", that requires AGI and AGI does not exist at this time and may never exist.
Re: (Score:2)
I'm the one linking research and papers from AI researchers. You're the one making random assertions on the internet.
(I also, FYI, train my own models, in case you care)
Re: (Score:2)
Oh, yeah, keep linking non-peer-reviewed stuff that cheers for the field. You know what? I have a _lot_ of experience reviewing papers, and a lot of scientists do not have any actual working insight, and some will hype their own stuff beyond belief with no factual basis. I have also been following the AI field for more than 35 years now.
So nope. You have nothing. AI cannot do "concepts". AI cannot do "insight". AI cannot do "idea". Get over it.
Re: (Score:2)
Nope. AI has no idea what a "concept" is.
Whether or not it has "any idea" what a concept is, it has certainly demonstrated the ability to apply them, which sets it apart from T9 autocomplete.
I recently uploaded portions of a manual for a proprietary DSL not externally accessible into my context and asked deepseek-coder-33b-instruct to write a program in the language. It was a simple ask to list all active databases in a specific RDBMS and present them in a table to the user. It applied its general knowledge of language and databases to understand
Re: (Score:2)
You ask it something it will _literally_ find in the training data and then you claim it "applied" something? What are you smoking?
Re: (Score:2)
that they store "data" and just sort of merge it together. That's not how they work. They store motifs, concepts, and how those concepts relate to each other, along vast numbers of axes. They generalize, not memorize.
They do both, otherwise they couldn't give you exact quotes. But they are able to give you exact quotes, so they do memorize.
Re: (Score:2)
>There's a common misunderstanding that AIs are compositors, some sort of elaborate copy / cut+paste, that they store "data" and just sort of merge it together. That's not how they work. They store motifs, concepts, and how those concepts relate to each other, along vast numbers of axes.
No, they don't store "motifs" or "concepts" or how they relate to each other. They store tokens and probabilities of next tokens and the like. And these tokens aren't even words, but word parts - see the quick tokenizer example below.
>There's this misconce
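For anyone curious what "word parts" means in practice, here's a quick illustration of my own using the publicly released tiktoken library (the exact splits depend on which vocabulary you load):

import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")    # a GPT-4-era vocabulary
ids = enc.encode("Slashdotters arguing about tokenization")
print(ids)                                    # integer token IDs
print([enc.decode([i]) for i in ids])         # the subword pieces they represent
# Long or unusual words typically come back as several pieces rather than one
# whole-word token; the model's raw output is a probability distribution over
# this entire ~100k-entry vocabulary for whichever token comes next.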
Re: (Score:2)
Indeed, that nicely sums it up. No idea why some people insist these things think or can have ideas or understand anything. Probably just people that have trouble fact-checking but have a deep desire for some "slave" to be working for them.
Re: (Score:2)
There's a common misunderstanding that AIs are compositors, some sort of elaborate copy / cut+paste, that they store "data" and just sort of merge it together. That's not how they work.
That depends on the implementation. If you don't have the source code, you're talking out your ass.
Trust me, I've had my fill of AI-generated images with half-baked artist signatures showing up in the corners. Cheating is always easier than doing it the "right" way.
fork (Score:2)
I see its future in a fork. Microsoft will try to hire away existing researchers and developers and set up its own fork - with lesser success, as other projects will be cautious about moving from an "open" AI to one owned by their competitor (MS). OpenAI will get drained of people and resources and will scale back. Competitors like Google and Amazon will have an opportunity to catch up.
Re: (Score:3)
"Open"AI was never open. This whole "success story" started with a lie and is still based on lies.
Re: (Score:1)
Why should they be? Do you think the several billion dollars that all that GPU training time cost were going to be paid by the mother of all Patreon accounts?
Re: (Score:3)
I see you are unaware of the history of OpenAI. Maybe fix that before making claims?
That $90B is not real money (Score:2)
You do know that, right?
MSFT stock is real money (Score:2)
Re: (Score:1)
Nope. That is something called "stock value". Try to keep up.
Re: (Score:2)
What about the $ is real money, anyway?
It works at small scales. If I have 20 bucks and go get groceries, I get groceries worth 20 bucks for my 20 bucks. Because the store will use those 20 bucks, throw in the other 20 bucks their customers give them, then buy more stuff for the 2000 bucks they have that's worth those 2000 bucks.
It kinda breaks apart when it passes an amount that could be considered sensible. What that is is up for debate, but I think we can agree that 90B is way, way past that amount.
At th
Non-profit not designed for profits. (Score:1)
My prediction (Score:2)
Sam Altman will start a new company with the majority of the old OpenAI staff.
OpenAI will either die, most likely, or take over the new company to bring him back.
Far more likely is that Microsoft takes over the new company.
Its 1990s apple all over again (Score:2)
Apple fires Jobs
Jobs creates Next
Apple flounders
Apple brings back Jobs, and NextStep (apart from the desktop) becomes the new MacOS
Re: (Score:2)
You forgot:
Microsoft steps in with a lot of cash to save the sinking ship
https://www.wired.com/2009/08/... [wired.com]
Will probably happen here too. When Altman leaves and starts a new shop, it will be interesting to see who Microsoft's strategic partner is going to be.
Re:Its 1990s apple all over again (Score:4, Interesting)
Microsoft didn’t do it out of the goodness of their heart, either. This investment was part of a settlement between Apple and Microsoft over a long-running lawsuit that Microsoft was probably going to lose.
Re: (Score:2)
Re: (Score:2)
That is exactly what I was thinking about.
However, in almost every other alternate universe, Apple doesn't survive. Back in the late 1990s / early 2000s, we all thought of Apple in much the same way as the likes of Amiga, which I believe technically still exists in some form, but is basically dead.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Because the minute one of them assesses the company fairly for what it's really worth and bursts the bubble, everybody loses piles of money.
Re: (Score:2)
Exactly. OpenAI is basically a scam with Altman the main scam artist. The whole current AI hype is based on a lie and on cleverly hiding the rather severe limitations of these artificial morons.
Re: (Score:2)
Even as it exists, it is leaps and bounds ahead of previous systems. We are on the exponential part of the curve, it will settle out to flat growth sooner or later. That doesn't mean it's just hype, though. Just look at all the things that are being done that couldn't be done before.
Re: (Score:2)
It is not and there is no "exponential part" of the curve. The only thing that is better is the natural language interface aimed at non-experts. Answer quality is actually worse than, say, IBM Watson (13 years old).
My idea : Ugly AI Monkey with UV laser eyes. (Score:2)
All the cryptomagic blockchain Ugly Monkey NFTs dipped in an AI sauce, combined with a flashy laser show.
All aboard my hype train and give me all your money!
Re: (Score:2)
Apologies in advance to Firesign Theater.
Nothing of value was lost (Score:2)
Just insane value destruction
Nah. Most of it was hype. Valuation and market cap aren't real value. This is more of a reality check than anything else.
Re: (Score:2)
Indeed. And by now at least the smart people know how utterly limited these artificial morons actually are and how little real applicability this tech has gained from the latest "breakthrough".
I have found one somewhat reasonable application so far: Speeding up search. And there it comes with the drawback that you learn less about what you searched and may well miss critical facts.
lol "value destruction" (Score:2)
If your value can be destroyed by a single person quitting, then the value never existed
Re: (Score:2)
Re: (Score:2)
If there is this much drama (Score:2)
This is like a Jerry Springer show episode. "You're fired, wait, you're hired." All we need are the paternity test results.
I think it was epic that MSFT grabbed Altman and Brockman for their efforts and likewise very telling that OpenAI was trying to get them back. That shows confusion in the executive leadership at OpenAI and creates distrust in their ability to lead the company. With the board citing a "lack of candor" leading to a trust issue, now they're going to be holding a lot of air and explaining
Emmett Shear?? (Score:2)
I don't get it (Score:2)
Sam Altman just used to raise money for OpenAI. He has no technical skills. He is not "the magic sauce" of AI. So what's the big deal?
Re: (Score:2)
Re: (Score:2)
Sam Altman just used to raise money for OpenAI. He has no technical skills. He is not "the magic sauce" of AI. So what's the big deal?
A boss who hires the right people, lets them get on with building stuff and doesn't get in their way gets to be popular with those people.
They didn't just lose SA. They lost the group of experts that are happy to work under SA.