Building a Better Server? Oxide Computer Ships Its First Rack (thenewstack.io)
Oxide Computer Company spent four years working toward "The power of the cloud in your data center... bringing hyperscaler agility to the mainstream enterprise." And on June 30, Oxide finally shipped its very first server rack.
Long-time Slashdot reader destinyland shares this report: It's the culmination of years of work — to fulfill a long-standing dream. In December of 2019, Oxide co-founder Jess Frazelle had written a blog post remembering conversations over the year with people who'd been running their own workloads on-premises... "Hyperscalers like Facebook, Google, and Microsoft have what I like to call 'infrastructure privilege' since they long ago decided they could build their own hardware and software to fulfill their needs better than commodity vendors. We are working to bring that same infrastructure privilege to everyone else!"
Frazelle had seen a chance to make an impact with "better integration between the hardware and software stacks, better power distribution, and better density. It's even better for the environment due to the energy consumption wins."
Oxide CTO Bryan Cantrill sees real problems in the proprietary firmware that sits between hardware and system software — so Oxide's server eliminates the BIOS and UEFI altogether, and replaces the hardware-managing baseboard management controller (or BMC) with "a proper service processor." They even wrote their own custom, all-Rust operating system (named Hubris). On the Software Engineering Daily podcast, Cantrill says "These things boot like a rocket."
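Hubris itself is in Oxide's public repos; purely to give a flavor of the small, task-based, message-passing style such a service-processor OS uses, here is an invented miniature in ordinary Rust, using threads and channels to stand in for kernel IPC. The task names, message types, and sensor values are all made up for illustration and are not Oxide's API.

```rust
// Invented miniature of a synchronous, message-passing service-processor
// task (in the spirit of, but not copied from, Hubris). Task names,
// message types, and sensor values are all hypothetical.
use std::sync::mpsc;
use std::thread;

// Requests another task can send to the "thermal" task.
enum ThermalMsg {
    ReadTempMillideg { reply: mpsc::Sender<i32> },
    SetFanPwm(u8),
}

fn main() {
    let (tx, rx) = mpsc::channel::<ThermalMsg>();

    // The thermal task: a loop that services one request at a time.
    let thermal = thread::spawn(move || {
        for msg in rx {
            match msg {
                ThermalMsg::ReadTempMillideg { reply } => {
                    // A real task would read a sensor over I2C here.
                    reply.send(42_500).unwrap();
                }
                ThermalMsg::SetFanPwm(duty) => {
                    println!("thermal: fan duty set to {duty}%");
                }
            }
        }
    });

    // A "client" task makes a synchronous call: send, then block on the reply.
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(ThermalMsg::ReadTempMillideg { reply: reply_tx }).unwrap();
    let temp = reply_rx.recv().unwrap();
    println!("client: CPU at {}.{:03} C", temp / 1000, temp % 1000);

    tx.send(ThermalMsg::SetFanPwm(60)).unwrap();
    drop(tx); // close the channel so the thermal task's loop ends
    thermal.join().unwrap();
}
```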
And it's all open source. "Everything we do is out there for people to see and understand..." Cantrill added. On the Changelog podcast Cantrill assessed its significance. "I don't necessarily view it as a revolution in its own right, so much as it is bringing the open source revolution to firmware."
Oxide's early funders include 92-year-old Pierre Lamond (who hired Andy Grove at Fairchild Semiconductor) — and customers who supported their vision. On Software Engineering Daily's podcast Cantrill points out that "If you're going to use a lot of compute, you actually don't want to rent it — you want to own it."
Re:Nice (Score:4, Informative)
Maybe, but read through their repos [github.com] - they're the real deal.
Impressive what they've accomplished so far. Ending the legacy PC inefficiencies would be a huge deal.
Re: (Score:3)
The ARM cores are for the BMC modules. The servers are based on AMD EPYC Zen 3 CPUs, as you can see on the specifications page [oxide.computer].
Re: (Score:1)
I was just reading through their GitHub repos. What's the point of running a different OS on your BMC if it isn't a coreboot/OS alternative for your x86 servers? BMCs don't ever need to reboot, and they're already relatively fast and open (Redfish).
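For reference, Redfish is just HTTPS plus JSON served by the BMC. Here's a minimal sketch of what "open" buys you, assuming a hypothetical BMC at bmc.example.com, placeholder credentials, and the reqwest crate (blocking feature); nothing here is Oxide-specific.

```rust
// Minimal Redfish query against a BMC. Host, user, and password are
// placeholders; `reqwest` with the `blocking` feature is an assumed
// dependency.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::builder()
        .danger_accept_invalid_certs(true) // BMCs often have self-signed certs
        .build()?;

    // The Redfish service root always lives at /redfish/v1/.
    let root = client
        .get("https://bmc.example.com/redfish/v1/")
        .send()?
        .text()?;
    println!("service root: {root}");

    // Enumerating systems requires credentials (basic auth, for brevity).
    let systems = client
        .get("https://bmc.example.com/redfish/v1/Systems")
        .basic_auth("admin", Some("password"))
        .send()?
        .text()?;
    println!("systems: {systems}");
    Ok(())
}
```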
Re: (Score:2)
There is no BIOS, just a very small bootloader; the OS is supposed to do almost everything.
They were supposed to open up their OS, Helios, when they shipped their first rack ... dunno what happened.
Yup....everyone else is the stupid one! (Score:2)
Their slogan, "The power of the cloud in your datacenter," is just fucking stupid. "The Cloud" is a bunch of data centers. So they are going to give you "the power of the cloud in your cloud". This level of stupidity does not inspire confidence.
Yes, as we all know, there is absolutely no difference between the server infrastructure at hyperscalers like AWS and Azure and what Oxide's customers are running in their legacy server racks. Thank you for lending your expertise on hyperscaler architecture to us plebes!
Re: (Score:2)
That language is targeted at the business-types who make pretty much all tech decisions (rather than us tech-types actually making decisions). Sigh.
Hubris (Score:5, Funny)
They even wrote their own custom, all-Rust operating system (named Hubris).
If the Universe has a sense of either humor or irony, that name will come back to bite them on the ass at some point.
Just throwin' that out there...
Re: (Score:3)
The semantics for RPC (rendezvous) are similar to, but simplified from, Ada's. With use and implementation experience, a fair number of 'gotchas' were discovered in the detailed semantics. One I remember had to do with priority inversion, where a higher-priority task (in the Ada sense) was in a rendezvous with a lower-priority task, and that got pre-empted by a task whose priority was in between those two. (I forget the exact semantics; it's been a while since I thought about that stuff.) ...
Re: (Score:2)
The classic example involves three tasks. Call them 1, 2 and 3, where task 1 has the highest priority and task 3 the lowest. Task 3 grabs a resource, task 1 tries to grab it, then task 2 becomes runnable and keeps task 3 from running (long enough for task 1 to miss a deadline).
The classic fix is giving each lock a priority, either statically or dynamically. If statically assigned, a higher-priority task cannot grab a lower-priority lock, and a lower-priority task inherits the priority of a higher-priority lock...
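To make the failure and the fix concrete, here's a toy scheduler simulation in Rust (the language Hubris is written in). All the task names, priorities, and work counts are invented for illustration; this is not Hubris code. With inheritance off, task 2 starves task 3 and task 1 finishes dead last; with inheritance on, task 3 runs at task 1's priority until it releases the lock, and task 1 finishes first.

```rust
// Toy discrete-time scheduler demonstrating priority inversion and the
// priority-inheritance fix. Everything here is hypothetical.

#[derive(Clone, Copy, PartialEq)]
enum State { Ready, Blocked, Done }

struct Task {
    name: &'static str,
    base_prio: u8, // bigger number = more urgent
    eff_prio: u8,  // effective priority; inheritance may raise it
    work: u32,     // CPU steps still needed
    needs_lock: bool,
    state: State,
}

fn simulate(inheritance: bool) {
    println!("--- priority inheritance: {inheritance} ---");
    let mut tasks = [
        Task { name: "task1 (hi)",  base_prio: 3, eff_prio: 3, work: 2, needs_lock: true,  state: State::Ready },
        Task { name: "task2 (mid)", base_prio: 2, eff_prio: 2, work: 3, needs_lock: false, state: State::Ready },
        Task { name: "task3 (lo)",  base_prio: 1, eff_prio: 1, work: 2, needs_lock: true,  state: State::Ready },
    ];
    // task3 grabbed the shared lock just before the simulation starts.
    let mut holder: Option<usize> = Some(2);

    let mut step = 0;
    loop {
        // Pick the highest-effective-priority ready task.
        let Some(cur) = (0..tasks.len())
            .filter(|&i| tasks[i].state == State::Ready)
            .max_by_key(|&i| tasks[i].eff_prio)
        else {
            break; // everyone is done
        };

        if tasks[cur].needs_lock {
            match holder {
                None => holder = Some(cur), // lock is free: take it
                Some(h) if h != cur => {
                    // Held by someone else: block, and (optionally) boost
                    // the holder to the blocked task's priority.
                    tasks[cur].state = State::Blocked;
                    if inheritance {
                        let boosted = tasks[h].eff_prio.max(tasks[cur].eff_prio);
                        tasks[h].eff_prio = boosted;
                    }
                    continue;
                }
                _ => {} // we already hold it
            }
        }

        // Run the chosen task for one step.
        tasks[cur].work -= 1;
        println!("step {step}: {} runs", tasks[cur].name);
        step += 1;

        if tasks[cur].work == 0 {
            tasks[cur].state = State::Done;
            if holder == Some(cur) {
                // Release the lock, drop any boost, wake the waiters.
                holder = None;
                tasks[cur].eff_prio = tasks[cur].base_prio;
                for t in tasks.iter_mut() {
                    if t.state == State::Blocked {
                        t.state = State::Ready;
                    }
                }
            }
        }
    }
}

fn main() {
    simulate(false); // task1 misses its "deadline": it finishes last
    simulate(true);  // task3 is boosted, so task1 finishes first
}
```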
Re:Hubris (Score:4, Insightful)
Exactly. Like Larry Wall's three "virtues" of programming: laziness, impatience and hubris. Thing is, those are actually the vices; the true virtues of programming are diligence, patience and humility, though the vices have their uses.
Re: (Score:3)
> They even wrote their own custom, all-Rust operating system (named Hubris).
At least they called the debugger "humility"
Goofy transcript (Score:3, Interesting)
Re: (Score:1)
Your shitty AI is broken.
proprietary hardware (Score:2)
Re: (Score:2)
I'm failing to understand what problem they're trying to solve. So they built their own dedicated service processor to take over the function of the BIOS? As a customer this allows me to do what exactly? I guess boot faster?
Re:proprietary hardware (Score:4, Insightful)
As a customer this allows me to do what exactly? I guess boot faster?
Several things. One is that you can audit the entire stack; you don't have to take some vendor's word that their work isn't the steaming POS it appears to be. Another is that you can customize it; need to achieve compliance with some law or regulation that can't be met without a crucial feature? You can go into that service processor yourself and deal with it.
Honestly this is just commodity hardware growing up: mainframes have been using service processors since the '70s, and they aren't doing that because they're foolish and in desperate need of your thoughts on the matter. At some point, sequencing the bootstrap of huge piles of costly hardware can no longer be subject to the vagaries of some vendor's glitchy, proprietary firmware blob.
Re: proprietary hardware (Score:2)
Lots of people on here have no experience with IPMI, and it shows.