Can We Replace Intel x86 With an Open Source Chip? (zdnet.com) 359

An anonymous reader quotes Jason Perlow, the senior technology editor at ZDNet: Perhaps the Meltdown and Spectre bugs are the impetus for making long-overdue changes to the core DNA of the semiconductor industry and how chip architectures are designed... Linux (and other related FOSS tech that forms the overall stack) is now a mainstream operating system that forms the basis of public cloud infrastructure and the foundational software technology in mobile and Internet of Things (IoT)... We need to develop a modern equivalent of an OpenSPARC that any processor foundry can build upon without licensing of IP, in order to drive down the costs of building microprocessors at immense scale for the cloud, for mobile and the IoT. It makes the $200 smartphone as well as hyperscale datacenter lifecycle management that much more viable and cost-effective.

Just as Linux and open source transformed how we view operating systems and application software, we need the equivalent for microprocessors in order to move out of the private datacenter rife with these legacy issues and into the green field of the cloud... The fact that we have these software technologies that now enable us to easily abstract from the chip hardware enables us to correct and improve the chips through community efforts as needs arise... We need to stop thinking about microprocessor systems' architectures as these licensed things that are developed in secrecy by mega-companies like Intel or AMD or even ARM... The reality is that we now need to create something new, free from any legacy entities and baggage that has been driving the industry and dragging it down the past 40 years. Just as was done with Linux.

The bigger question is which chip should take its place. "I don't see ARM donating its IP to this effort, and I think OpenSPARC may not be it either. Perhaps IBM OpenPOWER? It would certainly be a nice gesture of Big Blue to open their specification up further without any additional licensing, and it would help to maintain and establish the company's relevancy in the cloud going forward.

"RISC-V, which is being developed by UC Berkeley, is completely Open Source."
  • No (Score:5, Insightful)

    by Anonymous Coward on Saturday January 06, 2018 @07:19PM (#55877251)

    No

    • Because who is going to pay for the initial research and design to get it up to snuff?
      • Re: (Score:3, Interesting)

        The same people who paid to develop Linux, Red Hat, etc?

        • Re:No (Score:5, Insightful)

          by swamp_ig ( 466489 ) on Saturday January 06, 2018 @07:51PM (#55877395)

          IC design isn't something you can do in your spare time. You need a full-scale industrial process.

          • Re:No (Score:5, Insightful)

            by K. S. Kyosuke ( 729550 ) on Saturday January 06, 2018 @08:16PM (#55877499)

            IC design isn't something you can do in your spare time. You need a full-scale industrial process.

            You mean IC manufacturing? I'm pretty sure design is largely independent. If it weren't, ARM wouldn't be able to sell synthesizable CPU cores.

            • Re:No (Score:5, Insightful)

              by sanf780 ( 4055211 ) on Saturday January 06, 2018 @09:52PM (#55877865)
              ARM does sell synthesizable cores. "Synthesizable" means you can convert that code into logic gates, so you need at least a standard cell library with the fine detail of those gates. Memories are not included, as memories are not synthesizable code if you really want high bit density and low power usage. In order to get data in and out of the chip, you will also need a DDR interface (not synthesizable) along with a DDR memory controller (which is). Add to this that you need to generate internal clocks, so you might also want a few phase-locked loop blocks. Recent CPUs also include some sort of dynamic voltage scaling, frequency scaling, thermal protection, etc. You also want peripherals connected through SPI, I2C, maybe UART, and maybe an interconnect fabric so that the CPU can talk to them. I am sure I am missing a lot.

              So, it is not just the CPU core; you need a lot more in order to get a product. FPGA manufacturers give you both the hardware and the software to translate the code into something you can upload to the FPGA, and usually give you some freebies like DDR and PLLs. However, there are limitations on what you can get from an FPGA. ICs are the way to go if you want to be on the bleeding edge of performance, price, or power efficiency. And as far as I know, the tools to do ICs in advanced processes like 10nm are neither free to use nor open source. They are probably also a patent minefield. At the end of the day, you do not want to spend over one million dollars on tools that tell you "USE AT YOUR OWN RISK".
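              To make "synthesizable" concrete, here is roughly what such RTL looks like - a hedged sketch, with module and port names invented for illustration. A tool like Yosys or Design Compiler can map this onto AND/OR/flip-flop cells from a standard cell library; the PLL that supplies clk, the DDR PHY, and the memory macros cannot be produced this way and must come from the foundry as hard blocks.

              // Synthesizable RTL: maps directly onto standard cells.
              module counter8 (
                  input  wire       clk,    // comes from a PLL - a hard,
                  input  wire       rst_n,  // process-specific block that is
                  output reg  [7:0] count   // NOT produced by synthesis
              );
                  always @(posedge clk or negedge rst_n) begin
                      if (!rst_n)
                          count <= 8'd0;
                      else
                          count <= count + 8'd1;
                  end
              endmodule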

              • Of course it's much more complex than that. But to say that IC design - especially CPU design - "isn't something you can do in your spare time" is clearly not true. At least substantial portions of the design process are something people could do in their spare time. Skills and education are a much more prominent entry barrier there. The lower-level details, yes, that's still a problem. Physically dependent lower-level details such as voltage and power control are even worse. I guess it all depends on your experience…
                • It's certainly possible to design a CPU in your spare time. I've designed a couple myself.

                  Designing a modern Intel CPU replacement is something else though. That's a lot of man years of work, and most of it is in tedious work like testing and validating that very few people find joyful.
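                  For scale, a spare-time CPU can be as small as the toy accumulator machine below - a hypothetical sketch in plain Verilog (the opcode encoding and sizes are invented for the example), several orders of magnitude away from a modern x86 core:

                  // A toy 8-bit accumulator CPU: 4-bit opcode, 4-bit address,
                  // 16 bytes of unified instruction/data memory. A testbench
                  // would preload mem with $readmemh.
                  module toy_cpu (
                      input wire clk,
                      input wire rst
                  );
                      reg [7:0] mem [0:15];  // unified memory
                      reg [3:0] pc;          // program counter
                      reg [7:0] acc;         // accumulator

                      wire [7:0] insn = mem[pc];
                      wire [3:0] op   = insn[7:4];
                      wire [3:0] addr = insn[3:0];

                      localparam LOAD = 4'h0, STORE = 4'h1, ADD = 4'h2, JNZ = 4'h3;

                      always @(posedge clk) begin
                          if (rst) begin
                              pc  <= 4'd0;
                              acc <= 8'd0;
                          end else begin
                              pc <= pc + 4'd1;               // default: fall through
                              case (op)
                                  LOAD:  acc <= mem[addr];
                                  STORE: mem[addr] <= acc;
                                  ADD:   acc <= acc + mem[addr];
                                  JNZ:   if (acc != 8'd0) pc <= addr; // branch if nonzero
                                  default: ;                 // unknown opcode: NOP
                              endcase
                          end
                      end
                  endmodule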

            • I know a guy whose entire job is to build clocks in CPUs. That's all he does. He's really good at it.
              I mention that to give you an idea of the specialization that has taken place in the hardware industry. In Software, you can still be a full-stack developer. In Hardware, those days are past.
              • I now have this image in my mind of tiny little cuckoo clocks chiming away inside CPUs.
              • Re:No (Score:5, Insightful)

                by Pulzar ( 81031 ) on Saturday January 06, 2018 @11:32PM (#55878207)

                Clock guys are like driver guys... the stuff they write and develop is quite a bit different from everything else.

                The guys who work on caches, decode, fetch, etc. are all fairly interchangeable, if you've got a good architect to direct and oversee the work.

              • Re:No (Score:5, Informative)

                by Anonymous Coward on Sunday January 07, 2018 @01:54AM (#55878657)

                I knew a guy whose entire job for over a year at HP was to *route* the clock signal across a single chip (this was on the Superdome chipset).

                Yes, anyone can design the basic ISA logic of a chip. But it takes *huge* teams of people to design a *good* chip with all the modern features that we seem to take for granted such as variable clock and power states, or even more complicated letting them vary across different portions of the same chip. Not to mention coordinating the design and validating the DRC against the manufacturing process.

                There's a really good reason that CPU chip design is only done these days by a very small handful of billion dollar companies with billion dollar budgets. These designs are very complicated and it's no wonder that they keep the IP--they've invested a *lot* to develop it.

                Trivializing it by suggesting that an open source development model could equal or best these products is a tad naive. Unless we were living in a Star Trek economy and there were a few thousand contributors working on it full-time (the same size workforce as these big vendors), I don't see any chance of a competitive result.

            • Re:No (Score:5, Insightful)

              by TooManyNames ( 711346 ) on Saturday January 06, 2018 @11:35PM (#55878219)

              Yeah, no. That's very, very wrong.

              Much of a processor can be designed in RTL (the type of code you could open source), but there are critical components (as in, CPUs do not function without them -- at all) that require detailed knowledge of the underlying process. Any sort of clock distribution, selection, skewing, or balancing, for example, pretty much requires not only detailed knowledge of the types of gates available in process libraries, but also exhaustive simulations across all kinds of different timing scenarios to ensure that designs work as intended. Additionally, these types of circuits are not trivial to design, and they're often tightly integrated into the rest of a design in a way that isn't exactly modular (as in, even if there are separate clocking modules, design assumptions make removal or modifications of those clocking modules quite difficult).

              Maybe you could get away with an open core on an FPGA, but if you do that, you're going to sacrifice a lot in terms of performance -- as in getting at best into the 100s of MHz range compared to GHz for an ASIC. Moreover, you're not going to squeeze multiple cores and huge caches onto a single FPGA, so you'll need some sort of stitching to get anything even remotely close to your most basic Intel off-the-shelf processor.

              Basically the best you'll be able to say with a purely open source CPU is, "buy this for 10x the cost and at 1/10th the performance... but you can feel good that it's open source." At that point, all I can say is, "good luck with that."
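              To make the clocking point concrete: an enable-style divider like the sketch below is the part you *can* express in portable RTL (names invented here); the PLL that generates the core clock and the tree that distributes it with picosecond skew across millions of flops live below RTL and depend entirely on the process library.

              // RTL-expressible clocking: a divide-by-4 "tick" enable.
              module clk_div4 (
                  input  wire clk,
                  input  wire rst_n,
                  output reg  tick        // pulses one cycle in four
              );
                  reg [1:0] ctr;
                  always @(posedge clk or negedge rst_n) begin
                      if (!rst_n) begin
                          ctr  <= 2'd0;
                          tick <= 1'b0;
                      end else begin
                          ctr  <= ctr + 2'd1;
                          tick <= (ctr == 2'd3);
                      end
                  end
              endmodule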

              • by AmiMoJo ( 196126 )

                Maybe we don't need to replace the main CPU, just add a second one that handles secure stuff for us. Performance doesn't need to be as good if it is only managing secrets and does some crypto, i.e. the stuff we are worried about being stolen.

                That's basically what a lot of these CPUs do anyway, with things like TrustZone and Intel's management engine stuff. The difference will be that it's under our control.

              • I've commented on it above. Yes, as you go down, you need process details. A nasty part of that is that unlike in the past, fewer and fewer details of the physical processes are openly available to begin with. That's the one obvious obstacle to any kind of independent design. I'm not sure how far one can go with independent design rules these days.
            • Re: (Score:3, Informative)

              You mean IC manufacturing? I'm pretty sure design is largely independent.

              Well, I'm pretty sure you're an idiot. I also know only one of us is right in our certainty. Chips average about a million dollars per prototype run. You can simulate things and have them work flawlessly, but you still have to manufacture the masks, run through the steps of chip fabrication, then do your tests to see if it even remotely works. On the scale of GHz with nanometers of precision, things happen like inductive and capacitive effects that you can't properly simulate but that will utterly fuck over your design…

            • As I don't think it has been mentioned in this thread, AMD has been fabless for the better part of a decade.
            • by lkcl ( 517947 )

              You mean IC manufacturing? I'm pretty sure design is largely independent. If it weren't, ARM wouldn't be able to sell synthesizable CPU cores.

              in a traditional environment it takes around 18 man-months to design (and formally test) around 3,000 gates. it's pretty insane. 3,000 gates is about the size of a RISC Core. look up the numbers for an Intel processor (number of transistors - yes many of those are in the cache) - and you get an idea of just how much work is involved.

              also, the actual layout (like a PCB, only for transistors and tracks etc.) produced by automated tools tends to be... well... rubbish, basically. which means that much of the design has to be done by hand…

          • Oh yes you can, and lots of people do it; it's not that different from writing programs, really. The chip logic is written as code. Just having the chip logic doesn't get you the latest and greatest CPU though; having the manufacturing process up to snuff is an entirely different thing and a major R&D effort, but that is completely separate from the actual chip logic. The chip developer writes the logic, the foundry handles the manufacturing process; Intel does both, and it's the latter where they really earn their bread…
            • Sure, chip design is mostly code. The difference is every patch costs a few hundred thousand dollars, and of course everyone who wants the patch will have to buy a new chip. This is why over 80% of code written for an ASIC is test/verification code.
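              For a feel of what that verification code looks like, here is a minimal self-checking bench - a hedged sketch with an intentionally trivial inlined DUT (names invented), simulable with an open tool like Icarus Verilog. A real bench instantiates the RTL under test and compares it against an independent reference model.

              `timescale 1ns/1ps
              module tb_adder;
                  reg  [7:0] a, b;
                  wire [8:0] sum = a + b;   // stand-in DUT, inlined for brevity
                  integer i, errors = 0;

                  initial begin
                      for (i = 0; i < 1000; i = i + 1) begin
                          a = $random; b = $random;    // random stimulus
                          #1;
                          if (sum !== a + b) begin     // compare DUT vs. reference
                              errors = errors + 1;
                              $display("MISMATCH: %0d + %0d -> %0d", a, b, sum);
                          end
                      end
                      if (errors == 0) $display("PASS");
                      $finish;
                  end
              endmodule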

          • Exactly. Also good luck with trying to 'open source' 10nm die fabrication.
        • To get a software project off the ground, you need a smart coder, a computer good enough to run it, and a boatload of free time; if you treat your time as worthless, the high end of the cost is probably in the tens of thousands of dollars. Developing microscopic computer chips... you'd likely need billions of dollars worth of equipment, not to mention a huge team of people specialized to extreme depths in multiple fields.
      • What are the odds of IBM giving away POWER? Is the architecture susceptible to the same vulnerabilities?
    • Oh Really (Score:2, Funny)

      by Bruce Perens ( 3872 )

      Thank you for this vast work of erudition, anonymous moron.

      Someday, perhaps, when you are a pre-adolescent, you may acquire somewhat more knowledge of computers, though probably not enough to make you top-heavy. At that time, you may hear of a miraculous device called a gate-array which makes it possible to craft a running CPU similarly to the way that programmers write software. With this device, someone of greater skill than you will put together a computer that might not be as fast as you like, and might not have as many transistors as you like, and might use more power than you like, but will be capable of running an Open Source CPU with a known-bitstream so that the chance of there being nasties that we're not told about that spy on us built into the CPU die is reduced from today's horrible state (gate-arrays can still have them, but the people who make these nasties don't know in advance where we put the CPU implementation).

      • This will never be as efficient as a fully-custom chip, but it can be good enough. Many of us will be happier using it.

        This is a good point: people who care about security (like AWS) have different requirements, and may be willing to forgo some performance in exchange for security.

        • This will never be as efficient as a fully-custom chip, but it can be good enough. Many of us will be happier using it.

          This is a good point: people who care about security (like AWS) have different requirements, and may be willing to forgo some performance in exchange for security.

          SOME performance???

          For some pretty hefty values of "Some"...

          • Actually, if you look at this device [digikey.com], you'll see that gate-arrays aren't in the same class with your father's Oldsmobile any longer. We need them to be denser than the ones at that link, but the potential is there.

      • The comment above mine [slashdot.org] said, "While I don't think your post should have been modded down, it is unnecessarily rude."

        Bruce, I agree with that comment. Don't act out anger.

        Another quote from the comment above mine: "I doubt that open source hardware would prevent hardware bugs, but it would provide a way of avoiding backdoors that are intentionally placed. You're absolutely right in that respect."

        The possibility of backdoors may cause Intel to go bankrupt. How can Intel be re-organized so that it can…
      • Thank you for this vast work of erudition, anonymous moron.

        Someday, perhaps, when you are a pre-adolescent, you may acquire somewhat more knowledge of computers, though probably not enough to make you top-heavy. At that time, you may hear of a miraculous device called a gate-array which makes it possible to craft a running CPU similarly to the way that programmers write software. With this device, someone of greater skill than you will put together a computer that might not be as fast as you like, and might not have as many transistors as you like, and might use more power than you like, but will be capable of running an Open Source CPU with a known-bitstream so that the chance of there being nasties that we're not told about that spy on us built into the CPU die is reduced from today's horrible state (gate-arrays can still have them, but the people who make these nasties don't know in advance where we put the CPU implementation).

        The instruction set and currently-fixed hardware features like the MMU and the translation look-aside buffer (a feature implicated today) will be repairable by changing the bitstream.

        This will never be as efficient as a fully-custom chip, but it can be good enough. Many of us will be happier using it. And for those of us who require algorithm acceleration (hopefully for better reasons than mining cryptocoins, but that is one example) it will be possible to code it into the system and get the advantages of a hardware implementation without it being so hard.

        Unfortunately, as you well know, this approach means goodbye to virtually every computing-type device most of us have become accustomed to. With all due respect, IMHO, even desktop computers would have to devolve into houselight-dimming, room-warming, five-rackspace-hogging monstrosities, with barely the compute power of a MacBook Air. And as for modern GPU emulation with any reasonable number of available FPGAs, forget it!

        • Re:Oh Really (Score:4, Informative)

          by Bruce Perens ( 3872 ) <bruce@perens.com> on Saturday January 06, 2018 @11:32PM (#55878213) Homepage Journal

          Unfortunately, as you well know, this approach means goodbye to virtually every computing-type device most of us have become accustomed to.

          Maybe you haven't been following gate-array development. There are mobile ones now. They use FLASH to store the program bits. And the rest is CMOS which we know how to power-manage. The gate-arrays of yore were more power-thirsty because nobody cared back then.

    • Yes. But it'll be 100 times slower.
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Saturday January 06, 2018 @07:22PM (#55877265)
    Comment removed based on user account deletion
    • How is OpenSPARC vs RISC-V?

      I really hope we get at least one open hardware design taking off.

      It does not have to replace the existing architectures but could act as a supplement.

      Many people would like something more open now, after all the ME bugs etc.
      Especially in the INFOSEC community and the like.
      • Re: (Score:3, Interesting)

        by OrangeTide ( 124937 )

        RISC-V's specification is a lot more flexible and permits a wider range of implementation capabilities than OpenSPARC. I've worked on 32-bit RISC-V based microcontrollers embedded in ASICs, and theoretically you can put together a multiprocessor 64-bit RISC-V with advanced features such as speculative execution.

        I think that RISC-V has a pretty good future because it is a specification rather than a single implementation. There are multiple implementations, some of them are open source…

      • by TheRaven64 ( 641858 ) on Sunday January 07, 2018 @07:25AM (#55879455) Journal
        You can't compare OpenSPARC and RISC-V. OpenSPARC is an implementation of the SPARCv9 ISA. RISC-V is a specification. There are about a dozen open source RISC-V designs now, ranging from simple in-order 32-bit cores to out-of-order superscalar 64-bit cores.

        OpenSPARC is the full Verilog implementation of T1 (Niagara) and T2. Unfortunately, both are written in the traditional disposable style of commercial CPU implementations: there are some reusable building blocks, but the general idea is that each CPU design is a complete from-scratch rewrite. Unlike software designs, there's no thought to long-term maintenance or making the designs easier to refactor. Such concerns are often at odds with getting the best possible performance from the current process (the next generation process may have completely different constraints).

        In contrast, the reference RISC-V design, Rocket, is written in Chisel, a high-level Scala-derived HDL that can generate Verilog. It is designed to be reusable, and this was shown by the Berkeley Out-of-Order Machine (BOOM), which is an out-of-order superscalar design that reuses most of the Rocket core's execution units.

        If you just want to send something to a fab now, the OpenSPARC cores are probably better, but if you want to make significant modifications then Rocket or BOOM is orders of magnitude easier to work with. In addition, the RISC-V ecosystem is growing, whereas the SPARC ecosystem is contracting or dead.

    • by davecb ( 6526 )
      The newest T-series SPARCs were based on two persons' work using a much earlier variant, contemporary with the open SPARC design. It takes brilliance, not huge companies, you see.
  • by cheesyweasel ( 5072497 ) on Saturday January 06, 2018 @07:25PM (#55877271)
    What version of DOS would you like to run on it?
  • by JoeyRox ( 2711699 ) on Saturday January 06, 2018 @07:29PM (#55877293)
    Being open source doesn't magically prevent bugs from reaching the silicon stage of a chip's design, nor does it make it any easier to fix bugs baked into a completed design. There are only so many people in the world smart enough to even fully understand modern superscalar designs let alone contribute usefully to it.
    • Indeed, at least one of the chips mentioned has the issue. Still waiting for someone to fiddle with POWER8 and UltraSPARC to see whether they have it. Itanium is claimed not to; haven't heard of evidence to the contrary yet.

    • Re: (Score:2, Insightful)

      by quantaman ( 517394 )

      There are only so many people in the world smart enough to even fully understand modern superscalar designs let alone contribute usefully to it.

      I doubt that's true.

      The problem wasn't the lack of people smart enough to spot the bug, it was the fact the bug was created 20 years ago back when people probably weren't thinking about bugs like that. And then in the 20 years since there probably weren't many people with a reason to start digging into that level of the design.

      I'm not sure open source would have made a big difference. It gives you more eyes in some cases, but as OpenSSL demonstrated, people only read the code they change, so old code that…

    • by lkcl ( 517947 )

      Being open source doesn't magically prevent bugs from reaching the silicon stage of a chip's design, nor does it make it any easier to fix bugs baked into a completed design. There are only so many people in the world smart enough to even fully understand modern superscalar designs let alone contribute usefully to it.

      interestingly the head of the shakti team, madhu, is an advocate of something called "bluespec". it's similar to Berkeley's "chisel" except that, because bluespec is written in Haskell, it's possible to do *formal mathematical proofs on the designs*.

      there was a talk at ccc just last week about doing mathematical proofs on designs, but it's much harder to do if the underlying programming language for the ASIC doesn't really support formal proofs.

      anyway, this is extremely interesting timing, as i am…
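      For a flavour of what "proofs on designs" means in practice: the property below is the kind of thing a formal tool proves exhaustively over all input sequences, rather than checking by simulation. This is a hedged sketch using a SystemVerilog assertion (the arbiter signals are invented); open tools such as SymbiYosys check properties of this shape.

      // "a grant is only ever given in response to a request."
      // a formal tool proves this for ALL input sequences, not just
      // the ones a testbench happens to drive.
      module arb_props (
          input wire clk,
          input wire req,
          input wire gnt
      );
          assert property (@(posedge clk) gnt |-> req);
      endmodule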

  • by Anonymous Coward on Saturday January 06, 2018 @07:30PM (#55877299)

    Yes, look at IBM's Power9-based Talos Workstation. It has open firmware, open microcode, open BMC firmware so pretty much all of it is auditable. Is it secure? Who knows...

    The downside is obviously the price.

    Repositories:
    https://git.raptorcs.com/git/
    https://github.com/open-power
    https://github.com/openbmc

  • No (Score:2, Interesting)

    by Anonymous Coward

    Figure out some way to fund the billions in development costs, sort out the legal/IP issues, and marshal the necessary talent - then maybe... Of course, there is no reason to believe the result would be any better: the RISC-V memory model has severe problems [electronicsweekly.com] due to underspecified memory ordering that were revealed by formal testing and are still being resolved. [google.com] Perhaps this is an example of an open process working well, but just throwing out RISC-V doesn't guarantee a bug-free design.

  • Not every semiconductor foundry can make a modern CPU. You can get your hands on the latest i7 IP, but only Intel has a foundry with the equipment to make an equivalent chip out of it. When Moore's law truly flattens out, the rest of the semiconductor manufacturing industry might catch up, and the difference between one CPU and another will truly be just the IP.
  • To a reasonable approximation, all patents on a design must have been filed before it was published - as soon as details are published, they cannot be patented. Post-1995, the lifetime of a patent is 20 years.

    So anything 20 years or older must be patent free. I.e. anything before 1998 or so should be fine. Oddly enough that means that the original 386 instruction set is OK. So is MIPS.

    SSE etc. is not, though.

    Intel published a helpful chart of when each SIMD instruction set was patented

    https://arstechnica.com/inform... [arstechnica.com]

    Since x64 requires SSE2, a fully patent-free x64 implementation isn't possible yet.

    • Sadly, there is a trick to work around the 20-year patent limit. Patent a subtle feature of the old design, and if necessary tune the new patent to be more applicable to modern tools. This is an old practice with software patents, still in use by companies that create defensive and competition-stifling suites of patents. A review of existing tools for patentable material is standard practice for a skilled patent attorney.

      • Sadly, there is a trick to work around the 20-year patent limit. Patent a subtle feature of the old design, and if necessary tune the new patent to be more applicable to modern tools.

        I could see drug companies patenting a 'modified release' version of an old drug which is going out of patent. Still, the non-'modified release' version enters the public domain.

        What I can't see is how you can do this for a documented CPU ISA. You could patent the details of superscalar or out-of-order execution. However, doing that is actually creating a new invention.

        Intel keep adding new instructions - SSE, AVX etc - but then those are actually new inventions too.

    • It's also more of a problem for an open source design. CPU makers generally don't bother too much with patenting microarchitectural features, because it's very expensive to stick a competitor's chip under an electron microscope and get enough evidence to convince a court that it's actually infringing. For an open source design, you have access at least to the RTL and so can see very easily and cheaply whether it's infringing. If you wait until your competitor has taped out before sending your C&D, you can…
  • To what end? (Score:4, Insightful)

    by Anonymous Coward on Saturday January 06, 2018 @07:52PM (#55877403)

    OpenSSL being open source didn't find or prevent Heartbleed.

    An open chip likely wouldn't have affected Meltdown or Spectre. This wasn't negligence by Intel (as evidenced by the fact that some of the recent vulnerabilities were shared by AMD chips on a completely different architecture).

    The problem isn't that Intel failed to secure something obvious. It's that there was a mechanism that everyone knew about, but which all experts thought couldn't be used to extract data. Then someone found a clever technique nobody thought of before that made everyone realize it WAS vulnerable.

    Being open wouldn't have prevented the issue. Indeed, the issue was found by third-party researchers who didn't have access to the low-level details of the architecture.

    Open Source is not a panacea.

  • by GerryGilmore ( 663905 ) on Saturday January 06, 2018 @07:55PM (#55877423)
    Is it technically possible? Sure, there are already open-source core designs available. All you have to do is come up with the hundreds of thousands of engineers, designers, and manufacturing experts, replicate about 40 years of legacy toolchains from basic compilers to OSes, languages, and frameworks, and add in a smidgen of semiconductor factories, testing facilities, and packaging support. Oh! Did I mention sales and marketing? Go right-da-fuck ahead!
  • Linux was incremental: you had the kernel and a command line, and things were slowly added, but even early on you had something people could play with. It's easy to distribute software, and it's easy to work on small parts. Hardware is a bit different. Your first open source CPU is going to suck. It will have absolutely no advantage over existing processors and won't for many years. How are you going to keep a community going with very little tangible to show?
  • I don't see ARM donating its IP to this effort...

    I don't imagine Softbank paid $32B for ARM Holdings just so they could give the IP away.

  • I was just asking about that in a previous thread [slashdot.org]. So, if MIPS is really unchained by patents etc, then we might have a chance.

  • "It would certainly be a nice gesture of Big Blue"

    Indeed. Perhaps they could throw in a nice free pony for everybody.

  • Comment removed based on user account deletion
  • by sdinfoserv ( 1793266 ) on Saturday January 06, 2018 @08:42PM (#55877581)
    A fairly unthoughtful, knee-jerk reaction from someone who is clearly no more involved in technology than being a writer.
    Bugs happen. Everywhere, on every layer. Save your outrage for true malfeasance. Get angry at the Feds for storing SF-86 forms (the questionnaire for top secret clearance) on OPM servers unencrypted. Get angry at Equifax management for making the conscious, criminally liable decision of storing the PII of pretty much every US taxpayer "in the clear" at rest.
    But bugs that take years or generations of development and understanding to discover are unavoidable.
    And certainly don't suggest replacing it with a questionably supportable ecosystem. Linux, despite global usage, has completely failed on the desktop outside of a few niche hardcore users. (I know he didn't specifically say Linux, but it's an example of an attempt at global open source.) Not a tolerable trajectory for hardware manufacturing, let alone for a market where the incumbent already holds the majority.
  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Saturday January 06, 2018 @08:47PM (#55877591) Homepage Journal

    If you really want Open Source, after-market bug fixes, and security, the best way to do that is to use not a CPU at all but a programmable gate-array. This also gives you the ability to have evolution in purchased hardware, for example improvement of the instruction set. The problem is finding a gate-array that is fast enough, dense enough, and power-conserving enough.

    It would be cool to code your own special-purpose algorithm accelerators in VHDL or Verilog, etc.

    This is sort of on the edge of practical, if you have the money to spend. Not as fast, not as powerful, uses more electricity, infinitely flexible. Certainly there would be some good research papers, etc., in building one.
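    As a taste of what coding your own accelerator looks like, here is a hedged sketch of a multiply-accumulate unit in Verilog (interface names invented for the example): one dot-product step per cycle, where a general-purpose core would spend several instructions.

    // Special-purpose accelerator: a multiply-accumulate (MAC) unit.
    module mac_unit (
        input  wire        clk,
        input  wire        clear,   // zero the accumulator
        input  wire        en,      // accept a new sample pair
        input  wire [15:0] a,
        input  wire [15:0] b,
        output reg  [39:0] acc      // wide enough to avoid overflow
    );
        always @(posedge clk) begin
            if (clear)
                acc <= 40'd0;
            else if (en)
                acc <= acc + a * b; // one dot-product step per cycle
        end
    endmodule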

  • by sunking2 ( 521698 )

    Software has the luxury that you can be behind the curve in development and still manage as an alternative. That doesn't work in hardware. If you are late to the game nobody supports you. And people aren't going to spend money making hardware that nobody wants to support.

  • There is a massive amount of tooling and infrastructure needed to design a modern CPU architecture. Sure, you can start with open designs that are 20 years old, but you'll need to add a massive amount of changes around out-of-order execution, speculative execution (yes, it caused this problem; it's also a critical optimization), cache management and coherency, and so on. A lot of this requires highly specialized workers who expect to be paid well for their expertise.

    Much better to invest in formal verification…

  • Why is this even an idea? As an aside: every major architecture suffered the same type of defect because the underlying approach brings a major performance benefit. Why wouldn't an open source project have the same problem? This defect is so central to the design that it can't be fixed with microcode.
  • Is no one going to mention RISC-V?

  • TL;DR: No, you cannot replace x86, AMD64, POWER, SPARC, MIPS and ARM with a FOSS design.

    OpenSPARC, OpenPOWER, RISC-V?
    Those have opened the 'ISA', but NOT the design, so you have to design your microprocessor from scratch.

    ARM? MIPS? You can get the full design, if you pay. Or you can pay for rights to the ISA, and design everything from scratch.

    Designing a somewhat modern microprocessor is hard enough, even if you already have the ISA and the beast is cruft-free (64 or 128 bits from the get-go, without being…

  • Designing bug-free hardware that is extremely fast and efficient is so easy I'm surprised people aren't already building this stuff in their kitchens in their spare time. It's not in Intel's interests to make hardware with bugs. Unless it comes out that they were grossly negligent or were working with the NSA, I think this just falls into one of those unfortunate categories. Besides, every few years people need to buy new CPUs anyway, so the problem will resolve itself, just in time for another issue to make…
  • Uh, ZDNet? Really? (Score:5, Insightful)

    by asackett ( 161377 ) on Saturday January 06, 2018 @10:29PM (#55877973) Homepage

    It's not surprising that someone who doesn't seem to know that "the cloud" *is* private data centers also knows nothing of IC fab.

  • What socket will this 'free fantasy CPU' use? What chipset will it use? Are we talking about a volunteer clone effort to reverse-engineer the X86 processors, or something wholly new, so that we require new drivers, OS, BIOS, etc?

    Only by being ignorant of what's involved can someone make such a proposition.

    This is really an over-sized reaction to a minor problem that will be resolved in the next generation of silicon from every chipmaker.

  • Comment removed based on user account deletion
  • "The reality is that we now need to create something new, free from any legacy entities and baggage that has been driving the industry and dragging it down the past 40 years. Just as was done with Linux."

    For Linux you just needed a copy of gcc. Chip design and fabrication requires just a weee bit more.

  • then in my opinion, the next generation of CPUs should have re-programmable gate logic. Kinda like how FPGA works, but significantly faster and on a massive scale. Just imagine the kind of power you'd get if the OS switches large areas on the silicon to fit certain tasks. When you play games or do some massive 3D work, the CPU would be reprogrammed for that task. When you want to mine crypto or do some massive encryption/decryption/compression/decompression, the CPU would be reprogrammed accordingly.

    • Just buy an FPGA on a PCI board; integrating one on the CPU is moronic. And it's not going to outperform a dedicated GPU for gaming etc., or a dedicated crypto chip. Field programmability comes at a cost. Plus, more powerful FPGAs cost an arm and a leg - too much to have as a standard feature of a regular PC. Even supercomputers don't generally bother with them; if you need raw compute power it's usually faster and easier to buy it in the cloud. FPGAs are cool as frig, but they are not a universal fix to every problem.
      • by skaag ( 206358 )

        I'm not talking about the kind of tech we have now. Obviously FPGA is not suitable for the stuff I'm talking about, and I wouldn't integrate FPGA into today's CPUs. But just imagine if you could add more "FPGA-style" chips on a bus. Some would be purposed as GPUs, some as crypto chips, some would be purposed for audio processing, some for manipulating large-scale 3D scenes (complex interactions, collision detection, physics), some for AI / neural-net-style compute and powerful image recognition, some even for…

  • It's too bad the project at Sun to produce an asynchronous CPU was cancelled; that seemed like an interesting path. I wonder if anyone else is experimenting with that now.

  • Intel have poured literally billions of dollars into R&D of their products for decades to get to where they are now.

    I know there's a lot of clever people out there, but I'd be amazed if the open source community could ever catch up with, and then keep pace with, Intel's current CPUs for features and performance.

  • It looks great!
