
DARPA Invests $100 Million In a Silicon Compiler (eetimes.com) 104

The Defense Advanced Research Projects Agency (DARPA) will invest $100 million into two research programs over the next four years to create the equivalent of a silicon compiler, aimed at significantly lowering the barriers to designing chips. "The two programs are just part of the Electronics Resurgence Initiative (ERI) expected to receive $1.5 billion over the next five years to drive the U.S. electronics industry forward," reports EE Times. "ERI will disclose details of its other programs at an event in Silicon Valley in late July." From the report: Congress recently added $150 million per year to ERI's funding. The initiative, managed by DARPA, announced on Monday that the July event will also include workshops to brainstorm ideas for future research programs in five areas ranging from artificial intelligence to photonics. With $100 million in funding, the IDEAS and POSH programs represent "one of the biggest EDA research programs ever," said Andreas Olofsson, who manages the two programs.

Together, they aim to combat the growing complexity and cost of designing chips, now approaching $500 million for a bleeding-edge SoC. Essentially, POSH aims to create an open-source library of silicon blocks, and IDEAS hopes to spawn a variety of open-source and commercial tools to automate testing of those blocks and knitting them into SoCs and printed circuit boards. If successful, the programs "will change the economics of the industry," enabling companies to design relatively low-volume chips that would be prohibitively expensive today. It could also open a door for designers working under secure regimes in the government to make their own SoCs targeting nanosecond latencies that are not commercially viable, said Olofsson.

  • by Anonymous Coward

    That makes sense if you look at the commercial chip design market. The process is error prone and expensive.

    It makes a hell of a lot less sense if you look at some other people busy in the space. Like how Chuck Moore does his chip designs with a "silicon compiler" written by a single person. Meaning that DARPA could have effective chip design tools for as little as a hundred thousand dollars, iff they manage to find the right person to build it for them. Software design is funny like that, and we haven't st

    • They don't care about saving a few million dollars if they hire the exact right person, they want to get a good result without having to rely on hiring the exact right person.

      Their goal is to develop partner businesses by awarding contracts; they're not trying to get the K-Mart special.

    • Re:Geh. (Score:5, Interesting)

      by NicknameUnavailable ( 4134147 ) on Saturday June 30, 2018 @05:21AM (#56870018)
      I don't think you actually understand how difficult a "silicon compiler" would be to produce. Even relatively well-known things like FPGA compilers are absurdly complex, and they rely 99% on arranging Tetris-block-like configurations of flip-flops into the tightest arrangement possible to avoid wasting space for a given design (and take obscene amounts of time to do so). Now imagine designing those Tetris blocks from the ground up, with variable transistor sizes as technology and manufacturing needs dictate, and breaking the whole thing down at the end into the CNC files to machine out the masks, with metadata for the exposure times, and it gets mind-bogglingly complex. No one guy has a design for even one of those things that is close to comprehensive, let alone all of them. You're talking about things which, even broken into their base components, would take the life's work of several dozen geniuses to achieve if they were in the flow state their entire lives and experts at the specific things they were working on at every level. Scale that out to a manageable software development team on a time limit as aggressive as this, and $100M is an absolute bargain.
      • by AHuxley ( 892839 )
        As difficult as finding people who could work with BASIC and put it on a chip in 1980?
          No, as in even if you know every aspect of the system, from staying current on bleeding-edge transistor designs, to the logical arrangement of them into cores, to the wiring of those cores, to the inductive effects between transistors and traces, to the thousand other issues that each require special expertise, you wouldn't live long enough to write it all, even if you were in the flow state 24/7, started coding on it with expert-level knowledge when you were born and lived to 150 years old doing nothing else the
      • by mlyle ( 148697 )

        ... Synthesis tools already exist, and every fab has a design library of standard transistors. While the tools are complicated and very expensive (though open source versions exist), they are there. So the problem you're describing is already solved. Designers describe logic, help a little with floorplan and constraints, and get a design out minutes to hours later.
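
        For a sense of what "describe logic" means here, a minimal, hedged sketch (module and signal names are invented for illustration): behavioral RTL like the counter below is the designer's input, and the synthesis tool maps it onto the fab's cell library before place-and-route turns the mapped netlist into geometry.

            // Toy example: behavioral RTL the designer writes. A synthesis tool
            // maps this onto the fab's standard-cell library; place-and-route
            // then turns the resulting netlist into layout geometry.
            module counter #(parameter WIDTH = 8) (
                input  wire             clk,
                input  wire             rst_n,   // active-low reset
                input  wire             en,      // count enable
                output reg  [WIDTH-1:0] count
            );
                always @(posedge clk or negedge rst_n) begin
                    if (!rst_n)
                        count <= {WIDTH{1'b0}};  // reset to zero
                    else if (en)
                        count <= count + 1'b1;   // increment when enabled
                end
            endmodule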

        What's difficult is that we don't have great programming mechanisms to describe parallel logic, or to synthesize sequential descriptions of ta

      • by Anonymous Coward

        Sounds like a job for machine learning and blockchain.

        Quick. Write up a press release.

    • by Anonymous Coward

      There are existing silicon compilers for languages like VHDL, and designs are built up from standard libraries using templates, much as in C++. Instead of passing class objects, you are passing blocks of bits. Even if you do get the design working and verified, there are still problems with electromagnetic fields and crosstalk as electrons move around, as well as clock timing with all the different parts operating in parallel. So tests have to be done with FPGAs, prototype silicon and then the final chip.
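
      As a rough illustration of the "blocks of bits" point (a hedged Verilog sketch rather than VHDL; names are invented), a parameterized module is instantiated much like a C++ template is specialized, but its entire interface is just bit vectors and a clock:

          // Illustrative only: a generic register stage, parameterized by width,
          // roughly analogous to a C++ template. The interface is nothing but
          // bits plus a clock.
          module pipe_stage #(parameter WIDTH = 16) (
              input  wire             clk,
              input  wire [WIDTH-1:0] d,   // input bits
              output reg  [WIDTH-1:0] q    // same bits, one clock later
          );
              always @(posedge clk)
                  q <= d;
          endmodule

          // Instantiation plays the role of template specialization.
          module top (input wire clk, input wire [31:0] in, output wire [31:0] out);
              pipe_stage #(.WIDTH(32)) stage0 (.clk(clk), .d(in), .q(out));
          endmodule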

      Large p

  • In an industry that already spends billions of dollars on design and manufacturing of chips, as per the example of $500 million for a single SoC, what are you going to do with a measly $100 million?

    • How much of that $500m is legit R&D, and how much is marketing, and how much is payments to partners to use it? How much of it is bogus expenses designed to avoid taxes, and how much of it is actual cash money that walked out the door?

      So we find out, it doesn't take $500m to make an IC.

      Actually, I've got a ~$20 FPGA dev board on my desk right now, and it isn't going to take me $500m to write a little verilog. ;)

      Compilers are hard, but still, they're generally written by a very small software team. The h

      • by Anonymous Coward

        There's a world of difference between coding up an FPGA or the cut-and-paste IP methodology used in commodity ASIC design, and the processes that went into that Intel CPU or Nvidia GPU sitting in your game toaster. The extreme scales and manufacturing mean shit goes beyond connecting the dots or even electronic design into serious physics and managing heat and the like. You're not going to encounter that on your Xilinx hobby board, but it's real and it's expensive with commercial CPU and GPU (etc.) design

      • Compilers are hard, but still, they're generally written by a very small software team.

        Compilers for hardware targets are a lot harder than for a general-purpose CPU, because the hardware offers many more degrees of freedom in implementing a design.
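
        As a hedged, invented example of those degrees of freedom: the same multiply-accumulate can legally be built as one combinational cloud or as a two-stage pipeline, and the tool (or designer) has to choose based on area, clock speed and latency, a choice a compiler targeting a fixed CPU never faces.

            // Same function, two implementations: the kind of freedom a hardware
            // target offers.

            // 1) Purely combinational: result in the same cycle, long critical path.
            module mac_comb (input wire [7:0] a, b, input wire [15:0] c,
                             output wire [15:0] y);
                assign y = a * b + c;
            endmodule

            // 2) Two-stage pipeline: shorter critical path, higher clock rate,
            //    but the answer arrives two cycles later.
            module mac_pipe (input wire clk,
                             input wire [7:0] a, b, input wire [15:0] c,
                             output reg [15:0] y);
                reg [15:0] prod, c_d;
                always @(posedge clk) begin
                    prod <= a * b;        // stage 1: multiply
                    c_d  <= c;            // delay c to stay aligned
                    y    <= prod + c_d;   // stage 2: add
                end
            endmodule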

      • Re:small budget (Score:5, Interesting)

        by NicknameUnavailable ( 4134147 ) on Saturday June 30, 2018 @05:33AM (#56870044)

        How much of that $500m is legit R&D, and how much is marketing, and how much is payments to partners to use it? How much of it is bogus expenses designed to avoid taxes, and how much of it is actual cash money that walked out the door?

        99% of it goes into making masks, configuring equipment, and testing out new designs, so basically all of it. Any kind of development takes iteration to achieve: think of it as having to pay several million dollars every time you hit the debug button in Visual Studio. That's the equivalent of chip R&D. It takes months of engineering work to craft and machine simple things like masks; on average a mask alone runs a million dollars due to the failure rates in making them and the labor required to do so, and it takes several for the different layers of a chip. Once you've shelled out $10-20M you then have to spend another few million on configuring the equipment to use it, and on materials which get scrapped in all your calibration fuckups. When all is said and done you're at about $25-30M when you try to debug it. They certainly try to cut costs and find all the possible bugs in that single debug session, but it doesn't happen, so four iterations later, if you're lucky, you have a new chip at $100M. I'm not actually sure this project will do much, if anything, to help, since the bulk of the cost is in making the things that make the chips (masks, etc.), but it seems interesting.

        Actually, I've got a ~$20 FPGA dev board on my desk right now, and it isn't going to take me $500m to write a little verilog. ;)
        Compilers are hard, but still, they're generally written by a very small software team. The hardware team would not be bigger, if anything it would be smaller.

        Do you know how that FPGA compiler works? Chances are it's made by one of two companies (the open-source cores for FPGAs are terrible), and you've likely noticed it takes around a dozen gigabytes to install the compiler. Now consider that it only does arrangements of flip-flops, not actual hardware design. Hardware design is like a 2D (and for chips of any complexity, 3D) version of that Tetris-like compilation. You not only have to compile things in sequence, you also have to make sure they work in parallel and FIT onto a constrained space in the most efficient manner; AND they have to do so without doing things like creating inductive effects that make bits tunnel to the wrong channel of a bus or otherwise screw up calculations; AND you have to take into account heat dissipation; AND the limited external IO pins; AND the limited internal IO pins between those Tetris-like blocks; AND changing hardware (how long until you have to scrap the whole compiler and start over because your transistor dimensions changed? Six months?)

        This isn't software design; software is super fucking easy compared to hardware (hint: FPGAs are still effectively software).

    • In an industry that already spends billions of dollars ...

      This isn't about chopping down a bigger tree. It is about sharpening the ax.

      as per the example of $500 million for a single SoC, what are you going to do with a measly $100 million ?

      Make future SoC designs cost a lot less.

        • If a $100 million effort could make a $500 million SoC design cost "a lot less," these projects would already have been done.

        • By the companies selling the multi-gigabyte software tools? Oh, my sweet summer child...
          • Most of the innovation does not come from manufacturing. Taking big risks is what pure research does; some of it seems completely pointless at the time it is being done, because the applications of the gained knowledge are unknown, and many things are discovered by accident.
            This is $$$ put into "future work" areas that companies have little incentive to explore, especially publicly traded companies that are always under pressure to cut R&D for greater returns to investors.

  • DARPA has different requirements for chips than the rest of us. For example, they might not want separate "systems management" circuits.
  • by Gravis Zero ( 934156 ) on Saturday June 30, 2018 @05:39AM (#56870056)

    This is actually a project I've read about in the past, so I'll explain. What they are trying to do is make an automatic layout engine for silicon. In effect, it will take your VHDL and turn it into a completed layout that is ready for manufacturing. However, to avoid massive layout times, they also want to be able to use premade layouts for subsystems. If you consider each subsystem to be a block of object code, then the layout engine is a compiler/linker that connects your "main.c" up to all the functions that are already compiled.
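
    To make the analogy concrete, a hedged sketch (block and signal names are invented, and the sub-block definitions are deliberately elided, since in this scheme they would be premade, pre-laid-out subsystems): the designer's "main.c" is just a top level that instantiates and wires blocks somebody else already built.

        // The chip's "main.c": nothing here but instantiation and wiring.
        // cpu_core, sram_64k and uart_lite stand in for premade blocks whose
        // layouts already exist; the layout engine's job would be to drop those
        // finished layouts in and route between them.
        module soc_top (
            input  wire clk,
            input  wire rst_n,
            input  wire uart_rx,
            output wire uart_tx
        );
            wire [31:0] bus_addr, bus_wdata, bus_rdata;
            wire        bus_we;

            cpu_core  u_cpu  (.clk(clk), .rst_n(rst_n),
                              .addr(bus_addr), .wdata(bus_wdata),
                              .rdata(bus_rdata), .we(bus_we));
            sram_64k  u_mem  (.clk(clk), .addr(bus_addr[15:0]),
                              .wdata(bus_wdata), .rdata(bus_rdata), .we(bus_we));
            uart_lite u_uart (.clk(clk), .rst_n(rst_n),
                              .rx(uart_rx), .tx(uart_tx));
        endmodule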

    It's a really good concept, but the laws of physics won't make it an easy task, and much as compiled code rarely beats handwritten assembly, it's unlikely to be competitive with manual layouts.

    • Sounds like an area where machine learning could help in the near future. You know the goals, and you can run a design through simulations to see how close you get to them.

    • Because anybody can use Simulink!

    • It will be competitive, but only on different metrics. Manual layout will win on size, performance, power efficiency, etc., but the new approach will end up winning on design time. That has a larger effect on product cost and time to market for the applications they are targeting.

  • I've been trying to understand what this actually does and after reading the article I still don't understand it!
    The name Silicon Compiler is confusing beyond belief; traditional compilers convert programming languages to assembly, so a Silicon Compiler seems like it would convert between different assembly languages, so code would run no matter the architecture.
    The article seems to mention new ways to wire the different architectures, making me think it's a computer aided architecture design using AI, but then men

    • by AHuxley ( 892839 )
      Like the computers in the 1980s that shipped with BASIC on a chip. Turn the computer on and trust the chip to make the correct code.
    • by mikael ( 484 )

      You can convert a software algorithm in a high-level language into a silicon compiler language like Verilog or VHDL. These support variable types like floating point and variable-sized integers. But everything is done using bits. Each function takes its inputs as sets of bits and produces its outputs as sets of bits, along with a clock signal. The silicon compiler will convert this code into a series of logic blocks. Variables become hardware registers. Conditional statements become AND, OR and NOT logic gates. Maths libr
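
      A small hedged example of that mapping (names invented): the accumulator variable below becomes a register, and the conditional update becomes a comparator plus a multiplexer built out of AND, OR and NOT gates in front of it.

          // Toy illustration of the mapping described above.
          module running_max (
              input  wire       clk,
              input  wire       rst,
              input  wire [7:0] sample,
              output reg  [7:0] acc       // the algorithm's "variable" -> a register
          );
              always @(posedge clk) begin
                  if (rst)
                      acc <= 8'd0;
                  else if (sample > acc)  // comparison -> combinational gates
                      acc <= sample;      // conditional update -> mux before the register
              end
          endmodule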

    • Obviously it compiles a high-level language, looking like Ada or VHDL, into production masks used to create a chip or SoC (made from silicon) on a wafer.
      Hand in your geek card.

    • by DeAxes ( 522822 )

      Thanks for telling me it's OBVIOUSLY, given the name, a hardware-based software compiler, which automatically compiles code on the fly using its own hardware... If you said that to me, you would be completely WRONG. Not only does that already exist, it's very costly and often has no real benefit for the expense.
      From the article: "Essentially, POSH aims to create an open-source library of silicon blocks, and IDEAS hopes to spawn a variety of open-source and commercial tools to automate testing of those bl

  • by StandardCell ( 589682 ) on Saturday June 30, 2018 @09:43AM (#56870546)
    As a former lead ASIC designer, I can say this is one of the most ambitious projects likely ever undertaken in EDA. Companies like Cadence, Mentor and Synopsys have been working on these problems for literally decades now. Everyone wants an easy solution for push-button design, but it is hardly that simple. Consider the following:

    - Synthesis from RTL-to-gate level
    - Functional design rule checks
    - Place and route, including clock routing, PLLs/DLLs, etc.
    - Timing extraction and static timing analysis
    - I/O/SSO and core power
    - Internal signal integrity and re-layout
    - Test insertion and test vector generation
    - Formal verification
    - Functional verification
    - Packaging and ball-out/bonding, especially with core I/O
    - Physical design rule checks / Netlist vs. layout checks

    A suite of tools that does all of this costs into the millions of dollars today, and is really a subscription, as there are always bugs and improvements to be made. It also assumes that the silicon vendors' physical design rule decks, which have undergone extensive characterization of limits such as minimum feature widths and notch rules, can yield economically at a sufficient level, and that the gate and hard-IP/mixed-signal IP libraries have been validated. Front-end functional design often requires re-architecture due to considerations that only appear when physically implementing the chip. All of this, of course, presumes that we don't run into additional phenomena that were irrelevant at larger process nodes (e.g. around ~250nm/180nm, wire delay came to dominate gate delay; at 90nm/65nm, RC signal integrity models gave way to RLC; plus power/clock gating, and multi-gate finFETs vs. single-gate planar past 22nm, etc.).
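
    To make just the first item above concrete, a hedged sketch of RTL-to-gate synthesis (the cell names are generic stand-ins, not any real vendor library): the behavioral description on top becomes a structural netlist of library cells like the one below it, and every later step in the list operates on that netlist and its physical realization.

        // Behavioral RTL the designer writes:
        module majority (input wire a, b, c, output wire y);
            assign y = (a & b) | (a & c) | (b & c);
        endmodule

        // Roughly what synthesis hands to place-and-route: the same logic
        // mapped onto a (fictional) standard-cell library.
        module majority_gates (input wire a, b, c, output wire y);
            wire n1, n2, n3;
            AND2_X1 u1 (.A(a),  .B(b),  .Y(n1));
            AND2_X1 u2 (.A(a),  .B(c),  .Y(n2));
            AND2_X1 u3 (.A(b),  .B(c),  .Y(n3));
            OR3_X1  u4 (.A(n1), .B(n2), .C(n3), .Y(y));
        endmodule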

    A push-button tool would have to take all of this into consideration. But let's face it: as well-intended as this is, you probably need another couple of orders of magnitude of money thrown at it to even begin succeeding, and that's under the fundamental assumption that you don't have additional phenomena or alternatives to manufacturing. And that's the fundamental catch that is not captured in the article: we are chasing an ever-changing animal called process technology advancement, one that has created new issues for us over the last few decades and likely will continue to until we reach the limits of physics as we can manipulate them.

    Bottom line: love the idealism, but don't buy into the hype given this piddling level of investment.
  • by Required Snark ( 1702878 ) on Saturday June 30, 2018 @06:12PM (#56872384)
    This is what DARPA is supposed to do: tackle problems that are too risky for private funding. The phrase for it is "DARPA Hard".

    They often get a lot of bang for the buck because they attract more investment from partners in both academic research and business. That is what the DARPA Grand Challenge [wikipedia.org] projects are all about. Remember the autonomous vehicle race from California to Las Vegas? Or the emergency rescue robot competition? Things like that.

    In fact, both of those were "failures." The goals were not met. The robots fell over. No team finished the Mojave race. The prizes were not awarded. But the government got more than its money's worth, and everyone who participated learned a whole lot. For DARPA that was a good result.

    So stop whining about the futility of the project just because you are too short-sighted to understand what it is really about. There are plenty of very, very smart, motivated people who do get it, and they are going to produce some very interesting work. Go back to your computer and watch someone else play a video game. It's all you're good for.

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...