DARPA Invests $100 Million In a Silicon Compiler (eetimes.com)
The Defense Advanced Research Projects Agency (DARPA) will invest $100 million into two research programs over the next four years to create the equivalent of a silicon compiler aimed at significantly lowering the barriers to designing chips. "The two programs are just part of the Electronics Resurgence Initiative (ERI) expected to receive $1.5 billion over the next five years to drive the U.S. electronics industry forward," reports EE Times. "ERI will disclose details of its other programs at an event in Silicon Valley in late July." From the report: Congress recently added $150 million per year to ERI's funding. The initiative, managed by the Defense Advanced Research Projects Agency (DARPA), announced on Monday that the July event will also include workshops to brainstorm ideas for future research programs in five areas ranging from artificial intelligence to photonics. With $100 million in funding, the IDEAS and POSH programs represent "one of the biggest EDA research programs ever," said Andreas Olofsson, who manages the two programs.
Together, they aim to combat the growing complexity and cost of designing chips, now approaching $500 million for a bleeding-edge SoC. Essentially, POSH aims to create an open-source library of silicon blocks, and IDEAS hopes to spawn a variety of open-source and commercial tools to automate testing of those blocks and knitting them into SoCs and printed circuit boards. If successful, the programs "will change the economics of the industry," enabling companies to design in relatively low-volume chips that would be prohibitive today. It could also open a door for designers working under secure regimes in the government to make their own SoCs targeting nanosecond latencies that are not commercially viable, said Olofsson.
Re: (Score:3)
It's about time that such a strategic industry gets revived.
Chip design is dominated by America. There is no need to "revive" it.
Chip manufacturing will still be done in Asia.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Actually, AMD already ships the silicon chips to either NY or FL to be packaged. Idk about Intel.
Re: (Score:2)
Labour costs aren't much cheaper in Asia. The win is in logistics.
Re: (Score:2)
The US does need to keep innovating like this to stay ahead though. China is producing some really competitive chips now, especially for mobile devices (CPUs, cellular modems).
Re: (Score:2)
The US does need to keep innovating like this to stay ahead though. China is producing some really competitive chips now, especially for mobile devices (CPUs, cellular modems).
It seems very shortsighted to me that the USA has put itself in this situation - because this is a great scenario for China, not so much for the USA. When the USA keeps innovating and China immediately takes the innovation (for example, via laws that force American companies to relinquish the intellectual property, or via straightforward theft) and mass produces it, the money and power go to China. In this pairing, the USA is the weak partner; if China blocks the production of new USA designs, the USA has n
Geh. (Score:1)
That makes sense if you look at the commercial chip design market. The process is error prone and expensive.
It makes a hell of a lot less sense if you look at some other people busy in the space. Like how Chuck Moore does his chip designs with a "silicon compiler" written by a single person. Meaning that DARPA could have effective chip design tools for as little as a hundred thousand dollars, iff they manage to find the right person to build it for them. Software design is funny like that, and we haven't st
Re: (Score:2)
They don't care about saving a few million dollars if they hire the exact right person, they want to get a good result without having to rely on hiring the exact right person.
Their goal is to develop partner businesses through the giving of contracts, they're not trying to get the K-Mart Special.
Re:Geh. (Score:5, Interesting)
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:3)
... Synthesis tools already exist, and every fab has a design library of standard transistors. While the tools are complicated and very expensive (though open source versions exist), they are there. So the problem you're describing is already solved. Designers describe logic, help a little with floorplan and constraints, and get a design out minutes to hours later.
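To make that flow concrete, here is a minimal sketch of the kind of RTL a designer hands to a synthesis tool (the module and signal names are invented for illustration). The description is purely behavioral; the tool picks the actual gates and flip-flop cells from the fab's standard library, and a place-and-route tool then turns that netlist into geometry:

module majority3 (
    input  wire clk,
    input  wire a, b, c,
    output reg  vote
);
    // Behavioral description only: synthesis decides which AND/OR
    // cells and which flip-flop to use, and layout decides where they go.
    always @(posedge clk)
        vote <= (a & b) | (a & c) | (b & c);
endmodule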
What's difficult is that we don't have great programming mechanisms to describe parallel logic, or to synthesize sequential descriptions of ta
Re: (Score:1)
Sounds like a job for machine learning and blockchain.
Quick. Write up a press release.
Re: (Score:1)
There are existing silicon compiler languages like VHDL, and those are built up from standard libraries using templates in the same way as C++. Instead of passing class objects, you are passing blocks of bits. Even if you do get the design working and verified, there are still problems with the electromagnetic fields and crosstalk as electrons move around, as well as clock timing with all the different parts operating in parallel. So tests have to be done with FPGAs, prototype silicon and then the final chip.
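As a rough illustration of the "templates, but passing blocks of bits" point, here is a minimal Verilog sketch (names made up for this example): a parameterized module plays the same role as a VHDL generic or a C++ class template, except that every port is just a vector of bits plus a clock:

module bit_register #(parameter WIDTH = 8) (
    input  wire             clk,
    input  wire [WIDTH-1:0] d,   // a block of bits in
    output reg  [WIDTH-1:0] q    // a block of bits out
);
    always @(posedge clk)
        q <= d;
endmodule

// Instantiated like a template, once per width you need, e.g.:
// bit_register #(.WIDTH(32)) pc_reg (.clk(clk), .d(next_pc), .q(pc));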
Large p
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
You've obviously not heard of the GreenArrays GA144 chip. Insane amounts of compute power in a small passively cooled package.
Re: (Score:2)
Insane? It's got 144 extremely limited cores, with tiny memories, small registers, no floating point, no cache, and no DDR memory interface.
It's low power, but that's the only good thing.
small budget (Score:2)
In an industry that already spends billions of dollars on design and manufacturing of chips, as per the example of $500 million for a single SoC, what are you going to do with a measly $100 million?
Re: (Score:2)
How much of that $500m is legit R&D, and how much is marketing, and how much is payments to partners to use it? How much of it is bogus expenses designed to avoid taxes, and how much of it is actual cash money that walked out the door?
So we find out, it doesn't take $500m to make an IC.
Actually, I've got a ~$20 FPGA dev board on my desk right now, and it isn't going to take me $500m to write a little verilog. ;)
Compilers are hard, but still, they're generally written by a very small software team. The h
Re: small budget (Score:1)
There's a world of difference between coding up an FPGA or the cut-and-paste IP methodology used in commodity ASIC design, and the processes that went into that Intel CPU or Nvidia GPU sitting in your game toaster. The extreme scales and manufacturing mean shit goes beyond connecting the dots or even electronic design into serious physics and managing heat and the likes. You're not going to encounter that on your Xilinx hobby board, but it's real and it's expensive with commercial CPU and GPU (etc) design
Re: (Score:3)
Compilers are hard, but still, they're generally written by a very small software team.
Compilers for hardware targets are a lot harder than for a general purpose CPU, because the hardware offers many more degrees of freedom in implementing a design.
Re:small budget (Score:5, Interesting)
How much of that $500m is legit R&D, and how much is marketing, and how much is payments to partners to use it? How much of it is bogus expenses designed to avoid taxes, and how much of it is actual cash money that walked out the door?
99% of it goes into making masks, configuring equipment, and testing out new designs, so basically all of it. Any kind of development takes iteration to achieve - think of it as if you had to pay several million dollars every time you hit the debug button in Visual Studio. That's the equivalent of chip R&D. It takes months of engineers working to craft and machine simple things like masks - on average a mask alone runs a million dollars due to the failure rates in making them and the labor required to do so, and it takes several for the different layers of a chip. Once you've shelled out 10-20m you then have to spend another few million on configuring the equipment to use it and materials which get scrapped in all your calibration fuckups. When all is said and done you're at about 25-30m when you try to debug it. They certainly try to cut costs and find all the possible bugs in that singular debug session, but it doesn't happen, so 4 iterations later if you're lucky you have a new chip at 100m. I'm not actually sure this project will do much if anything to help since the bulk of the cost is in making the things to make the chips (masks, etc) but it seems interesting.
Actually, I've got a ~$20 FPGA dev board on my desk right now, and it isn't going to take me $500m to write a little verilog. ;)
Compilers are hard, but still, they're generally written by a very small software team. The hardware team would not be bigger, if anything it would be smaller.
Do you know how that FPGA compiler works? Chances are it's made by one of two companies (the open source cores for FPGAs are terrible) and you've likely noticed it takes around a dozen gigabytes to install the compiler. Now consider that it only does arrangements of flip flops and not actual hardware design. Hardware design is like a 2D (and for chips of any complexity, 3D) version of Tetris-like compilation. You not only have to compile things in sequence, you also have to make sure they work in parallel and FIT onto a constrained space in the most efficient manner - AND they have to do so without doing things like creating inductive effects which make bits tunnel to the wrong channel of a bus or otherwise screw up calculations - AND you have to take into account heat dissipation - AND you have to take into account the limited external IO pins - AND you have to take into account the limited internal IO pins between those Tetris-like blocks - AND you have to take into account changing hardware (how long until you have to scrap the whole compiler and start over because your transistor dimensions changed? 6 months?)
This isn't software design, software is super fucking easy compared to hardware (hint: FPGAs are still effectively software.)
Re: (Score:2)
No[. H]ardware is harder due to having to deal with physical reality, and getting less chances to fix
That "physical reality" bit is correct but IMHO really glosses over the vast number of physical limitations and constraints that hardware has to deal with. Beyond the electrical design there's the physical layout (real-estate) kingdom. Limitations in layout, or any of the following, sometimes require changes to the electrical design which then modifies the layout. This loop can repeat.
Re: (Score:2)
Let's see how you design software, which is essentially solving a Rubik's cube with an unspecified, potentially infinite, number of dimensions. Hardware is a joke.
In hardware you have to fit everything into just two dimensions. In software you have an infinite number of dimensions available, and thus you can fit an infinite amount of complexity into the job without even thinking where it goes. It's obvious that this makes developing software simpler than developing hardware.
Re: (Score:1)
Re: (Score:2)
In an industry that already spends billions of dollars ...
This isn't about chopping down a bigger tree. It is about sharpening the ax.
as per the example of $500 million for a single SoC, what are you going to do with a measly $100 million ?
Make future SoC designs cost a lot less.
Re: (Score:2)
If a $100 million effort can make a $500 million SoC design cost "a lot less", then these projects would already have been done.
Re: (Score:2)
Re: (Score:2)
Why not? Make better tools, charge more money for them.
Companies do not invest in the long term (Score:2)
Most of the innovation does not come from manufacturing. Taking big risks is what pure research does; some of it seems completely pointless at the time it is being done -- the applications of the gained knowledge are unknown at the time; furthermore, many things are discovered by accident.
This is $$$ put into "future work" areas that companies have little incentive to explore, especially publicly traded companies that are always under pressure to cut R&D for greater returns for investors.
Re: (Score:2)
The US mil cannot even trust its most secure systems and the contractors that make the new code.
Too many people from outside the USA, cults, faith groups, contractors with split loyalties, contractors open to blackmail are now wandering around very secure projects.
In the past the project would secure the contractors and get to work.
Due to the way contractors are now hired, everyone can be a security risk and still get a gov/mil job.
The
Re: Why? (Score:2)
Funny thing: I'm a pot-smoking communist sympathizer who thinks national hero Edward Snowden deserves the Medal of Freedom. So I'm pretty sure I would never get a clearance.
But there's no fucking way I would ever sell out my country. No amount of money or blackmail would make me put a back door or other bug in a sensitive military system. Even when I hate my government I still love my country.
So my question is, where the fuck are they getting these contractors? People who somehow DO qualify for a clearance.
Re: (Score:2)
A person is not considered on their security risk but rather on their ability to make the gov be like the wider US community.
Merit, skill, the question of security, education cannot be used to stop a contractor from getting hired.
Criminals can now ask to work for a government/mil.
People with no real history in the USA get to work on the most sensitive projects.
Contractors can set up a front company wit
This is DARPA, so specialised chips (Score:2)
Not a compiler, a layout engine (Score:4, Interesting)
This is actually a project I've read about in the past so I'll explain. What they are trying to do is make an automatic layout engine for silicon. In effect, it will take your VHDL and turn it into a completed layout that is ready for manufacturing. However, to avoid massive layout times, they also want to be able to use premade layouts for subsystems. If you consider each subsystem to be a block of object code, then the layout engine is a compiler that connects your "main.c" up to all the functions already compiled.
It's a really good concept but the laws of physics won't make it an easy task and much like handwritten assembly, it's unlikely to be competitive with manual layouts.
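In compiler terms, the "main.c plus already-compiled functions" analogy looks roughly like this (the block names here are hypothetical stand-ins for pre-made, pre-laid-out macros): the top level only declares and wires the blocks, and the tool's real work is placing them and routing the connections between them under physical constraints:

module soc_top (
    input  wire        clk,
    input  wire        rst_n,
    input  wire [31:0] sensor_in,
    output wire [31:0] dac_out
);
    wire [31:0] bus_data;

    // Pre-made blocks -- the "object code" of the analogy. cpu_core and
    // adc_bridge are hypothetical hard macros; their netlists and layouts
    // would be delivered separately by the block library, not this file.
    adc_bridge u_adc (.clk(clk), .rst_n(rst_n), .pins(sensor_in), .data(bus_data));
    cpu_core   u_cpu (.clk(clk), .rst_n(rst_n), .data_in(bus_data), .data_out(dac_out));
endmodule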
Re: (Score:2)
Sounds like an area where machine learning could help in the near future. You know the goals, and you can run design through simulations to see how close you get to the goal.
Re: (Score:2)
Re: (Score:2)
Because anybody can use Simulink!
Re: (Score:2)
It will be competitive, but only on different metrics. Manual layout will win on size, performance, power efficiency, etc., but the new approach will end up winning on design time. This has a larger effect on product cost and time to market for the applications they are targeting.
Re: (Score:2)
yeah, that's the whole "much like handwritten assembly" part, duh. -_-
Re: (Score:2)
I was not agreeing with you as you seem to assume - read it again. Duh
Re: (Score:2)
99 million for patent lawsuits.
DARPA, working on government applications. Patents do not matter.
What does it do? (Score:2)
I've been trying to understand what this actually does and after reading the article I still don't understand it!
The name Silicon Compiler is confusing beyond belief; traditional compilers convert programming languages to assembly, so a Silicon Compiler seems like it would convert between different assembly languages, so code would run no matter the architecture.
The article seems to mention new ways to wire the different architectures, making me think it's a computer aided architecture design using AI, but then men
Re: (Score:2)
Re: (Score:2)
You can convert a software algorithm in a high level language into a silicon compiler language like Verilog or VHDL. These support variable types like floating point and variable sized integers. But everything is done using bits. Each function takes in inputs as sets of bits, and outputs as sets of bits along with a clock signal. The silicon compiler will convert this code into a series of logic blocks. Variables become hardware registers. Conditional statements become AND, OR and NOT logic gates. Maths libr
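To picture the "variables become registers, conditionals become gates" mapping, here is a minimal sketch (an invented example, not from the article): the if turns into a multiplexer, the counter variable into an 8-bit register, and the addition into an adder pulled from a library:

module counter_or_hold (
    input  wire       clk,
    input  wire       enable,        // the "if" condition
    output reg  [7:0] count          // a variable -> an 8-bit register
);
    always @(posedge clk)
        if (enable)                  // becomes a 2:1 multiplexer in hardware
            count <= count + 8'd1;   // the "+" maps to an adder
        else
            count <= count;          // hold the current value
endmodule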
Re: (Score:2)
Obviously it compiles a high level language, looking like Ada or VHDL, into production masks to create a chip or SoC (made from silicon) on a wafer.
Hand in your geek card.
Re: (Score:2)
Except these things already exist, and chips are still really expensive to make.
Re: (Score:2)
Of course they exist. But a government funded 'restart' might yield quicker, better results than waiting for improvement of the existing tools.
Re: (Score:2)
Thanks for telling me it's OBVIOUSLY, given the name, a hardware-based software compiler, which automatically compiles it on the fly using its own hardware. If you said that to me, you would be completely WRONG. Not only is that already in existence, it's very costly and often has no real benefit for the expense.
From the article: "Essentially, POSH aims to create an open-source library of silicon blocks, and IDEAS hopes to spawn a variety of open-source and commercial tools to automate testing of those bl
Only $100M? Nowhere near enough, DARPA... (Score:5, Insightful)
- Synthesis from RTL-to-gate level
- Functional design rule checks
- Place and route, including clock routing, PLLs/DLLs, etc.
- Timing extraction and static timing analysis
- I/O/SSO and core power
- Internal signal integrity and re-layout
- Test insertion and test vector generation
- Formal verification
- Functional verification
- Packaging and ball-out/bonding, especially with core I/O
- Physical design rule checks / Netlist vs. layout checks
A suite of tools that does all of this costs into the millions of dollars today, and is really a subscription as there are always bugs and improvements to be made. It also assumes physical design rule decks from the silicon vendors that have undergone extensive characterization on limits such as minimum feature widths and notch rules can yield to a sufficient level economically, and that the gate and hard IP/mixed IP libraries have been validated. Front end functional design often requires re-architecture due to considerations when physically implementing the chip. All of this, of course, presumes that we don't run into additional phenomena that were irrelevant at larger process nodes (e.g. at ~250nm/180nm, wire delay dominated gate delay, and at 90nm/65nm, RC signal integrity models gave way to RLC, plus power/clock gating, multi-gate finFETs vs. single-gate planar past 22nm, etc.).
A push-button tool would have to take all of this into consideration. But let's face it... as well-intended as this is, you probably need another couple of orders of magnitude of money thrown at this to even begin succeeding, under the fundamental assumption you don't have additional phenomena like alternatives to manufacturing. And that's the fundamental catch that is not captured in the article: we are chasing an ever-changing animal called process technology advancement that has created issues for us over the last few decades and likely will continue until we reach the limits of physics as we can manipulate them.
Bottom line: love the idealism, but don't buy into this hype with this piddle of investment.
"DARPA Hard" (Score:3)
They often get a lot of bang for the buck because they attract more investment from partners in both academic research and business. That is what the DARPA Grand Challenge [wikipedia.org] projects are all about. Remember the autonomous vehicle race from California to Las Vegas? Or the emergency rescue robot competition? Things like that.
In fact, both of those were "failures". The goals were not met. The robots fell over. No team finished the Mojave race. The prizes were not awarded. But the government got more than its money's worth. And everyone who participated learned a whole lot. For DARPA that was a good result.
So stop whining about the futility of the project just because you are too shortsighted to understand what it is really about. There are plenty of very, very smart, motivated people who do get it, and they are going to produce some very interesting work. Go back to your computer and watch someone else play a video game. It's all you're good for.