Russia, Europe Seek Divorce From U.S. Tech Vendors
dcblogs writes "The Russians are building a 10-petaflop supercomputer as part of a goal to build an exascale system by 2018-20, in the same timeframe as the US. The Russians, as well as Europe and China, want to reduce reliance on U.S. tech vendors and believe that exascale system development will lead to breakthroughs that could seed new tech industries. 'Exascale computing is a challenge, and indeed an opportunity for Europe to become a global HPC leader,' said Leonardo Flores Anover, who is the European Commission's project officer for the European Exascale Software Initiative. 'The goal is to foster the development of a European industrial capability,' he said. Think what Europe accomplished with Airbus. For Russia: 'You can expect to see Russia holding its own in the exascale race with little or no dependence on foreign manufacturers,' said Mike Bernhardt, who writes The Exascale Report. For now, Russia is relying on Intel and Nvidia."
Where is the infrastructure? (Score:3, Interesting)
Russia doesn't have the silicon crystal production facilities; they'll be stuck using the same European, American, and Japanese lithography tools everyone else does. With no fabs, they have none of the economies of scale in production that Samsung, Intel, AMD, Toshiba, etc. have.
Re:Industrial Espionage. (Score:5, Interesting)
I lived there for a while, went to uni there, am married to a Chinese person, and have many Chinese friends, both here and in China. I'm very comfortable saying that Chinese people do not innovate very well. In general, creativity and innovation are not traits that are encouraged in Chinese society; the culture encourages conformity and the like. In school, they study very, VERY hard, but it's rote memorization, not creativity. They are much better at copying others' ideas than coming up with their own. That's not US marketing speaking; those are my own observations.
Let us not forget that "stealing" went both ways. (Score:5, Interesting)
For instance, the F-35 JSF started its life as a carbon copy of the Yak-141, the blueprints for which Lockheed Martin blatantly stole from the Russians by first forming and then dissolving a "partnership" with the Yakovlev bureau, all in the span of about a year. Don't believe me? Check out the videos below:
http://www.youtube.com/watch?v=23ohOKthO18 [youtube.com] - Yak 141, circa 1987
http://www.youtube.com/watch?v=Ki86x1WKPmE [youtube.com] - F-35, 2011
See other videos of the Yak-141, and look at it from the rear in particular. The F-35 is a blatant copy, just with today's electronics and stealth.
Uses for exascale machines? (Score:2, Interesting)
As a scientific user of large HPC machines like Franklin, Hopper, HECToR, etc., this race for exascale machines seems like the tail wagging the dog. There are currently very few codes which can actually use an exascale supercomputer, due to the extreme parallelism needed. If you have to make use of several hundred thousand cores, anything beyond embarrassingly parallel Monte Carlo problems struggles with moving data around. Something like Intel's Knights Corner chip might help OpenMP-MPI hybrid codes, but a lot of conferences now are focussed on how to design codes to make use of these big machines. It would be more useful to put the money into more, smaller (say 100,000-core) machines, so more runs can be done with different inputs.
The CS guys love doing a single massive run which burns through CPU time on a headline-grabbing number of processors, but that's actually not very useful for scientific research. More useful is being able to run the code tens or hundreds of times with a quick turnaround (not waiting days in a queue) with different inputs. Whilst this exascale race is a good way to get money into the maths/CS labs, in my opinion it's not going to give the massive leap in understanding which is promised.
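For readers unfamiliar with the term, a minimal sketch of what "embarrassingly parallel" Monte Carlo means (this is an illustrative toy using Python's multiprocessing, not one of the production codes the parent mentions): each worker runs completely independently, and the only communication is a trivial reduction at the end. It's the lack of mid-run data movement that makes such jobs scale easily, and the presence of it that makes everything else hard at extreme core counts.

```python
# Embarrassingly parallel Monte Carlo estimate of pi:
# every worker draws its own random points independently;
# the only "communication" is summing the hit counts at the end.
import random
from multiprocessing import Pool


def count_hits(args):
    """Count random points in the unit square that fall inside the quarter circle."""
    seed, samples = args
    rng = random.Random(seed)  # independent stream per worker
    return sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)


if __name__ == "__main__":
    workers, samples_each = 4, 100_000
    with Pool(workers) as pool:
        hits = pool.map(count_hits, [(s, samples_each) for s in range(workers)])
    pi_estimate = 4.0 * sum(hits) / (workers * samples_each)
    print(f"pi ~ {pi_estimate:.3f}")
```

Contrast this with, say, a grid-based PDE solver, where every timestep requires neighbouring workers to exchange boundary data, and the network, not the cores, becomes the bottleneck.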
Re:Uses for exascale machines? (Score:5, Interesting)
Really large, tightly coupled clusters are usually offered in a time-sharing arrangement. One exascale system could normally support hundreds to thousands of concurrent users, each with a temporary slice of the machine. Truly large-scale jobs would be run only at specific times.
At that point you can offer the facility to a much wider range of users, and be much less selective about what kind of jobs are worthy of getting time on the machine. That easy availability is arguably more important than the peak performance, but is of course not headline-grabbing in the same way.