World's Fastest Supercomputer Coming To US in 2021 From Cray, AMD (cnet.com) 89
The "exascale" computing race is getting a new entrant called Frontier, a $600 million machine with Cray and AMD technology that could become the world's fastest when it arrives at Oak Ridge National Laboratory in 2021. From a report: Frontier should be able to perform 1.5 quintillion calculations per second, a level called 1.5 exaflops and enough to claim the performance crown, the Energy Department announced Tuesday. Its speed will be about 10 times faster than that of the current record holder on the Top500 supercomputer ranking, the IBM-built Summit machine, also at Oak Ridge, and should surpass a $500 million, 1-exaflops Cray-Intel supercomputer called Aurora to be built in 2021 at Argonne National Laboratory. There's no guarantee the US will win the race to exascale machines -- those that cross the 1-exaflop threshold -- because China, Japan and France each could have exascale machines in 2020. At stake is more than national bragging rights: It's also about the ability to perform cutting-edge research in areas like genomics, nuclear physics, cosmology, drug discovery, artificial intelligence and climate simulation.
Re:Green Power? (Score:4, Informative)
Of course there is. Here are the Blue Waters reports: https://bluewaters.ncsa.illino... [illinois.edu]
I am sure other systems have their own reports.
Re:Green Power? (Score:4, Informative)
From the very summary above: "... It's also about the ability to perform cutting-edge research in areas like genomics, nuclear physics, cosmology, drug discovery, artificial intelligence and climate simulation.".
Both of the most fundamental theories we have at the moment are very computationally demanding. While General Relativity (because of its scale) is more about progressing our knowledge about the nature of our Universe, quantum theory has very real implications in today's technology, e.g. when calculating the physical properties of new alloys, or in simplified versions for chemistry, biology and medicine, not to mention climate, brain structure and other simulations.
So as a result we get: better materials, chemicals, medicine, climate and weather forecasts, AI, and a better understanding of plasma behavior (useful for the energy holy grail of fusion-based generators).
An example from cosmology is the recently published first direct black hole image from the Event Horizon Telescope [eventhoriz...escope.org], which took about two years to compute and verify. To head off the likely follow-up question of "why do we need this...", here is my (not an academic's) answer: black holes are the very places (besides the Big Bang) where General Relativity and Quantum Theory collide, which can give us insights into the nature of the not-yet-known theory of Quantum Gravity, which might be quite useful, I would say.
Re: (Score:3)
General Relativity (because of its scale) is more about progressing our knowledge about the nature of our Universe
Hold on there. Without detailed calculations using the mathematics of General Relativity, your GPS would be off by kilometers. [ohio-state.edu] General relativity is part of your daily life; it is not just about faraway places in the universe.
Just a nit. Nice post.
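For what it's worth, here is the standard back-of-the-envelope estimate behind that claim, sketched with rough textbook constants of my own (not taken from the linked page): special relativity slows the satellite clocks, general relativity speeds them up, and the net drift of roughly +38 microseconds per day corresponds to kilometers of accumulated ranging error per day if left uncorrected.

```python
# Back-of-the-envelope GPS clock drift from relativity.
# All constants are approximate textbook values; this is an
# order-of-magnitude sketch, not a precise GPS error budget.
import math

c = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6    # mean radius of Earth, m
r_sat = 2.656e7      # GPS orbital radius (~20,200 km altitude), m
day = 86400.0        # seconds per day

v_sat = math.sqrt(GM / r_sat)  # circular orbital speed, ~3.9 km/s

# Special relativity: the moving satellite clock runs slow by ~v^2 / (2 c^2).
sr = -0.5 * v_sat**2 / c**2
# General relativity: the clock higher in Earth's potential runs fast.
gr = (GM / c**2) * (1.0 / r_earth - 1.0 / r_sat)

net = sr + gr
print(f"SR: {sr * day * 1e6:+.1f} us/day, GR: {gr * day * 1e6:+.1f} us/day")
print(f"net drift {net * day * 1e6:+.1f} us/day "
      f"-> ~{net * day * c / 1000:.0f} km/day of ranging error if uncorrected")
```

With these numbers the satellite clocks gain about 38 microseconds a day relative to the ground, and light travels roughly 11-12 km in that time, hence "off by kilometers" within a day if the correction were switched off.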
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
1. Simulation with more advanced physics/numerics/whatever, smaller grids/timesteps/whatever, better AI/ML integration, more data throughput
2. Primarily data movement. "Benchmarking" machines have impressive flops numbers but are generally terrible for getting any real work done because their networks/memory systems/storage systems are weak (the rough roofline-style sketch below illustrates why).
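To make that concrete, here is a roofline-style estimate of why an application's achievable flop rate is usually capped by memory bandwidth rather than peak compute. The hardware numbers and arithmetic intensities below are illustrative round figures, not any particular machine's specs:

```python
# Roofline-style estimate: attainable performance is the lesser of peak compute
# and memory bandwidth times the kernel's arithmetic intensity (flops per byte).
# All numbers are illustrative, not a real machine's specs.
peak_flops = 30e12        # 30 Tflop/s per node (made up)
mem_bw = 1.5e12           # 1.5 TB/s per node (made up)

def attainable(flops_per_byte):
    """Peak achievable flop rate for a kernel with the given arithmetic intensity."""
    return min(peak_flops, mem_bw * flops_per_byte)

# Sparse/stencil kernels move far more bytes per flop than dense matrix multiply.
for name, ai in [("sparse solve", 0.25), ("stencil", 0.5), ("dense GEMM", 200.0)]:
    perf = attainable(ai)
    print(f"{name:12s}: {perf / 1e12:6.2f} Tflop/s ({perf / peak_flops:6.1%} of peak)")
```

Benchmarks like HPL live near the top of that roofline; most real applications live near the bottom, which is why the network, memory system and storage matter more than the headline flops number.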
Re: (Score:3)
Weather forecasting and modeling is all done on supercomputers, and requires constantly running computations thanks to the Earth never slowing down! And while you might live in an area where weather doesn't impact life too much (like here in the Pacific Northwest), in regions that get hurricanes and tornadoes, the path predictions can literally be life or death, and knowing them further out could prevent billions in damages by letting people hunker down ahead of time.
Re: (Score:2)
Is there a list of scientific achievements from past supercomputers
Yes, but don't bother your head about it, it's pretty much all beyond the comprehension of somebody unable to answer such a simple question for themselves. Back to your Tiktoks now.
Re:Are these machines getting smaller? (Score:4, Funny)
Frontier is expected to be able to run Crysis at 4K resolution with medium settings.
Re: (Score:2, Funny)
Re: (Score:1)
I'm an engineer working on the hardware architecture for Frontier. What you say is correct, but sometimes we do get frame dips below 60 FPS on this configuration.
Under-budgeted (Score:2)
Re: (Score:1)
So can a penis.
How fast is fast enough (Score:1)
Other than the "because we can" reason (which, sure, I suppose is reason enough to do it all by itself), How fast is fast enough? Several orgs are constantly racing to build ever faster machines that can process more and more data. Constantly. Like every few years (at most) Oak Ridge announces a new plan for a new machine. Can they not get more use out of their current super-computers? When is fast fast enough?
Re: (Score:3)
As far as computationally intensive problems go, it's turtles all the way up.
Re:How fast is fast enough (Score:5, Insightful)
Well, the lifespan of a supercomputer is about 5 years. Also, energy efficiency keeps improving, and energy consumption is about 1/3 of the total cost of the system (the other two thirds being roughly the system itself and manpower). The machines are packed and scientists can definitely use the cycles. So it is not surprising that we keep building/refreshing them.
A harder question is whether we need one 1-exaflop computer or 1,000 1-petaflop computers. That's a much harder question to answer. The 1,000 petaflop-class machines would also be quite valuable and are easier to build.
But there are scientific questions that you can only answer on a big system. I worked with physicists about 10 years ago, and some of their calculations required about 2 PB of main memory. At the time, that was about half the memory of the biggest supercomputer in the US.
Re: (Score:1)
Re: (Score:2)
What kind of calculation were they doing that required about 2PB of main memory?
Probably one with some big numbers. But yes, I would also love to know the serious answer to that question. It makes my head hurt to think about crunching that much data in one sitting...
Re: (Score:3)
Their application was "Facebook" - yes, all of it.
Opening Microsoft Office 2020 (Score:3)
They were attempting to run a preview build of Microsoft Office 2020. Unfortunately, it went to swap and started thrashing.
Re:How fast is fast enough (Score:5, Interesting)
The basic answer is that they were trying to eigensolve a sparse matrix that is 2 PB large. You don't want to go to disk on a matrix that big or you waste all your time in IO. Recomputing the matrix is too expensive, so you can't do that either. Just the vectors you need to keep in your eigensolver are dozens of GB large. So really your only solution is to store the sparse matrix in (distributed) memory. Actually, that use case was one of those considered when designing what became burst-buffer technology.
The matrix itself comes from the expansion of the Schrödinger equation for a particular atom when doing an ab initio method in a no-core configuration interaction calculation. (These are just words to me, I don't really know what they mean.) The number of rows and columns grows exponentially in the number of particles that make up the atom, and the number of non-zeros grows with the types of interactions (2-body, 3-body, ...) you consider. If you look at boron-10 (which is smaller than what they were interested in), with a Schrödinger equation truncated to a point that is probably not useful for doing physics (but helpful for running tests), the matrix was about 1 TB large and you need to keep 30 GB of vectors.
I am not a physicist, so I can't tell you much more than that, and I apologize if I misrepresented the physics aspect of the question. The paper is here: https://iopscience.iop.org/art... [iop.org] In case you wonder, I am author number 8.
I am not sure they managed to run the 2 PB case, because getting access to these machines is difficult. But they have looked at smaller atoms and run 100 TB problems.
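For readers who want to see what "eigensolving a sparse matrix" looks like in practice, here is a minimal single-node sketch using SciPy's Lanczos-type solver. The matrix here is a small random stand-in, not an actual configuration-interaction Hamiltonian, and the real calculation distributes the matrix across thousands of nodes with MPI, which this toy does not attempt:

```python
# Toy sparse symmetric eigensolve: the single-node analogue of the
# distributed-memory calculation described above. The random matrix below is a
# hypothetical stand-in for a (vastly larger) CI Hamiltonian.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 10_000            # real problems: billions of rows, petabytes of non-zeros
density = 1e-4        # CI Hamiltonians are extremely sparse

rng = np.random.default_rng(0)
a = sp.random(n, n, density=density, random_state=rng, format="csr")
h = (a + a.T) * 0.5 + sp.diags(rng.uniform(0.0, 10.0, n))  # symmetric "Hamiltonian"

# Lanczos-type iteration (ARPACK) for the few lowest eigenvalues. Only the
# matrix and a handful of Krylov vectors must stay resident, which is why
# keeping the matrix in (distributed) memory dominates the memory budget.
vals, _ = eigsh(h, k=5, which="SA")
print("five lowest eigenvalues:", vals)
```

At the scale described above you would use a distributed-memory Lanczos/LOBPCG implementation rather than a single-node ARPACK call, but the shape of the computation is the same: a sparse matrix kept resident in memory plus a small set of iteration vectors.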
Re: (Score:1)
Please let's stop calling clusters "supercomputers".
Re:How fast is fast enough (Score:4, Informative)
That's not true. None of the codes that run at that scale are serial. On small systems, maybe, but on a top-20 machine you won't get cycles unless you can show you can use them meaningfully.
It's all tightly coupled large parallel applications, large sparse solvers, PIC simulations. There is a reason a significant fraction of these machines' cost is in the network.
No limit (Score:3)
Can they not get more use out of their current supercomputers? When is fast fast enough?
For the sorts of problems you use a supercomputer for there is no useful limit to "fast enough". There are always problems that require more computational power to solve than we currently possess or which take long enough to solve on our current machines as to render them busy and thus unavailable for other problems while they crunch away. Essentially an opportunity cost - by solving one problem you necessarily have to wait to solve another until the processor time is available. So faster machines let yo
Re: (Score:2)
Re: How fast is fast enough (Score:1)
No, the sky is not the limit here. This is the point. Strong scaling on these systems is severely impacted by the communication of data between each node during a job. The more nodes, the greater this overhead. What is being attempted is to find ways to: A) make this overhead smaller, and B) make the computation portion of the code significantly more robust (not just faster) so that communication has less impact on overall runtimes.
These things require large systems to test, and new interconnection types, along
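A crude illustration of the strong-scaling limit described above, with made-up constants: the compute part of a timestep shrinks as you add nodes, but the communication part does not, so past some node count adding hardware actually slows the job down.

```python
# Toy strong-scaling model: a fixed amount of work split across p nodes, plus a
# communication term that grows slowly with node count. Every constant here is
# invented purely for illustration.
def step_time(p, compute_total=100.0, comm_latency=0.05, comm_per_node=0.002):
    compute = compute_total / p                # ideal division of the work
    comm = comm_latency + comm_per_node * p    # halo exchange / collective overhead
    return compute + comm

base = step_time(1)
for p in (1, 16, 256, 4096, 65536):
    t = step_time(p)
    print(f"{p:6d} nodes: time/step = {t:8.3f}, speedup = {base / t:6.1f}x")
```

With these numbers the speedup peaks somewhere in the hundreds of nodes and then collapses, which is exactly why so much of the engineering effort (and cost) goes into the interconnect and into restructuring algorithms to communicate less.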
Re: (Score:2)
Umm, I think the GP was talking about the sky being the limit for computational demand, not capacity.
There are plenty of research disciplines that will happily saturate any supercomputer resources you throw at them.
Re: (Score:1)
i.e. the massive amount of math needed for new climate change and agricultural "calculations".
Re: (Score:2)
Depends what you mean by "useful". Most of it is physics stuff, chemistry stuff, weather stuff, and biology stuff.
I live in the south east of the US, and these systems (the research that came from them and the direct usage of the systems) gave us early tornado warning and path predictions. That's useful.
A lot of the physics that made fiber optics, wireless networking, quantum computing, and carbon nanotubes possible came from these kinds of systems. They're useful.
A supercomputer has a lifetime of about 5 years,
Re: (Score:1)
A pile of paper cards got delivered to run on the computer. Hours later the printed-out results were handed back.
The resulting thermonuclear weapons worked when tested.
The math done was correct and the computer was ready for more work.
Nowadays it's all about the GUI and simulations of decades-old parts.
Re: (Score:2)
Obligatory "You could have spent ten days writing it in something other than Python and it would be finished by now" comment.
Re: (Score:1)
All goes back to skill and ability.
FTFY (Score:2)
And cryptograp4\|&.,k@.
no carrier
Yeah but (Score:2)
Most importantly - (Score:2)
Oh wait - it's a "research system".
Cracking (Score:1)
You wouldn't use a general-purpose supercomputer for cracking TLS ciphers; it would be horribly inefficient. You'd use, at the very least, FPGAs, or dedicated silicon for the more common ciphers.
Something like this:
https://en.wikipedia.org/wiki/EFF_DES_cracker
That thing crushes DES and was built by a couple of small companies and the EFF. You think the NSA would buy off the shelf hardware and announce it in a press release? They have their own chip fab, you know.
Re:Most importantly (Score:2)
Oh wait - it's a "research system". /s
huh?
Yes it is. Such a system is a bit crap for cracking encryption keys.
Will run SUSE Linux (Score:3)
The Cray supercomputers run a SUSE-based distribution. SUSE is very advanced and rock solid, with a fast package management system gained during the Novell days, so it is well suited. Nearly all supercomputers run Linux, which is impressive: Linux runs everything from a smartphone to a supercomputer. With all of the scalability improvements Linux is getting with io_uring etc., and the improved flexibility, configurability and control systemd offers, it's becoming a very flexible enterprise-grade OS.
Re: (Score:1)
The Cray supercomputers run a SUSE-based distribution. SUSE is very advanced and rock solid, with a fast package management system gained during the Novell days, so it is well suited. Nearly all supercomputers run Linux, which is impressive: Linux runs everything from a smartphone to a supercomputer. With all of the scalability improvements Linux is getting with io_uring etc., and the improved flexibility, configurability and control systemd offers, it's becoming a very flexible enterprise-grade OS.
Umm, no. The package system came from Ximian, which Novell bought. Systemd is a desktop product which should have stayed out of the enterprise.
Re: (Score:2)
systemd is garbage that Redhat squatted and shat on the enterprise world.
The SLES-based "Cray Linux Environment" could work fine without it, in fact better without it, from what I've seen on the SLES servers I have to manage. systemd is bloated, unnecessary crap in the enterprise space, and when things go wrong it impedes recovery and troubleshooting.
even for cnet this is bad (Score:2)
Even for Cnet this is a bad article. I particularly enjoyed the "1.5 quintillion calculations per second, a level called 1.5 exaflops".
Algorithms beat speed ... (Score:2)
... as demonstrated by Watson's dismal performance [businessinsider.com]
IBM is shuttering its Watson AI tool for drug discovery
Re: (Score:2)
For sure. But the codes that are running on these machines are already using the best algorithms that we know for these problems. So you get to a point where going bigger is the only option. (Besides not computing it at all.)
Let me guess (Score:2)