
University of Texas is Getting a $60 Million Supercomputer (cnet.com)

The University of Texas at Austin will soon be home to one of the most powerful supercomputers in the world. From a report: The National Science Foundation awarded a $60 million grant to the school's Texas Advanced Computing Center, UT Austin and NSF said Wednesday. The supercomputer, named Frontera, is set to become operational roughly a year from now, in 2019, and will be "among the most powerful in the world," according to a statement. To be exact, it will be the fifth most powerful in the world, the third most powerful in the US, and the most powerful at a university.
Comments Filter:
  • Dang (Score:3, Funny)

    by Anonymous Coward on Wednesday August 29, 2018 @10:25AM (#57217730)

    I can't wait to play Quake on that thing.

  • by XXongo ( 3986865 ) on Wednesday August 29, 2018 @10:26AM (#57217736) Homepage
    will they name it "HAL"?
    • by Anonymous Coward

      will they name it "HAL"?

      I'm afraid they can't do that.

  • by ZorinLynx ( 31751 ) on Wednesday August 29, 2018 @10:31AM (#57217788) Homepage

    Back in the day, supercomputers used to be about cutting edge system architecture, making CPUs as absolutely fast as possible, and even shortening connecting wires in the system to squeeze every last bit of performance out of a system. Think back to the Cray systems and such.

    These days, supercomputers are just about who can spend the most money to build the biggest data center and buy the largest number of generic blade servers. It's just not interesting anymore; whoever can spend the most money will have the fastest system simply because they can buy the most blades.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Interconnect still matters.

      Anyone can do it with generic Ethernet.

      Hell, I have a "supercomputer" with 8x 4-core Raspberry Pis. It's faster than the early Crays, and certainly has more memory.

      Then we go up a few notches, and get into Fibre Channel, InfiniBand, and other high-performance stuff.

      It's just not about blades; it's about interconnect, memory utilization, data locality, algorithmic complexity, etc. (see the ping-pong sketch below).

      Heck, even CERN is known to use a "small" Raspberry Pi cluster to tune algorithms.
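
      To make the interconnect point concrete, here is a minimal MPI ping-pong latency sketch in C. It is a generic benchmark pattern, not code from the article or this thread: two ranks bounce a one-byte message back and forth and time the round trips. Run with one rank on each of two nodes, the number it prints reflects the fabric (gigabit Ethernet vs. InfiniBand) far more than the blades themselves.

      /* pingpong.c -- minimal MPI ping-pong latency sketch (illustrative only).
       * Build: mpicc -O2 pingpong.c -o pingpong
       * Run:   mpirun -np 2 ./pingpong   (ideally one rank per node) */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);

          int rank, size;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &size);
          if (size < 2) {                      /* need two players for ping-pong */
              if (rank == 0) fprintf(stderr, "run with 2 ranks\n");
              MPI_Finalize();
              return 1;
          }

          const int iters = 10000;
          char byte = 0;

          MPI_Barrier(MPI_COMM_WORLD);
          double start = MPI_Wtime();

          for (int i = 0; i < iters; i++) {
              if (rank == 0) {                 /* send, then wait for the echo */
                  MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                  MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              } else if (rank == 1) {          /* echo whatever rank 0 sends */
                  MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                  MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
              }
          }

          double elapsed = MPI_Wtime() - start;
          if (rank == 0)
              printf("average one-way latency: %.2f us\n",
                     elapsed / (2.0 * iters) * 1e6);   /* two hops per iteration */

          MPI_Finalize();
          return 0;
      }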

    • It's just not interesting anymore; whoever can spend the most money will have the fastest system simply because they can buy the most blades.

      I was more excited back when System X was built (aka "Big Mac"). It was able to make #3 on the list for less than $6,000,000.
      https://en.wikipedia.org/wiki/... [wikipedia.org]

    • by bobby ( 109046 )

      Agreed, which I fear makes me sound like a "back in my day" curmudgeon. The liquid-cooled Crays were the coolest thing, and way cooler than sci-fi stuff. But I was a kid, dreaming of the day I'd program a Cray, not knowing I'd have more than Cray-1 power under my fingers in a laptop. But with far more code for it to wade through...

      I don't see it in TFA, but a quick search reveals it appears to be a huge pile of Dell blades, which makes sense that they'd buy from Dell. It'd be nice to see some specs: Intel or AMD CPUs?

      • The liquid cooled Crays were the coolest thing

        Cool for their day, but my iPhone has way more compute capacity today. Custom CPUs can't compete with a 14nm fab, and never will again.

        it appears to be a huge pile of Dell blades

        It is more than that. What makes it a "supercomputer" is the fast interconnects between the blades.

        It'd be nice to see some specs: Intel or AMD CPUs?

        Who cares? Most of the compute capacity is in the GPU, not the CPU. The press release mentions Nvidia.

        This is just a funding announcement. It is likely light on tech details because the details haven't actually been ironed out yet.

      • There was a mention of Intel and Nvidia in the article, so that's my guess: Intel Xeons and Nvidia cards.
    • whoever can spend the most money will have the fastest system simply because they can buy the most blades

      My best Speed Racer voice:

      And you will see that I will spend the most money and have the fastest system because I have the most blades because of the most money and therefore I have the fastest system and you did not spend the most money and therefore I did and I have the fastest system you will see ha ha.

  • by Anonymous Coward

    Is in China. Because China has eclipsed the US in every conceivable way possible. In 50 years the US will be speaking Mandarin and there's nothing you can do about it.

    • Most people in USA can't even speak English properly. Good luck teaching them Chinese.

      • Most people in USA can't even speak English properly. Good luck teaching them Chinese.

        If you learn Chinese as a child, it is easier than English. The grammar is simpler, there are no irregular verbs, and the pronouns are drop-dead simple. You don't need regionalisms like "y'all" to make up for a lack of a second person plural, or singular "their" to make up for the lack of a gender neutral third person pronoun. There is no difference between subjective pronouns (I, we, they, who) and objective (me, us, them, whom).

        I speak both. Chinese is better for haggling and insults. English is bette

    • Is in China. Because China has eclipsed the US in every conceivable way possible.

      Per the June 2018 Top 500 List, [wikipedia.org] the US owns the title of fastest supercomputer. In fact, 6 of the top 10 on the list are installed in the United States. And for those pining for the days of Cray's dominance in this space, I'll point out that several of the systems in the top 10 list Cray as the vendor. Just two computers in the top 10 are hosted in Jackie Chan's home country.

  • by Anonymous Coward

    Look, UT Austin, just because you bought yourself a supercomputer doesn't mean it's going to be enough to help the Longhorns beat the Sooners, no matter how many xFLOPS it can do or whether it can run Witcher 3 in 4K smoothly.

  • How big is a Beowulf cluster of twelve million Raspberry Pi Zeros?

  • by johnpagenola ( 601936 ) on Wednesday August 29, 2018 @11:36AM (#57218298)
    My university got a "supercomputer" and I got excited about what I could do with all of its capabilities. Then I started submitting batch jobs that were limited to 64 GB of RAM. I could request 128 GB batch machines, which would take hours to become available, and the largest machines were 256 GB, which would sometimes take days to get. Storage was limited to 1 TB. Of course I didn't have permissions to install software, so it was an endless hassle to request installation of new versions. So I went back to my own dual E5-2667v2 processors ($590 for both) and 96 GB of RAM. My z620 is no supercomputer, but it is better than my share of the supercomputer.
    • by jellomizer ( 103300 ) on Wednesday August 29, 2018 @11:47AM (#57218370)

      Well, more to the point: what is the university using the supercomputer for? Is it just $60 million worth of bragging rights, or did they get grants for research projects whose results can cover the cost and make it worth it?

      Using a supercomputer as a shared system is generally a waste of money; you are better off with just a server farm, or (gasp) a cloud service (which is a server farm hosted remotely). However, if there is a project that really does utilize the full machine, then a supercomputer is needed.

    • by imidan ( 559239 )
      At my university, we technically have access to a DOE supercomputer. I say "technically" because the actual facility is hundreds of miles away, they offer no consistent support for users, and you have to jump through a lot of hoops to be cleared to even log in to the thing, all of which makes it pretty hard to use, plus basically all of the problems you listed. When I had a project where I needed big RAM (~250 GB), I spun up a virtual machine on Amazon and did my processing there. It cost me some money, but at le
  • by Orp ( 6583 ) on Wednesday August 29, 2018 @02:14PM (#57219536) Homepage

    I'm an atmospheric scientist who has been using federal supercomputing hardware to better understand thunderstorms [orf.media] for years. Blue Waters is the current "Leadership Class" NSF-sponsored supercomputer. My Blue Waters allocation is currently winding down, and I can speak to how great it has been as a machine that has enabled (I know it's a cliche, but it's true) breakthrough science. A typical Blue Waters node contains 16 floating point AMD cores and 64 GB of memory. Many of the Blue Waters nodes contain a GPU, but it's miles behind the times since the machine was created about 7 years ago.

    Frontera (for some reason the Canyonera theme song plays in my head) is the Phase 1 machine for the next Leadership Class supercomputer. The Phase 1 machine is supposed to come online in 2019 and hold us over until 2024, when the next machine will come online. When you look at how much money is being spent on Frontera and compare it to Blue Waters, you realize that the vendor is being asked to create a much more powerful machine for a fraction of the price. What this will mean in practice, and what most of the scientific computing world is not ready for, is that a large bulk of the FLOPS on this new machine will be GPU FLOPS. GPUs are not easy to use for heavy lifting (say, fluid dynamics solvers) with existing code. So a lot of people are going to have to decide whether to try to shoehorn their current MPI-only (or MPI + some OpenMP) code into MPI + OpenMP + OpenACC (or Nvidia CUDA), or to start from scratch (nobody wants to start from scratch). You have to remember that the vast majority of us scientists are NOT trained computer scientists, and most of us code for shit. I am off to a hackathon at NCSA in a couple of weeks with some students to optimize some radiation code for GPUs... I spend half of my time doing computer stuff, and the other half doing science (and the other other half writing proposals, etc.).
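
    As a rough illustration of what that shoehorning looks like in practice (a made-up fragment, not Orp's code and not anything specific to Frontera; the names diffuse, u, unew, nx, nz are hypothetical), here is the kind of change OpenACC asks for: a directive on an existing stencil loop tells the compiler to generate a GPU kernel for it.

    /* Hypothetical finite-difference update. The CPU version is just the nested
     * loops (perhaps with #pragma omp parallel for); the OpenACC directive below
     * offloads the same loops to a GPU. Build with an OpenACC compiler, e.g.
     * `nvc -acc`. Boundary cells are assumed to be handled elsewhere. */
    void diffuse(const double *u, double *unew, int nx, int nz, double c)
    {
        #pragma acc parallel loop collapse(2) copyin(u[0:nx*nz]) copy(unew[0:nx*nz])
        for (int k = 1; k < nz - 1; k++) {
            for (int i = 1; i < nx - 1; i++) {
                /* simple 5-point stencil on the interior of the grid */
                unew[k*nx + i] = u[k*nx + i]
                    + c * (u[k*nx + i+1] + u[k*nx + i-1]
                         + u[(k+1)*nx + i] + u[(k-1)*nx + i]
                         - 4.0 * u[k*nx + i]);
            }
        }
    }

    The directive itself is the easy part. The hard part, as noted above, is restructuring data movement so that fields stay resident on the GPU across many timesteps (e.g. an enclosing #pragma acc data region) rather than being copied back and forth every step, and doing that without breaking the existing MPI halo exchanges.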

    So for those of you who aren't excited about new supercomputers, or don't understand their true power, I'm here to say that it's currently a very exciting time to be a numerical modeler, if you're willing to learn a bit about how best to wrestle these supercomputers into submission. I've spent over a decade just figuring out the most efficient way to write, organize, and analyze the TB-PB of data that a high-resolution model can produce, and trying to make sense of the firehose of data that these things can generate. The hard-won benefits are crystal clear to me, but as always, tech is a moving target, so what works today might not work tomorrow...

    The nice thing about supercomputers is they serve as a virtual lab for just about any field you can imagine. There are people in the humanities using supercomputers to do interesting things, beyond all the usual astrophysics, chemistry, and geophysical modeling.

    Yay supercomputers, and yay NSF.

  • by PPH ( 736903 )

    and spare batteries.

  • So they got $60 million. What was the proposal, just "give us $60 million and we'll think about how to spend it"? Seems reasonable.

    The last one, Stampede2, was Xeons + Nvidia. Will this one be Ryzen + Radeon? I expect there are a number of Intel and Nvidia salesmen now stalking their prey on campus.

"Facts are stupid things." -- President Ronald Reagan (a blooper from his speeach at the '88 GOP convention)

Working...