IBM

IBM Wants Its Quantum Supercomputers Running at 4,000-Plus Qubits by 2025 (engadget.com) 60

An anonymous reader shares a report: Forty years after it first began to dabble in quantum computing, IBM is ready to expand the technology out of the lab and into more practical applications -- like supercomputing! The company has already hit a number of development milestones since it released its previous quantum roadmap in 2020, including the 127-qubit Eagle processor that uses quantum circuits and the Qiskit Runtime API. IBM announced on Wednesday that it plans to further scale its quantum ambitions and has revised the 2020 roadmap with an even loftier goal of operating a 4,000-qubit system by 2025.

Before it sets about building the biggest quantum computer to date, IBM plans to release its 433-qubit Osprey chip later this year and migrate the Qiskit Runtime to the cloud in 2023, "bringing a serverless approach into the core quantum software stack," per Wednesday's release. Those products will be followed later that year by Condor, a quantum chip IBM is billing as "the world's first universal quantum processor with over 1,000 qubits." This rapid jump in qubit count will enable users to run increasingly long quantum circuits, while increasing processing speed -- measured in CLOPS (circuit layer operations per second) -- from a maximum of 2,900 to over 10,000. Then it's just a simple matter of quadrupling that capacity in the span of less than 24 months.
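
For readers unfamiliar with the CLOPS metric, the sketch below shows roughly how such a throughput figure is computed, loosely following IBM's published benchmark definition (circuit templates x parameter updates x shots x circuit layers, divided by wall-clock time). All values are illustrative, not measurements of any IBM system.

```python
# Illustrative sketch of a CLOPS-style throughput metric; all values are
# hypothetical and do not describe any actual IBM system.

def clops(templates: int, updates: int, shots: int, layers: int,
          elapsed_seconds: float) -> float:
    """Circuit layer operations per second for a parameterized-circuit benchmark."""
    return templates * updates * shots * layers / elapsed_seconds

# Example: 100 templates x 10 parameter updates x 100 shots x 14 layers,
# finishing in ~485 seconds, works out to roughly 2,900 CLOPS.
print(round(clops(100, 10, 100, 14, 485.0)))   # ~2,900
```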

Supercomputing

Russia Cobbles Together Supercomputing Platform To Wean Off Foreign Suppliers (theregister.com) 38

Russia is adapting to a world where it no longer has access to many foreign technologies by developing a new supercomputer platform that can use foreign x86 processors, such as Intel's, in combination with the country's homegrown Elbrus processors. The Register reports: The new supercomputer reference system, dubbed "RSK Tornado," was developed on behalf of the Russian government by HPC system integrator RSC Group, according to an English translation of a Russian-language press release published March 30. RSC said it created RSK Tornado as a "unified interoperable" platform to "accelerate the pace of import substitution" for HPC systems, data processing centers and data storage systems in Russia. In other words, the HPC system architecture is meant to help Russia quickly adjust to the fact that major chip companies such as Intel, AMD and TSMC -- plus several other technology vendors, like Dell and Lenovo -- have suspended product shipments to the country as a result of sanctions imposed by the US and other countries in reaction to Russia's invasion of Ukraine.

RSK Tornado supports up to 104 servers in a rack and is designed to accommodate foreign x86 processors (should they become available) as well as Russia's Elbrus processors, which debuted in 2015. The hope appears to be that Russian developers will be able to port HPC, AI and big data applications from x86 architectures to the Elbrus architecture, which, in theory, will make it easier for Russia to rely on its own supply chain and better cope with continued sanctions from abroad. RSK Tornado's system software is RSC proprietary and is currently used to orchestrate supercomputer resources at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, St Petersburg Polytechnic University and the Joint Institute for Nuclear Research. RSC claims to have also developed its own liquid-cooling system for supercomputers and data storage systems, the latter of which can use Elbrus CPUs too.

Supercomputing

'Quantum Computing Has a Hype Problem' (technologyreview.com) 48

"A reputed expert in the quantum computing field puts it in black and white: as of today, quantum computing is a paper tiger, and nobody knows when (if ever) it will become commercially practical," writes Slashdot reader OneHundredAndTen. "In the meantime, the hype continues."

In an opinion piece for MIT Technology Review, Sankar Das Sarma, a "pro-quantum-computing" physicist who has "published more than 100 technical papers on the subject," says he's disturbed by some of the quantum computing hype he sees today, "particularly when it comes to claims about how it will be commercialized." Here's an excerpt from his article: Established applications for quantum computers do exist. The best known is Peter Shor's 1994 theoretical demonstration that a quantum computer can solve the hard problem of finding the prime factors of large numbers exponentially faster than all classical schemes. Prime factorization is at the heart of breaking the universally used RSA-based cryptography, so Shor's factorization scheme immediately attracted the attention of national governments everywhere, leading to considerable quantum-computing research funding. The only problem? Actually making a quantum computer that could do it. That depends on implementing an idea pioneered by Shor and others called quantum error correction, a process to compensate for the fact that quantum states disappear quickly because of environmental noise (a phenomenon called "decoherence"). In 1994, scientists thought that such error correction would be easy because physics allows it. But in practice, it is extremely difficult.
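
To make the link between factoring and RSA concrete, here is a toy example (not from the article) with deliberately tiny primes; recovering the private exponent requires exactly the factorization that Shor's algorithm would provide.

```python
# Toy RSA example with tiny primes -- for illustration only, not real crypto.
p, q = 61, 53                  # secret primes
n = p * q                      # public modulus (3233)
e = 17                         # public exponent
phi = (p - 1) * (q - 1)        # recoverable only by factoring n
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

message = 42
ciphertext = pow(message, e, n)     # anyone can encrypt with the public key (n, e)
assert pow(ciphertext, d, n) == message

# An attacker who factors n = 3233 back into 61 * 53 can rebuild phi and d the
# same way. Shor's algorithm would make that factoring step fast for large n.
```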

The most advanced quantum computers today have dozens of decohering (or "noisy") physical qubits. Building a quantum computer that could crack RSA codes out of such components would require many millions if not billions of qubits. Only tens of thousands of these would be used for computation -- so-called logical qubits; the rest would be needed for error correction, compensating for decoherence. The qubit systems we have today are a tremendous scientific achievement, but they take us no closer to having a quantum computer that can solve a problem that anybody cares about. It is akin to trying to make today's best smartphones using vacuum tubes from the early 1900s. You can put 100 tubes together and establish the principle that if you could somehow get 10 billion of them to work together in a coherent, seamless manner, you could achieve all kinds of miracles. What, however, is missing is the breakthrough of integrated circuits and CPUs leading to smartphones -- it took 60 years of very difficult engineering to go from the invention of transistors to the smartphone with no new physics involved in the process.
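
A back-of-the-envelope sketch of that overhead, assuming a round figure of one thousand physical qubits per logical qubit (the true ratio depends on hardware error rates and the error-correcting code used):

```python
# Back-of-the-envelope error-correction overhead; the per-logical-qubit figure
# is an assumed round number, not a measured requirement.
logical_qubits_needed = 20_000        # "tens of thousands" of logical qubits
physical_per_logical = 1_000          # assumed overhead for error correction
noisy_qubits_today = 127              # e.g. IBM's Eagle processor

physical_qubits_needed = logical_qubits_needed * physical_per_logical
print(f"{physical_qubits_needed:,} physical qubits needed")              # 20,000,000
print(f"~{physical_qubits_needed // noisy_qubits_today:,}x today's largest chips")
```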

China

How China Built an Exascale Supercomputer Out of Old 14nm Tech (nextplatform.com) 29

Slashdot reader katydid77 shares a report from the supercomputing site The Next Platform: If you need any proof that it doesn't take the most advanced chip manufacturing processes to create an exascale-class supercomputer, you need look no further than the Sunway "OceanLight" system housed at the National Supercomputing Center in Wuxi, China. Some of the architectural details of the OceanLight supercomputer came to our attention as part of a paper published by Alibaba Group, Tsinghua University, DAMO Academy, Zhejiang Lab, and the Beijing Academy of Artificial Intelligence, which describes running a pretrained machine learning model called BaGuaLu across more than 37 million cores with 14.5 trillion parameters (presumably in FP32 single precision), with the capability to scale to 174 trillion parameters (approaching what is called "brain scale," where the number of parameters approaches the number of synapses in the human brain)....

Add it all up, and the 105-cabinet system tested on the BaGuaLu training model, with its 107,250 SW26010-Pro processors, had a peak theoretical performance of 1.51 exaflops. We like base 2 numbers and think that the OceanLight system probably scales to 160 cabinets, which would be 163,840 nodes and just under 2.3 exaflops of peak FP64 and FP32 performance. If it is only 120 cabinets, OceanLight will come in at 1.72 exaflops peak. But these rack scales are, once again, just hunches. If the 160-cabinet scale is the maximum for OceanLight, then China could best the performance of the 1.5 exaflops "Frontier" supercomputer being tuned up at Oak Ridge National Laboratory today and also extend beyond the peak theoretical performance of the 2 exaflops "Aurora" supercomputer coming to Argonne National Laboratory later this year — and maybe even further than the "El Capitan" supercomputer going into Lawrence Livermore National Laboratory in 2023, which is expected to be around 2.2 exaflops to 2.3 exaflops according to the scuttlebutt.
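
The scaling estimates follow directly from the article's own numbers:

```python
# Reproducing the article's peak-performance estimates from its own numbers.
tested_nodes = 107_250           # SW26010-Pro processors in the 105-cabinet run
tested_peak_eflops = 1.51        # peak theoretical exaflops for that run

per_node_tflops = tested_peak_eflops * 1e6 / tested_nodes     # ~14.1 TF per node
nodes_per_cabinet = tested_nodes / 105                        # ~1,021 nodes

for cabinets in (120, 160):
    nodes = cabinets * nodes_per_cabinet
    print(cabinets, "cabinets ->", round(nodes * per_node_tflops / 1e6, 2), "exaflops")
# 120 cabinets -> ~1.73 exaflops; 160 cabinets -> ~2.3 exaflops
```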

We would love to see the thermals and costs of OceanLight. The SW26010-Pro chip could burn very hot, to be sure, and run up the electric bill for power and cooling, but if SMIC [China's largest foundry] can get good yield on 14 nanometer processes, the chip could be a lot less expensive to make than, say, a massive GPU accelerator from Nvidia, AMD, or Intel. (It's hard to say.) Regardless, having indigenous parts matters more than power efficiency for China right now, and into its future, and we said as much last summer when contemplating China's long road to IT independence. Imagine what China can do with a shrink to 7 nanometer processes when SMIC delivers them — apparently not even using extreme ultraviolet (EUV) light — many years hence....

The bottom line is that the National Research Center of Parallel Computer Engineering and Technology (known as NRCPC), working with SMIC, has had an exascale machine in the field for a year already. (There are two, in fact.) Can the United States say that right now? No it can't.

Science

Computers Uncover 100,000 Novel Viruses in Old Genetic Data (science.org) 50

sciencehabit writes: It took just one virus to cripple the world's economy and kill millions of people; yet virologists estimate that trillions of still-unknown viruses exist, many of which might be lethal or have the potential to spark the next pandemic. Now, they have a new -- and very long -- list of possible suspects to interrogate. By sifting through unprecedented amounts of existing genomic data, scientists have uncovered more than 100,000 novel viruses, including nine coronaviruses and more than 300 related to the hepatitis Delta virus, which can cause liver failure. "It's a foundational piece of work," says J. Rodney Brister, a bioinformatician at the National Center for Biotechnology Information's National Library of Medicine who was not involved in the new study. The work expands the number of known viruses that use RNA instead of DNA for their genes by an order of magnitude. It also "demonstrates our outrageous lack of knowledge about this group of organisms," says disease ecologist Peter Daszak, president of the EcoHealth Alliance, a nonprofit research group in New York City that is raising money to launch a global survey of viruses. The work will also help launch so-called petabyte genomics -- the analyses of previously unfathomable quantities of DNA and RNA data.

That wasn't exactly what computational biologist Artem Babaian had in mind when he was in between jobs in early 2020. Instead, he was simply curious about how many coronaviruses -- aside from the virus that had just launched the COVID-19 pandemic -- could be found in sequences in existing genomic databases. So, he and independent supercomputing expert Jeff Taylor scoured genomic data that had been deposited to a global sequence database and uploaded to the cloud by the U.S. National Institutes of Health. As of now, the database contains 16 petabytes of archived sequences, which come from genetic surveys of everything from fugu fish to farm soils to the insides of human guts. (A database with a digital photo of every person in the United States would take up about the same amount of space.) The genomes of viruses infecting different organisms in these samples are also captured by sequencing, but they usually go undetected.
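
Conceptually, this kind of search boils down to screening enormous numbers of sequences for conserved viral signatures. The sketch below is only a toy motif scan on made-up reads, not the study's actual pipeline, which aligns reads against conserved viral genes at petabyte scale.

```python
# Toy motif scan over made-up reads -- a conceptual stand-in for signature-based
# virus screening, not the study's actual petabyte-scale pipeline.

def contains_motif(sequence: str, motif: str) -> bool:
    """Return True if the read contains the query motif (case-insensitive)."""
    return motif in sequence.upper()

MOTIF = "GGACGAC"                      # hypothetical conserved viral signature
reads = [
    "ttcgatggacgacatggtcaaa",
    "acgtGGACGACatgttt",
    "aaaaaaaaaaaaaaaaaaaaaa",
]
hits = [r for r in reads if contains_motif(r, MOTIF)]
print(f"{len(hits)} of {len(reads)} reads contain the motif")    # 2 of 3
```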

Hardware

First RISC-V Computer Chip Lands At the European Processor Initiative (theregister.com) 27

An anonymous reader quotes a report from The Register: The European Processor Initiative (EPI) has run the successful first test of its RISC-V-based European Processor Accelerator (EPAC), touting it as the initial step towards homegrown supercomputing hardware. EPI, launched back in 2018, aims to increase the independence of Europe's supercomputing industry from foreign technology companies. At its heart is the adoption of the free and open-source RISC-V instruction set architecture for the development and production of high-performance chips within Europe's borders. The project's latest milestone is the delivery of 143 samples of EPAC chips, accelerators designed for high-performance computing applications and built around the free and open-source RISC-V instruction set architecture. Designed to prove the processor's design, the 22nm test chips -- fabbed at GlobalFoundries, the not-terribly-European semiconductor manufacturer spun out of AMD back in 2009 -- have passed initial testing, running a bare-metal "hello, world" program as proof of life.

It's a rapid turnaround. The EPAC design was proven on FPGA in March and the project announced silicon tape-out for the test chips in June -- hitting a 26.97mm2 area with 14 million placeable instances, equivalent to 93 million gates, including 991 memory instances. While the FPGA variant, which implemented a subset of the functions of the full EPAC design, was shown booting a Linux operating system, the physical test chips have so far only been tested with basic bare-metal workloads -- leaving plenty of work to be done.
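
For a rough sense of the density those tape-out figures imply:

```python
# Quick arithmetic on the tape-out figures quoted above.
area_mm2 = 26.97
placeable_instances = 14_000_000
gate_equivalents = 93_000_000

print(round(gate_equivalents / area_mm2 / 1e6, 2), "million gates per mm^2")   # ~3.45
print(round(gate_equivalents / placeable_instances, 1), "gates per instance")  # ~6.6
```
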
AI

What Does It Take to Build the World's Largest Computer Chip? (newyorker.com) 23

The New Yorker looks at Cerebras, a startup which has raised nearly half a billion dollars to build massive plate-sized chips targeted at AI applications — the largest computer chip in the world. In the end, said Cerebras's co-founder Andrew Feldman, the mega-chip design offers several advantages. Cores communicate faster when they're on the same chip: instead of being spread around a room, the computer's brain is now in a single skull. Big chips handle memory better, too. Typically, a small chip that's ready to process a file must first fetch it from a shared memory chip located elsewhere on its circuit board; only the most frequently used data might be cached closer to home...

A typical, large computer chip might draw three hundred and fifty watts of power, but Cerebras's giant chip draws fifteen kilowatts — enough to run a small house. "Nobody ever delivered that much power to a chip," Feldman said. "Nobody ever had to cool a chip like that." In the end, three-quarters of the CS-1, the computer that Cerebras built around its WSE-1 chip, is dedicated to preventing the motherboard from melting. Most computers use fans to blow cool air over their processors, but the CS-1 uses water, which conducts heat better; connected to piping and sitting atop the silicon is a water-cooled plate, made of a custom copper alloy that won't expand too much when warmed, and polished to perfection so as not to scratch the chip. On most chips, data and power flow in through wires at the edges, in roughly the same way that they arrive at a suburban house; for the more metropolitan Wafer-Scale Engines, they needed to come in perpendicularly, from below. The engineers had to invent a new connecting material that could withstand the heat and stress of the mega-chip environment. "That took us more than a year," Feldman said...
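
To put the 15-kilowatt figure in perspective, a quick comparison (the electricity price is an assumed placeholder):

```python
# Rough comparison implied by the power figures above.
typical_chip_watts = 350
wse1_watts = 15_000
print(round(wse1_watts / typical_chip_watts, 1), "x a typical large chip")    # ~42.9x

# At an assumed $0.12 per kWh, running the chip flat out for one day:
print(round(wse1_watts / 1000 * 24 * 0.12, 2), "USD of electricity per day")  # ~43.2
```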

[I]n a rack in a data center, it takes up the same space as fifteen of the pizza-box-size machines powered by G.P.U.s. Custom-built machine-learning software works to assign tasks to the chip in the most efficient way possible, and even distributes work in order to prevent cold spots, so that the wafer doesn't crack.... According to Cerebras, the CS-1 is being used in several world-class labs — including the Lawrence Livermore National Laboratory, the Pittsburgh Supercomputing Center, and E.P.C.C., the supercomputing centre at the University of Edinburgh — as well as by pharmaceutical companies, industrial firms, and "military and intelligence customers." Earlier this year, in a blog post, an engineer at the pharmaceutical company AstraZeneca wrote that it had used a CS-1 to train a neural network that could extract information from research papers; the computer performed in two days what would take "a large cluster of G.P.U.s" two weeks.

The U.S. National Energy Technology Laboratory reported that its CS-1 solved a system of equations more than two hundred times faster than its supercomputer, while using "a fraction" of the power consumption. "To our knowledge, this is the first ever system capable of faster-than real-time simulation of millions of cells in realistic fluid-dynamics models," the researchers wrote. They concluded that, because of scaling inefficiencies, there could be no version of their supercomputer big enough to beat the CS-1.... Bronis de Supinski, the C.T.O. for Livermore Computing, told me that, in initial tests, the CS-1 had run neural networks about five times as fast per transistor as a cluster of G.P.U.s, and had accelerated network training even more.

It all suggests one possible work-around for Moore's Law: optimizing chips for specific applications. "For now," Feldman tells the New Yorker, "progress will come through specialization."
Supercomputing

World's Fastest AI Supercomputer Built from 6,159 NVIDIA A100 Tensor Core GPUs (nvidia.com) 57

Slashdot reader 4wdloop shared this report from NVIDIA's blog, joking that maybe this is where all NVIDIA's chips are going: It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated Thursday at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses. And that performance doesn't even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more. In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure. Researchers need the speed of Perlmutter's GPUs to capture dozens of exposures from one night to know where to point DESI the next night. Preparing a year's worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days.
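
As a sanity check on "nearly four exaflops of AI performance," a quick back-of-the-envelope calculation; the per-GPU figure is NVIDIA's published A100 tensor-core peak (312 TFLOPS for FP16/BF16, doubled to 624 TFLOPS with structured sparsity), so this is a marketing-peak number rather than sustained throughput.

```python
# Peak AI throughput implied by the GPU count; per-GPU figure is NVIDIA's
# published A100 tensor-core peak with structured sparsity (a best-case number).
gpus = 6_159
peak_tflops_per_gpu = 624      # FP16/BF16 tensor ops with sparsity; 312 dense

total_eflops = gpus * peak_tflops_per_gpu / 1e6
print(round(total_eflops, 2), "AI exaflops (peak)")     # ~3.84, i.e. "nearly four"
```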

"I'm really happy with the 20x speedups we've gotten on GPUs in our preparatory work," said Rollin Thomas, a data architect at NERSC who's helping researchers get their code ready for Perlmutter. DESI's map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe.

A similar spirit fuels many projects that will run on NERSC's new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels. Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time. "In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that," said Brandon Cook, an applications performance specialist at NERSC who's helping researchers launch such projects. That's where Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating point math for simulations and the mixed-precision calculations required for deep learning.

Supercomputing

Google Plans To Build a Commercial Quantum Computer By 2029 (engadget.com) 56

Google developers are confident they can build a commercial-grade quantum computer by 2029. Engadget reports: Google CEO Sundar Pichai announced the plan during today's I/O stream, and in a blog post, quantum AI lead engineer Erik Lucero further outlined the company's goal to "build a useful, error-corrected quantum computer" within the decade. Executives also revealed Google's new campus in Santa Barbara, California, which is dedicated to quantum AI. The campus has Google's first quantum data center, hardware research laboratories, and the company's very own quantum processor chip fabrication facilities.

"As we look 10 years into the future, many of the greatest global challenges, from climate change to handling the next pandemic, demand a new kind of computing," Lucero said. "To build better batteries (to lighten the load on the power grid), or to create fertilizer to feed the world without creating 2 percent of global carbon emissions (as nitrogen fixation does today), or to create more targeted medicines (to stop the next pandemic before it starts), we need to understand and design molecules better. That means simulating nature accurately. But you can't simulate molecules very well using classical computers."

Australia

Ancient Australian 'Superhighways' Suggested By Massive Supercomputing Study (sciencemag.org) 56

sciencehabit shares a report from Science Magazine: When humans first set foot in Australia more than 65,000 years ago, they faced the perilous task of navigating a landscape they'd never seen. Now, researchers have used supercomputers to simulate 125 billion possible travel routes and reconstruct the most likely "superhighways" these ancient immigrants used as they spread across the continent. The project offers new insight into how landmarks and water supplies shape human migrations, and provides archaeologists with clues for where to look for undiscovered ancient settlements.

It took weeks to run the complex simulations on a supercomputer operated by the U.S. government. But the number crunching ultimately revealed a network of "optimal superhighways" that had the most attractive combinations of easy walking, water, and landmarks. Optimal road map in hand, the researchers faced a fundamental question, says lead author Stefani Crabtree, an archaeologist at Utah State University, Logan, and the Santa Fe Institute: Was there any evidence that real people had once used these computer-identified corridors? To find out, the researchers compared their routes to the locations of the roughly three dozen archaeological sites in Australia known to be at least 35,000 years old. Many sites sat on or near the superhighways. Some corridors also coincided with ancient trade routes known from indigenous oral histories, or aligned with genetic and linguistic studies used to trace early human migrations. "I think all of us were surprised by the goodness of the fit," says archaeologist Sean Ulm of James Cook University, Cairns.
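
At its core, the route reconstruction is a least-cost-path problem over a landscape grid weighted by walking difficulty, water and landmarks. The toy sketch below shows that kind of computation on a tiny hypothetical grid; it is only a stand-in for the study's continent-scale model, not the authors' actual code.

```python
# Toy least-cost-path search on a tiny terrain grid -- a stand-in for the kind of
# computation run billions of times in the study, not the authors' actual model.
import heapq

# Hypothetical traversal costs: low where walking is easy / water is near.
cost = [
    [1, 1, 5, 9],
    [4, 1, 5, 9],
    [9, 1, 1, 2],
    [9, 9, 4, 1],
]

def cheapest_route(grid, start, goal):
    """Dijkstra's algorithm over 4-connected cells; returns the total path cost."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(grid[start[0]][start[1]], start)]
    best = {start: grid[start[0]][start[1]]}
    while frontier:
        dist, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return dist
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = dist + grid[nr][nc]
                if nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return float("inf")

print(cheapest_route(cost, (0, 0), (3, 3)))   # 8 -- the cheapest "corridor"
```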

The map has also highlighted little-studied migration corridors that could yield future archaeological discoveries. For example, some early superhighways sat on coastal lands that are now submerged, giving marine researchers a guide for exploration. Even more intriguing, the authors and others say, are major routes that cut across several arid areas in Australia's center and in the northeastern state of Queensland. Those paths challenge a "long-standing view that the earliest people avoided the deserts," Ulm says. The Queensland highway, in particular, presents "an excellent focus point" for future archaeological surveys, says archaeologist Shimona Kealy of the Australian National University.
The study has been published in the journal Nature Human Behaviour.
Intel

Nvidia To Make CPUs, Going After Intel (bloomberg.com) 111

Nvidia said it's offering its first server microprocessors, extending a push into Intel's most lucrative market with a chip aimed at handling the most complicated computing work. Intel shares fell more than 2% on the news. From a report: The graphics chipmaker has designed a central processing unit, or CPU, based on technology from Arm, a company it's trying to acquire from Japan's SoftBank Group. The Swiss National Supercomputing Centre and the U.S. Department of Energy's Los Alamos National Laboratory will be the first to use the chips in their computers, Nvidia said Monday at an online event. Nvidia has focused mainly on graphics processing units, or GPUs, which are used to power video games and data-heavy computing tasks in data centers. CPUs, by contrast, are a type of chip that's more of a generalist and can do basic tasks like running operating systems. Expanding into this product category opens up more revenue opportunities for Nvidia.

Founder and Chief Executive Officer Jensen Huang has made Nvidia the most valuable U.S. chipmaker by delivering on his promise to give graphics chips a major role in the explosion in cloud computing. Data center revenue contributes about 40% of the company's sales, up from less than 7% just five years ago. Intel still has more than 90% of the market in server processors, which can sell for more than $10,000 each. The CPU, named Grace after the late pioneering computer scientist Grace Hopper, is designed to work closely with Nvidia graphics chips to better handle new computing problems, such as AI models with a trillion parameters. Systems using the new chip will be 10 times faster than those currently pairing Nvidia graphics chips with Intel CPUs, the company claims. The new product will be available at the beginning of 2023, Nvidia said.

Supercomputing

US Adds Chinese Supercomputing Entities To Economic Blacklist (reuters.com) 81

The U.S. Commerce Department said Thursday it was adding seven Chinese supercomputing entities to a U.S. economic blacklist for assisting Chinese military efforts. From a report: The department is adding Tianjin Phytium Information Technology, Shanghai High-Performance Integrated Circuit Design Center, Sunway Microelectronics, the National Supercomputing Center Jinan, the National Supercomputing Center Shenzhen, the National Supercomputing Center Wuxi, and the National Supercomputing Center Zhengzhou to its blacklist. The Commerce Department said the seven were "involved with building supercomputers used by China's military actors, its destabilizing military modernization efforts, and/or weapons of mass destruction programs." The Chinese Embassy in Washington did not immediately respond to requests for comment. "Supercomputing capabilities are vital for the development of many -- perhaps almost all -- modern weapons and national security systems, such as nuclear weapons and hypersonic weapons," Commerce Secretary Gina Raimondo said in a statement.
Hardware

Samsung Unveils 512GB DDR5 RAM Module (engadget.com) 33

Samsung has unveiled a new RAM module that shows the potential of DDR5 memory in terms of speed and capacity. Engadget reports: The 512GB DDR5 module is the first to use High-K Metal Gate (HKMG) tech, delivering 7,200 Mbps speeds -- over double that of DDR4, Samsung said. Right now, it's aimed at data-hungry supercomputing, AI and machine learning workloads, but DDR5 will eventually find its way to regular PCs, boosting gaming and other applications. Developed by Intel, HKMG replaces the usual silicon-based gate insulator with a hafnium-based material, with metal gates standing in for the normal polysilicon electrodes. All of that allows for higher chip densities, while reducing current leakage.

Each chip uses eight layers of 16Gb DRAM chips for a capacity of 128Gb, or 16GB. As such, Samsung would need 32 of those to make a 512GB RAM module. On top of the higher speeds and capacity, Samsung said that the chip uses 13 percent less power than non-HKMG modules -- ideal for data centers, but not so bad for regular PCs, either. With 7,200 Mbps speeds, Samsung's latest module would deliver around 57.6 GB/s transfer speeds on a single channel.
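
The capacity and bandwidth figures above can be reproduced with a little arithmetic:

```python
# Checking the capacity and bandwidth arithmetic quoted above.
dies_per_stack = 8
gigabits_per_die = 16
stack_gigabytes = dies_per_stack * gigabits_per_die // 8       # 16 GB per chip

stacks_per_module = 32
print(stacks_per_module * stack_gigabytes, "GB per module")    # 512 GB

transfer_rate_mbps = 7_200      # per pin
bus_width_bits = 64             # one DDR channel
print(transfer_rate_mbps * bus_width_bits / 8 / 1000, "GB/s per channel")   # 57.6
```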

HP

Hewlett Packard Enterprise Will Build a $160 Million Supercomputer in Finland (venturebeat.com) 9

Hewlett Packard Enterprise (HPE) today announced it has been awarded over $160 million to build a supercomputer called LUMI in Finland. LUMI will be funded by the European Joint Undertaking EuroHPC, a joint supercomputing collaboration between national governments and the European Union. From a report: The supercomputer will have a theoretical peak performance of more than 550 petaflops and is expected to best the RIKEN Center for Computational Science's top-performing Fugaku petascale computer, which reached 415.5 petaflops in June 2020.
Power

Researchers Use Supercomputer to Design New Molecule That Captures Solar Energy (liu.se) 36

A reader shares some news from Sweden's Linköping University: The Earth receives many times more energy from the sun than we humans can use. This energy is absorbed by solar energy facilities, but one of the challenges of solar energy is to store it efficiently, such that the energy is available when the sun is not shining. This led scientists at Linköping University to investigate the possibility of capturing and storing solar energy in a new molecule.

"Our molecule can take on two different forms: a parent form that can absorb energy from sunlight, and an alternative form in which the structure of the parent form has been changed and become much more energy-rich, while remaining stable. This makes it possible to store the energy in sunlight in the molecule efficiently", says Bo Durbeej, professor of computational physics in the Department of Physics, Chemistry and Biology at LinkÃping University, and leader of the study...

It's common in research that experiments are done first and theoretical work subsequently confirms the experimental results, but in this case the procedure was reversed. Bo Durbeej and his group work in theoretical chemistry, and conduct calculations and simulations of chemical reactions. This involves advanced computer simulations, which are performed on supercomputers at the National Supercomputer Centre, NSC, in Linköping. The calculations showed that the molecule the researchers had developed would undergo the chemical reaction they required, and that it would take place extremely fast, within 200 femtoseconds. Their colleagues at the Research Centre for Natural Sciences in Hungary were then able to build the molecule, and perform experiments that confirmed the theoretical prediction...

"Most chemical reactions start in a condition where a molecule has high energy and subsequently passes to one with a low energy. Here, we do the opposite — a molecule that has low energy becomes one with high energy. We would expect this to be difficult, but we have shown that it is possible for such a reaction to take place both rapidly and efficiently", says Bo Durbeej.

The researchers will now examine how the stored energy can be released from the energy-rich form of the molecule in the best way...

Supercomputing

ARM Not Just For Macs: Might Make Weather Forecasting Cheaper Too (nag.com) 41

An anonymous reader writes: The fact that Apple is moving away from Intel to ARM has been making a lot of headlines recently — but that's not the only new place where ARM CPUs have been making a splash.

ARM has also been turning heads in High Performance Computing (HPC), and an ARM-based system (Fugaku) is now the world's most powerful supercomputer. AWS recently made its second-generation ARM Graviton chips available, allowing anyone to test HPC workloads on ARM silicon. A company called The Numerical Algorithms Group recently published a small benchmark study that compared weather simulations on Intel, AMD and ARM instances on AWS and reported that although the ARM silicon was the slowest, it was also the cheapest for this benchmark.
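
The cost angle comes down to simple arithmetic: what matters is price per completed simulation, not raw speed. A minimal sketch in that spirit, with made-up runtimes and hourly prices (not NAG's data):

```python
# Illustrative cost-per-simulation comparison in the spirit of the benchmark
# above. Runtimes and hourly prices are made-up placeholders, not NAG's data.
instances = {
    # name: (hours_per_simulation, usd_per_hour) -- hypothetical values
    "x86_instance_a": (1.0, 3.00),
    "x86_instance_b": (1.1, 2.50),
    "arm_graviton2":  (1.3, 1.50),
}

for name, (hours, price) in instances.items():
    print(f"{name}: {hours * price:.2f} USD per simulation")
# The slowest instance can still be the cheapest per completed simulation.
```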

The benchmark test concludes the ARM processor provides "a very cost-efficient solution...and performance is competitive to other, more traditional HPC processors."
Businesses

Nvidia Reportedly Could Be Pursuing ARM In Disruptive Acquisition Move (hothardware.com) 89

MojoKid writes: Word across a number of business and tech press publications tonight is that NVIDIA is reportedly pursuing a possible acquisition of Arm, the chip IP juggernaut whose designs currently power virtually every smartphone on the planet (including iPhones), as well as myriad devices in the IoT and embedded spaces, supercomputing and the datacenter. NVIDIA has risen in the ranks over the past few years to become a force in the chip industry, and more recently has even been trading places with Intel as the most valuable chipmaker in the United States, with a current market cap of $256 billion. NVIDIA has found major success in consumer and pro graphics, the data center, artificial intelligence/machine learning and automotive sectors in recent years; meanwhile, CEO Jensen Huang has expressed a desire to further branch out into the growing Internet of Things (IoT) market, where Arm chip designs flourish. However, Arm's current parent company, SoftBank, is looking for a hefty return on its investment, and Arm reportedly could be valued at around $44 billion if it were to go public. A deal with NVIDIA, however, would short-circuit those IPO plans and potentially send shockwaves through the semiconductor market.
Supercomputing

A Volunteer Supercomputer Team is Hunting for Covid Clues (defenseone.com) 91

The world's fastest computer is now part of "a vast supercomputer-powered search for new findings pertaining to the novel coronavirus' spread" and "how to effectively treat and mitigate it," according to an emerging tech journalist at Nextgov.

It's part of a consortium currently facilitating over 65 active research projects, for which "Dozens of national and international members are volunteering free compute time...providing at least 485 petaflops of capacity and steadily growing, to more rapidly generate new solutions against COVID-19."

"What started as a simple concept has grown to span three continents with over 40 supercomputer providers," Dario Gil, director of IBM Research and consortium co-chair, told Nextgov last week. "In the face of a global pandemic like COVID-19, hopefully a once-in-a-lifetime event, the speed at which researchers can drive discovery is a critical factor in the search for a cure and it is essential that we combine forces...."

[I]ts resources have been used to sort through billions of molecules to identify promising compounds that can be manufactured quickly and tested for potency to target the novel coronavirus, produce large data sets to study variations in patient responses, perform airflow simulations on a new device that will allow doctors to use one ventilator to support multiple patients — and more. The complex systems are powering calculations, simulations and results in a matter of days that several scientists have noted would take a matter of months on traditional computers.

The Undersecretary for Science at America's Energy Department said "What's really interesting about this from an organizational point of view is that it's basically a volunteer organization."

The article identifies some of the notable participants:
  • IBM was part of the joint launch with America's Office of Science and Technology Policy and its Energy Department.
  • The chief of NASA's Advanced Supercomputing says they're "making the full reserve portion of NASA supercomputing resources available to researchers working on the COVID-19 response, along with providing our expertise and support to port and run their applications on NASA systems."
  • Amazon Web Services "saw a clear opportunity to bring the benefits of cloud... to bear in the race for treatments and a vaccine," according to a company executive.
  • Japan's Fugaku — "which surpassed leading U.S. machines on the Top 500 list of global supercomputers in late June" — also joined the consortium in June.

Other consortium members:

  • Google Cloud
  • Microsoft
  • Massachusetts Institute of Technology
  • Rensselaer Polytechnic Institute
  • The National Science Foundation
  • Argonne, Lawrence Livermore, Los Alamos, Oak Ridge and Sandia National laboratories.
  • National Center for Atmospheric Research's Wyoming Supercomputing Center
  • AMD
  • NVIDIA
  • Dell Technologies. ("The company is now donating cycles from the Zenith supercomputer and other resources.")

Security

Supercomputers Breached Across Europe To Mine Cryptocurrency (zdnet.com) 43

An anonymous reader quotes ZDNet: Multiple supercomputers across Europe have been infected this week with cryptocurrency mining malware and have shut down to investigate the intrusions. Security incidents have been reported in the UK, Germany, and Switzerland, while a similar intrusion is rumored to have also happened at a high-performance computing center located in Spain.

Cado Security, a US-based cyber-security firm, said the attackers appear to have gained access to the supercomputer clusters via compromised SSH credentials... Once attackers gained access to a supercomputing node, they appear to have used an exploit for the CVE-2019-15666 vulnerability to gain root access and then deployed an application that mined the Monero cryptocurrency.

Biotech

Quantum Computing Milestone: Researchers Compute With 'Hot' Silicon Qubits (ieee.org) 18

"Two research groups say they've independently built quantum devices that can operate at temperatures above 1 Kelvin — 15 times hotter than rival technologies can withstand," reports IEEE Spectrum. (In an article shared by Slashdot reader Wave723.)

"The ability to work at higher temperatures is key to scaling up to the many qubits thought to be required for future commercial-grade quantum computers..." HongWen Jiang, a physicist at UCLA and a peer reviewer for both papers, described the research as "a technological breakthrough for semiconductor based quantum computing." In today's quantum computers, qubits must be kept inside large dilution refrigerators at temperatures hovering just above absolute zero. Electronics required to manipulate and read the qubits produce too much heat and so remain outside of the fridge, which adds complexity (and many wires) to the system...

"To me, these works do represent, in rapid succession, pretty big milestones in silicon spin qubits," says John Gamble, a peer reviewer for one of the papers and a senior quantum engineer at Microsoft. "It's compelling work...." Moving forward, Gamble is interested to see if the research groups can scale their approach to include more qubits. He's encouraged by their efforts so far, saying, "The fact that we're seeing these types of advances means the field is progressing really well and that people are thinking of the right problems."

Besides Microsoft, Google and IBM have also "invested heavily in superconducting qubits," the article points out. And there's also a hopeful comment from Lee Bassett, a physicist focused on quantum systems at the University of Pennsylvania. "Each time these silicon devices pass a milestone — and this is an important milestone — it's closer and closer to the inflection point.

"This infrastructure of integrated, silicon-based electronics could take over, and this technology could just explode."
