A new supercomputer in the US is the first to officially pass the exascale threshold – a milestone that marks a new era of supercomputing.
The Frontier supercomputer, which belongs to the US Department of Energy’s Oak Ridge National Laboratory (ORNL), is now the fastest supercomputer on Earth, according to the Top500 ranking, after benchmarking at 1.1 exaflops.
The supercomputer is made up of 74 Hewlett Packard Enterprise (HPE) Cray cabinets, which together contain more than 37,000 GPUs and 9,400 CPUs, for a total of 8.7 million cores.
Justin Hotard, general manager of High-Performance Computing at HPE, said Frontier’s record-breaking compute power “will give us the opportunity to answer questions we never knew to ask”.
“Frontier is a first-of-its-kind system that was envisioned by technologists, scientists and researchers to unleash a new level of capability to deliver open science, AI and other breakthroughs, that will benefit humanity,” he said.
An exaflop is equal to one quintillion (10^18) calculations per second.
To put that in perspective, if every person on Earth performed one calculation – like addition, subtraction, or multiplication – each second, it would take over four years to do what Frontier can do in just one second.
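That comparison can be checked with some quick back-of-envelope arithmetic. The sketch below assumes a world population of roughly 7.9 billion (an approximate 2022 figure, not stated in the article) and uses Frontier's benchmarked rate of 1.1 exaflops:

```python
# Back-of-envelope check of the "over four years" comparison.
FRONTIER_FLOPS = 1.1e18      # Frontier's benchmarked rate: 1.1 exaflops
WORLD_POPULATION = 7.9e9     # assumed world population (approximate, 2022)
SECONDS_PER_YEAR = 365 * 24 * 3600

# Seconds the whole planet would need, at one calculation per person
# per second, to match what Frontier does in a single second
seconds_needed = FRONTIER_FLOPS / WORLD_POPULATION
years_needed = seconds_needed / SECONDS_PER_YEAR

print(f"{years_needed:.1f} years")  # roughly 4.4 years
```

The result comes out at around 4.4 years, consistent with the article's "over four years" figure.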
Frontier will go fully online later this year, giving researchers their first opportunity to access its raw power.
In an interview with The Next Platform, Jeff Nichols, associate lab director for Computing and Computational Sciences at ORNL, said Frontier will add "more physics" to scientific modelling, bringing greater fidelity to scientific research.
“We can do more in climate, in wind, in solar, in additive manufacturing, in materials-by-design, in biology,” he said.
“All these domains are going to be impacted by these platforms that have been the sole target of these types of investments over the past six or seven years.”
Along with the $833 million (US$600 million) spent on building Frontier, the US Exascale Computing Project has poured $2.2 billion (US$1.6 billion) into developing software and applications for these exascale machines, according to Nichols.
“[These are] the tools and libraries necessary for exposing the amount of parallelism that has to be exposed in order to get good efficiencies out of these machines,” he said.
“It’s building on the order of 25 exascale applications across every domain that you can think of.”