Southern Methodist University is propelling North Texas into the AI era with an NVIDIA DGX SuperPOD as Mississippi State and Texas A&M prepare to ride NVIDIA Quantum-2 networks and a U.K. college upgrades its InfiniBand network.
Just as the Dallas/Fort Worth airport became a hub for travelers crisscrossing America, the North Texas region will be a gateway to AI if folks at Southern Methodist University have their way.
SMU is installing an NVIDIA DGX SuperPOD, an accelerated supercomputer it expects will power projects in machine learning for its sprawling metro community with more than 12,000 students and 2,400 faculty and staff.
It's one of three universities in the south-central U.S. announcing plans to use NVIDIA technologies to shift research into high gear.
Texas A&M and Mississippi State University are adopting NVIDIA Quantum-2, our 400 Gbit/second InfiniBand networking platform, as the backbone for their latest high-performance computers. In addition, a supercomputer in the U.K. has upgraded its InfiniBand network.
Texas Lassos a SuperPOD
"We're the second university in America to get a DGX SuperPOD and that will put this community ahead in AI capabilities to fuel our degree programs and corporate partnerships," said Michael Hites, chief information officer of SMU, referring to a system installed earlier this year at the University of Florida.
A September report called the Dallas area "hobbled" by a lack of major AI research. Ironically, the story hit the local newspaper just as SMU was buttoning up its plans for its DGX SuperPOD.
Previewing its initiative, an SMU report in March said AI is "at the heart of digital transformation ... and no sector of society will remain untouched" by the technology. "The potential for dramatic improvements in K-12 education and workforce development is enormous and will contribute to the sustained economic growth of the region," it added.
SMU Ignite, a $1.5 billion fundraiser kicked off in September, will fuel the AI initiative, helping propel Southern Methodist into the top ranks of university research nationally. The university is hiring a chief innovation officer to help guide the effort.
Crafting a Computational Crucible
It's all about the people, says Jason Warner, who manages the IT teams that support SMU's researchers. So, he hired a core group of data science specialists to staff a new center at SMU's Ford Hall for Research and Innovation, a hub Warner calls SMU's "computational crucible."
Eric Godat leads that team. He earned his Ph.D. in particle physics at SMU modeling nuclear structure using data from the Large Hadron Collider.
Now he's helping fire up SMU's students about opportunities on the DGX SuperPOD. As a first step, he asked two SMU students to build a miniature model of a DGX SuperPOD using NVIDIA Jetson modules.
"We wanted to give people — especially those in nontechnical fields who haven't done AI — a sense of what's coming," Godat said.
The full-sized supercomputer, made up of 20 NVIDIA DGX A100 systems on an NVIDIA Quantum InfiniBand network, could be up and running as early as January thanks to its Lego-like, modular architecture. It will deliver a whopping 100 petaflops of computing power, enough to give it a respectable slot on the TOP500 list of the world's fastest supercomputers.
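The 100-petaflop figure follows directly from the per-system spec. A back-of-envelope sketch, assuming NVIDIA's published rating of 5 petaflops of AI performance per DGX A100 system:

```python
# Back-of-envelope check of SMU's aggregate figure, assuming NVIDIA's
# published spec of 5 petaflops of AI performance per DGX A100.
DGX_A100_AI_PFLOPS = 5   # per-system AI performance (NVIDIA spec)
NUM_SYSTEMS = 20         # DGX A100 systems in SMU's SuperPOD

total_pflops = DGX_A100_AI_PFLOPS * NUM_SYSTEMS
print(f"Aggregate AI performance: {total_pflops} petaflops")  # 100 petaflops
```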
Aggies Tap NVIDIA Quantum-2 InfiniBand for ACES
About 200 miles south, the high performance computing center at Texas A&M will be among the first to plug into the NVIDIA Quantum-2 InfiniBand platform. Its ACES supercomputer, built by Dell Technologies, will use the 400G InfiniBand network to connect researchers to a mix of five accelerators from four vendors.
NVIDIA Quantum-2 ensures "that a single job on ACES can scale up using all the computing cores and accelerators. Besides the obvious 2x jump in throughput from NVIDIA Quantum-1 InfiniBand at 200G, it will provide improved total cost of ownership, beefed up in-network computing features and increased scaling," said Honggao Liu, ACES's principal investigator and project director.
Texas A&M already gives researchers access to accelerated computing in four systems that include more than 600 NVIDIA A100 Tensor Core and prior-generation GPUs. Two of the four systems use an earlier version of NVIDIA's InfiniBand technology.
MSU Rides a 400G Train
Mississippi State University's high performance computing center will also tap the NVIDIA Quantum-2 InfiniBand platform. It's the network of choice for a new system that supplements Orion, the largest of four clusters MSU manages, all using earlier versions of InfiniBand.
Both Orion and the new system are funded by the U.S. National Oceanic and Atmospheric Administration (NOAA) and built by Dell. They conduct work for NOAA's missions as well as research for MSU.
Orion was listed as the fourth largest academic supercomputer in America when it debuted on the TOP500 list in June 2019.
"We're using InfiniBand in four generations of supercomputers here at MSU so we know it's both powerful and mature to run our big jobs reliably," said Trey Breckenridge, director of high performance computing at MSU.
"We're adding a new system with NVIDIA Quantum-2 to stay at the leading edge in HPC," he added.
Quantum Nets Cover the U.K.
Across the pond in the U.K., the Data Intensive supercomputer at the University of Leicester, known as the DIaL system, has upgraded to NVIDIA Quantum, the 200G version of InfiniBand.
"DIaL is specifically designed to tackle the complex, data-intensive questions which must be answered to evolve our understanding of the universe around us," said Mark Wilkinson, professor of theoretical astrophysics at the University of Leicester and director of its HPC center.
"The intense requirements of these specialist workloads rely on the unparalleled bandwidth and latency that only InfiniBand can provide to make the research possible," he said.
DIaL is one of four supercomputers in the U.K.'s DiRAC facility using InfiniBand, including the Tursa system at the University of Edinburgh.
InfiniBand Shines in Evaluation
In a technical evaluation, researchers found Tursa with NVIDIA GPU accelerators on a Quantum network delivered 5x the performance of their CPU-only Tesseract system using an alternative interconnect.
Application benchmarks show 16 nodes of Tursa have twice the performance of 512 nodes of Tesseract. Tursa delivers 10 teraflops/node while using 90 percent of the network's bandwidth, with a significant improvement in performance per kilowatt over Tesseract.
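The per-node implication of those benchmark numbers can be worked out directly. A quick sketch using only the figures quoted above:

```python
# Per-node speedup implied by the benchmark claim:
# 16 Tursa nodes deliver twice the throughput of 512 Tesseract nodes.
tursa_nodes = 16
tesseract_nodes = 512
cluster_throughput_ratio = 2.0  # Tursa (16 nodes) vs. Tesseract (512 nodes)

per_node_speedup = cluster_throughput_ratio * tesseract_nodes / tursa_nodes
print(f"Per-node speedup: {per_node_speedup:.0f}x")  # 64x

# Aggregate sustained throughput at the stated 10 teraflops per node
tursa_total_tflops = tursa_nodes * 10
print(f"16 Tursa nodes: {tursa_total_tflops} teraflops")  # 160 teraflops
```

In other words, each GPU-accelerated Tursa node does the work of roughly 64 CPU-only Tesseract nodes on these benchmarks.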
For more, watch our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA's Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.