CLEMSON — The world’s fastest supercomputer, capable of a billion billion calculations per second, is expected to debut in the U.S. in four years, and a Clemson University graduate student has been tasked with preparing for its capabilities.

Garmon will begin his fellowship in August 2018.
Image Credit: College of Science photo

Andrew Garmon, a doctoral student in the department of physics and astronomy, has received a fellowship from the U.S. Department of Energy’s Office of Science Graduate Student Research (SCGSR) program to study at Los Alamos National Laboratory, one of the nation’s leading research facilities. Garmon will spend a year in Los Alamos, New Mexico, pursuing better techniques for a computer simulation method known as molecular dynamics (MD).

The goal of MD is to numerically solve the equations of motion that predict the trajectories of atoms and molecules as they interact with one another. Ultimately, the resulting simulation can be played back like a movie, modeling the behavior and movements of materials or biological molecules under various conditions. One of the most common applications of molecular dynamics is visualizing protein-protein interactions or conformational changes of a protein over time – processes that are vitally important to disease research and drug development.
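
To make the idea concrete, here is a minimal sketch of the core MD loop – a standard velocity-Verlet update that advances positions and velocities one small step at a time. The spring-like force, unit masses and all parameters are illustrative stand-ins, not a real interatomic potential:

```python
import numpy as np

def toy_md(positions, velocities, force_fn, dt=1e-3, n_steps=1000):
    """Minimal velocity-Verlet loop: each frame depends on the previous one."""
    trajectory = [positions.copy()]
    forces = force_fn(positions)
    for _ in range(n_steps):
        velocities += 0.5 * dt * forces      # half-kick (unit masses assumed)
        positions += dt * velocities         # drift
        forces = force_fn(positions)         # recompute forces at new positions
        velocities += 0.5 * dt * forces      # second half-kick
        trajectory.append(positions.copy())  # one "movie frame" per step
    return np.array(trajectory)

# Illustrative force: every atom tethered to the origin by a spring.
frames = toy_md(np.random.randn(10, 3), np.zeros((10, 3)), lambda x: -x)
print(frames.shape)  # (1001, 10, 3): n_steps + 1 frames of 10 atoms in 3-D
```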

Given the right software, any computer – even one as small as a cell phone – has the potential to run an MD simulation. But biological phenomena like protein folding or cell division can take long stretches of time to complete from start to finish, and it is that timespan that the everyday computer cannot simulate in any practical amount of wall-clock time.

“We have a timescale problem in molecular dynamics because when you’re simulating, you have to solve your equations in order. First step 1, then step 2, step 3, and you can’t go to the next step until you’ve completed the current one. It’s like reading a book – it has a beginning and end, it tells a story in time, and that is the key,” Garmon said. “Because we’re constrained by this step size, we can only simulate, at best, processes that are microseconds long, never mind something like cancer cell division, which can take hours to years.”

Even as computers become faster and more efficient, the timescale problem in molecular dynamics will not be solved, Garmon said, because a single computer processor is designed to run instructions in this sequential order; it cannot jump ahead to the end of a biological process before simulating its beginning.
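
A back-of-the-envelope calculation shows the scale of the problem. A stable MD timestep is on the order of a femtosecond of simulated time; the throughput figure below is an assumed round number for a nontrivial system, chosen only to illustrate the scaling:

```python
# Why sequential stepping hits a wall: simulated time accrues roughly
# one femtosecond per step, and steps cannot be computed out of order.
TIMESTEP = 1e-15        # seconds of simulated time per MD step (typical order)
STEPS_PER_SECOND = 1e3  # assumed single-processor throughput on a real system

def wall_clock_days(simulated_seconds):
    steps_needed = simulated_seconds / TIMESTEP
    return steps_needed / STEPS_PER_SECOND / 86_400  # seconds -> days

print(wall_clock_days(1e-6))  # one simulated microsecond: ~11.6 days
print(wall_clock_days(3600))  # one simulated hour: ~4.2e10 days, hopeless
```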

Accelerated molecular dynamics is a computer-based simulation that predicts an atom’s trajectory.
Image Credit: College of Science photo

To overcome this barrier, physicists have turned to supercomputing to simulate some of the most complex biological phenomena, benefiting from multiple computer processors working together. With parallel computing, one large calculation can be broken into many smaller ones, allowing each processor to tackle a different portion of the same problem at the same time. Simulations like molecular dynamics run faster and more efficiently on supercomputers, not because any single processor has improved, but because a greater number of processors are all working on the same task at once – “an army working in unison,” Garmon said.
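
As a toy illustration of that divide-and-conquer idea, the sketch below farms a force calculation out to worker processes with Python’s standard multiprocessing pool. The pairwise force and the even slicing of atoms across workers are illustrative assumptions, not how a production MD code decomposes its work:

```python
from multiprocessing import Pool
import numpy as np

def forces_on_chunk(args):
    """One worker computes forces for its assigned slice of atoms."""
    chunk, pos = args
    out = np.zeros((len(chunk), 3))
    for i, a in enumerate(chunk):
        d = pos - pos[a]                 # vectors from atom a to all atoms
        r2 = (d * d).sum(axis=1)
        r2[a] = np.inf                   # exclude self-interaction
        out[i] = (d / r2[:, None] ** 2).sum(axis=0)  # toy pairwise attraction
    return out

if __name__ == "__main__":
    pos = np.random.randn(400, 3)
    chunks = np.array_split(np.arange(len(pos)), 4)  # one slice per worker
    with Pool(4) as pool:  # four processors tackle the same problem at once
        parts = pool.map(forces_on_chunk, [(c, pos) for c in chunks])
    forces = np.vstack(parts)            # reassemble the full force array
    print(forces.shape)                  # (400, 3)
```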

Researchers are already able to harness the power of parallel computing, using petascale computers that can perform one quadrillion FLOPS, or floating-point operations per second. Often owned by government-run laboratories, these are the fastest supercomputers in the world, each equipped with a million-plus processors, or cores. Los Alamos National Laboratory, where Garmon will be studying, houses the Trinity supercomputer, which holds 979,968 cores.

Yet, to break the timescale barrier in molecular dynamics, more cores and better techniques are necessary, and the U.S. Department of Energy’s Exascale Computing Project aims to get us there.

“The exascale is very exciting. It’s 10¹⁸ FLOPS, or floating-point operations per second – 50 times faster than today’s supercomputers,” Garmon said. “Los Alamos has already purchased an exascale computer that will be here in 2022. The technology itself hasn’t been created yet, but we are preparing for it, designing code and programming for the exascale, so that when we do get there, we’re going to start off sprinting.”
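
Those figures are straightforward to sanity-check (the arithmetic below is illustrative only):

```python
exascale = 1e18              # FLOPS: a billion billion operations per second
todays_best = exascale / 50  # "50 times faster" implies ~2e16 FLOPS today
print(f"{todays_best:.0e}")  # 2e+16, i.e. roughly 20 petaFLOPS
```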

Garmon’s sprint will have him optimizing an MD method called Parallel Trajectory Splicing (ParSplice) for the exascale. ParSplice works by leveraging many cores – all working in parallel – to generate short segments of an atom’s trajectory. Once generated, those segments can be spliced together to create a long-time trajectory of the atom’s course.

“I think it’s great if you think of an atom as just bouncing on a checkerboard,” Garmon said. “For a while, it’s on a white square, but then there’s a transition to a black square and a transition back to a white square. This is a really slow process, and we can watch it mapped all the way to the other side of the checkerboard, but that would take a very long time. So how can we speed up this process? In this very simple example, if we’re on one square, we should also start building segments in adjacent squares because those are the easiest jumps the atom could make.”

Using ParSplice, every processor in a supercomputer can be assigned to different squares on this hypothetical checkerboard, and simultaneously, the processors will build segments of trajectory. Then, when the atom actually jumps to a new square, the trajectory will have already been generated, and it can be spliced onto the movements the atom made previously.
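
Here is a heavily simplified sketch of that scheme in Garmon’s checkerboard picture: each round, workers speculatively generate short segments starting from the current square and its neighbors, and the main loop splices a matching segment onto the growing trajectory. The random-walk dynamics and the speculate-on-adjacent-squares rule are toy stand-ins for the statistical machinery of the real ParSplice method:

```python
import random

rng = random.Random(0)

def short_segment(square, length=5):
    """A short burst of toy dynamics starting in `square` - a stand-in
    for an expensive MD segment computed by one processor."""
    path = [square]
    for _ in range(length):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)])
        path.append((path[-1][0] + dx, path[-1][1] + dy))
    return path

def neighbors(square):
    x, y = square
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def parsplice_toy(start, n_rounds=10):
    trajectory = [start]
    bank = {}  # banked segments, keyed by the square they start in
    for _ in range(n_rounds):
        current = trajectory[-1]
        # Conceptually in parallel: one worker per square, covering the
        # current square and the easiest jumps (its neighbors).
        for square in [current] + neighbors(current):
            bank.setdefault(square, []).append(short_segment(square))
        # Splice a banked segment that begins exactly where we are.
        segment = bank[current].pop()
        trajectory.extend(segment[1:])
    return trajectory

print(parsplice_toy((0, 0))[:12])
```

In the real method the segments are genuine MD runs and careful statistical bookkeeping ensures the spliced trajectory is correct; the sketch above only captures the scheduling idea.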

Garmon runs ParSplice using Clemson’s Palmetto Cluster supercomputer.
Image Credit: College of Science photo

It’s as if the atom is road-tripping across the United States, and there’s a processor assigned to each state that is charged with outlining every possible road the atom could take on its way through that state. Meanwhile, another processor is working in real time to splice together the actual directions as the atom makes its journey. What is really a series of small trips – point A to B to C and so on – ends up appearing as one clean, concise map from point A to Z that was generated in a fraction of the time.

At Los Alamos, Garmon will learn from physicists Danny Perez and Art Voter, the scientists who pioneered ParSplice and the study of accelerated molecular dynamics.

“I’m incredibly humbled and eager for this opportunity. I think Art Voter’s group has been doing very exciting things for a very long time, but specifically in the past five years. The way that their work has now combined with moving toward the exascale is super exciting to me,” Garmon said.

Murray Daw, an R.A. Bowen Professor of Physics and Garmon’s doctoral adviser, is just as excited about Garmon’s impending studies.

“Andrew competed nationally for a prestigious fellowship to carry out his research at Los Alamos, and we in the physics department are very pleased with his work and proud to see him win this fellowship. We are happy to see our grad students representing Clemson at one of the nation’s premier physics laboratories,” Daw said.

For more on what Los Alamos is doing for the Exascale Computing Project, visit this link.

END