Supernova Simulation Lights Up the Sky

Alexei Poludnenko and Alexei Khokhlov, ASC Flash Center and Department of Astronomy and Astrophysics, University of Chicago

Among the brightest objects in the sky, Type Ia supernovae have long fascinated astronomers and lay people alike. These brilliant bursts of light are also of paramount importance in cosmology, serving as “standard candles” that play a key role in helping scientists accurately mark off distance across the vast reaches of the universe.

But what are the steps that take place inside such an exploding star? To help answer this question with new precision, scientists are simulating supernovae on supercomputers. Alexei Poludnenko and Alexei Khokhlov of the ASC Flash Center at the University of Chicago have used their enhanced simulation code to carry out the longest self-consistent 3-D numerical simulation of a Type Ia supernova explosion ever performed.

Far surpassing previous simulations, which modeled only the first few tens of seconds, the computations extended from supernova ignition through the active explosion phase, following the evolution for 11 days and revealing a longer-than-expected evolution with distinct stages in which different physical processes dominate. The longer simulation became possible through a novel computational technique that enables the code to follow the expansion over a great range of scales. Such simulations are an important complement to observations, and the ability to follow the long-term evolution of supernovae will be vital in allowing researchers to make meaningful comparisons between their theoretical and numerical models and observed supernovae.
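The article does not describe the technique itself; one standard way to follow an explosion across a vastly growing domain is to solve on a comoving grid that expands with the ejecta. The sketch below illustrates that general idea only; the ejecta speed and initial radius are assumed values, not figures from the study.

```python
# Comoving-grid idea: physical radius x = a(t) * xi, where xi is a fixed
# grid coordinate and a(t) is a time-dependent scale factor, so the same
# grid tracks the ejecta as the domain expands.
# V_EJECTA and R_INITIAL are illustrative assumptions.

V_EJECTA = 1.0e9    # cm/s, ~10,000 km/s, a typical Type Ia ejecta speed
R_INITIAL = 1.0e9   # cm, rough initial domain radius

def scale_factor(t: float) -> float:
    """Homologous expansion: domain radius grows as R_INITIAL + V_EJECTA * t."""
    return (R_INITIAL + V_EJECTA * t) / R_INITIAL

def physical_radius(xi: float, t: float) -> float:
    """Map a fixed comoving coordinate xi in [0, 1] to a physical radius in cm."""
    return xi * R_INITIAL * scale_factor(t)

# Over the 11 simulated days, the same fixed grid covers a domain
# nearly a million times larger than at ignition.
t_final = 11 * 86400.0
print(f"scale factor after 11 days: {scale_factor(t_final):,.0f}")
```

With these assumed values, the domain grows by almost six orders of magnitude over the simulated 11 days, which is why a fixed grid cannot follow the whole evolution.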

To achieve this result, the researchers needed to run simulations for longer periods than previously possible. They completed the computations on DataStar, using 512 processors for some 30,000 processor-hours over more than two days. For more information on this research, see the DOE-supported Flash Center at the University of Chicago.
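The quoted figures are self-consistent; a quick check in Python:

```python
# Consistency check on the quoted DataStar run:
# 30,000 processor-hours spread across 512 processors.
processors = 512
processor_hours = 30_000

wall_clock_hours = processor_hours / processors   # about 58.6 hours
wall_clock_days = wall_clock_hours / 24           # about 2.4 days

print(f"{wall_clock_hours:.1f} hours, or {wall_clock_days:.1f} days")
```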

Speeding Predictions of the Molecules of Life

R.C. Walker, SDSC; S. Raman, U Washington

Proteins are the molecules of life, and the myriad ways they function in the body depend on the precise details of their 3-D shapes. Using supercomputers, scientists are now dramatically speeding up their predictions of 3-D protein structures, which can play a crucial role in endeavors such as rational drug design.

With SDSC help, Howard Hughes Medical Institute investigator David Baker of the University of Washington achieved the largest-ever run of the Rosetta protein structure prediction code, on more than 40,000 processors of the 114-teraflops (trillion calculations per second) IBM BlueGene Watson system. The computation completed a full, from-scratch prediction for the Critical Assessment of Structure Prediction 7 (CASP7) competition in less than three hours, a task that normally takes weeks. The competition spurs researchers to develop faster and more accurate methods of identifying protein structures. Baker also computed many other CASP7 entries on SDSC's DataStar and BlueGene Data systems.

SDSC staff introduced Baker to NSF supercomputing resources, and an SDSC Strategic Applications Collaboration with SDSC computational scientist Ross Walker adapted the Rosetta code to run on large parallel supercomputers. SDSC's BlueGene Data system was the workhorse for Baker's computations, providing 730,000 processor-hours for 22 CASP7 targets; it was the only NSF machine that could provide job turnaround of less than one week for all targets. BlueGene Data was also the testbed for the extreme-scaling modifications that allowed the Rosetta code to run on more than 40,000 processors of the 114-peak-teraflops IBM BlueGene Watson system, offering dramatic speedups in protein structure prediction time. SDSC's DataStar was used for 2,000-processor jobs on the 10 percent of CASP targets that involved large structure prediction problems. More information is online at the Baker Lab.
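The article doesn't show Rosetta internals, but the reason the code scales to tens of thousands of processors is that ab initio prediction generates many independent candidate structures ("decoys") and keeps the lowest-energy ones. A toy Python sketch of that pattern; `score_decoy` is a placeholder, not a real Rosetta energy function:

```python
# Decoy generation is embarrassingly parallel: each worker builds
# independent candidate structures from its own random seed, and the
# lowest-energy decoys are kept. score_decoy stands in for the real
# fold-and-score step.
import random
from multiprocessing import Pool

def score_decoy(seed: int) -> tuple[float, int]:
    """Build one candidate structure from a seed and return (energy, seed)."""
    rng = random.Random(seed)
    energy = rng.uniform(-100.0, 0.0)   # placeholder for a Rosetta energy
    return energy, seed

if __name__ == "__main__":
    with Pool(processes=4) as pool:        # 4 workers stand in for 40,000 CPUs
        results = pool.map(score_decoy, range(1000))
    best = sorted(results)[:5]             # keep the 5 lowest-energy decoys
    print(best)
```

Because the decoys never communicate, doubling the processor count roughly halves wall-clock time, which is why a weeks-long prediction could finish in under three hours at scale.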

TeraShake: Unleashing a Virtual Earthquake

Amit Chourasia, SDSC Visualization Services

Scientists believe that California is overdue for a large earthquake on the southern San Andreas Fault. To understand such major earthquakes before they happen, scientists are turning to simulations, or "virtual earthquakes" produced using supercomputers. The knowledge they gain can help engineers and others prepare for a major quake through improved seismic hazard analysis estimates, better building codes in high-risk areas, and safer structural designs, potentially saving lives and property.

SDSC and the Southern California Earthquake Center (SCEC) have formed a close partnership that has produced significant progress in earthquake research. SDSC introduced the SCEC earthquake scientists to NSF supercomputing resources, and Tom Jordan (University of Southern California), Bernard Minster (Scripps Institution of Oceanography, UC San Diego), Kim Olsen (San Diego State University) and an eight-institution team of SCEC scientists conducted the largest and most detailed simulations ever of a magnitude 7.7 earthquake on a 230-kilometer stretch of the San Andreas Fault. The simulations modeled seismic waves at 200-meter resolution, providing new information about potential ground motion in Los Angeles and other sediment-filled basins, including the discovery that the basins and mountains can form a "waveguide" that channels unexpectedly large amounts of earthquake wave energy into the Los Angeles basin.

Collaborating in this research, Yifeng Cui, a member of the SDSC Strategic Applications Collaboration team, worked closely with SCEC scientists to integrate a dynamic rupture component into the TeraShake code and compute more than 45 large-scale simulations. The resulting 110 terabytes (trillion bytes) of simulation output were registered into the SCEC Storage Resource Broker (SRB) digital library at SDSC. In addition, SDSC SAC staff optimized the code to improve scalability to 2,048 processors and beyond, for a problem size of 8.6 billion grid points, four times larger than TeraShake 1. Current TeraShake simulations sustain 1 teraflop (one trillion calculations per second) and take less than 5 hours for 22,000 time steps on 2,048 DataStar processors, more than 15 times faster than the 4 days on 240 processors required for TeraShake 1. Such improvements are vital to increasing the realism of the simulations and yielding new scientific insights into the earthquakes that threaten California and many other areas of the world. The enhanced TeraShake code is now available to the U.S. earthquake community. For more information, see SCEC.
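A quick check of the quoted speedup in Python; note that the wall-clock gain is not just from using more processors, since total processor-hours also fell:

```python
# Back-of-the-envelope check on the quoted TeraShake performance figures.
ts1_hours, ts1_procs = 4 * 24, 240    # TeraShake 1: 4 days on 240 processors
ts2_hours, ts2_procs = 5, 2048        # current run: < 5 hours on 2,048 processors

wall_speedup = ts1_hours / ts2_hours        # 19.2x in wall-clock time
cpu_hours_1 = ts1_hours * ts1_procs         # 23,040 processor-hours
cpu_hours_2 = ts2_hours * ts2_procs         # 10,240 processor-hours

print(wall_speedup, cpu_hours_1, cpu_hours_2)
```

The optimized run also solves a problem four times larger, so the per-grid-point efficiency gain is greater still.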

Scientific Visualization in TeraShake

As the size and complexity of simulations have grown, scientific visualization has become increasingly important in helping scientists make sense of their mushrooming data collections. SDSC's Visualization Services group, led by Steve Cutchin, plays a central role in many projects, including the TeraShake research. SDSC visualization scientist Amit Chourasia has produced many data-driven visualizations and animations of the earthquake simulations, which the scientists use to understand the "big picture" of their results, to locate and correct problem areas in their simulation software, and to explain the simulations to other seismologists, scientists, and the general public.

In this research, SDSC staff developed a new visualization technique, applying bump maps to seismic data to provide an intuitive way to present subtle variations of seismic energy within zones of near-maximal seismic activity. SDSC visualizations have appeared on Los Angeles news broadcasts on all major networks and were a key part of the episode "LA's Future Quake" on the National Geographic Channel's Explorer program. A high-definition animation will also be shown in the IEEE Visualization Conference Scientific Visualization Theatre. The compute- and data-intensive visualizations on DataStar involved rendering 2,000 volumes at 3000x1500x400 resolution and 20,000 surfaces at 3000x1500, for a total of 43 terabytes of rendered data. More information on TeraShake visualizations is available from SDSC Visualization Services.
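Bump mapping perturbs shading normals using the gradient of a scalar field, so subtle spatial variations show up as shaded relief. A minimal NumPy sketch of the general technique, illustrative only and not SDSC's actual rendering pipeline:

```python
# Treat a 2-D scalar field (stand-in "seismic amplitude" data) as a
# height map, derive per-pixel normals from its gradient, and shade
# with a fixed light direction (Lambertian: brightness = normal . light).
import numpy as np

def bump_shade(height: np.ndarray, light=(0.5, 0.5, 1.0)) -> np.ndarray:
    gy, gx = np.gradient(height.astype(float))
    # Per-pixel normals perturbed by the height-field gradient.
    normals = np.dstack([-gx, -gy, np.ones_like(height, dtype=float)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, 1.0)

field = np.random.default_rng(0).random((64, 64))   # fake amplitude data
image = bump_shade(field)   # one brightness value per pixel
```

Because the gradient, not the absolute value, drives the shading, small variations remain visible even where the field is near its maximum, which matches the stated goal of the technique.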

Exploring the Details of Turbulence

Amit Chourasia, SDSC; D. Donzis, Georgia Tech

Understanding how substances mix in the disorder of turbulent flows can provide insights into a wide variety of problems, from the patterns cream makes as it is stirred into a cup of coffee, to how fuel mixes with air in a fuel injector, to the way pollution from a factory smokestack disperses through the air. But the study of turbulent flows remains one of the most challenging problems in science and engineering.

To provide insights into this important problem, researchers are using simulations of "virtual turbulence" to gain information that is difficult or impossible to obtain from experiments alone. To be useful, however, the simulated turbulence must capture a realistic version of the flow, from tiny eddies all the way up to the largest sizes in the flow.

With SDSC's help, simulations by Georgia Tech professor P.K. Yeung and collaborators have explored turbulent flows in unprecedented detail, dividing the simulated fluid volume into grids of 2,048³ and even 4,096³ points, more than 8 billion and 64 billion cells, respectively. This powerful model is giving the researchers new understanding of the all-important details of problems such as the mixing of substances of weak diffusivity in regimes never before reached. These simulations can lead to improved predictions of practical problems such as how well different chemical species will mix in a turbulent flow.

To enable these high-resolution runs, Dmitry Pekurovsky of the SDSC Strategic Applications Collaboration team worked with Yeung to develop an improved version of the simulation code and related methods, which was crucial in allowing the simulations to run on today's largest supercomputers while accurately capturing details of the flow. With these SDSC enhancements, Yeung achieved the long-sought goal of successfully scaling to 2,048³ resolution on SDSC's DataStar and BlueGene Data systems. In collaboration with IBM, the team achieved 4,096³ resolution on 32,768 processors of the BlueGene Watson system, the first time such realistic virtual turbulence has been achieved in the U.S. This achievement is an important step on the path to petascale computing, systems that can compute at the astounding speed of a petaflop (10¹⁵, or a thousand trillion, calculations per second). For more information about this research, see the SDSC news release Items/PR070805.html.
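Written out, the grid sizes above translate directly into cell counts:

```python
# The quoted grid sizes: a 2048^3 mesh already exceeds 8 billion cells,
# and a 4096^3 mesh exceeds 64 billion.
cells_2048 = 2048 ** 3    # 8,589,934,592 cells
cells_4096 = 4096 ** 3    # 68,719,476,736 cells

# One petaflop is 10**15 floating-point operations per second.
petaflop = 10 ** 15

print(f"{cells_2048:,} and {cells_4096:,} cells")
```

Doubling the resolution in each dimension multiplies the cell count by eight, which is why each step up in fidelity demands a far larger machine.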

Predicting Weather on the Sun

Solar Physics Group, SAIC

At times the sun experiences "solar storms" that can eject plasma from the corona in the direction of the Earth, resulting in potentially serious disruptions in satellite operations, communications, and even electrical power grids. Since our society is heavily dependent on this infrastructure, predicting the "solar weather" is of tremendous importance.

To study the sun's weather, scientists need to better understand the complex solar magnetic fields that control it. In results presented at a meeting of the American Astronomical Society, solar physicist Zoran Mikic and the Solar Physics Group at SAIC, participants in the NSF-funded Center for Integrated Space Weather Modeling, ran their solar model for four days on more than 700 processors of DataStar to compute the most accurate predictions to date of the solar corona.

Normally, the sun’s corona can only be seen with special instruments that block out the intense light from the surface of the sun. But during a solar eclipse, the Moon (which appears about the same size as the sun in the sky) blocks that light naturally and the beautiful corona is directly visible. Scientists take advantage of this to make accurate observations of the corona, and the researchers’ coronal predictions for the total solar eclipse on March 29, 2006 showed the best-ever agreement of simulated magnetic field lines with coronal observations, an important step toward more accurate solar weather predictions.

To achieve these important results, the SAIC Solar Physics group needed to run large-scale simulations on DataStar on a reliable schedule, completing multiple runs just before the eclipse. SDSC staff worked closely with the researchers to ensure that the timing of the runs provided the needed information to prepare the predictions before the eclipse.

Virtual Engineering


When designing a new ship's propeller, a hypersonic jet engine, or a smokestack, engineers are always striving to better predict the performance of their devices. Traditionally they rely on experience from previous designs and on building physical prototypes, which is slow and expensive. With advances in supercomputers, engineers are entering a new era in which they can take advantage of simulations, or virtual versions of their new designs, gaining vital information much more rapidly and at lower cost than ever before.

Using the 15.6 Teraflops (trillion calculations per second) DataStar supercomputer, Krishnan Mahesh and colleagues at the University of Minnesota are pioneering simulations of unprecedented detail that model several important engineering flows, including compressible turbulence in super- and hypersonic scramjet combustion, capturing instability in ship propeller crashback (sudden reversal), and modeling new details of an evolving jet in a crossflow, for example, a fuel injector or a smokestack. Using novel simulation methods, the researchers are now able to extend virtual engineering models into high Reynolds number flows that more accurately represent the real-world geometries that will be useful for engineers.
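For context, the Reynolds number Re = U·L/ν measures the ratio of inertial to viscous forces in a flow; "high Reynolds number" means turbulence dominates, as it does in real marine and aerospace applications. A small example with illustrative, assumed values (not figures from the study):

```python
# Reynolds number for seawater flowing past a ship propeller.
# All values are illustrative assumptions.
U = 10.0        # flow speed, m/s
L = 2.0         # propeller diameter, m
nu = 1.0e-6     # kinematic viscosity of water, m^2/s

Re = U * L / nu
print(f"Re = {Re:.1e}")   # on the order of 10 million: strongly turbulent
```

Resolving flows at such Reynolds numbers directly is what pushes these simulations onto the largest available supercomputers.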

SDSC helped in this research when Dmitry Pekurovsky and Mahidhar Tatineni, from SDSC's Strategic Applications Collaboration team, assisted in performance analysis, benchmarking, memory usage reduction, and single-processor optimization. These efforts helped Mahesh's group complete important simulations on DataStar, running on up to 1,024 processors, as well as on BlueGene Data and the distributed TeraGrid facility. More information about this research is available from the Mahesh Group.

Recreating the Early Universe

Amit Chourasia, SDSC Visualization Services

Understanding how the universe formed bears on today's most challenging questions in physics and astronomy. Using SDSC supercomputers, researchers have computed the most realistic simulations to date of the formation of the early universe. By comparing these simulations of early-universe structure with detailed observations from major observatories, astrophysicists can constrain key parameters of cosmological models, guiding their search for new insights into how the universe began.

To simulate the formation of the universe, researchers used the ENZO computational cosmology code, developed by UCSD cosmologist Michael Norman and collaborators. The code simulates the Universe from first principles, starting with the Big Bang. ENZO is both compute- and data-intensive, pushing the envelope of computational methods and requiring the largest supercomputers and data resources. The ENZO team ran the highest-resolution simulation of the Universe ever performed on 2,048 processors of SDSC's DataStar. In addition to helping scientists understand the basic science of the early universe, the simulation's high resolution will benefit researchers involved in spatial mapping and simulated sky surveys.

To make this research possible, an ongoing SDSC Strategic Applications Collaboration with computational scientist Robert Harkness supported several major improvements in the scaling and efficiency of the ENZO code, growing spatial resolution by a factor of 8 to a 2,048³ domain with more than 8 billion particles and scaling to run on 2,048 processors of DataStar. This data-intensive computation requires 5 terabytes of memory, a capability available only on DataStar. SDSC efforts eliminated input/output as a bottleneck, yielding speed gains of up to 120 times on some systems. SDSC's robust storage environment also enabled the research team to efficiently write the 26 terabytes of output (larger than the digital text equivalent of the Library of Congress printed collection). For more information, see the ENZO project website.
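The quoted figures imply a per-cell memory budget of a few hundred bytes, reasonable for a code carrying many field variables plus particle data per cell. A rough estimate, assuming decimal terabytes:

```python
# Rough per-cell memory estimate from the quoted ENZO figures:
# a 2,048^3 grid and about 5 terabytes of total memory.
cells = 2048 ** 3                  # ~8.6 billion cells
memory_bytes = 5 * 10 ** 12       # 5 TB, decimal terabytes assumed

bytes_per_cell = memory_bytes / cells
print(f"~{bytes_per_cell:.0f} bytes per cell")   # a few hundred bytes
```

At 8 bytes per double-precision value, that budget corresponds to roughly 70 stored quantities per cell, consistent with a combined hydrodynamics and N-body computation.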

Identifying Brain Disorders

T. Brown, CIS/JHU

Brain diseases are challenging to diagnose, which can confuse or delay beneficial treatments. To address this problem, researchers in the Biomedical Informatics Research Network (BIRN) are using specific structural or shape differences in patients' brains to help better identify brain disorders. Their method has harnessed TeraGrid supercomputers at SDSC and other sites to successfully distinguish diagnostic categories such as Alzheimer's disease and semantic dementia from control subjects. This can potentially lead to a powerful new cyberinfrastructure tool that clinicians can use to make earlier, more accurate diagnoses.

In a large-scale analysis of MRI brain scan data, the researchers used hippocampal timestep deformation computations with the Large Deformation Diffeomorphic Metric Mapping tool. Using a special SDSC-based TeraGrid General Parallel File System-Wide Area Network (GPFS-WAN) as a way to pipeline data, the researchers increased the number of jobs accomplished more than tenfold on TeraGrid systems at SDSC and the National Center for Supercomputing Applications (NCSA) by seamlessly accessing data stored at SDSC.

Making GPFS-WAN available also allowed the researchers to run their simulation and visualization software locally at Johns Hopkins University on remote data at SDSC, giving them an effective method to explore the multi-terabyte data sets of high-resolution brain scans without the difficulties and delays of having to transfer huge data collections over the network.

The BIRN Network, under the direction of UC San Diego Professor Mark Ellisman, is funded by the National Institutes of Health/National Center for Research Resources. For more information, see the Center for Imaging Science at Johns Hopkins University and the BIRN Network.
