
Current Collaborations

NekTar

NekTar – Navier-Stokes Solver

George Karniadakis, Brown University

Cardiovascular disease, including atherosclerosis, accounts for almost 50% of deaths in the western world. It is widely accepted that there is a causal relationship between the flow of blood and the formation of arterial disease such as atherosclerotic plaques. This area has received relatively little attention due to the high computational demands of multiscale modelling that couples large-scale flow features to cellular and sub-cellular biology.
Read more about NekTar >

Amber

QM/MM Molecular Dynamics

Adrian Roitberg, University of Florida; Dave Case, The Scripps Research Institute

This SAC aims to accelerate the semi-empirical and DFTB QM/MM techniques recently added to the AMBER molecular dynamics code, enabling QM/MM molecular dynamics simulations of nanosecond length or longer. The effort also seeks to enable explicit-solvent QM/MM calculations on proteins with a pure QM solute. This will allow investigations of protein reactivity and protein folding without the boundary approximations that plague current QM/MM approaches.
Read more about QM/MM >

DNS

DNS – Turbulence Simulation

P.K. Yeung, Georgia Tech

The study of how materials flow and mix is one of the most difficult physical problems in science and engineering. This SAC effort has been working on large-scale simulations of turbulent mixing that can one day utilize petascale resources. Yeung's Direct Numerical Simulation (DNS) code has been modified to improve its scalability. The work focused on the three-dimensional Fast Fourier Transform (FFT) module, the most time-consuming part of the code (a sketch of its axis-by-axis structure appears below).
Read more about DNS >
See the presentation on 3D FFT performance on the BlueGene W system >
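
For illustration only (not taken from Yeung's code): a 3D FFT can be expressed as three passes of 1D FFTs, one along each axis. A distributed implementation parallelizes exactly this structure, redistributing the data between passes so that each 1D transform stays local to a processor. A minimal Python/NumPy sketch:

    # Illustrative only; not taken from the DNS code itself.
    import numpy as np

    def fft3d_by_axes(u):
        """Compute a 3D FFT as three passes of 1D FFTs, one axis at a time."""
        u = np.fft.fft(u, axis=0)   # transform all x-lines
        u = np.fft.fft(u, axis=1)   # transform all y-lines
        u = np.fft.fft(u, axis=2)   # transform all z-lines
        return u

    # Sanity check against NumPy's full 3D transform on a small grid.
    field = np.random.rand(32, 32, 32)
    assert np.allclose(fft3d_by_axes(field), np.fft.fftn(field))

In a parallel code the data must be transposed between the axis passes, which is why the FFT module dominates both the compute and communication cost.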

P3DFFT

Parallel Three-Dimensional Fast Fourier Transforms

Developed by Dmitry Pekurovsky, SDSC Scientific Computing Applications Group

This SAC/SCC project consists of open source software for computing three-dimensional Fourier transforms in parallel (dubbed P3DFFT). The goal is to remove bottlenecks to scaling in preparation for petascale computing. To that end, P3DFFT implements a two-dimensional (or pencil) domain decomposition, in contrast to many existing approaches that implement a one-dimensional decomposition (illustrated in the sketch below).
Read more about P3DFFT >
Download the latest version description and P3DFFT library >
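
A minimal sketch of the pencil idea, under assumed conventions (the function and process-grid layout below are illustrative, not P3DFFT's actual API): each of the P1 x P2 MPI tasks owns the full grid extent in one direction and a slice of the other two, so 1D transforms along the local direction need no communication.

    # Illustrative only; not P3DFFT's interface.
    def pencil_extents(n, p1, p2, rank):
        """Index ranges of the pencil owned by `rank` on a p1 x p2 process grid.

        The grid is n^3; x stays local (full extent), y is split over p1,
        and z is split over p2. Assumes n is divisible by p1 and p2.
        """
        r1, r2 = divmod(rank, p2)                      # process-grid coordinates
        y = (r1 * (n // p1), (r1 + 1) * (n // p1))
        z = (r2 * (n // p2), (r2 + 1) * (n // p2))
        return (0, n), y, z

    # Example: a 1024^3 grid on 4096 tasks arranged as a 64 x 64 process grid.
    print(pencil_extents(1024, 64, 64, rank=0))        # ((0, 1024), (0, 16), (0, 16))

With a one-dimensional (slab) decomposition the same 1024^3 grid could use at most 1,024 tasks; the pencil layout raises that limit to 1024^2.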

SCEC

SCEC – Geological Sciences

Bernard Minster, UC San Diego; Kim Olsen and Steve Day, San Diego State University; Thomas Jordan, University of Southern California

In collaboration with the Southern California Earthquake Center (SCEC), this project aims to simulate a large earthquake, called TeraShake, occurring on the southern San Andreas Fault in California. Efforts include porting the AWM code to the IBM Power4 and IA-64 Cluster systems and resolving parallel computing issues related to large simulations. Areas of focus are MPI and MPI-I/O performance improvement, single-processor tuning, and optimization.
Read more about SCEC >

LES

LES – Turbulence Flows

Krishnan Mahesh, University of Minnesota

The Large-Eddy Simulations project has developed numerical methods and turbulence models that are flexible enough to handle real-world engineering geometries without compromising the accuracy needed to reliably simulate the complicated details of turbulence. The enormous power of SDSC's DataStar and IA-64 Cluster resources allows simulations of unprecedented realism. Massively parallel computing platforms such as DataStar have now made it possible to simulate complex flows that would have been inconceivable a decade ago.
Read more about LES >

CHARMM

CHARMM/AMBER – Molecular Dynamics

John Brady, Cornell; Mark Nimlos and Mike Himmel, NREL/DOE; Xianghong Qian, Colorado School of Mines; Linghao Zhong, Penn State; Charles L. Brooks III, TSRI

This project develops enhancements to CHARMM and other molecular dynamics software so that simulations can scale up to millions of atoms and run on hundreds to thousands of processors on today's largest supercomputers. One specific application involves the efficient, economical conversion of plant material to ethanol, a challenge that hinges on speeding up a key molecular reaction being investigated in a SAC effort among researchers at the San Diego Supercomputer Center (SDSC) at UC San Diego, the Department of Energy's National Renewable Energy Laboratory (NREL), Cornell University, The Scripps Research Institute, and the Colorado School of Mines.
Read more about CHARMM/AMBER >

Rosetta

Rosetta – Protein Structure Prediction

David Baker, University of Washington

The goal of current research in David Baker's lab at the University of Washington is to develop an improved model of intramolecular and intermolecular interactions and to apply this improved model to the prediction and design of macromolecular structures and effects. The protein structure prediction and design calculations are carried out using a computer program called Rosetta. A major challenge of computational protein design is the creation of novel proteins with arbitrarily chosen three-dimensional structures. The SAC effort has produced a parallel version of the code suitable for any NSF parallel computer. Current work focuses on single-processor performance improvements.

MIT

MITGcm – MIT General Circulation Model

Carl Wunsch, Patrick Heimbach, and Matthew Mazloff, Massachusetts Institute of Technology

SDSC staff members have been working with the ECCO (Estimating the Circulation and Climate of the Ocean) Consortium to understand the state of the world's oceans, both past and present. The ocean's great capacity to store heat and greenhouse gases gives it a vital role in climate change studies. Climatic trends are only one motivator for ocean study; the oceans also play a significant role in many other issues of human concern. Carrying heat, salt, nutrients, pollutants, and icebergs, ocean currents affect fisheries, shipping, offshore mining, and international policy. To gain a complete picture of the ocean's state, researchers interpolate ocean observations into a highly scalable parallel simulation. The simulation code, MITGcm (MIT General Circulation Model), runs on SDSC's IBM supercomputer DataStar. The SAC effort has begun analysis of both single-processor and parallel performance.
Read more about MITGcm >

NEES

NEES – Network for Earthquake Engineering Simulation

Ahmed Elgamal, UC San Diego

The National Science Foundation created the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) to improve our understanding of earthquakes and their effects. NEES is a shared national network of 15 experimental facilities, collaborative tools, a centralized data repository, and earthquake simulation software, all linked by the ultra-high-speed Internet2 connections of NEESgrid. Together, these resources provide the means for collaboration and discovery in the form of more advanced research based on experimentation and computational simulations of the ways buildings, bridges, utility systems, coastal regions, and geomaterials perform during seismic events. The SAC effort includes analysis of parallel scaling studies of finite element software, collaboration with NEES researchers to write allocation proposals, and helping with parallel implementation of NEES simulations.

Linguistics

Understanding Pronouns – Linguistics

Andy Kehler, UC San Diego

Andy Kehler from UCSD is applying maximum entropy models, naive Bayesian models, neural nets, and unsupervised bootstrapping methods to identify the antecedents of pronouns in unrestricted data using a wide range of contextual features. The serial code has been parallelized at a higher level so that multiple data files can be parsed simultaneously.
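
A minimal sketch of that file-level parallelization, with a placeholder parse step (the function and file names below are illustrative, not the actual research code):

    # Illustrative sketch; not the actual research code.
    from multiprocessing import Pool

    def parse_file(path):
        """Placeholder for the serial parsing / feature-extraction step."""
        with open(path) as f:
            return path, len(f.read().split())          # e.g., tokens per file

    if __name__ == "__main__":
        files = ["corpus_part_%02d.txt" % i for i in range(16)]
        with Pool(processes=8) as pool:
            results = pool.map(parse_file, files)        # one file per worker at a time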

Enzo

ENZO – Astronomy

Michael Norman, UC San Diego

The Enzo code is an advanced, 3-D, time-dependent code for computational astrophysics, capable of simulating the evolution of the Universe from the beginning using first principles. Enzo is developed by Michael Norman and others at UCSD, including Robert Harkness of SDSC. These simulations help scientists test theories against observations and provide new insights into cosmology, galaxy formation, star formation, and high-energy astrophysics. The SAC effort involves incorporation of computational methods to enable both unigrid and AMR simulations.
Read more about ENZO >

TXBR

TxBR – Electron Microscope Tomography

Albert Lawrence, National Center for Microscopy and Imaging Research (NCMIR)

Alignment of the individual images of a tilt series from an electron microscope is a critical step in obtaining high-quality reconstructions. TxBR (Transform-based Backprojection for Volume Reconstruction) develops general mathematical algorithms for producing accurate alignments and for utilizing the alignment data in subsequent reconstruction steps. The SAC effort involves implementing these algorithms in parallel so that many hundreds of processors can be used to speed up the reconstructions for biologists. This image shows a typical x-y section of a Flock House virus (FHV6) reconstruction using the parallel TxBR code.
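
As a rough illustration of why this parallelizes well, a generic (unfiltered) backprojection of one slice is an independent sum over tilt angles, so different slices or angle ranges can be assigned to different processors. This sketch shows generic backprojection only, not TxBR's transform-based algorithm:

    # Generic backprojection sketch; not TxBR's transform-based algorithm.
    import numpy as np

    def backproject_slice(projections, angles, n):
        """Unfiltered backprojection of 1D projections into an n x n slice."""
        ys, xs = np.mgrid[0:n, 0:n] - n / 2.0
        recon = np.zeros((n, n))
        for proj, theta in zip(projections, angles):
            t = xs * np.cos(theta) + ys * np.sin(theta)            # detector coordinate
            idx = np.clip(np.round(t + n / 2.0).astype(int), 0, n - 1)
            recon += proj[idx]                                     # smear projection back
        return recon / len(angles)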

GEON

E3D – GEON

Mian Liu, University of Missouri; Ramon Arrowsmith, Arizona State

One of the national-scale projects in this area is the NSF-funded GEON Cyberinfrastructure for the Geosciences project. As part of GEON's computational environment, the SAC has begun development of SYNSEIS (SYNthetic SEISmogram generation tool), a grid application that helps seismologists calculate synthetic 3D regional seismic waveforms. SYNSEIS relies on E3D, a well-tested finite-difference code developed at Lawrence Livermore National Laboratory. The SAC effort involves implementation of extended source functions, development of faster initial-condition setup routines, and improvements to boundary conditions to provide a more realistic simulation. MPI-I/O has been added for large volume outputs, and the serial output of 2D slices has been improved. Parallel visualization rendering routines have been written, and additional work will be done to optimize the output routines.
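
The pattern behind the large-volume output is collective MPI-I/O: every rank writes its piece of the distributed volume into one shared file at its own offset. A hedged sketch in Python with mpi4py (E3D itself is not Python; the file name and slab layout here are illustrative):

    # Illustrative mpi4py sketch; E3D's actual I/O routines are not shown here.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local = np.full(1024, rank, dtype=np.float32)       # this rank's slab of the volume
    fh = MPI.File.Open(comm, "volume.bin", MPI.MODE_WRONLY | MPI.MODE_CREATE)
    fh.Write_at_all(rank * local.nbytes, local)         # collective write at this rank's offset
    fh.Close()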

NVO

NVO – Mosaicking

Alex Szalay, Johns Hopkins University; Roy Williams, California Institute of Technology

The SAC effort for the NVO Montage astronomical mosaicking project has transformed the Montage routines into a large-scale mosaicking production service. The introduction of parallelization and MPI-I/O, together with the development of workflow management, allows sequential and parallel steps to be run together in a single job and increases multi-CPU efficiency. The Montage software uses HPC resources to mosaic the entire 8 TB of the Two Micron All Sky Survey (2MASS) data. A portion of the mosaic is shown in the image above.
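
A minimal, hypothetical sketch of the workflow idea: serial bookkeeping stages and MPI-parallel stages run back to back inside one batch job. The program names below are placeholders, not the actual Montage executables:

    # Placeholder program names; not the actual Montage executables.
    import subprocess

    def run(step, nprocs=1):
        """Run one workflow step, serially or under MPI."""
        cmd = step if nprocs == 1 else ["mpirun", "-np", str(nprocs)] + step
        subprocess.run(cmd, check=True)

    run(["make_image_table", "raw/", "images.tbl"])                 # serial step
    run(["reproject_images", "images.tbl"], nprocs=128)             # parallel step
    run(["coadd_mosaic", "images.tbl", "mosaic.fits"], nprocs=64)   # parallel step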

