Press Archive

First Phase of TeraGrid Goes into Production at SDSC and other sites

Published 01/26/2004

The first computing systems of the National Science Foundation's TeraGrid project are in production mode, making 4.5 teraflops of distributed computing power available to scientists across the country who are conducting research in a wide range of disciplines, from astrophysics to environmental science.

The TeraGrid is a multi-year effort to build and deploy the world's largest, most comprehensive distributed infrastructure for open scientific research. The TeraGrid also offers storage, visualization, database, and data collection capabilities. Hardware at multiple sites across the country is networked through a 40-gigabit-per-second backplane, the fastest research network on the planet.

The systems currently in production represent the first of two deployments, with the completed TeraGrid scheduled to provide over 20 teraflops of capability. The phase two hardware, which will add more than 11 teraflops of capacity, was installed in December 2003 and is scheduled to be available to the research community this spring.

"We are pleased to see scientific research being conducted on the initial production TeraGrid system," said Peter Freeman, head of NSF's Computer and Information Sciences and Engineering directorate. "Leading-edge supercomputing capabilities are essential to the emerging cyberinfrastructure, and the TeraGrid represents NSF's commitment to providing high-end, innovative resources."

The TeraGrid sites are: Argonne National Laboratory; the Center for Advanced Computing Research (CACR) at the California Institute of Technology; Indiana University; the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign; Oak Ridge National Laboratory; the Pittsburgh Supercomputing Center (PSC); Purdue University; the San Diego Supercomputer Center (SDSC) at the University of California, San Diego; and the Texas Advanced Computing Center at The University of Texas at Austin.

"This is an exciting milestone for scientific computing: the TeraGrid is a new concept and there has never been a distributed computing system of its size and scope," said NCSA interim director Rob Pennington. "In addition to its immediate value in enabling new science, the TeraGrid project is a tool for the development of a national cyberinfrastructure, and the cooperative relationships forged through this effort provide a framework for future innovation and collaboration."

"The TeraGrid partners have worked extremely hard during the two-year construction phase of this project and are delighted that this initial phase of what will be an unprecedented level of computing and data resources is now online for the nation's researchers to use," said Fran Berman, SDSC director and co-principal investigator of the TeraGrid project. "The TeraGrid is one of the foundations of cyberinfrastructure that will provide even more computing resources later this year."

The computing systems that entered production this month comprise more than 800 Intel Itanium-family processors in IBM clusters running Linux. NCSA maintains a 2.7-teraflop cluster, installed in spring 2003, and San Diego has a 1.3-teraflop cluster. The 6-teraflop, 3,000-processor HP AlphaServer SC Terascale Computing System (TCS) at PSC is also a component of the TeraGrid infrastructure.

"The launch of the National Science Foundation's TeraGrid project provides scientists and researchers across the nation with access to unprecedented computational power," said David Turek, vice president of Deep Computing with IBM. "Working with the NSF, IBM is committed to the continued development of breakthrough Grid technologies that benefit our scientific/technical and commercial customers."

Allocations for use of the TeraGrid were awarded by the NSF's Partnerships for Advanced Computational Infrastructure (PACI) last October. Among the first wave of researchers to use the TeraGrid are scientists studying the evolution of the universe and the cleanup of contaminated groundwater, simulating seismic events, and analyzing biomolecular dynamics.

  1. SDSC computational astrophysicist Robert Harkness has adapted an astrophysical simulation program called Enzo to run on the TeraGrid. Harkness is a member of SDSC's Strategic Applications Collaborations team, which works closely with scientific investigators to tune their programs to take maximum advantage of the power of supercomputers; he has collaborated closely with the TeraGrid effort for the past year.

Enzo was created by Michael Norman, a physics professor at the Center for Astrophysics and Space Science (CASS) at the University of California, San Diego (UCSD), with assistance from colleagues at CASS, SDSC, and other institutions. Enzo was recently used to create the world's largest and most complex scientific simulation of the evolution of the universe, tracking the formation of enormous structures of galaxies and gas clouds during the billions of years following the Big Bang.

  2. TeraGyroid is an international TeraGrid project that employs computational steering and uses distributed computing, storage, and visualization facilities at PSC, NCSA, SDSC, and Argonne (along with resources at Daresbury Laboratory and Manchester in the UK) to simulate gyroids, complex material structures with properties between those of a solid and a liquid. Gyroids have important applications in controlled drug release and biosensors. The project won the HPC Challenge award for "Most Innovative Data-Intensive Application" at SC2003 in Phoenix. Using the TCS at PSC via the TeraGrid, the project completed the largest simulation of its kind (using the lattice-Boltzmann model) to date.

  3. Barbara Minsker, a research scientist at NCSA, began conducting groundwater remediation research on the TeraGrid during its friendly-user phase and is continuing her work now that the system has moved into full production. Her research, which is designed to help government agencies find the most effective and least costly methods to clean up polluted sites, employs computationally intensive genetic algorithms.

"With the TeraGrid, we can solve a much bigger problem," Minsker said. "It enables us to look at real-world problems that no one has been able to solve before."

To learn more about the TeraGrid, visit the project's website.

TeraGrid Contact:
Greg Lund
San Diego Supercomputer Center