News Archive

TeraGrid Project Begins Accepting Computing Proposals

Published 06/11/2003

Researchers across the U.S. will be able to submit proposals for use of the first computing systems of the National Science Foundation's TeraGrid project beginning June 15.

Proposals requesting 200,000 or more CPU hours will be reviewed in September through the NSF's Partnerships for Advanced Computational Infrastructure (PACI) peer-review allocation process. The first computers in the TeraGrid distributed computing system, totaling about four teraflops, will be available for use in December.

The TeraGrid project is a multi-year effort to build and deploy the world's fastest, most comprehensive distributed computing infrastructure for open scientific research.

The "Phase I" TeraGrid machines designated to enter production by the start of the new year comprise more than 800 Itanium-family processors running Linux. In addition to four teraflops of computing power, these systems will provide more than a quarter petabyte of storage, along with visualization, database, and data-collection capabilities. Scientists at research institutions nationwide will use the systems to conduct research across a wide range of scientific and engineering disciplines, from environmental science to microbiology to astrophysics.

The new systems are located at four of the five TeraGrid sites: the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign; the San Diego Supercomputer Center (SDSC); the Center for Advanced Computing Research (CACR) at the California Institute of Technology; and Argonne National Laboratory. In addition, the 3,000-processor HP AlphaServer SC Terascale Computing System at the Pittsburgh Supercomputing Center (PSC) will be partially allocated as part of the TeraGrid infrastructure during this allocation process. Researchers will be able to use TeraGrid computers and resources at multiple sites as a single virtual machine through the high-speed TeraGrid network.

The TeraGrid partners are also part of PACI, an NSF project to build an advanced computational infrastructure for science and engineering.

"NSF is pleased to support the TeraGrid as one of the first components to become available to the nation's researchers as part of the emerging cyberinfrastructure that integrates computing, information and communication resources," said Richard Hilderbrandt, program director for PACI. "The scientific community has made clear that cyberinfrastructure is going to provide many opportunities to revolutionize the conduct of science and engineering." Both PACI and TeraGrid are NSF-funded initiatives.

Because the TeraGrid is unique among supercomputing systems, the TeraGrid management team anticipates novel usage scenarios, and the peer-review process will look for them: researchers with applications that can take advantage of this unique collection of resources will be given preference in the allocation process.

The TeraGrid will allow researchers to launch thousands of independent jobs using data from a single data source, or to combine a number of resources (including massive amounts of storage, remote visualization systems, and online data collections) to complete large, tightly coupled simulations. Other uses could include analyzing huge datasets using a Web-based portal to access specific TeraGrid resources, and on-demand computing, for example using large computational resources to respond in real time to natural or man-made disasters.
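The first scenario above, many independent jobs drawing on a single data source, is the classic "job farming" pattern. As a minimal conceptual sketch only (this is not TeraGrid software; the function names, chunking scheme, and use of a local process pool in place of grid-distributed nodes are all illustrative assumptions), the pattern looks like this:

```python
# Hypothetical sketch of the job-farming pattern described in the article:
# one shared dataset is split into independent chunks, each processed by a
# separate worker, and the results are gathered at the end. A local process
# pool stands in for the grid's distributed compute nodes.
from concurrent.futures import ProcessPoolExecutor

# The single shared data source; each job reads its slice independently.
DATASET = list(range(100))

def run_job(chunk):
    """Stand-in for one independent simulation or analysis task."""
    return sum(x * x for x in chunk)

def farm_jobs(dataset, n_jobs=10):
    """Split the dataset into independent chunks and run them in parallel."""
    size = len(dataset) // n_jobs
    chunks = [dataset[i * size:(i + 1) * size] for i in range(n_jobs)]
    with ProcessPoolExecutor() as pool:
        # Each chunk is processed with no communication between jobs,
        # which is what makes the workload trivially distributable.
        return list(pool.map(run_job, chunks))

if __name__ == "__main__":
    results = farm_jobs(DATASET)
    print(sum(results))
```

Because the jobs never communicate, the same structure scales from a single machine to thousands of grid-scheduled tasks; only the dispatch mechanism changes.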

When completed, the TeraGrid will include 20 teraflops of distributed computing power as well as facilities capable of managing and storing nearly 1 petabyte of data, high-resolution visualization environments, and toolkits for grid computing. All the TeraGrid components will be tightly integrated and connected through the new 40-gigabit-per-second TeraGrid dedicated network, the world's fastest research network.

For more information, see

Media Contacts:

Merry Maisel
San Diego Supercomputer Center

Greg Lund
San Diego Supercomputer Center