
SDSC's Triton Resource Supercomputer Attracts 500+ Researchers in First Year

Next Information Session for Campus-wide Resource Set for October 6

Published 09/30/2010

The Triton Resource, deployed in October 2009 by the San Diego Supercomputer Center (SDSC) as an integrated, data-intensive computing system primarily supporting UC San Diego and UC researchers, is marking its one-year anniversary of service with a roster of more than 500 users across campus.

In its first year, the Triton Resource has supported research projects in many disciplines, including drug discovery and design, economics, genomics and metagenomics, microprocessor design, nanoengineering of solar cells, phylogenetics, and weather forecasting. The system has also been used in courses and seminars in astrophysics and parallel computing.

"We're excited by the number and diversity of projects that have benefited from the capabilities of the Triton Resource," said Mike Norman, SDSC's director. "We look forward to building further collaborations across campus and providing the computing services that enable scientific advancement. For both SDSC and a research campus such as UC San Diego, this underscores the fact that data-intensive computing is now an essential part of scientific discovery across numerous domains."

SDSC will host an informational session on accessing and using the Triton Resource from 3:00 to 4:00 PM on Wednesday, October 6, in the SDSC Auditorium. Thanks to its unique large-memory nodes, the system offers some of the most extensive data analysis capability available at any research institution or commercial facility in the country. It is also available to researchers throughout the larger academic community, as well as to private industry and government-funded organizations.

The Triton Resource was designed and deployed by SDSC to bridge the gap between technical workstations or small "clusters" and large supercomputers such as those funded by the National Science Foundation (NSF) and the Department of Energy (DOE). At the low end of the spectrum, researchers often start by purchasing a technical workstation to serve their computing needs. Today's workstations are quite capable, typically incorporating two to four "multi-core" processors and significant memory.

Some researchers go a step further and purchase several workstations or servers, connecting them in a network to form a computing "cluster." Eventually, however, a single workstation or small cluster is no longer sufficient, and the researcher must look elsewhere for additional computing power. While the next step for some would be to purchase and build their own cluster of 20 to 50 "nodes," or individual servers, the costs of technical support, power, air conditioning, and facilities modifications needed to support a larger cluster can make a central computing system more attractive.

At the high end of the spectrum, the NSF and DOE supercomputing centers provide computing systems of massive scale, with processors totaling in the tens of thousands, memory in the hundreds of terabytes, high-performance networking, and huge disk and tape storage systems.

While both the NSF and DOE have programs in which researchers can gain access to their systems, the availability of a local resource on which researchers can experiment, debug, and run computations can be a good steppingstone to the national-scale systems.

"The Triton Resource is perfect for the UCSD researcher who needs a small- to medium-scale computing capability - more capability than provided by a workstation but without tapping into a large and remote national system," said Norman, adding that in most cases, a researcher can be running computing jobs on the system within a day or two of contacting SDSC.

"The Triton Resource is at the level of the very best research computing facilities at leading universities," said J. Andrew McCammon, holder of the Joseph Mayer Chair of Theoretical Chemistry at UC San Diego and a Howard Hughes Medical Institute Investigator in the Department of Pharmacology, UCSD School of Medicine, whose group used the system to enable research concerning biomolecular machines involved in intracellular transport and signal transduction. "It is critical that UCSD maintain such resources and encourage their use by UCSD faculty and students."

Paul Siegel, director of UCSD's Center for Magnetic Recording Research (CMRR), praised the Triton Resource for accelerating research related to the performance of advanced signal processing and coding algorithms for data storage and transmission systems. "Our research was reduced from many months to only a few days," he said.

The Triton Resource offers significant technical and performance capabilities for researchers, and consists of three subsystems: the Triton Compute Cluster (TCC), the Petascale Data Analysis Facility (PDAF), and the Data Oasis.

The TCC is a medium-scale cluster with 256 computing nodes, 2,048 processors, and more than 6,000 gigabytes of main memory, making it well-suited to a wide range of computing tasks. The PDAF is a unique computing resource designated for "data-intensive" computing: tasks involving a relatively small number of processors but requiring the ability to hold large amounts of data in memory. The PDAF is a 28-node cluster, with each node having 32 processors and either 256 or 512 gigabytes of main memory. The TCC and PDAF are interconnected by a low-latency, high-bandwidth (10 gigabits per second) network that facilitates high-performance parallel computing.

The Data Oasis, which has just reached initial operational capability, is a large-scale, high-performance disk storage system that will scale to between one and two petabytes (1 petabyte = 1,000,000 gigabytes) of storage when fully populated. Data Oasis provides parallel movement of data to and from the TCC and PDAF nodes, which is essential to the emerging paradigm of data-intensive computing.

"We drew on our years of experience in serving researchers nationwide to design the key features of the Triton Resource," said Phil Papadopoulos, SDSC's program director for UC computing systems, adding that the system already contains many common software packages and is maintained and operated around the clock by an experienced team.

"All told, the Triton Resource is providing UCSD researchers with ready access to some serious computing power to both enhance and accelerate research right here on campus."

Research inquiries may be directed to Ron Hawkins, Triton Affiliates Program Manager, at (858) 534-5045 or rhawkins@sdsc.edu.

About SDSC
As an Organized Research Unit of UC San Diego, SDSC is a national leader in creating and providing cyberinfrastructure for data-intensive research, and is celebrating its 25th anniversary this year as one of the National Science Foundation's first supercomputer centers. Cyberinfrastructure refers to an accessible and integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC is a founding member of TeraGrid, the nation's largest open-access scientific discovery infrastructure.

Media Contacts:
Jan Zverina, SDSC Communications
(858) 534-5111 or jzverina@sdsc.edu

Warren R. Froelich, SDSC Communications
(858) 822-3622 or froelich@sdsc.edu

Related Links

SDSC: http://www.sdsc.edu/
SDSC Triton Resource: http://tritonresource.sdsc.edu/
UC San Diego: http://www.ucsd.edu/
National Science Foundation: http://www.nsf.gov/