Multimedia Gallery - Photo

A Star for Cyberinfrastructure


SDSC operates powerful high-end computing resources, including DataStar, a 15.6-teraflops IBM Power4+ supercomputer with a total aggregate memory of 7.3 terabytes. DataStar is ranked among the top supercomputers in the world and is used for large-scale, data-intensive scientific research applications.
Photo: Alan Decker
Source: San Diego Supercomputer Center, UC San Diego



Additional info:

Tech Specs
DataStar is an IBM terascale machine built in a configuration especially suited to data-intensive computations. DataStar has 272 8-way P655+ compute nodes and six 32-way P690 compute nodes. Of the 8-way nodes, 176 run at 1.5 GHz with 16 GB of memory each and 96 run at 1.7 GHz with 32 GB each. Four of the 32-way nodes have 128 GB of memory, and two have 256 GB for applications requiring unusually large memory space. DataStar has a nominal theoretical peak performance of 15.6 TFlops. DataStar nodes support both shared-memory (e.g., OpenMP or Pthreads) and message-passing (e.g., MPI) programming models, as well as a mixture of the two.
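
To illustrate the mixed model, here is a minimal hybrid MPI + OpenMP sketch in C. It is an illustration rather than SDSC-supplied code: it launches OpenMP threads inside each MPI task, the usual way to pair message passing between nodes with shared-memory parallelism within a node.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nranks;

        /* Message passing between nodes: one or more MPI tasks per node. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Shared memory within a node: OpenMP threads inside each task. */
        #pragma omp parallel
        {
            printf("MPI task %d of %d: OpenMP thread %d of %d\n",
                   rank, nranks, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

On an 8-way node, such a job would typically run one MPI task with up to eight OpenMP threads, or several tasks with fewer threads each.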

Use of the 8-way nodes is exclusive: only one user's job runs on a node at any time, regardless of how many of that node's CPUs the job uses. Use of the 32-way nodes is shared among users, and CPU and memory usage on these nodes is subject to limits that users specify in their batch scripts.
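
As a sketch of how such limits might be expressed, the hypothetical LoadLeveler batch script below requests part of a shared 32-way node; the task count, CPU and memory limits, wall-clock limit, and executable name are placeholders, and the actual job classes and allowed values were set by SDSC's scheduling policies.

    #!/bin/sh
    # Hypothetical example; limits and names are placeholders.
    #@ job_type         = parallel
    #@ node             = 1
    #@ tasks_per_node   = 4
    #@ resources        = ConsumableCpus(1) ConsumableMemory(8gb)
    #@ wall_clock_limit = 2:00:00
    #@ output           = job.$(jobid).out
    #@ error            = job.$(jobid).err
    #@ queue
    poe ./my_app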

Recommended Use Guidelines
SDSC's DataStar p655 partition (with 272 eight-processor nodes) is primarily intended to run applications with very high levels of parallelism or concurrency, especially those with demanding parallel I/O requirements. The queuing policies favor jobs with higher processor counts.
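
As an illustration of the parallel I/O pattern such jobs rely on (not taken from SDSC documentation), the C sketch below uses MPI-IO so that every task writes its block of a distributed array to one shared file; the file name and block size are placeholders.

    #include <mpi.h>
    #include <stdlib.h>

    #define BLOCK 1048576   /* doubles per task; placeholder size */

    int main(int argc, char **argv)
    {
        int rank, i;
        double *buf;
        MPI_File fh;
        MPI_Offset offset;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each task prepares its own block of the distributed array. */
        buf = malloc(BLOCK * sizeof(double));
        for (i = 0; i < BLOCK; i++)
            buf[i] = rank + i * 1.0e-6;

        /* All tasks write collectively to a single shared file. */
        offset = (MPI_Offset)rank * BLOCK * sizeof(double);
        MPI_File_open(MPI_COMM_WORLD, "output.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY,
                      MPI_INFO_NULL, &fh);
        MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_DOUBLE,
                              MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        free(buf);
        MPI_Finalize();
        return 0;
    }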

SDSC's DataStar p690 partition (with six 32-processor nodes) is primarily intended to run applications that require large amounts of shared memory, such as pre- or post-processing of data for large-scale calculations or repeated database operations. Four of the nodes have 128 GB of shared memory, and the other two have 256 GB.
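
A large-memory job of the kind described here might look like the pure OpenMP sketch below, which is illustrative only; the array size is a placeholder standing in for a dataset that fits only in a large-memory node, and the loop body stands in for a real post-processing step.

    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Placeholder size (~2 GB); on a 256 GB node this could be far larger. */
        const long n = 256L * 1024 * 1024;
        double *data = malloc(n * sizeof(double));
        double sum = 0.0;
        long i;

        if (data == NULL) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        /* One large array in shared memory, processed by all threads. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < n; i++) {
            data[i] = (double)i;   /* stand-in for a post-processing step */
            sum += data[i];
        }

        printf("checksum = %g\n", sum);
        free(data);
        return 0;
    }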


Special Restrictions: None