Retired Resources

The following resources were provided by SDSC in the past and have since been retired and decommissioned.

Retired Compute Resources

Trestles

Trestles was exclusively an XSEDE resource. It had a theoretical peak speed of 100 TF from 324 nodes, each with four 8-core AMD Magny-Cours processors and 64 GB of memory. Trestles also incorporated 120 GB of local flash storage on every node. It targeted XSEDE users requiring 1,024 cores or fewer and long-running jobs (up to multiple weeks).

Retired: May 1, 2015

Sierra and Lima

These were the systems SDSC contributed to FutureGrid. Sierra comprised 84 compute nodes and two storage nodes. Each compute node had two sockets, each with a 4-core 2.5 GHz Xeon processor, for a total of 8 cores per node and 672 cores for the system. Each node had 32 GB of DDR2 memory and was connected via a 20-Gbps DDR InfiniBand interconnect. Sierra had a theoretical peak performance of 7 TFlop/s. It also had two Sun x4540 storage servers at 48 TB each, for a total of 96 TB of raw storage.

Retired: October 1, 2014

Triton Resource

SDSC's precursor to the UCSD-RCI TSCC, the Triton Resource had 256 general compute nodes, 28 high-memory nodes, and a parallel file system. The general compute cluster had a peak theoretical throughput of approximately 20 teraflops and contained six terabytes of memory, while the high-memory cluster had a total peak throughput of nine teraflops and a total memory capacity of nine terabytes.

Retired: July 1, 2013

OnDemand Cluster

The OnDemand cluster was a Rocks cluster with Intel dual-socket, dual-core compute nodes. The 2.0 GHz, 4-way nodes had 8 GB of memory. OnDemand had a nominal theoretical peak performance of 2.4 TFlops.

Retired: April 2010

Appro Dash

SDSC's prototype of the big-data machine Gordon, with 68 compute nodes, 512 total cores, and 3 TB of memory. Peak performance: 5.2 teraflops.

Retired: February 10, 2012

IBM BlueGene/L

Formerly SDSC's largest system, with 3,072 compute nodes and 384 I/O nodes, for a total of 6,144 processors and 1,572.9 GB of memory. Peak performance: 17.2 teraflops.

Retired: June 30, 2009
IA-64 Linux Cluster

The IA-64 Linux Cluster consisted of 262 IBM cluster nodes, each with dual 1.5 GHz Intel® Itanium® 2 processors, for a peak performance of 3.1 teraflops.

Retired: June 30, 2009

IBM DataStar

An IBM terascale machine built in a configuration especially suitable for data-intensive computations. It included 96 8-way P655+ nodes with 16 or 32 GB of memory and 32-way P690 nodes with 128 or 256 GB of memory. Peak performance: 15.6 teraflops.

Retired: October 1, 2008

Retired Data Resources

GPFS-WAN

A centralized file system for long- or short-term storage of high-volume multi-site runs and large XSEDE-based data collections, GPFS-WAN was a 613-TB storage system mounted on several XSEDE compute resources.

Retired: June 2012

HPSS

Part of the SDSC Tape Storage platform, HPSS was a centralized, long-term data storage system for national users. With a 25 PB capacity, its content grew by over 100 TB a month while in production.

Retired: June 27, 2012

SAM-QFS

A high-performance archival storage system, SAM-QFS allowed users to access data directly through a disk-cache file system, then automatically migrated the data to tape. For national users, it was part of the SDSC Tape Storage platform.

Retired: July 1, 2012

Retired Software Resources

Tecplot

Tecplot 360 was CFD and numerical-simulation visualization software. It allowed users to analyze and explore complex datasets; arrange multiple XY, 2D, and 3D plots; create animations; and communicate results with high-quality output.

Retired: May 2010