HPC Systems

For almost 30 years, SDSC has led the way in deploying and supporting cutting-edge high-performance computing systems for a wide range of users, from the campus to the national research community. From the earliest Cray systems to today’s data-intensive systems, SDSC has focused on providing innovative architectures designed to keep pace with the changing needs of science and engineering.

Whether you’re a researcher looking to expand computing beyond your lab or a business seeking a competitive advantage, SDSC’s HPC experts will guide you in selecting the right resource, reducing time to solution and taking your science to the next level.

Take a look at what SDSC has to offer and let us help you discover your computing potential.



Key Features



Comet

User Guide

~2 Pflop/s peak; 47,776 compute cores; 247 TB total memory; 634 TB total flash memory

Compute Nodes (1944 total)
Intel Xeon E5-2680v3, 2.5 GHz, dual socket, 12 cores/socket; 128 GB DDR4 DRAM; 320 GB SSD local scratch; 120 GB/s memory bandwidth

GPU Nodes (36 total)
2 NVIDIA K80 GPUs per node; dual socket, 12 cores/socket; 128 GB DDR4 DRAM; 120 GB/s memory bandwidth; 320 GB flash memory

Large-memory Nodes (4 total)
1.5 TB total memory; 4 sockets, 16 cores/socket; 2.2 GHz

Hybrid Fat-Tree topology; 56 Gb/s (bidirectional) link bandwidth; 1.03-1.97 µs MPI latency

Lustre-based Parallel File System
Access to Data Oasis



Gordon

User Guide

341 Tflop/s peak
560,000 IOPS

Compute Nodes
Intel Xeon E5 (Sandy Bridge), 2.6 GHz, dual socket, 16 cores/node; 64 GB DDR3 1333 MHz RAM (64 TB total); 80 GB Intel SSD per node

Flash-based I/O Nodes
64 nodes; Intel Westmere, dual socket, 12 cores/node; 48 GB DDR3 1333 MHz memory; 4.8 TB Intel 710 SSD per node (300 TB total)

Dual-rail QDR InfiniBand; 3D torus of switches

Lustre-based Parallel File System
Access to Data Oasis

Triton Shared Computing Cluster (TSCC)


User Guide

80+ Tflop/s (and growing!)

General Computing Nodes
Dual-socket, 12-core, 2.5 GHz Intel Xeon E5-2680v3 (coming) and dual-socket, 8-core, 2.6 GHz Intel Xeon E5-2670

GPU Nodes
Host Processors: Dual-socket, 6-core, 2.6 GHz Intel Xeon E5-2630v2
GPUs: 4 NVIDIA GeForce GTX 980


10 GbE (QDR InfiniBand optional)

Lustre-based Parallel File System
Access to Data Oasis


Comet: HPC for the 99 Percent

Comet is SDSC’s newest HPC resource, a petascale supercomputer designed to transform advanced scientific computing by expanding access and capacity among traditional as well as non-traditional research domains. The result of a National Science Foundation award currently valued at $21.6 million including hardware and operating funds, Comet is capable of an overall peak performance of two petaflops, or two quadrillion operations per second.

Comet joins SDSC’s Gordon supercomputer as another key resource within the NSF’s XSEDE (Extreme Science and Engineering Discovery Environment) program, which comprises the most advanced collection of integrated digital resources and services in the world.

“Comet is really all about providing high-performance computing to a much larger research community – what we call ‘HPC for the 99 percent’ – and serving as a gateway to discovery,” said SDSC Director Michael Norman, the project’s principal investigator. “Comet has been specifically configured to meet the needs of underserved researchers in domains that have not traditionally relied on supercomputers to help solve problems, as opposed to the way such systems have historically been used.”

Comet is configured to address emerging research requirements often referred to as the ‘long tail’ of science – the idea that a large number of modest-sized, computationally based research projects still represents, in aggregate, a tremendous amount of research and resulting scientific impact and advance. Comet can support modest-scale users across the entire spectrum of NSF communities while also welcoming research domains that are not typically users of more traditional HPC systems, such as genomics, the social sciences, and economics.

Comet is a Dell-integrated cluster using Intel’s Xeon® Processor E5-2600 v3 family, with two processors per node and 12 cores per processor running at 2.5 GHz. Each compute node has 128 GB (gigabytes) of traditional DRAM and 320 GB of local flash memory. Since Comet is designed to optimize capacity for modest-scale jobs, each rack of 72 nodes (1,728 cores) has a full bisection InfiniBand FDR interconnect from Mellanox, with a 4:1 over-subscription across the racks. There are 27 racks of these compute nodes, totaling 1,944 nodes or 46,656 cores.
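
For a rough sense of where the headline numbers come from, the short Python sketch below recomputes Comet’s CPU core count and peak performance from the figures in this paragraph. The assumption of 16 double-precision floating-point operations per cycle per core reflects the Haswell-generation Xeons (AVX2 with FMA); the official ~2 Pflop/s figure also counts the GPU and large-memory nodes.

# Back-of-the-envelope check of Comet's scale from the figures above.
# Assumes 16 double-precision FLOPs/cycle/core (Haswell AVX2 + FMA).

racks = 27
nodes_per_rack = 72
cores_per_node = 2 * 12          # dual socket, 12 cores/socket
clock_hz = 2.5e9                 # 2.5 GHz
flops_per_cycle = 16             # assumption: AVX2 FMA, double precision

nodes = racks * nodes_per_rack                 # 1,944 compute nodes
cores = nodes * cores_per_node                 # 46,656 cores
peak_pflops = cores * clock_hz * flops_per_cycle / 1e15

print(f"{nodes} nodes, {cores} cores, ~{peak_pflops:.2f} Pflop/s peak (CPU only)")
# -> 1944 nodes, 46656 cores, ~1.87 Pflop/s peak (CPU only)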

In addition, Comet has four large-memory nodes, each with four 16-core sockets and 1.5 TB of memory, as well as 36 GPU nodes, each with four NVIDIA GPUs (graphics processing units). The GPUs and large-memory nodes are intended for specific applications such as visualization, molecular dynamics simulations, or de novo genome assembly.

Comet users will have access to 7.6 PB of storage as SDSC’s Data Oasis parallel file storage system is substantially upgraded. The system is being configured with 100 Gbps (gigabits per second) connectivity to Internet2 and ESnet, allowing users to rapidly move data to SDSC for analysis and data sharing, and to return data to their institutions for local use. By the summer of 2015, Comet will be the first XSEDE production system to support high-performance virtualization at the multi-node cluster level. Comet’s use of Single Root I/O Virtualization (SR-IOV) means researchers can use their own software environment, as they do with cloud computing, while achieving the high performance they expect from a supercomputer.

Comet replaces Trestles, which entered production in early 2011 not only to provide researchers with significant computing capabilities but also to allow them to be more computationally productive.


Gordon: Meeting the Demands of Data-Intensive Computing

Gordon is the first HPC system built specifically for the challenges of data-intensive computing. Debuting in early 2012 as one of the 50 fastest supercomputers in the world, Gordon is the first HPC system to use massive amounts of flash-based memory. Gordon contains 300 TB (terabytes) of flash-based storage along with large-memory “supernodes” based on ScaleMP’s vSMP Foundation software. The standard supernode has approximately 1 TB of DRAM, but larger memory configurations can be deployed as needed.

As a resource well balanced between speed and large-memory capability, Gordon is an ideal platform for tackling data-intensive problems and is designed to help advance science in domains such as genomics, graph analysis, computational chemistry, structural mechanics, image processing, geophysics, and data mining. The system’s supernodes are ideal for users with serial or threaded applications that require significantly more memory than is available on a single node of most other HPC systems. Gordon’s flash-based I/O nodes offer significant performance improvements for applications that exhibit random-access data patterns or require fast access to significant amounts of scratch space.

Gordon is connected to SDSC’s Data Oasis parallel file storage system, providing researchers with a complete array of compute and storage resources. Allocations on Gordon are available through the National Science Foundation’s XSEDE program. 

TSCC Computing “Condo”

Affordable Computing for Campus & Corporate

SDSC launched the Triton Shared Computing Cluster (TSCC) in mid-2013 after recognizing that UC San Diego investigators could benefit from an HPC system dedicated to their needs, offering near-immediate access and reasonable wait times rather than the competitive proposals and often longer waits entailed by national systems. Following an extensive study of successful research computing programs across the country, SDSC selected "condo computing" as the main business model for TSCC. Condo computing is a shared-ownership model in which researchers use equipment purchase funds from grants or other sources to purchase and contribute compute "nodes" (servers) to the system. The result is a researcher-owned computing resource of medium to large proportions.

By mid-2016 TSCC had 365 users across 27 labs/groups, with a total of 230 nodes (approximately 4,000 processors) and 100+ teraflops of computing power. Participating researchers/labs span a wide diversity of domains, including engineering, computational chemistry, genomics, oceanography, high-energy physics, and others.

TSCC Participant Info

Condo plan summary

The condo plan gives participants access to additional computing capability through the pooling of computing resources, offering significantly greater computational power and higher core counts than participants would have if limited to their own hardware or an individual laboratory cluster. Researchers who contribute to the TSCC cluster have priority access to the nodes they contribute (via the home queue). In addition, they can run jobs on any available nodes, including hotel and other condo nodes (via the condo queue). This effectively increases their computing capability and flexibility, which can be extremely valuable during times of peak research need.

Condo participants may purchase general computing nodes, Graphics Processing Unit (GPU) nodes, or both. See the current price structure in the TSCC Node Expense Table (costs and configurations are subject to change annually). The operations fee is subsidized for UCSD participants by the UCSD Administration and pays for labor, software licensing, administration hardware, and colocation fees.

Condo participants may take possession of their nodes and remove them from the cluster at any time; however, once equipment is removed it cannot be reinstalled in the TSCC.

Condo nodes come with a three-year warranty. After expiration, participants may continue to run their operational nodes in the TSCC for an additional year (equipment failing in the fourth year may be idled without repair). At the end of four years, participants must take possession of or surplus their equipment.

Usage model

Charges for computing time are calculated on a Service Unit (SU) basis. One SU = 1 core-hour of computing time on all queues for which charges are calculated. The cost in SUs is the same regardless of which actual nodes a job runs on.

Most of the system administration, user support, software licensing, and other operating costs are supplemented by the UCSD Administration. The system is housed at the San Diego Supercomputer Center on the UCSD campus.

Cost per SU based on Organization and Participation Status

Organization / Node Type     Hotel          Condo
UCSD Users                   $0.025/SU      $0.015/SU (equivalent)
Other UC Campuses            $0.03/SU       Please inquire
Public                       $0.06/SU       Please inquire
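
As a quick illustration of how these rates translate into dollars, the Python sketch below prices a hypothetical 16-core, 24-hour job at the UCSD hotel rate; the job size and duration are invented for the example.

# Hypothetical example: price a 16-core, 24-hour job at the UCSD hotel rate.
# 1 SU = 1 core-hour, and the SU cost is the same whichever nodes the job lands on.

cores = 16           # one full hotel node in this example
hours = 24
rate_per_su = 0.025  # UCSD hotel rate, $/SU (see the table above)

sus = cores * hours            # 384 SUs
cost = sus * rate_per_su       # $9.60
print(f"{sus} SUs -> ${cost:.2f} at the UCSD hotel rate")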


Each year, condo cluster participants will receive an allocation of cluster computing time proportional to the capacity of their purchased nodes. For example, a participant who purchases eight general computing nodes will receive approximately 1.6 million SUs. This amount is based on 24x365 usage of 8 x 24 = 192 cores, allowing for 3% maintenance downtime. These core-hours can be used at any time during the year on any of the computing nodes. Condo participants’ unused core-hours expire at the end of each year. Note: heavy computing workloads may exhaust the annual allocation in less than one year.
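
The allocation arithmetic in that example can be reproduced directly; the Python sketch below assumes 24-core general computing nodes and the stated 3% maintenance allowance.

# Reproduce the allocation example: 8 general computing nodes, 24 cores each,
# credited for year-round use minus a 3% maintenance allowance.

nodes = 8
cores_per_node = 24
hours_per_year = 24 * 365      # 8,760 hours
uptime = 0.97                  # 3% maintenance downtime

annual_sus = nodes * cores_per_node * hours_per_year * uptime
print(f"~{annual_sus:,.0f} SUs per year")   # ~1,631,462 SUs, roughly 1.6 million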

Time-sharing on condo nodes

Condo jobs that require no more cores than the participant’s purchased nodes provide are guaranteed to start within eight hours of submission and can run for an unlimited amount of time. Jobs that utilize the hotel nodes have a 168-hour time limit, while jobs that extend to other participants’ condo nodes have an eight-hour time limit. See the Jobs section for submission details and examples.

Condo participants may submit "glean" jobs (via the glean queue) to run on idle computing nodes. These jobs are not charged against the submitter's SU balance, but they may be terminated at any time by the scheduler if the nodes where they are running are needed to run higher-priority jobs.

Because the capabilities and purpose of the GPU nodes differ significantly from the general computing nodes, SUs received for contributed GPU nodes and general computing nodes cannot be interchanged.

Terms of Participation

Participants in the Condo Program are requested to sign a Memorandum of Understanding (MOU) containing the basic terms of their participation in the program.


Pay-as-you-go jobs can only run on the hotel nodes. Currently, there are 48 general computing nodes (768 cores); additional nodes may be added based on demand. Hotel nodes are configured with 64 GB of memory and an InfiniBand interface. The general computing nodes are allocated per core, allowing up to 16 jobs to run on each node simultaneously.

Acceptable Use Policy

Download the Acceptable Use Policy. When using the Triton Shared Computing Cluster and associated resources, you agree to comply with these conditions.

Condo/Hotel Cost Details

Condo computing

The TSCC condo cost structure is based on condo participants purchasing their nodes, paying a one-time fee for their pro rata share of the common networking and storage infrastructure, and paying a modest annual operating expense that is subsidized by the UCSD Administration.

Hotel computing

Pay-as-you-go hotel users purchase cycles at a rate that reflects the total cost of ownership, while still leveraging the economies of scale afforded by TSCC. For UCSD affiliates, the cost for the general computing hotel nodes is $0.025 per SU. The minimum hotel purchase is $250 (10,000 SUs).

Additional UCSD/Non-UCSD Cost Details

Cost for UCSD condo users

For condo participants, the primary cost is purchasing the computing nodes, plus a one-time fee of $939 per node to cover the costs of shared infrastructure such as interconnects, home file systems, and the parallel file system. In addition, there is a modest IDC-bearing operations fee of $495 per node per year, which allows for ongoing operations, user services support, and expansion of the cluster.
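
Leaving the hardware price aside (it varies by configuration; see the TSCC Node Expense Table), the one-time and recurring fees above add up as in the Python sketch below, assuming a node stays in the cluster for the full four years allowed.

# Per-node fee total over a node's maximum stay in the cluster
# (hardware purchase price excluded; it varies by configuration).

infrastructure_fee = 939   # one-time fee per node for shared infrastructure
operations_fee = 495       # operations fee per node, per year
years = 4                  # three-year warranty plus one additional year

fees_per_node = infrastructure_fee + operations_fee * years
print(f"${fees_per_node:,} in fees per node over {years} years")   # $2,919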

Cost for non-UCSD hotel users

The TSCC is available to researchers from other UC campuses, other educational institutions, and industry. Costs are competitive but higher than those cited above for UCSD researchers because the UCSD Administration subsidizes the program. Please get in touch with the TSCC Participant Contact for information on the rate structure for your organization.

TSCC User Documentation

The TSCC User Guide has complete information on accessing and running jobs on TSCC.