Join the Triton Shared Computing Cluster!

The Triton Shared Computing Cluster (TSCC) allows researchers to purchase compute nodes (servers) and add them to a community cluster, affording access to a much larger resource.

TSCC provides centralized colocation and systems administration for purchased cluster nodes via its condo cluster, and a hotel service for those with temporary or bursty high-performance computing (HPC) needs. The platform provides three kinds of clustered compute nodes: General Computing Nodes, Graphics Processing Unit (GPU) Nodes, and Large Memory Nodes.

TSCC free trial

To request a free trial, send an email that includes your:

  • Name
  • Contact Information
  • Department
  • UCSD affiliation (grad student, post-doc, faculty, etc.)

Note: New users will need a UCSD Active Directory (AD) username, or will need to provide an SSH public key (emailed as an attached file) for secure login.

Trial accounts include 250 core-hours and are valid for 90 days.
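For the public-key option above, a key pair can be generated with OpenSSH. A minimal sketch; the key filename and email comment are illustrative, not TSCC requirements:

```shell
# Generate an ed25519 key pair; the filename is illustrative.
# -N "" skips the passphrase prompt for this sketch; in practice,
# choosing a passphrase is recommended.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -f ~/.ssh/tscc_key -C "your_email@ucsd.edu"

# Email the *public* half (tscc_key.pub) as an attachment;
# the private key (tscc_key) never leaves your machine.
cat ~/.ssh/tscc_key.pub
```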

Program in a nutshell

  • Researchers use equipment purchase funds (e.g., from grants or startup packages) to buy compute servers (“nodes”) that will be operated as part of the cluster. An additional one-time “infrastructure” fee covers the purchase of shared components (racks, network switches, cables, etc.).
  • Participating researchers may then have dedicated use of their purchased nodes, or they may run larger computing jobs by sharing idle nodes owned by other researchers.1 The main benefit is access to a much larger cluster than would typically be available to a single lab.
  • Researchers also pay an annual operations fee for each of their purchased nodes. This fee covers labor to maintain the system, utilities, software licenses, etc. Currently, the UCSD Administration substantially subsidizes this fee for UCSD researchers. Other campus administrations may wish to consider subsidizing the fee for their researchers.
  • Researchers may run jobs in a glean queue2 which does not count against their annual allocation of computing time.
  • Participation in the program runs nominally for three years, which is the duration of the equipment warranty. Researchers may leave their nodes in the system for a fourth year, though equipment failing during this period may not be repaired.
  • Researchers may remove and take possession of their nodes at any time; once equipment is removed from the cluster it may not be returned.3
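The glean queue described above rewards jobs that tolerate preemption. The scheduler and its directives are not specified here; assuming a PBS/TORQUE-style batch system with a queue named glean, a submission script might look like the following sketch (script contents, resource limits, and the application name are all illustrative):

```shell
#!/bin/bash
#PBS -q glean              # glean queue: does not charge the allocation,
                           # but jobs may be preempted at any time
#PBS -l nodes=1:ppn=28     # one full 28-core node (assumed layout)
#PBS -l walltime=01:00:00  # keep glean jobs short

cd "$PBS_O_WORKDIR"
# A glean job should checkpoint so it can resume after preemption.
./my_simulation --checkpoint-every 10m
```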

Program Features and Benefits

  • Access to a much larger cluster than most labs could typically afford
  • Professionally administered – no need to have postdocs or grad students maintain your computing system
  • Housed in a secure, climate-controlled, energy-efficient data center
  • High-performance hardware with latest-generation Intel server processors and an optional high-bandwidth, low-latency InfiniBand network for maximum parallel computing performance
  • Many software packages come pre-installed on TSCC, or you can install your own
  • Access to a community of researchers and users that can share tips and information

Current Status

  • We presently have 33 participating groups contributing over 300 nodes with hundreds of users.
  • Participating researchers/labs are working in the fields of engineering, chemistry, genomics, oceanography, physics, and many others.

Participation/Purchase Opportunity

  • Currently, TSCC has a purchasing agreement with Dell EMC offering attractive pricing on state-of-the-art General Computing nodes. GPU nodes are procured from a small cohort of vendors on a case-by-case basis.

Pricing and Fees

Hardware purchase prices and fees

  • Single compute node (28 cores, 2.5 GHz, 192 GB main memory, 10GbE network): $8,834
  • EDR InfiniBand networking (add-on): $900
  • Infrastructure fee (covers racks, switches, cables, etc.): $939
  • IB infrastructure fee (covers additional IB switches): $200

Annual operations fee (per purchased node)4

  • UCSD participants: $495
  • Other UC campus participants: $1,805

System Specifications

System configuration

For users purchasing nodes in multiples of 4:

  • Dell C6400 2RU rack mount chassis with four C6420 compute nodes per chassis; 1600 Watt redundant power supplies

For users purchasing 1-3 nodes:

  • Dell R640 1RU rack mount server; 750 Watt redundant power supplies

Compute node

  • Dual Intel Xeon Skylake 6132 processors (14 cores and 133 watts per processor); 192 GB of DDR4-2666 ECC server memory
  • Single-port 10GbE network interface, or upgrade to EDR InfiniBand for low-latency parallel computing


Warranty

  • Three years, next-business-day on-site service

  1. Researchers receive an annual allocation of computing time equivalent to running their purchased nodes continuously. Running larger or more jobs would draw down the allocation proportionately faster.
  2. Glean queue jobs are subject to immediate preemption, so they should be short or capable of being restarted.
  3. Obtaining any switches, cables, and other ancillary components necessary to make removed nodes operational in a new location is the responsibility of the research group.
  4. Currently, the UCSD Administration subsidizes a substantial portion of the annual operating fee for UCSD researchers. Researchers participating from other UC campuses may wish to see if their administration would be willing to provide a subsidy as well.
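The proportional draw-down in footnote 1 can be made concrete with a back-of-the-envelope calculation; the group size here is assumed purely for illustration:

```shell
# Hypothetical group owning two of the 28-core nodes described above.
nodes=2
cores_per_node=28
hours_per_year=$((24 * 365))          # 8760

# Annual allocation: equivalent to running your own cores
# continuously for a year.
allocation=$((nodes * cores_per_node * hours_per_year))
echo "${allocation} core-hours/year"  # 490560

# Running jobs on 112 cores (twice the purchased capacity) would
# exhaust this allocation in about six months.
```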

Interested? Contact us at the San Diego Supercomputer Center.