Allocations

Gordon is available through the XSEDE online allocation process. For more information, please visit the XSEDE website at: https://www.xsede.org/

The following describes the allocable Gordon resources and provides high-level guidance on what constitutes an appropriate Gordon allocation.

SDSC Appro Linux Intel Xeon E5 Cluster (Gordon Compute Cluster)

SDSC Appro Linux Flash-based I/O Nodes (Gordon ION)

 

SDSC Appro Linux Intel Xeon E5 Cluster (Gordon Compute Cluster)

The Gordon Compute Cluster is an Appro-integrated, data-intensive computing resource composed of 1,024 dual-socket compute nodes and 64 I/O nodes connected by a dual-rail, QDR InfiniBand, 3D torus network. Each compute node has two 8-core, 2.6 GHz Intel Xeon E5 (EM64T) processors and 64 GB of DRAM.

The Intel processors use Intel’s Advanced Vector Extensions (AVX) to achieve 8 floating-point operations per clock cycle, providing up to twice the performance on numerically intensive applications compared with current processors of the same core count and frequency. The full 1,024-node cluster has been measured at 285 Tflop/s.

Gordon has three distinct features that make it ideal for data-intensive computing: 1) 300 TB of high-performance Intel solid-state drives served via 64 I/O nodes, each of which is capable of over 560K IOPS, or 35M IOPS for the full system; 2) large-memory supernodes that provide up to 2 TB of cache-coherent memory via a high-performance software aggregation layer; and 3) access to a 4 PB Lustre-based file system capable of sustained rates of 100 GB/s.

Recommended Use

The Gordon Compute Cluster is designed for data-intensive applications spanning domains such as genomics, graph problems, geophysics, and data mining. The large-memory supernodes are ideal for users with serial or threaded applications that require significantly more memory than is available on a single node of most systems, while the flash-based I/O nodes may offer significant performance improvement for applications that exhibit random access data patterns or require fast access to significant amounts of scratch space.

Allocations of the Gordon compute cluster provide access to the flash memory roughly in proportion to the number of compute nodes requested by a given user job. However, users may also apply separately for long-term dedicated use of the I/O nodes. Please see the Gordon ION resource entry for a more thorough description of this unique resource and how to apply for dedicated access.

Users interested in the large-memory supernodes should request SUs in proportion to the amount of memory required. For example, 1 TB corresponds to the memory on 16 compute nodes (64 GB per node), and the request should account for the resulting number of SUs (exact processor counts will be published prior to the proposal deadline for computing SUs).
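As an illustration of this sizing arithmetic, the sketch below estimates node counts and SUs for a memory-driven request. It assumes 64 GB and 16 cores per compute node, as described above, and a nominal charge of one SU per core-hour; that charging rate is an assumption for illustration only, so confirm the published rates before the proposal deadline.

    # Rough sizing for a large-memory request on Gordon.
    # Assumptions: 64 GB DRAM and 16 cores per compute node (per the
    # system description above); 1 SU per core-hour is a placeholder rate,
    # not an official charging factor.
    import math

    GB_PER_NODE = 64
    CORES_PER_NODE = 16
    SU_PER_CORE_HOUR = 1.0  # assumed; check published rates

    def estimate_request(memory_gb, wallclock_hours, runs=1):
        """Return (nodes, SUs) for a job needing memory_gb of aggregate memory."""
        nodes = math.ceil(memory_gb / GB_PER_NODE)
        sus = nodes * CORES_PER_NODE * wallclock_hours * SU_PER_CORE_HOUR * runs
        return nodes, sus

    # Example: a 1 TB threaded job running 24 hours, repeated 10 times.
    nodes, sus = estimate_request(memory_gb=1024, wallclock_hours=24, runs=10)
    print(f"{nodes} nodes, ~{sus:,.0f} SUs")  # -> 16 nodes, ~61,440 SUs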

Allocation requests must describe in detail how you will make use of the distinctive features of Gordon. If you are new to high performance computing and do not yet have benchmarking data to support the use of Gordon, we encourage you to apply for a startup allocation.

If you require special help with using the Gordon Compute Cluster, we encourage you to also request Extended Advanced User Support.

Status

Accepting allocation requests on an ongoing basis.

Startup Allocation Limit

50,000 SUs

SDSC Appro Linux Flash-based I/O Nodes (Gordon ION)

There are 64 I/O nodes in the Gordon Compute Cluster, some of which are available as dedicated resources outside the batch scheduler. You must request allocations for these separately from your Gordon Compute Cluster allocation. This I/O node resource is referred to as Gordon ION.

Gordon ION is a 64-node, flash-based I/O resource that is an integral part of the Gordon Compute Cluster, as well as a distinct and individually allocated resource. Each node is composed of two hex-core, 2.66 GHz Westmere processors, 48 GB of DRAM, and 4 TB of high-performance Intel solid-state disk capable of delivering over 560K IOPS. Each I/O node can access Data Oasis, SDSC’s 4 PB Lustre-based file system, via two 10 GbE connections, resulting in a sustained aggregate bandwidth into Gordon ION of 100 GB/s.

Recommended Use

Gordon ION is particularly well suited to database and data-mining applications where high levels of I/O concurrency exist, or where the I/O is dominated by random access data patterns. The resource should be of interest to those who, for example, want to provide a high-performance query engine for scientific or other community databases. Consequently, Gordon ION allocations are for long-term, dedicated use by the awardee.

You may request up to two I/O nodes, though most requests are expected to be for one unless demonstrated scaling justifies two. You should also request dedicated compute nodes if they are part of the application architecture. One compute node will be provided for each I/O node unless there is justification for more; in any case, no more than 16 compute nodes will be provided per I/O node, or 32 for allocations of two I/O nodes (see the sketch below for a quick check of these limits).
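As a quick sanity check of the limits above, a hypothetical helper (not an official SDSC or XSEDE tool) could flag request configurations that exceed them:

    # Hypothetical check of a Gordon ION request against the stated limits:
    # at most 2 I/O nodes, and no more than 16 compute nodes per I/O node
    # (32 total for a 2-I/O-node request).
    def check_ion_request(io_nodes, compute_nodes):
        issues = []
        if not 1 <= io_nodes <= 2:
            issues.append("Request 1 or 2 I/O nodes (2 requires scaling evidence).")
        if compute_nodes > 16 * io_nodes:
            issues.append(f"At most {16 * io_nodes} compute nodes for {io_nodes} I/O node(s).")
        elif compute_nodes > io_nodes:
            issues.append("More than 1 compute node per I/O node needs justification.")
        return issues

    # Example: 2 I/O nodes with 40 compute nodes exceeds the 32-node cap.
    print(check_ion_request(io_nodes=2, compute_nodes=40))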

Successful allocation requests must describe how you will make use of the I/O nodes. This should include relevant benchmarks on spinning disks, with projections of how the applications will scale when using flash drives. Additionally, the request should include a strong justification for why these should be provided as a dedicated resource—for example, providing long-term access to data for a large community.

If you are new to high performance computing and do not yet have benchmarking data to support the use of Gordon ION, we encourage you to apply for a startup allocation.

If you require special help with using the Gordon ION resource, we encourage you to contact Advanced User Support.

You may also apply for time on the Gordon Compute Cluster.

Status

Available via startup allocations.


Startup Allocation Limit

2 I/O nodes.
