SDSC: San Diego Supercomputer Center
Established: November 14, 1985
Web site: www.sdsc.edu
Leadership: Michael L. Norman, interim director
Trestles - The newest SDSC addition to XSEDE (Extreme Science and Engineering Discovery Environment), Trestles is a high-performance system that offers modest-scale users rapid job turnaround aimed at increasing researcher productivity. With a theoretical peak speed of 100 teraflops, Trestles has 324 nodes, each with four 8-core AMD Magny-Cours processors and 64GB of memory. Trestles also incorporates 120GB of local flash storage on every node. Trestles is aimed at users requiring 1,024 cores or fewer and supports long-running jobs (up to multiple weeks). Among the five largest systems in XSEDE, Trestles debuts at #111 on the Top500 list of the world's fastest supercomputers.
Dash - An XSEDE resource with a peak speed of 5.2 teraflops and 68 compute nodes, each with two quad-core Nehalem processors and 48GB of memory. Dash is initially targeted at users wishing to evaluate and adapt their applications to emerging technologies, including flash memory and virtual SMP nodes.
Triton Resource - an integrated, data-intensive computing system designed to support UC San Diego and UC researchers, as well as researchers throughout the larger academic community, private industry and government-funded organizations. The Triton Resource has three key components:
- Triton Compute Cluster (TCC) : A scalable cluster designed as a centralized resource, and a highly affordable alternative to less energy-efficient 'closet computers.' Provides an aggregate of 6 terabytes of RAM across 256 nodes, and a peak performance of 24 teraflops.
- Petascale Data Analysis Facility (PDAF) : Consists of unique, large-memory, 32-core nodes (twenty with 256GB and eight with 512GB), with an aggregate of 9 terabytes of memory and a peak speed of 9 teraflops.
- Data Oasis : Large-scale disk storage. First phase completed in late 2010, with plans to provide up to 4 petabytes of extensible storage when fully deployed.
FutureGrid - SDSC is a resource partner in the FutureGrid project, hosting the Sierra resource, which has 7 teraflops of compute power and 96 terabytes of raw storage, and is connected via a 10-gigabit network link. Sierra provides both Eucalyptus and Nimbus clouds, and also supports HPC “bare metal” applications.
Coming Second Half of 2011
Gordon - A new 1,024-node data-intensive HPC resource with 64 terabytes of DRAM, 256 terabytes of flash memory, and 4 petabytes of disk storage. Like SDSC's smaller Dash system, Gordon incorporates flash memory into its architecture — a design rare among HPC systems — allowing it to solve data-intensive problems up to 10 times faster than conventional spinning-disk systems.
The anatomy of a byte:
- Byte: A unit of computer information equal to one typed character.
- Megabyte: A million bytes; equal in size to a short novel.
- Gigabyte: A billion bytes; equal to the information contained in a stack of books almost three stories high.
- Terabyte: A trillion bytes; about equal to the information printed on paper made from 50,000 trees.
- Petabyte: A quadrillion bytes. It would take 1,900 years to listen to a petabyte's worth of songs – if you had a large enough MP3 player.
- Exabyte: One quintillion bytes; every word ever spoken by humans could be stored on five exabytes.
- Zettabyte: One sextillion bytes; enough data to fill a stack of DVDs reaching halfway to Mars.
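The units above each scale by a factor of 1,000 (the decimal/SI convention used in this fact sheet; binary units such as the tebibyte differ slightly). A minimal sketch of that progression — the `bytes_in` helper is illustrative, not part of any SDSC software:

```python
# Each unit is the next power of 1,000 bytes (decimal/SI convention).
UNITS = ["byte", "kilobyte", "megabyte", "gigabyte", "terabyte",
         "petabyte", "exabyte", "zettabyte"]

def bytes_in(unit):
    """Return the number of bytes in the named decimal unit."""
    return 1000 ** UNITS.index(unit)

print(f"{bytes_in('petabyte'):,}")  # 1,000,000,000,000,000
```

So Gordon's planned 4 petabytes of disk is 4 × 10^15 bytes, and Data Oasis at full deployment would hold the same.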
Rating a supercomputer's performance:
- Megaflops: A million floating point operations per second. The original Cray-1 supercomputer was capable of 80 megaflops.
- Gigaflops: A billion floating point operations per second. Today's personal computers are capable of gigaflops performance.
- Teraflops: A trillion (10^12) floating point operations per second. Most of today's supercomputers are capable of teraflops performance.
- Petaflops: A quadrillion (10^15) floating point operations per second. The latest supercomputer barrier to be broken. The fastest systems can now achieve about 2.5 petaflops.
- Exaflops: A quintillion (10^18) floating point operations per second, and the new frontier for supercomputers, provided we can make exascale supercomputers 100 to 1,000 times as energy-efficient as today's fastest machines.
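Each step up this ladder is a factor of 1,000, which makes the gap between a desktop and a top supercomputer easy to underestimate. A back-of-the-envelope sketch — the 2.5-gigaflops desktop speed is an assumed round number for illustration, while the 2.5-petaflops figure comes from the list above:

```python
# How long would a 2.5-gigaflops desktop (assumed speed) take to do
# the work a 2.5-petaflops supercomputer performs in a single day?
PETA, GIGA = 1e15, 1e9

super_rate = 2.5 * PETA    # flops, per the petaflops entry above
desktop_rate = 2.5 * GIGA  # hypothetical desktop speed

ops_per_day = super_rate * 86_400            # operations in one day
desktop_seconds = ops_per_day / desktop_rate # time at desktop speed
print(f"{desktop_seconds / 86_400 / 365:.0f} years")  # prints: 2740 years
```

The ratio is a flat 10^6, so one day of petaflops computing would occupy that desktop for roughly a million days.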
Some common uses for supercomputers: