Expanse User Guide

Technical Summary

Expanse is a dedicated eXtreme Science and Engineering Discovery Environment (XSEDE) cluster designed by Dell and SDSC, delivering 5.16 peak petaflops, and will offer Composable Systems and Cloud Bursting.

Expanse's standard compute nodes are each powered by two 64-core AMD EPYC 7742 processors and contain 256 GB of DDR4 memory, while each GPU node contains four NVIDIA V100s (32 GB SXM2), connected via NVLINK, and dual 20-core Intel Xeon 6248 CPUs. Expanse also has four 2 TB large-memory nodes.

Expanse is organized into 13 SDSC Scalable Compute Units (SSCUs), comprising 728 standard nodes, 52 GPU nodes and 4 large-memory nodes. Every Expanse node has access to a 12 PB Lustre parallel file system (provided by Aeon Computing) and a 7 PB Ceph Object Store system. The Expanse cluster will be managed using the Bright Computing HPC Cluster management system, and will use the SLURM workload manager for job scheduling.

Expanse supports the XSEDE core software stack, which includes remote login, remote computation, data movement, science workflow support, and science gateway support toolkits.

Expanse is an NSF-funded system operated by the San Diego Supercomputer Center at UC San Diego, and is available through the XSEDE program.

Expanse is now accepting proposals.

Resource Allocation Policies

  • The maximum allocation for a Principal Investigator on Expanse is 15M core-hours and 100K GPU-hours. Limiting the allocation size means that Expanse can support more projects, since the average allocation is smaller.
  • Projects accessing Expanse via Science Gateways may request more than the 15M core-hour limit.

Job Scheduling Policies

  • The maximum allowable job size on Expanse is 4,096 cores – a limit that helps shorten wait times, since fewer nodes sit idle waiting for a large number of nodes to become free.
  • Expanse supports long-running jobs - run times can be extended to one week. User requests will be evaluated based on the number of jobs and the job size.
  • Expanse supports shared-node jobs (more than one job on a single node). Many applications are serial or can only scale to a few cores. Allowing shared nodes improves job throughput, provides higher overall system utilization, and allows more users to run on Expanse.

Technical Details

System Component Configuration
Compute Nodes
CPU Type AMD EPYC 7742
Nodes 728
Sockets 2
Cores/socket 64
Clock speed 2.25 GHz
Flop speed 4608 GFlop/s
Memory capacity 256 GB DDR4 DRAM
Local Storage 1 TB Intel P4510 NVMe PCIe SSD
Max CPU Memory bandwidth 409.5 GB/s
GPU Nodes
GPU Type NVIDIA V100 SXM2
Nodes 52
GPUs/node 4
CPU Type Xeon Gold 6248
Cores/socket 20
Sockets 2
Clock speed 2.5 GHz
Flop speed 34.4 TFlop/s
Memory capacity 384 GB DDR4 DRAM
Local Storage 1.6 TB Samsung PM1745b NVMe PCIe SSD
Max CPU Memory bandwidth 281.6 GB/s
Large-Memory
CPU Type AMD EPYC 7742
Nodes 4
Sockets 2
Cores/socket 64
Clock speed 2.25 GHz
Flop speed 4608 GFlop/s
Memory capacity 2 TB
Local Storage 3.2 TB (2 x 1.6 TB Samsung PM1745b NVMe PCIe SSD)
STREAM Triad bandwidth ~310 GB/s
Full System
Total compute nodes 728
Total compute cores 93,184
Total GPU nodes 52
Total V100 GPUs 208
Peak performance 5.16 PFlop/s
Total memory 247 TB
Total memory bandwidth 215 TB/s
Total flash memory 824 TB
HDR InfiniBand Interconnect
Topology Hybrid Fat-Tree
Link bandwidth 100 Gb/s (bidirectional)
Peak bisection bandwidth 8.5 TB/s
MPI latency 1.17-1.69 µs
DISK I/O Subsystem
File Systems NFS, Ceph
Lustre Storage (performance) 12 PB
Ceph Storage 7 PB
I/O bandwidth (performance disk) 140 GB/s, 200K IOPS

Systems Software Environment

Software Function Description
Cluster Management Bright Cluster Manager
Operating System CentOS Linux
File Systems Lustre, Ceph
Scheduler and Resource Manager SLURM
XSEDE Software CTSS
User Environment Lmod
Compilers AOCC, GCC, Intel, PGI
Message Passing Intel MPI, MVAPICH, Open MPI

System Access

As an XSEDE computing resource, Expanse is accessible to XSEDE users who are given time on the system. To obtain an account, users may submit a proposal through the XSEDE Allocation Request System or request a Trial Account.

Interested parties may contact SDSC User Support for help with an Expanse proposal (see sidebar for contact information).

Logging in to Expanse

Expanse supports Single Sign-On through the XSEDE User Portal, from the command line using an XSEDE-wide password, and coming soon, the Expanse User Portal. While CPU and GPU resources are allocated separately, the login nodes are the same. To log in to Expanse from the command line, use the hostname:

login.expanse.sdsc.edu

The following are examples of Secure Shell (ssh) commands that may be used to log in to Expanse:

ssh <your_username>@login.expanse.sdsc.edu
ssh -l <your_username> login.expanse.sdsc.edu

Notes and hints

  • When you log in to login.expanse.sdsc.edu, you will be assigned one of the two login nodes, login0[1-2]-expanse.sdsc.edu. These nodes are identical in both architecture and software environment. Users should normally log in through login.expanse.sdsc.edu, but may specify one of the two nodes directly if they see poor performance.
  • Feel free to append your public key to your ~/.ssh/authorized_keys file to enable access from authorized hosts without having to enter your password (see the example after this list). We accept RSA, ECDSA and ed25519 keys. Make sure you have a strong passphrase on the private key on your local machine.
    • You can use ssh-agent or keychain to avoid repeatedly typing the private key password.
    • Hosts that connect via SSH more frequently than ten times per minute may be blocked for a short period of time.
  • Do not use the login nodes for computationally intensive processes, as hosts for running workflow management tools, as primary data transfer nodes for large or numerous data transfers, or as servers providing other services accessible to the Internet. The login nodes are meant for file editing, simple data analysis, and other tasks that use minimal compute resources. All computationally demanding jobs should be submitted and run through the batch queuing system.
  • Login nodes are not the same as the batch nodes; users should request an interactive session to compile programs.
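
The following sketch illustrates the key-based login setup described above (the file name assumes an ed25519 key; adjust for your key type):

ssh-keygen -t ed25519                                # generate a key pair on your local machine; choose a strong passphrase
ssh-copy-id <your_username>@login.expanse.sdsc.edu   # append the public key to ~/.ssh/authorized_keys on Expanse
eval "$(ssh-agent)" && ssh-add ~/.ssh/id_ed25519     # cache the key so the passphrase is not requested repeatedly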

Expanse Portal

Coming Soon! The Expanse User Portal will provide a quick and easy way for Expanse users to log in, transfer and edit files, and submit and monitor jobs. It will serve as a gateway for launching interactive applications such as MATLAB and RStudio, and will offer an integrated web-based environment for file management and job submission. All users with a valid Expanse allocation will have access via their XSEDE credentials.

Modules

Environment Modules provide for dynamic modification of your shell environment. Module commands set, change, or delete environment variables, typically in support of a particular application. They also let the user choose between different versions of the same software or different combinations of related codes.

Expanse uses Lmod, a Lua-based module system. Users need to set up their own environment by loading the available modules into the shell environment, including compilers, libraries, and the batch scheduler.

Users will not see all of the available modules when they run the module avail command without first loading a compiler. Use the module spider command to check whether a particular package exists and can be loaded on the system. For additional details, and to identify dependent modules, use the command:

module spider <application_name>

The module paths are different for the CPU and GPU nodes. Users can enable the paths by loading the following modules:

module load cpu (for cpu nodes)

module load gpu (for gpu nodes)

Users are requested to ensure that both sets are not loaded at the same time in their build/run environment (use the module list command to check in an interactive session).
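
For example, a minimal sketch of setting up a CPU-node build environment (module names as used elsewhere in this guide; the loaded versions will be the current defaults):

module purge
module load cpu
module load gcc
module load openmpi
module list   # confirm that only the cpu set of modules is loaded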

On the GPU nodes, the GNU compiler used for building packages is the default version 8.3.1 from the OS, so no additional module load command is required to use it. For example, if one needs OpenMPI built with the GNU compilers, the following is sufficient:

module load openmpi

Useful Modules Commands

Here are some common module commands and their descriptions:

Command Description
module list List the modules that are currently loaded
module avail List the modules that are available in the current environment
module spider List all the modules and extensions that are available on the system
module display <module_name> Show the environment variables used by <module_name> and how they are affected
module unload <module_name> Remove <module_name> from the environment
module load <module_name> Load <module_name> into the environment
module swap <module_one> <module_two> Replace <module_one> with <module_two> in the environment

Loading and unloading modules

Some modules depend on others, so they may be loaded or unloaded as a consequence of another module command. If a module has dependencies, the command module spider <module_name> will provide additional details.

Module: command not found

The error message module: command not found is sometimes encountered when switching from one shell to another or attempting to run the module command from within a shell script or batch job. The reason the module command may not be inherited as expected is that it is defined as a function for your login shell. If you encounter this error, execute the following from the command line (interactive shells) or add to your shell script (including SLURM batch scripts):

source /etc/profile.d/modules.sh

Managing Your Accounts

The expanse-client script provides additional details regarding user and project availability and usage. It is located at:

/cm/shared/apps/sdsc/current/bin/expanse-client

To use:

[user@expanse-login02 ~]$ module load sdsc
[user@expanse-login02 ~]$ expanse-client user -p
NAME PROJECT USED AVAILABLE USED_BY_PROJECT
─────────────────────────────────────────────────────────────────
<user> <project> <SUs used by user> <SUs available for user> <SUs used by project>

Usage:
expanse-client [command]

Available Commands:
help Help about any command
project Get project information
user Get user information

Flags:
-h, --help help for expanse-client
-p, --plain plain no graphics output
-v, --verbose verbose output

Use expanse-client [command] --help for more information about a command.

Many users will have access to multiple accounts (e.g. an allocation for a research project and a separate allocation for classroom or educational use). Users should verify that the correct project is designated for all batch jobs. Awards are granted for specific purposes and should not be used for other projects. To charge your job to one of these projects, replace << project >> with one from the list and put this SLURM directive in your job script:

  #SBATCH -A << project >>

Adding Users to an Account

Project PIs and co-PIs can add or remove users from an account. To do this, log in to your XSEDE portal account and go to the Add User page.

Job Charging

The charge unit for all SDSC machines, including Expanse, is the Service Unit (SU). One SU corresponds to the use of one compute core utilizing up to 2 GB of memory for one hour, or one GPU utilizing up to 96 GB of memory for one hour. Keep in mind that your charges are based on the resources that are tied up by your job and do not necessarily reflect how the resources are used. Charges are based on either the number of cores or the fraction of the memory requested, whichever is larger. The minimum charge for any job is 1 SU.

Job Charge Considerations

  • A node-exclusive job that runs on a compute node for one hour will be charged 128 SUs (128 cores x 1 hour)
  • A node-exclusive job that runs on a GPU node for one hour will be charged 4 GPU-hours (4 GPUs x 1 hour)
  • A serial job in the shared queue that uses less than 2 GB memory and runs for one hour will be charged 1 SU (1 core x 1 hour)
  • Each standard compute node has ~256 GB of memory and 128 cores
    • Each standard node core will be allocated 1 GB of memory by default; users should explicitly include the --mem directive to request additional memory (see the sketch after this list)
    • Max. memory per compute node: --mem=248G
  • Each GPU node has 4 GPUs, ~384 GB of memory and 40 cores
    • The default resource allocation for 1 GPU is 1 GPU, 1 CPU, and 1 GB of memory; users will need to explicitly ask for additional resources in their job script.
    • For max memory on a GPU node, users should request --mem=374G
    • A GPU SU is equivalent to 1 GPU, up to 10 CPUs, and up to 96 GB of memory.
  • Multicore jobs will scale according to resource utilization
  • Each large memory node has ~2 TB of memory and 128 cores
    • By default the system will only allocate 1 GB of memory per core; explicitly use the --mem directive to request additional memory
    • Max. memory per large memory node: --mem=2007G
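
A minimal sketch of an explicit memory request in the shared partition (the job shape is illustrative):

#SBATCH --partition=shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --mem=16G

Since charges are based on the larger of the core count and the memory fraction, this request is billed as 8 cores' worth of SUs per hour (16 GB / 2 GB per core) rather than 4.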

Compiling Codes

Expanse CPU nodes have GNU, Intel, and AOCC (AMD) compilers available along with multiple MPI implementations (OpenMPI, MVAPICH2, and IntelMPI). The majority of the applications on Expanse have been built using gcc/10.2.0, which features AMD Rome-specific optimization flags (-march=znver2). Users should evaluate their application for the best compiler and library selection. The GNU, Intel, and AOCC compilers all have flags to support Advanced Vector Extensions 2 (AVX2). Using AVX2, up to 16 floating point operations can be executed per cycle per core (the basis of the 4608 GFlop/s per-node peak), potentially doubling the performance relative to non-AVX2 processors running at the same clock speed. Note that AVX2 support is not enabled by default and compiler flags must be set as described below.

Expanse GPU nodes have GNU, Intel, and PGI compilers available along with multiple MPI implementations (OpenMPI, IntelMPI, and MVAPICH2). The gcc/10.2.0, Intel, and PGI compilers have specific flags for the Cascade Lake architecture. Users should evaluate their application for best compiler and library selections.

Note that the login nodes are not the same as the GPU nodes, therefore all GPU codes must be compiled by requesting an interactive session on the GPU nodes.

Using AMD Compilers

The AMD Optimizing C/C++ Compiler (AOCC) is only available on the CPU nodes. The AMD compilers can be loaded by executing the following command at the Linux prompt:

module load aocc

For more information on the AMD compilers: [flang | clang ] -help

Serial MPI OpenMP MPI+OpenMP
Fortran flang mpif90 flang -fopenmp mpif90 -fopenmp
C clang mpicc clang -fopenmp mpicc -fopenmp
C++ clang++ mpicxx clang++ -fopenmp mpicxx -fopenmp

Using the Intel Compilers

The Intel compilers and the MVAPICH2 MPI compiler wrappers can be loaded by executing the following command at the Linux prompt:

module load intel mvapich2

For AVX2 support, compile with the -march=core-avx2 option. Note that this flag alone does not enable aggressive optimization, so compilation with -O3 is also suggested.
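
For example, to build an MPI Fortran code with both flags enabled (the source and output file names are illustrative):

mpif90 -O3 -march=core-avx2 -o hello_mpi hello_mpi.f90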

Intel MKL libraries are available as part of the "intel" modules on Expanse. Once this module is loaded, the environment variable INTEL_MKLHOME points to the location of the MKL libraries. The MKL link advisor can be used to ascertain the link line (substitute INTEL_MKLHOME appropriately).

For example, to compile a C program statically linking the 64-bit ScaLAPACK libraries on Expanse:

mpicc -o pdpttr.exe pdpttr.c \
    -I$INTEL_MKLHOME/include \
    ${INTEL_MKLHOME}/lib/intel64/libmkl_scalapack_lp64.a \
    -Wl,--start-group ${INTEL_MKLHOME}/lib/intel64/libmkl_intel_lp64.a \
    ${INTEL_MKLHOME}/lib/intel64/libmkl_core.a \
    ${INTEL_MKLHOME}/lib/intel64/libmkl_sequential.a \
    -Wl,--end-group ${INTEL_MKLHOME}/lib/intel64/libmkl_blacs_intelmpi_lp64.a \
    -lpthread -lm

For more information on the Intel compilers: [ifort | icc | icpc] -help

Serial MPI OpenMP MPI+OpenMP
Fortran ifort mpif90 ifort -qopenmp mpif90 -qopenmp
C icc mpicc icc -qopenmp mpicc -qopenmp
C++ icpc mpicxx icpc -qopenmp mpicxx -qopenmp

Using the PGI Compilers

The PGI compilers are only available on the GPU nodes, and can be loaded by executing the following command at the Linux prompt:

module load pgi

Note that the OpenMPI build is integrated into the PGI install, so the above module load provides both PGI and OpenMPI.

For AVX support, compile with -fast.

For more information on the PGI compilers: man [pgf90 | pgcc | pgCC]

Serial MPI OpenMP MPI+OpenMP
Fortran pgf90 mpif90 pgf90 -mp mpif90 -mp
C pgcc mpicc pgcc -mp mpicc -mp
C++ pgCC mpicxx pgCC -mp mpicxx -mp

Using the GNU Compilers

The GNU compilers can be loaded by executing the following command at the Linux prompt:

module load gcc openmpi

For AVX2 support, compile with -march=core-avx2. Note that AVX2 support requires gcc version 4.7 or later; the gcc/10.2.0 build noted under Compiling Codes satisfies this.
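
For example, a C code compiled for the AMD CPU nodes (the file names are illustrative; -march=znver2 targets the EPYC Rome architecture directly, as noted under Compiling Codes):

gcc -O3 -march=znver2 -o hello hello.c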

For more information on the GNU compilers: man [gfortran | gcc | g++]

Serial MPI OpenMP MPI+OpenMP
Fortran gfortran mpif90 gfortran -fopenmp mpif90 -fopenmp
C gcc mpicc gcc -fopenmp mpicc -fopenmp
C++ g++ mpicxx g++ -fopenmp mpicxx -fopenmp

Notes and Hints

  • The mpif90, mpicc, and mpicxx commands are actually wrappers that call the appropriate serial compilers and load the correct MPI libraries. While the same names are used for the Intel, PGI and GNU compilers, keep in mind that these are completely independent scripts.
  • If you use the PGI or GNU compilers or switch between compilers for different applications, make sure that you load the appropriate modules before running your executables.
  • When building OpenMP applications and moving between different compilers, one of the most common errors is to use the wrong flag to enable handling of OpenMP directives. Note that Intel, PGI, and GNU compilers use the -qopenmp, -mp, and -fopenmp flags, respectively.
  • Explicitly set the optimization level in your makefiles or compilation scripts. Most well written codes can safely use the highest optimization level (-O3), but many compilers set lower default levels (e.g. GNU compilers use the default -O0, which turns off all optimizations).
  • Turn off debugging, profiling, and bounds checking when building executables intended for production runs as these can seriously impact performance. These options are all disabled by default. The flag used for bounds checking is compiler dependent, but the debugging (-g) and profiling (-pg) flags tend to be the same for all major compilers.

Running Jobs on Expanse

Expanse uses the Simple Linux Utility for Resource Management (SLURM) batch environment. When you run in the batch mode, you submit jobs to be run on the compute nodes using the sbatch command as described below. Remember that computationally intensive jobs should be run only on the compute nodes and not the login nodes.

Expanse places limits on the number of jobs queued and running on a per-group (allocation) and per-partition basis. Please note that submitting a large number of jobs (especially very short ones) can impact the overall scheduler response for all users. If you anticipate submitting a lot of jobs, please contact the SDSC consulting staff before you submit them. We can check whether there are bundling options that make your workflow more efficient and reduce the impact on the scheduler.

The limits are provided for each partition in the table below.

***Note: Partition limits are subject to change based on Early User Period evaluation.***

Partition Name Max Walltime Max Nodes/Job Max Running Jobs Max Running + Queued Jobs Charge Factor Comments
compute 48 hrs 32 64 128 1 * Used for exclusive access to regular compute nodes
shared 48 hrs 1 4096 4096 1 Single-node jobs using fewer than 128 cores
gpu 48 hrs 4 16 24 1 Used for exclusive access to the GPU nodes
gpu-shared 48 hrs 1 16 24 1 Single-node jobs using fewer than 4 GPUs
large-shared 48 hrs 1 1 4 1 Single-node jobs using large memory, up to 2 TB (minimum memory required 256G)
debug 15 min 2 1 2 1 Priority access to compute nodes set aside for testing jobs with short walltime and limited resources
gpu-debug 15 min 2 1 2 1 ** Priority access to GPU nodes set aside for testing jobs with short walltime and limited resources
preempt 7 days 32 128 .8 Discounted jobs run on free nodes and can be preempted by jobs submitted to any other queue (NO REFUNDS)
preempt-gpu 7 days 1 .8 Discounted jobs run on unallocated nodes and can be preempted by jobs submitted to higher-priority queues (NO REFUNDS)

* limit applies per group

**gpu-debug users can only use up to two gpus per job.

Requesting interactive resources using srun

You can request an interactive session using the srun command. The following example requests one regular compute node (128 cores) in the debug partition for 30 minutes:

srun --partition=debug  --pty --nodes=1 --ntasks-per-node=128 \
    --mem=248G -t 00:30:00 --wait=0 --export=ALL /bin/bash

The following example requests a GPU node with 40 cores, 4 GPUs, and 374 GB (all of the available memory) in the gpu-debug partition for 30 minutes:

srun --partition=gpu-debug  --pty --nodes=1 --ntasks-per-node=40 \
    --mem=374G --gpus=4  -t 00:30:00 --wait=0 --export=ALL /bin/bash

Submitting Jobs Using sbatch

Jobs can be submitted to the batch partitions using the sbatch command as follows:

 sbatch jobscriptfile

where jobscriptfile is the name of a UNIX format file containing special statements (corresponding to sbatch options), resource specifications and shell commands. Several example SLURM scripts are given below:

Basic MPI Job

#!/bin/bash
#SBATCH --job-name="hellompi"
#SBATCH --output="hellompi.%j.%N.out"
#SBATCH --partition=compute
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --mem=248G
#SBATCH --account=ABC123
#SBATCH --export=ALL
#SBATCH -t 01:30:00
#This job runs with 2 nodes, 128 cores per node for a total of 256 tasks.

module purge
module load cpu
#Load module file(s) into the shell environment
module load gcc
module load mvapich2
module load slurm

srun --mpi=pmi2 -n 256 ../hello_mpi

* Expanse requires users to enter a valid project name (ABC123 above is a placeholder); users can list their valid projects by running the expanse-client script.

Basic OpenMP Job

#!/bin/bash
#SBATCH --job-name="hello_openmp"
#SBATCH --output="hello_openmp.%j.%N.out"
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --mem=248G
#SBATCH --account=ABC123
#SBATCH --export=ALL
#SBATCH -t 01:30:00

module purge
module load cpu
module load slurm
module load gcc

#Set the number of OpenMP threads
export OMP_NUM_THREADS=24

#Run the job
./hello_openmp

* Expanse requires users to enter a valid project name (ABC123 above is a placeholder); users can list their valid projects by running the expanse-client script.

Hybrid MPI-OpenMP Job

#!/bin/bash
#SBATCH --job-name="hellohybrid"
#SBATCH --output="hellohybrid.%j.%N.out"
#SBATCH --partition=compute
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
#SBATCH --mem=248G
#SBATCH --account=ABC123
#SBATCH --export=ALL
#SBATCH -t 01:30:00
#This job runs with 2 nodes, 24 cores per node for a total of 48 cores.
#We use 8 MPI tasks and 6 OpenMP threads per MPI task

module purge
module load cpu
module load slurm

export OMP_NUM_THREADS=6
srun --mpi=pmi2 --cpus-per-task=$OMP_NUM_THREADS -n 8 ./hello_hybrid

* Expanse requires users to enter a valid project name (ABC123 above is a placeholder); users can list their valid projects by running the expanse-client script.

Using the Shared Partition

#!/bin/bash
#SBATCH -p shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --mem=40G
#SBATCH -t 01:00:00
#SBATCH -J job.8
#SBATCH -A ABC123
#SBATCH -o job.8.%j.%N.out
#SBATCH -e job.8.%j.%N.err
#SBATCH --export=ALL

module purge
module load cpu
module load gcc
module load mvapich2
module load slurm

srun -n 8 ../hello_mpi

* Expanse requires users to enter a valid project name (ABC123 above is a placeholder); users can list their valid projects by running the expanse-client script.

The above script will run using 8 cores and 40 GB of memory. Please note that performance in the shared partition may vary depending on how sensitive your application is to memory locality and which cores you are assigned by the scheduler. It is possible, for example, that the 8 cores will span two sockets.

Using Large Memory Nodes

The large memory nodes can be accessed via the "large-shared" partition. Charges are based on either the number of cores or the fraction of the memory requested, whichever is larger. By default the system will only allocate 1 GB of memory per core. If additional memory is required, users should explicitly use the --mem directive.   

For example, on the "large-shared" partition, the following job requesting 16 cores and 445 GB of memory (about 22% of the 2007 GB of one node's available memory) for 1 hour will be charged approximately 28 SUs:

445/2007 (memory fraction) x 128 (cores) x 1 (duration) ~= 28

The following directives request an entire large memory node:

#SBATCH --partition=large-shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --mem=2000G

export OMP_PROC_BIND='true'

While there is not a separate 'large' partition, a job can still explicitly request all of the resources on a large memory node. Please note that there is no premium for using Expanse's large memory nodes. Users are advised to request the large nodes only if they need the extra memory.

Using GPU Nodes

GPU nodes are allocated as a separate resource. The GPU nodes can be accessed via either the "gpu" or the "gpu-shared" partitions.

#SBATCH -p gpu

or

#SBATCH -p gpu-shared

When users request 1 GPU in the gpu-shared partition, by default they will also receive 1 CPU and 1 GB of memory; additional resources must be requested explicitly.
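
For example, a gpu-shared request for 2 GPUs with a proportional share of CPUs and memory might use the following directives (values illustrative, scaled from the per-GPU SU definition above):

#SBATCH --partition=gpu-shared
#SBATCH --nodes=1
#SBATCH --gpus=2
#SBATCH --ntasks-per-node=20
#SBATCH --mem=192G

Here is an example AMBER script requesting a full GPU node: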

GPU job

#!/bin/bash
#SBATCH --job-name="amber-gpu"
#SBATCH --output="amber-gpu.%j.%N.out"
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --gpus=4
#SBATCH --mem=374G
#SBATCH --account=ABC123
#SBATCH --no-requeue
#SBATCH -t 01:00:00

module purge
module load gpu
module load slurm
module load openmpi
module load amber

pmemd.cuda -O -i mdin.GPU -o mdout.GPU.$SLURM_JOBID -x mdcrd.$SLURM_JOBID -inf mdinfo.$SLURM_JOBID -l mdlog.$SLURM_JOBID -p prmtop -c inpcrd

* Expanse requires users to enter a valid project name (ABC123 above is a placeholder); users can list their valid projects by running the expanse-client script.

Users can find application-specific example job scripts on the system in the directory /cm/shared/examples/gpu.

GPU modes can be controlled for jobs in the "gpu" partition. By default, the GPUs are in non-exclusive mode and persistence mode is on. If a particular "gpu" partition job needs exclusive access, the following option should be set in your batch script:

#SBATCH --constraint=exclusive

To turn persistence off add the following line to your batch script:

#SBATCH --constraint=persistenceoff

The charging equation will be:

GPU SUs = (Number of GPUs) x (wallclock time)
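
For example, a job that uses 4 GPUs for 2 hours of wallclock time is charged 4 x 2 = 8 GPU SUs.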

SLURM No-Requeue Option

SLURM will requeue jobs if there is a node failure. However, in some cases this might be detrimental if files get overwritten. If users wish to avoid automatic requeue, the following line should be added to their script:

#SBATCH --no-requeue

The requeue count limit is currently set to 5; after a job has been requeued 5 times, it will be placed in the REQUEUE_HOLD state and must be canceled and resubmitted.

Example Scripts for Applications

SDSC User Services staff have developed sample run scripts for common applications. They are available in the /cm/shared/examples directory on Expanse.

Job Dependencies

There are several scenarios (e.g. splitting long running jobs, workflows) where users may require jobs with dependencies on successful completions of other jobs. In such cases, SLURM's --dependency option can be used. The syntax is as follows:

[user@login01-expanse ~]$ sbatch --dependency=afterok:jobid jobscriptfile
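
A common pattern is to capture the first job's ID and submit the dependent job in one step; sbatch --parsable prints only the job ID (the script names below are illustrative):

jobid=$(sbatch --parsable first_step.sh)
sbatch --dependency=afterok:$jobid second_step.sh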

Job Monitoring and Management

Users can monitor jobs using the squeue command.

[user@expanse ~]$ squeue -u user1

             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
            256556   compute raxml_na user1     R    2:03:57      4 expanse-14-[11-14]
            256555   compute raxml_na user1     R    2:14:44      4 expanse-02-[06-09]

In this example, the output lists two jobs that are running in the "compute" partition. The jobID, partition name, job names, user names, status, time, number of nodes, and the node list are provided for each job. Some common squeue options include:

Option Result
-i <interval> Repeatedly report at intervals (in seconds)
-j <job_list> Displays information for specified job(s)
-p <part_list> Displays information for specified partitions (queues)
-t <state_list> Shows jobs in the specified state(s)

Users can cancel their own jobs using the scancel command as follows:

[user@expanse ~]$ scancel <jobid>

Info on Globus Endpoints, Data Movers and Mount Points

All of Expanse's NFS and Lustre filesystems are accessible via the Globus endpoint xsede#expanse. The servers also mount Comet's filesystems, so the mount points are different for each system. The following table shows the mount points on the data mover nodes (which are the backend for xsede#comet and xsede#expanse).

Machine Location on machine Location on Globus/Data Movers
Expanse /home/$USER /home/$USER
Expanse /expanse/lustre/projects /expanse/lustre/projects/
Expanse /expanse/lustre/scratch /expanse/lustre/scratch/...
Comet /oasis/projects/nsf /oasis/projects/nsf
Comet /oasis/scratch/comet /oasis/scratch

Storage Overview

SDSC enforces a strict purge policy on Expanse for the /scratch and /projects file systems: /projects directories will be purged 90 days after the allocation expires, and /scratch files will be purged 90 days after they were last used.

Local Scratch Disk

The compute nodes on Expanse have access to fast flash storage: each node has a local NVMe SSD, with the space available to jobs varying by partition (see the table below). The latency to the SSDs is several orders of magnitude lower than that for spinning disk (<100 microseconds vs. milliseconds), making them ideal for user-level checkpointing and applications that need fast random I/O to large scratch files. Users can access the SSDs only during job execution, under the following directory local to each compute node:

/scratch/$USER/job_$SLURM_JOB_ID

Partition Space Available
compute,shared 212 GB
gpu, gpu-shared 286 GB
large-shared 286 GB
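
A minimal sketch of staging data through the node-local SSD inside a batch script (the application and file names are illustrative; the Lustre scratch path is described below):

# stage input to the node-local SSD, run there, then copy results back to Lustre
export LOCAL_SCRATCH=/scratch/$USER/job_$SLURM_JOB_ID
cp $HOME/input.dat $LOCAL_SCRATCH/
cd $LOCAL_SCRATCH
./my_app input.dat > output.dat
cp output.dat /expanse/lustre/scratch/$USER/temp_project/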

A limited number of nodes in the "compute" partition have larger SSDs with a total of 1464 GB available in local scratch. They can be accessed by adding the following to the SLURM script:

#SBATCH --constraint="large_scratch"

Parallel Lustre Filesystems

In addition to the local scratch storage, users have access to global parallel filesystems on Expanse. Every Expanse node has access to the 12 PB Lustre parallel file system (provided by Aeon Computing, with 140 GB/s of performance storage bandwidth) and the 7 PB Ceph Object Store system. SDSC limits the number of files that can be stored in the /lustre/scratch filesystem to 2 million files per user. Users whose workflows require extensive small-file I/O should contact support at the XSEDE Help Desk for assistance, to avoid causing system issues associated with load on the metadata server.

 The two Lustre filesystems available on Expanse are:

  • Lustre Expanse scratch filesystem: /expanse/lustre/scratch/$USER/temp_project
  • Lustre NSF projects filesystem: /expanse/lustre/projects/

Home File System

After logging in, users are placed in their home directory, also referenced by the environment variable $HOME. The home directory is limited in space and should be used only for source code storage; users have access to 100 GB in /home. Jobs should never be run from the home file system, as it is not set up for high-performance throughput. Keep usage on $HOME under 100 GB. Backups are currently stored on a rolling 8-week period. In case of file corruption or data loss, please contact the XSEDE Help Desk to retrieve the requested files.

Composable Systems

Expanse also supports Composable Systems, allowing researchers to create a virtual 'tool set' of resources, such as Kubernetes resources, for a specific project and then re-compose it as needed. Expanse will also feature direct scheduler integration with the major cloud providers, leveraging high-speed networks to ease data movement to and from the cloud.

All Composable System requests must include a brief justification, specifically describing why a Composable System is required for the project.

Software

Expanse supports a broad application base with installs and modules for commonly used packages in bioinformatics, molecular dynamics, machine learning, quantum chemistry, structural mechanics, and visualization, and will continue to support Singularity-based containerization on Expanse. Users can search for available software on XSEDE resources with the XSEDE software search tool.

Publications