Expanse is a dedicated eXtreme Science and Engineering Discovery Environment (XSEDE) cluster designed by Dell and SDSC that delivers 5.16 peak petaflops and will offer Composable Systems and Cloud Bursting.
Expanse's standard compute nodes are each powered by two 64-core AMD EPYC 7742 processors and contain 256 GB of DDR4 memory, while each GPU node contains four NVIDIA V100s (32 GB SMX2), connected via NVLINK, and dual 20-core Intel Xeon 6248 CPUs. Expanse also has four 2 TB large-memory nodes.
Expanse is organized into 13 SDSC Scalable Compute Units (SSCUs), comprising 728 standard nodes, 52 GPU nodes and 4 large-memory nodes. Every Expanse node has access to a 12 PB Lustre parallel file system (provided by Aeon Computing) and a 7 PB Ceph Object Store system. The Expanse cluster is managed using the Bright Computing HPC cluster management system and uses the SLURM workload manager for job scheduling.
Expanse supports the XSEDE core software stack, which includes remote login, remote computation, data movement, science workflow support, and science gateway support toolkits.
Expanse is an NSF-funded system operated by the San Diego Supercomputer Center at UC San Diego, and is available through the XSEDE program.
Expanse is now accepting proposals.
System Component | Configuration |
---|---|
Compute Nodes | |
CPU Type | AMD EPYC 7742 |
Nodes | 728 |
Sockets | 2 |
Cores/socket | 64 |
Clock speed | 2.25 GHz |
Flop speed | 4608 GFlop/s |
Memory capacity | 256 GB DDR4 DRAM |
Local Storage | 1 TB Intel P4510 NVMe PCIe SSD |
Max CPU Memory bandwidth | 409.5 GB/s |
GPU Nodes | |
GPU Type | NVIDIA V100 SMX2 |
Nodes | 52 |
GPUs/node | 4 |
CPU Type | Intel Xeon Gold 6248 |
Cores/socket | 20 |
Sockets | 2 |
Clock speed | 2.5 GHz |
Flop speed | 34.4 TFlop/s |
Memory capacity | 384 GB DDR4 DRAM |
Local Storage | 1.6 TB Samsung PM1745b NVMe PCIe SSD |
Max CPU Memory bandwidth | 281.6 GB/s |
Large-Memory Nodes | |
CPU Type | AMD EPYC 7742 |
Nodes | 4 |
Sockets | 2 |
Cores/socket | 64 |
Clock speed | 2.25 GHz |
Flop speed | 4608 GFlop/s |
Memory capacity | 2 TB |
Local Storage | 3.2 TB (2 x 1.6 TB Samsung PM1745b NVMe PCIe SSD) |
STREAM Triad bandwidth | ~310 GB/s |
Full System | |
Total compute nodes | 728 |
Total compute cores | 93,184 |
Total GPU nodes | 52 |
Total V100 GPUs | 208 |
Peak performance | 5.16 PFlop/s |
Total memory | 247 TB |
Total memory bandwidth | 215 TB/s |
Total flash memory | 824 TB |
HDR InfiniBand Interconnect | |
Topology | Hybrid Fat-Tree |
Link bandwidth | 56 Gb/s (bidirectional) |
Peak bisection bandwidth | 8.5 TB/s |
MPI latency | 1.17-1.69 µs |
Disk I/O Subsystem | |
File Systems | NFS, Ceph |
Lustre Storage (performance) | 12 PB |
Ceph Storage | 7 PB |
I/O bandwidth (performance disk) | 140 GB/s, 200K IOPS |
Software Function | Description |
---|---|
Cluster Management | Bright Cluster Manager |
Operating System | CentOS Linux |
File Systems | Lustre, Ceph |
Scheduler and Resource Manager | SLURM |
XSEDE Software | CTSS |
User Environment | Lmod |
Compilers | AOCC, GCC, Intel, PGI |
Message Passing | Intel MPI, MVAPICH, Open MPI |
As an XSEDE computing resource, Expanse is accessible to XSEDE users who are given time on the system. To obtain an account, users may submit a proposal through the XSEDE Allocation Request System or request a Trial Account.
Interested parties may contact SDSC User Support for help with an Expanse proposal (see sidebar for contact information).
Expanse supports Single Sign-On through the XSEDE User Portal, from the command line using an XSEDE-wide password, and, coming soon, through the Expanse User Portal. While CPU and GPU resources are allocated separately, the login nodes are the same. To log in to Expanse from the command line, use the hostname:
login.expanse.sdsc.edu
The following are examples of Secure Shell (ssh) commands that may be used to log in to Expanse:
ssh <your_username>@login.expanse.sdsc.edu
ssh -l <your_username> login.expanse.sdsc.edu
Expanse allows users to use two-factor authentication (2FA) when using a password to log in. 2FA adds a layer of security to your authentication process. Expanse uses Google Authenticator, which is a standards-based implementation.
Install an Authenticator App
Users will first need to install an authenticator app on their smartphone or other device. Any app that supports importing TOTP 2FA codes with a QR code will work (Google Authenticator, Duo Mobile, LastPass Authenticator, etc.). We suggest the Google Authenticator app if you do not already have an authenticator application installed on your mobile device.
Google Authenticator for Apple iOS
Google Authenticator for Android
Once the authenticator app has been installed, users will need to enroll and pair the 2FA device with their Expanse account.
1) Log in to login.expanse.sdsc.edu
2) On the command line load the sdsc module:
>module load sdsc
3) Resize your terminal window and/or font size so it can display at least 82 columns by 40 lines
4) On the command line, run the command:
>otp-enroll
5) Using your smart phone, scan the QR code with your OTP/2FA application
6) Confirm the scan by entering the 6-digit code from the OTP/2FA application
7) Save your emergency scratch codes, in case you need to log in and don't have access to your mobile device. (You can always log in from the XSEDE SSO hub or with SSH keys instead of using an emergency code.)
8) Answer 'y' to the prompt asking if you want to update your .google_authenticator file.
At this time 2FA is optional; users may un-enroll at any time.
To un-enroll:
1) Log in to login.expanse.sdsc.edu.
2) Remove the file ~/.google_authenticator
3) Once you have removed the .google_authenticator file from the server side, you can remove the entry on your smartphone or other device.
Coming Soon! The Expanse User Portal will provide a quick and easy way for Expanse users to log in, transfer and edit files, and submit and monitor jobs. The portal will provide a gateway for launching interactive applications such as MATLAB and RStudio, and an integrated web-based environment for file management and job submission. All users with a valid Expanse allocation will have access via their XSEDE credentials.
Environment Modules provide for dynamic modification of your shell environment. Module commands set, change, or delete environment variables, typically in support of a particular application. They also let the user choose between different versions of the same software or different combinations of related codes.
Expanse uses Lmod, a Lua-based module system. Users need to set up their own environment by loading available modules into the shell environment, including compilers, libraries, and the batch scheduler.
Users will not see all the available modules when they run the module available command without loading a compiler. Users should use the module spider command to see if a particular package exists and can be loaded on the system. For additional details, and to identify dependent modules, use the command:
module spider <application_name>
The module paths are different for the CPU and GPU nodes. Users can enable the paths by loading the following modules:
module load cpu (for cpu nodes)
module load gpu (for gpu nodes)
Users should ensure that both sets are not loaded at the same time in their build/run environment (use the module list command to check in an interactive session).
On the GPU nodes, the GNU compiler used for building packages is the default version 8.3.1 from the OS, so no additional module load command is required to use it. For example, if OpenMPI built with the GNU compilers is needed, the following is sufficient:
module load openmpi
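On the CPU nodes, by contrast, compilers and libraries must be loaded explicitly. A minimal sketch of a typical CPU build environment (module versions default to the system's current choices):

module purge
module load cpu             # enable the CPU software tree
module load gcc openmpi     # a compiler and an MPI stack
module load slurm
module list                 # verify that only the cpu-side modules are loaded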
Here are some common module commands and their descriptions:
Command | Description |
---|---|
module list | List the modules that are currently loaded |
module avail | List the modules that are available in the environment |
module spider | List all modules and extensions currently available |
module display <module_name> | Show the environment variables used by <module_name> and how they are affected |
module unload <module_name> | Remove <module_name> from the environment |
module load <module_name> | Load <module_name> into the environment |
module swap <module_one> <module_two> | Replace <module_one> with <module_two> in the environment |
Some modules depend on others, so they may be loaded or unloaded as a consequence of another module command. If a module has dependencies, the command module spider <module_name> will provide additional details.
Module: command not found
The error message module: command not found is sometimes encountered when switching from one shell to another or attempting to run the module command from within a shell script or batch job. The module command may not be inherited as expected because it is defined as a function for your login shell. If you encounter this error, execute the following from the command line (for interactive shells) or add it to your shell script (including SLURM batch scripts):
source /etc/profile.d/modules.sh
The expanse-client script provides details on user and project allocation availability and usage. It is located at:
/cm/shared/apps/sdsc/current/bin/expanse-client
To use:
[user@expanse-login02 ~]$ module load sdsc
[user@expanse-login02 ~]$ expanse-client user -p
NAME PROJECT USED AVAILABLE USED_BY_PROJECT
─────────────────────────────────────────────────────────────────
<user> <project> <SUs used by user> <SUs available for user> <SUs used by project>
Usage:
expanse-client [command]
Available Commands:
help Help about any command
project Get project information
user Get user information
Flags:
-h, --help help for expanse-client
-p, --plain plain no graphics output
-v, --verbose verbose output
Use expanse-client [command] --help for more information about a command.
Many users will have access to multiple accounts (e.g. an allocation for a research project and a separate allocation for classroom or educational use). Users should verify that the correct project is designated for all batch jobs. Awards are granted for specific purposes and should not be used for other projects. Designate a project by replacing << project >> with one of your valid project IDs in the SBATCH directive in your job script:
#SBATCH -A << project >>
Project PIs and co-PIs can add or remove users from an account. To do this, log in to your XSEDE portal account and go to the Add User page.
The charge unit for all SDSC machines, including Expanse, is the Service Unit (SU). This corresponds to the use of one compute core utilizing less than or equal to 2 GB of memory for one hour, or one GPU using less than 96 GB of memory for one hour. Keep in mind that your charges are based on the resources that are tied up by your job and do not necessarily reflect how the resources are used. Charges are based on either the number of cores or the fraction of the memory requested, whichever is larger. The minimum charge for any job is 1 SU.
The memory available to jobs is slightly less than the total node memory because a portion is reserved for the operating system. Use the --mem directive to request additional memory beyond the default, up to --mem=248G on the standard compute nodes, --mem=374G on the GPU nodes, and --mem=2007G on the large-memory nodes.
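As an illustration (a hypothetical job on a standard 128-core, 256 GB node, i.e. roughly 2 GB per core): a shared-partition job requesting 16 cores and 64 GB of memory for 2 hours is charged on its memory footprint, since 64 GB corresponds to 32 cores' worth of memory:

max(16 cores, 64 GB / 2 GB per core) x 2 hours = 32 x 2 = 64 SUs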
Expanse CPU nodes have GNU, Intel, and AOCC (AMD) compilers available along with multiple MPI implementations (OpenMPI, MVAPICH2, and IntelMPI). The majority of the applications on Expanse have been built using gcc/10.2.0, which supports AMD Rome-specific optimization flags (-march=znver2). Users should evaluate their application for the best compiler and library selection. The GNU, Intel, and AOCC compilers all have flags to support Advanced Vector Extensions 2 (AVX2). Using AVX2, up to eight floating-point operations can be executed per cycle per core, potentially doubling performance relative to non-AVX2 processors running at the same clock speed. Note that AVX2 support is not enabled by default and compiler flags must be set as described below.
Expanse GPU nodes have GNU, Intel, and PGI compilers available along with multiple MPI implementations (OpenMPI, IntelMPI, and MVAPICH2). The gcc/10.2.0, Intel, and PGI compilers have specific flags for the Cascade Lake architecture. Users should evaluate their application for best compiler and library selections.
Note that the login nodes are not the same as the GPU nodes, therefore all GPU codes must be compiled by requesting an interactive session on the GPU nodes.
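For example, a GPU build environment can be set up from an interactive session; in the sketch below the account name abc123 is a placeholder and the module versions default to the system's current choices:

srun --partition=gpu-debug --pty --account=abc123 --nodes=1 --ntasks-per-node=10 \
    --gpus=1 -t 00:30:00 --wait=0 --export=ALL /bin/bash
# Then, inside the interactive shell on the GPU node:
module purge
module load gpu
module load cuda    # assumed module name for the CUDA toolkit; run module avail to confirm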
The AMD Optimizing C/C++ Compiler (AOCC) is only available on CPU nodes. AMD compilers can be loaded by executing the following commands at the Linux prompt:
module load aocc
For more information on the AMD compilers: [flang | clang ] -help
Language | Serial | MPI | OpenMP | MPI+OpenMP |
---|---|---|---|---|
Fortran | flang | mpif90 | flang -fopenmp | mpif90 -fopenmp |
C | clang | mpiclang | clang -fopenmp | mpicc -fopenmp |
C++ | clang++ | mpiclang | clang++ -fopenmp | mpicxx -fopenmp |
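As an illustrative sketch (the source file name is a placeholder), an OpenMP code can be built with AOCC using the AMD Rome target mentioned earlier:

module purge
module load cpu
module load aocc
clang -O3 -march=znver2 -fopenmp -o hello_omp hello_omp.c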
The Intel compilers and the MVAPICH2 MPI compiler wrappers can be loaded by executing the following commands at the Linux prompt:
module load intel mvapich2
For AVX2 support, compile with the -march=core-avx2 option. Note that this flag alone does not enable aggressive optimization, so compilation with -O3 is also suggested.
Intel MKL libraries are available as part of the "intel" modules on Expanse. Once this module is loaded, the environment variable INTEL_MKLHOME points to the location of the mkl libraries. The MKL link advisor can be used to ascertain the link line (change the INTEL_MKLHOME aspect appropriately).
For example to compile a C program statically linking 64 bit scalapack libraries on Expanse:
mpicc -o pdpttr.exe pdpttr.c \
    -I$INTEL_MKLHOME/include \
    ${INTEL_MKLHOME}/lib/intel64/libmkl_scalapack_lp64.a \
    -Wl,--start-group ${INTEL_MKLHOME}/lib/intel64/libmkl_intel_lp64.a \
    ${INTEL_MKLHOME}/lib/intel64/libmkl_core.a \
    ${INTEL_MKLHOME}/lib/intel64/libmkl_sequential.a \
    -Wl,--end-group ${INTEL_MKLHOME}/lib/intel64/libmkl_blacs_intelmpi_lp64.a \
    -lpthread -lm
For more information on the Intel compilers: [ifort | icc | icpc] -help
Language | Serial | MPI | OpenMP | MPI+OpenMP |
---|---|---|---|---|
Fortran | ifort | mpif90 | ifort -qopenmp | mpif90 -qopenmp |
C | icc | mpicc | icc -qopenmp | mpicc -qopenmp |
C++ | icpc | mpicxx | icpc -qopenmp | mpicxx -qopenmp |
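A brief sketch of an Intel/MVAPICH2 build using the AVX2 flags described above (the source file name is a placeholder):

module purge
module load cpu
module load intel mvapich2
mpif90 -O3 -march=core-avx2 -o hello_mpi hello_mpi.f90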
The PGI compilers are only available on the GPU nodes, and can be loaded by executing the following commands at the Linux prompt
module load pgi
Note that the openmpi build is integrated into the PGI install so the above module load provides both PGI and openmpi.
For AVX support, compile with -fast.
For more information on the PGI compilers: man [pgf90 | pgcc | pgCC]
Language | Serial | MPI | OpenMP | MPI+OpenMP |
---|---|---|---|---|
Fortran | pgf90 | mpif90 | pgf90 -mp | mpif90 -mp |
C | pgcc | mpicc | pgcc -mp | mpicc -mp |
C++ | pgCC | mpicxx | pgCC -mp | mpicxx -mp |
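A minimal sketch of a PGI build on a GPU node (the source file name is a placeholder; the -acc flag is only needed for OpenACC codes):

module purge
module load gpu
module load pgi
pgcc -fast -o saxpy saxpy.c
# For an OpenACC version: pgcc -fast -acc -o saxpy_acc saxpy.c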
The GNU compilers can be loaded by executing the following commands at the Linux prompt:
module load gcc openmpi
For AVX2 support on the AMD Rome processors, compile with -march=znver2 (available in gcc/10.2.0 and later; older gcc versions can use -mavx2 instead), and add -O3 for optimization.
For more information on the GNU compilers: man [gfortran | gcc | g++]
Language | Serial | MPI | OpenMP | MPI+OpenMP |
---|---|---|---|---|
Fortran | gfortran | mpif90 | gfortran -fopenmp | mpif90 -fopenmp |
C | gcc | mpicc | gcc -fopenmp | mpicc -fopenmp |
C++ | g++ | mpicxx | g++ -fopenmp | mpicxx -fopenmp |
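For instance (file names are placeholders), a GNU build with the Rome/AVX2 flag from above, assuming the gcc module provides gcc 10.2.0 or later:

module purge
module load cpu
module load gcc openmpi
gcc -O3 -march=znver2 -o hello hello.c            # serial build
mpicc -O3 -march=znver2 -o hello_mpi hello_mpi.c  # MPI build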
Expanse uses the Simple Linux Utility for Resource Management (SLURM) batch environment. When you run in batch mode, you submit jobs to be run on the compute nodes using the sbatch command as described below. Remember that computationally intensive jobs should be run only on the compute nodes and not on the login nodes.
Expanse places limits on the number of jobs queued and running on a per-group (allocation) and per-partition basis. Please note that submitting a large number of jobs (especially very short ones) can impact the overall scheduler response for all users. If you anticipate submitting a large number of jobs, please contact the SDSC consulting staff before you submit them; we can help determine whether there are bundling options that make your workflow more efficient and reduce the impact on the scheduler.
The limits are provided for each partition in the table below.
***Note: Partition limits are subject to change based on Early User Period evaluation.***
Partition Name | Max Walltime | Max Nodes/Job | Max RunningJobs | Max Running + Queued Jobs | Charge Factor | Comments |
---|---|---|---|---|---|---|
compute | 48 hrs | 32 | 64 | 128 | 1 | * Used for exclusive access to regular compute nodes |
shared | 48 hrs | 1 | 4096 | 4096 | 1 | Single-node jobs using fewer than 128 cores |
gpu | 48 hrs | 4 | 4 | 8 (32 Tres GPU) | 1 | Used for exclusive access to the GPU nodes |
gpu-shared | 48 hrs | 1 | 16 | 24 (24 Tres GPU) | 1 | Single-node job using fewer than 4 GPUs |
large-shared | 48 hrs | 1 | 1 | 4 | 1 | Single-node jobs using large memory up to 2 TB (minimum memory required 256G) |
debug | 30 min | 2 | 1 | 2 | 1 | Priority access to compute nodes set aside for testing of jobs with short walltime and limited resources |
gpu-debug | 30 min | 2 | 1 | 2 | 1 | ** Priority access to gpu nodes set aside for testing of jobs with short walltime and limited resources |
preempt | 7 days | 32 | | 128 | .8 | Discounted jobs run on free nodes and can be preempted by jobs submitted to any other queue (NO REFUNDS) |
gpu-preempt | 7 days | 1 | | 24 (24 Tres GPU) | .8 | Discounted jobs run on unallocated nodes and can be preempted by jobs submitted to higher-priority queues (NO REFUNDS) |
* limit applies per group
**gpu-debug users can only use up to two gpus per job.
You can request an interactive session using the srun command. The following example will request one regular compute node, 128 cores, in the debug partition for 30 minutes.
srun --partition=debug --pty --account=abc123 --nodes=1 --ntasks-per-node=128 \
    --mem=248G -t 00:30:00 --wait=0 --export=ALL /bin/bash
The following example will request a GPU node with 40 cores, 4 GPUs, and 374 GB of memory (the full node) in the gpu-debug partition for 30 minutes:
srun --partition=gpu-debug --pty --account=abc123 --nodes=1 --ntasks-per-node=40 \
    --mem=374G --gpus=4 -t 00:30:00 --wait=0 --export=ALL /bin/bash
Jobs can be submitted to the SLURM partitions using the sbatch command as follows:
sbatch jobscriptfile
where jobscriptfile is the name of a UNIX-format file containing special statements (corresponding to sbatch options), resource specifications, and shell commands. Several example SLURM scripts are given below:
#!/bin/bash #SBATCH --job-name="hellompi" #SBATCH --output="hellompi.%j.%N.out" #SBATCH --partition=compute
#SBATCH --nodes=2 #SBATCH --ntasks-per-node=128
#SBATCH --mem=248G
#SBATCH --account=*ABC123
#SBATCH --export=ALL #SBATCH -t 01:30:00 #This job runs with 2 nodes, 128 cores per node for a total of 256 tasks.
module purge
module load cpu
#Load module file(s) into the shell environment
module load gcc
module load mvapich2
module load slurm
srun --mpi=pmi2 -n 256 ../hello_mpi
* Expanse will require users to enter a valid project name; users can list valid project by running the expanse-client
script.
#!/bin/bash #SBATCH --job-name="hello_openmp" #SBATCH --output="hello_openmp.%j.%N.out" #SBATCH --partition=compute
#SBATCH --nodes=1 #SBATCH --ntasks-per-node=24
#SBATCH --mem=248G
#SBATCH --account=*ABC123 #SBATCH --export=ALL #SBATCH -t 01:30:00
module purge
module load cpu
module load slurm
module load gcc
module load openmpi #SET the number of openmp threads export OMP_NUM_THREADS=24 #Run the job using mpirun mpirun -np 24 ./hello_openmp
* Expanse will require users to enter a valid project name; users can list valid project by running the expanse-client
script.
#!/bin/bash #SBATCH --job-name="hellohybrid" #SBATCH --output="hellohybrid.%j.%N.out" #SBATCH --partition=compute
#SBATCH --nodes=2 #SBATCH --ntasks-per-node=24
#SBATCH --mem=248G
#SBATCH --account=*ABC123 #SBATCH --export=ALL #SBATCH -t 01:30:00 #This job runs with 2 nodes, 24 cores per node for a total of 48 cores. # We use 8 MPI tasks and 6 OpenMP threads per MPI task
module purge
module load cpu
module load slurm export OMP_NUM_THREADS=6 srun --mpi=pmi2 --cpus-per-task=$OMP_NUM_THREADS -n 4 ./hello_hybrid
* Expanse will require users to enter a valid project name; users can list valid project by running the expanse-client
script.
#!/bin/bash
#SBATCH -p shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --mem=40G
#SBATCH -t 01:00:00
#SBATCH -J job.8
#SBATCH -A abc123
#SBATCH -o job.8.%j.%N.out
#SBATCH -e job.8.%j.%N.err
#SBATCH --export=ALL

module purge
module load cpu
module load gcc
module load mvapich2
module load slurm
srun -n 8 ../hello_mpi

* Expanse requires users to enter a valid project name; valid projects can be listed by running the expanse-client script.
The above script will run using 8 cores and 40 GB of memory. Please note that the performance in the shared partition may vary depending on how sensitive your application is to memory locality and the cores you are assigned by the scheduler. It is possible the 8 cores will span two sockets for example.
The large memory nodes can be accessed via the "large-shared" partition. Charges are based on either the number of cores or the fraction of the memory requested, whichever is larger. By default the system will only allocate 1 GB of memory per core; if additional memory is required, users should explicitly use the --mem directive.
For example, on the "large-shared" partition, a job requesting 16 cores and 445 GB of memory (about 22% of the roughly 2 TB available on one node) for 1 hour will be charged approximately 28 SUs:
445/2007 (memory fraction) x 128 (cores) x 1 (duration in hours) ~= 28
The following directives request all of the cores and memory on a large-memory node:
#SBATCH --partition=large-shared
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --mem=2000G
export OMP_PROC_BIND='true'
While there is not a separate 'large' partition, a job can still explicitly request all of the resources on a large memory node. Please note that there is no premium for using Expanse's large memory nodes. Users are advised to request the large nodes only if they need the extra memory.
GPU nodes are allocated as a separate resource. The GPU nodes can be accessed via either the "gpu" or the "gpu-shared" partitions.
#SBATCH -p gpu
or
#SBATCH -p gpu-shared
When users request one GPU in the gpu-shared partition, by default they will also receive only 1 CPU and 1 GB of memory; additional CPUs and memory must be requested explicitly. Here is an example AMBER script that requests a full GPU node in the gpu partition:
#!/bin/bash #SBATCH --job-name="ambergpu-shared" #SBATCH --output="ambergpu-shared.%j.%N.out" #SBATCH --partition=gpu
#SBATCH --nodes=1 #SBATCH --gpus=4
#SBATCH --mem=374
#SBATCH --account=*ABC123 #SBATCH --no-requeue #SBATCH -t 01:00:00
module purge
module load gpu module load slurm
module load openmpi module load amber
pmemd.cuda -O -i mdin.GPU -o mdout.GPU.$SLURM_JOBID -x mdcrd.$SLURM_JOBID -nf mdinfo.$SLURM_JOBID -1 mdlog.$SLURM_JOBID -p prmtop -c inpcrd
* Expanse will require users to enter a valid project name; users can list valid project by running the expanse-client
script.
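For jobs in the gpu-shared partition, CPUs and memory beyond the 1-CPU/1-GB default should be requested explicitly. A minimal sketch of the relevant directives (the CPU and memory values below are illustrative, roughly one quarter of a GPU node per GPU, and should be tuned to your application):

#SBATCH --partition=gpu-shared
#SBATCH --nodes=1
#SBATCH --gpus=1
#SBATCH --ntasks-per-node=10
#SBATCH --mem=93G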
Users can find application-specific example job scripts on the system in the directory /cm/shared/examples/gpu.
GPU modes can be controlled for jobs in the "gpu" partition. By default, the GPUs are in non-exclusive mode and persistence mode is on. If a particular "gpu" partition job needs exclusive access, the following option should be set in your batch script:
#SBATCH --constraint=exclusive
To turn persistence off add the following line to your batch script:
#SBATCH --constraint=persistenceoff
The charging equation will be:
GPU SUs = (Number of GPUs) x (wallclock time)
SLURM will requeue jobs if there is a node failure. However, in some cases this might be detrimental if files get overwritten. If users wish to avoid automatic requeue, the following line should be added to their script:
#SBATCH --no-requeue
The 'requeue' count limit is currently set to 5. The job will be requeued 5 times after which the job will be placed in the REQUEUE_HOLD state and the job must be canceled and resubmitted.
SDSC User Services staff have developed sample run scripts for common applications. They are available in the /cm/shared/examples directory on Expanse.
There are several scenarios (e.g. splitting long running jobs, workflows) where users may require jobs with dependencies on successful completions of other jobs. In such cases, SLURM's --dependency option can be used. The syntax is as follows:
[user@login01-expanse ~]$ sbatch --dependency=afterok:jobid jobscriptfile
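For a chain of dependent jobs, the job ID of each submission can be captured with sbatch's --parsable option and passed to the next submission (the script names below are placeholders):

# Submit stage 1 and capture its job ID
jobid1=$(sbatch --parsable stage1.sb)
# Stage 2 starts only if stage 1 completes successfully
jobid2=$(sbatch --parsable --dependency=afterok:${jobid1} stage2.sb)
# Stage 3 depends on stage 2
sbatch --dependency=afterok:${jobid2} stage3.sb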
Users can monitor jobs using the squeue command.
[user@expanse ~]$ squeue -u user1
 JOBID PARTITION     NAME  USER ST    TIME NODES NODELIST(REASON)
256556   compute raxml_na user1  R 2:03:57     4 expanse-14-[11-14]
256555   compute raxml_na user1  R 2:14:44     4 expanse-02-[06-09]
In this example, the output lists two jobs that are running in the "compute" partition. The jobID, partition name, job names, user names, status, time, number of nodes, and the node list are provided for each job. Some common squeue options include:
Option | Result |
---|---|
-i <interval> | Repeatedly report at intervals (in seconds) |
-j <job_list> | Displays information for specified job(s) |
-p <part_list> | Displays information for specified partitions (queues) |
-t <state_list> | Shows jobs in the specified state(s) |
Users can cancel their own jobs using the scancel command as follows:
[user@expanse ~]$ scancel <jobid>
All of Expanse's NFS and Lustre filesystems are accessible via the Globus endpoint xsede#expanse. The data mover servers also mount Comet's filesystems, so the mount points differ between the two systems. The following table shows the mount points on the data mover nodes (which are the backend for xsede#comet and xsede#expanse).
Machine | Location on machine | Location on Globus/Data Movers |
---|---|---|
Expanse | /home/$USER | /expanse/home/$USER |
Expanse | /expanse/lustre/projects | /expanse/lustre/projects/ |
Expanse | /expanse/lustre/scratch | /expanse/lustre/scratch/... |
Comet | /oasis/projects/nsf | /oasis/projects/nsf |
Comet | /oasis/scratch/comet | /oasis/scratch |
SDSC will enforce a strict purge policy on the Expanse /scratch and /projects file systems: /projects directories will be purged 90 days after the allocation expires, and /scratch files will be purged 90 days from their creation date.
The compute nodes on Expanse have access to fast flash storage: each node has a local NVMe SSD (see the table below for capacities by node type). The latency to the SSDs is several orders of magnitude lower than that of spinning disk (<100 microseconds vs. milliseconds), making them ideal for user-level checkpointing and applications that need fast random I/O to large scratch files. Users can access the SSDs only during job execution, under the following directory local to each compute node:
/scratch/$USER/job_$SLURM_JOB_ID
Partition | Space Available |
---|---|
compute, shared | 1 TB |
gpu, gpu-shared | 1.6 TB |
large-shared | 3.2 TB |
A limited number of nodes in the "compute" partition have larger SSDs with a total of 1464 GB available in local scratch. They can be accessed by adding the following to the SLURM script:
#SBATCH --constraint="large_scratch"
In addition to the local scratch storage, users have access to global parallel filesystems on Expanse. Every Expanse node can access a 12 PB Lustre parallel file system (provided by Aeon Computing), delivering 140 GB/s of performance-storage bandwidth, and a 7 PB Ceph Object Store system. SDSC limits the number of files that can be stored in the /lustre/scratch filesystem to 2 million files per user. Users whose workflows require extensive small I/O should contact the XSEDE Help Desk for assistance, to avoid causing system issues associated with load on the metadata server.
The two Lustre filesystems available on Expanse are:
/expanse/lustre/scratch/$USER/temp_project
/expanse/lustre/projects/
After logging in, users are placed in their home directory, /home/$USER, also referenced by the environment variable $HOME. The home directory is limited in space and should be used only for source code storage; users have access to 100 GB in /home, and usage should be kept under that limit. Jobs should never be run from the home file system, as it is not set up for high-performance throughput. Backups are kept on a rolling 8-week period; in case of file corruption or data loss, please contact the XSEDE Help Desk to retrieve the requested files.
Expanse also supports Composable Systems, allowing researchers to create a virtual 'tool set' of resources, such as Kubernetes resources, for a specific project and then re-compose it as needed. Expanse will also feature direct scheduler integration with the major cloud providers, leveraging high-speed networks to ease data movement to and from the cloud.
All Composable System requests must include a brief justification, specifically describing why a Composable System is required for the project.
Expanse supports a broad application base with installs and modules for commonly used packages in bioinformatics, molecular dynamics, machine learning, quantum chemistry, structural mechanics, and visualization, and continues to support Singularity-based containerization. Users can search for available software on XSEDE resources with the XSEDE software search tool.