SDSC Webinars


Next Scheduled Webinar:

Date: Tuesday, March 12, 2019, 11:00am-1:00pm PDT

"Working With File Systems on Comet"

Presenter: Manu Shantharam

Description: In this webinar, we will learn about the file systems that are part of SDSC's Comet supercomputer. First, we will provide a brief overview and introduce the various file systems within Comet. We will discuss the pros and cons of using these file systems in terms of I/O performance, storage capacity, shared access, backup, and so on. Additionally, we will illustrate basic usage of the Lustre File System, Comet's high-performance parallel file system.
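As a taste of that last topic, here is a minimal sketch of checking and adjusting Lustre striping from a login node using Python. It assumes the Lustre client tools (lfs) are on your PATH, and the scratch path shown is only an illustrative placeholder, not an authoritative Comet location.

import os
import subprocess

# Illustrative placeholder path; substitute the Lustre scratch directory
# assigned to your account.
scratch_dir = os.path.join("/oasis/scratch", os.environ.get("USER", ""), "temp_project")

# Report the current stripe layout of the directory.
subprocess.run(["lfs", "getstripe", scratch_dir], check=True)

# Stripe new files in this directory across 4 OSTs; larger stripe counts can
# improve bandwidth for large files but add overhead for many small files.
subprocess.run(["lfs", "setstripe", "-c", "4", scratch_dir], check=True)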

Register for this webinar.


Past Webinars

"Introduction to Singularity: Containers For High-Performance Computing"

Date: 2/12/2019

Presenter: Marty Kandes

This 2-hour webinar provides an introduction to running Singularity containers on Comet, for users already working on Comet as well as those who want to know more about running Singularity there. SDSC computational scientist Marty Kandes provides an in-depth review of the important issues pertaining to running Singularity in the Comet high-performance ecosystem and includes several useful container examples for you to explore.
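As a taste of those examples, here is a minimal sketch of invoking a container from Python. It assumes the singularity command is available on the node; the image name is only a placeholder for an image you have already built or pulled.

import subprocess

# Placeholder image name; substitute a container image you have built or pulled.
image = "ubuntu.simg"

# Run a command inside the container, which supplies its own OS userland.
subprocess.run(["singularity", "exec", image, "cat", "/etc/os-release"], check=True)

# For interactive exploration, "singularity shell" opens a shell inside the
# container instead of running a single command.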


Download slides for this webinar. 

View a recording of this webinar. 


"Introduction to Running Jobs on Comet"

Date: 1/8/2019

Presenter: Mary Thomas


This webinar covers the basics of accessing the SDSC Comet supercomputer, managing the user environment, compiling and running jobs on Comet, where to run them, and how to submit batch jobs. It is assumed that you have mastered the basic skills of logging onto Comet and running simple Unix commands. The webinar includes access to training material.
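To give a flavor of the batch-job portion, here is a minimal sketch of composing and submitting a Slurm batch script from Python. The partition name, core count, and other directives are illustrative assumptions rather than authoritative Comet settings; consult the Comet user guide for the values that apply to your allocation.

import subprocess
from pathlib import Path

# Illustrative batch script; the directives are example values only.
job_script = """#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --partition=compute
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=24
#SBATCH --time=00:10:00
#SBATCH --output=hello.%j.out

echo "Running on $(hostname)"
"""

Path("hello.sb").write_text(job_script)

# Submit the job to the scheduler; check its status with squeue.
subprocess.run(["sbatch", "hello.sb"], check=True)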


Download slides for this webinar.

View a recording of this webinar.


"Understanding Performance and Obtaining Hardware Information"

Date: 12/11/2018

Presenter: Bob Sinkovits

In this webinar we start by describing how to obtain hardware and system information such as CPU specifications, memory quantity, cache configuration, mounted file systems and their usage, peripheral storage devices, and GPU properties. This information is useful for anyone who is interested in how hardware specs influence performance or who needs to report benchmarking data. We then cover the use of top for monitoring system usage and gprof for basic code profiling. We conclude with a description of the memory hierarchy (registers, cache, memory, external storage) and show how an understanding of cache can be used to write more efficient code.
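As a small taste of the first part, here is a minimal Linux-only sketch that pulls a few of the hardware details mentioned above; it assumes the /proc pseudo-filesystem is present, as on typical Linux HPC nodes.

import platform

# Kernel, distribution string, and architecture.
print("System:", platform.platform(), platform.machine())

# CPU model and cache size, taken from the first matching lines of /proc/cpuinfo.
wanted = {"model name", "cache size"}
seen = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        key = line.split(":")[0].strip()
        if key in wanted and key not in seen:
            print(line.strip())
            seen.add(key)

# Total physical memory; MemTotal is the first line of /proc/meminfo.
with open("/proc/meminfo") as f:
    print(f.readline().strip())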

"Introduction to Data Visualization on XSEDE Systems"

Date: 10/09/2018

Presenter: Jeff Sale

URL: http://www.sdsc.edu/support/user_guides/tutorials/intro_to_data_visualization.html

Description:

This 2-hour* webinar provides an introduction to data visualization for users of XSEDE systems and those who want to know more about general principles of data visualization. Jeff Sale introduces participants to data visualization principles and practices and to XSEDE visualization expertise, and offers hands-on demonstrations of running visualization jobs on SDSC's Comet supercomputer using a VisIt client, as well as a demonstration of the XSEDE Visualization Portal hosted at the Texas Advanced Computing Center. Topics covered include:

  • Introduction: what is visualization?
  • Why do we visualize data?
  • Overview of the data visualization workflow
  • Fundamentals of data visualization principles
  • Hands-On Demonstration: Using VisIt to run jobs on SDSC’s Comet
  • Brief demonstration of the XSEDE TACC Visualization Portal

*Authentication issues with the TACC Visualization Portal cut this webinar short. Another webinar is planned for early 2019.

"Running Singularity Containers on SDSC's Comet Supercomputer"

Date: 06/14/2018

Presenter: Marty Kandes

URL: https://portal.xsede.org/course-calendar/-/training-user/class/614

Description:

“Singularity enables users to have full control of their environment. Singularity containers can be used to package entire scientific workflows, software and libraries, and even data. This means that you don’t have to ask your cluster admin to install anything for you - you can put it in a Singularity container and run.” [from the Singularity web site at http://singularity.lbl.gov/]

This 2-hour webinar provides an introduction to running Singularity containers on Comet for users currently using Comet and those who want to know more about containerization on Comet. Marty Kandes provides an in-depth review of the important issues pertaining to running Singularity in the Comet high-performance environment, including live demonstrations.



"A Quick Introduction to Machine Learning with Comet"

Date: 04/06/2018

Presenter: Paul Rodriguez, Ph.D.


Description:

Machine Learning covers a variety of statistical techniques that are useful for data analysis and central to the recent developments in deep learning and AI.

This 2-hour webinar organizes and introduces the many terms and concepts that make up machine learning, describes how machine learning can be run on Comet HPC resources using R, and demonstrates with a quick tutorial how a simple deep learning model works in Python (a minimal sketch follows the agenda below).

Below is a brief agenda:

- Overview of Concepts and Terms for Machine Learning
- The main activities of applying machine learning models
- Scaling models
- Deep learning basics
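To give a flavor of the last agenda item, here is a minimal sketch of a simple deep learning model: a two-layer network trained on XOR with plain NumPy. The architecture and hyperparameters are illustrative assumptions, not the webinar's own material.

import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, one output unit.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # illustrative learning rate
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error through both layers.
    dp = (p - y) * p * (1 - p)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # predictions should move toward 0, 1, 1, 0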


"Introduction to SDSC's Comet Supercomputer"

Date: 02/15/2018 

Presenter: Mahidhar Tatineni

URL: https://portal.xsede.org/course-calendar/-/training-user/class/583

Description:

This webinar provides a brief introduction and some hands-on instruction for users who are relatively new to SDSC’s Comet supercomputer as well as for those who need a refresher. Join SDSC’s User Support Group Lead, Dr. Mahidhar Tatineni, for an introduction to Comet, including an overview of the underlying architecture, available software, running jobs, file management, and more.