SDSC Summer Institute 2014: HPC Meets Big Data

Important Dates

Applications will be accepted through Friday, May 30. Those applying by May 30 will be reviewed and notified in early June.

Applications received after this date will be reviewed and considered based on availability, through June 30.

Attendance and room-and-board scholarship support are limited, so you are encouraged to apply early.

Contact Info:

For questions, please contact:

sdscsi@sdsc.edu

Links:

UC San Diego
SDSC

Sponsors:

National Science Foundation

SDSC


SDSC Summer Institute 2014

HPC Meets Big Data

Program/Schedule

The SDSC Summer Institute 2014: HPC Meets Big Data will be held Monday through Friday (noon), August 4–8, 2014, at the San Diego Supercomputer Center (SDSC) on the University of California, San Diego (UCSD) campus. Light refreshments and lunch will be provided throughout.

The Summer Institute will use a flexible format designed to help attendees get the most out of their week. The first half will consist of plenary sessions covering the skills considered essential for anyone who works with big data: data management, running jobs on SDSC resources, reproducibility, database systems, characteristics of big data, techniques for turning data into knowledge, software version control, and making effective use of hardware. The second half will consist of parallel sessions that allow attendees to dive deeper into specialized material relevant to their research projects, with the exact choice of topics based on feedback collected during registration.

Required Materials
Summer Institutes are designed to be hands-on, so participants are expected to bring a laptop computer to follow along with the demos and hands-on instruction throughout the program.

HPC Meets Big Data: Things Everyone Should Know

HPC Meets Big Data Parallel Sessions: Deep Dive

Hadoop for Scientific Computing: This session will provide a hands-on overview of Hadoop, its application ecosystem, and how the map/reduce paradigm can be applied to solve problems in scientific computing. Starting with a conceptual overview of Hadoop and HDFS, attendees will write simple but powerful map/reduce applications in Python, R, or Perl and learn how to adapt their existing analysis codes to work within the Hadoop framework. Tools built on Hadoop, such as Pig and Mahout, will be discussed in the context of expanding the capabilities of high-performance computing, and attendees will gain hands-on experience using these tools to manipulate and analyze large scientific data sets.
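
To make the map/reduce pattern concrete, here is a minimal word-count sketch in the Hadoop Streaming style, where the mapper and reducer are ordinary Python scripts that read stdin and write stdout (the file names are illustrative):

    #!/usr/bin/env python
    # mapper.py -- emit "<word><TAB>1" for every word on stdin.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

    #!/usr/bin/env python
    # reducer.py -- sum the counts for each word. Hadoop sorts the
    # mapper output by key, so identical words arrive consecutively.
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print("%s\t%d" % (current, count))
            current, count = word, 0
        count += int(value)
    if current is not None:
        print("%s\t%d" % (current, count))

The same pair can be tested without a cluster via a shell pipeline such as cat input.txt | ./mapper.py | sort | ./reducer.py, which mimics what Hadoop Streaming does at scale.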

Parallel Computing Using MPI & OpenMP: This session is targeted at attendees who are looking for a hands-on introduction to parallel computing using MPI and OpenMP programming. The session will start with an introduction and basic information for getting started with MPI. It will provide an overview of the common MPI routines useful for beginning MPI programmers, including MPI environment setup, point-to-point communications, and collective communications routines. Simple examples illustrating distributed-memory computing with these routines will be covered. The OpenMP section will provide an overview of constructs and directives for specifying parallel regions, work sharing, synchronization, and data scope. Simple examples will be used to illustrate the OpenMP shared-memory programming model and important run-time environment variables. Hands-on exercises for both MPI and OpenMP will be done in C and Fortran.
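
The hands-on exercises use C and Fortran; purely to preview the same message-passing concepts, here is a minimal sketch in Python using the mpi4py package (our choice of illustration, not part of the session materials):

    # Point-to-point and collective communication with mpi4py.
    # Run with, e.g.: mpiexec -n 4 python mpi_demo.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID
    size = comm.Get_size()   # total number of processes

    # Point-to-point: rank 0 sends a greeting to rank 1.
    if rank == 0 and size > 1:
        comm.send("hello from rank 0", dest=1, tag=0)
    elif rank == 1:
        msg = comm.recv(source=0, tag=0)
        print("rank 1 received:", msg)

    # Collective: every rank contributes its rank number and
    # rank 0 receives the sum 0 + 1 + ... + (size - 1).
    total = comm.reduce(rank, op=MPI.SUM, root=0)
    if rank == 0:
        print("sum of ranks:", total)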

Performance Optimization: This session is targeted at attendees who both do their own code development and need their calculations to finish as quickly as possible. We'll cover the effective use of cache, loop-level optimizations, strength reduction, optimizing compilers and their limitations, short-circuiting, time-space tradeoffs, and more. Exercises will be done mostly in C, but the emphasis will be on general techniques that can be applied in any language.
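
As a small taste of the loop-level techniques, the sketch below hoists a loop-invariant computation out of a loop and strength-reduces a power operation to a multiply; the example itself is illustrative rather than part of the session materials:

    import math
    import timeit

    def slow(xs):
        # math.sqrt(2.0) is loop-invariant but recomputed every
        # iteration, and x ** 2 uses a relatively costly power op.
        return [x ** 2 * math.sqrt(2.0) for x in xs]

    def fast(xs):
        # Hoist the invariant out of the loop and replace
        # x ** 2 with the cheaper x * x.
        c = math.sqrt(2.0)
        return [x * x * c for x in xs]

    xs = list(range(10000))
    print("slow:", timeit.timeit(lambda: slow(xs), number=100))
    print("fast:", timeit.timeit(lambda: fast(xs), number=100))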

Predictive Analytics: This session is designed as an introduction for attendees seeking to extract meaningful predictive information from massive volumes of data. It will introduce the field of predictive analytics and a variety of data analysis tools for discovering patterns and relationships in data that can contribute to building valid predictions.
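
The basic loop in predictive analytics is: fit a model on data with known outcomes, then evaluate it on held-out data. A minimal sketch using scikit-learn (our choice; the session does not prescribe a particular tool):

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)

    # Hold out a quarter of the data to estimate predictive accuracy.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))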

Scalable Data Management: This session will take an in-depth tour of large and complex data science problems that require one or more data management software systems. It will take a case-study-based approach, presenting three real-world problems from three different application domains. Each case will open with a short oral introduction followed by a longer hands-on session using state-of-the-art scalable data management software.
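
The description does not name the software used in the case studies, but the pattern such systems share is pushing computation into the data management layer rather than scanning flat files in application code. A toy illustration of that pattern with Python's built-in sqlite3 module:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE readings (station TEXT, temp REAL)")
    conn.executemany("INSERT INTO readings VALUES (?, ?)",
                     [("A", 12.5), ("A", 13.1), ("B", 9.8)])

    # The database computes the aggregate; the application never
    # loads every row into memory.
    for station, avg_temp in conn.execute(
            "SELECT station, AVG(temp) FROM readings GROUP BY station"):
        print(station, avg_temp)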

Visualization: Visualization is largely understood and used by researchers as a communication tool. This narrow view often keeps scientists from fully using and developing their visualization skill set. This tutorial will provide a "from the ground up" understanding of visualization and its utility in error diagnostics and in exploring data for scientific insight. Used effectively, visualization provides a complementary and effective toolset for data analysis, one of the most challenging problems in computational domains. In this tutorial we plan to bridge these gaps by providing end users with fundamental visualization concepts, execution tools, customization, and usage examples. Finally, a short introduction to SeedMe.org will be provided, where users will learn how to share their visualization results ubiquitously.
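
As one example of visualization as an error diagnostic, plotting residuals (observed minus modeled values) often exposes structure that summary statistics hide. A short sketch using matplotlib (an assumed tool; the tutorial may use others):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 10, 200)
    observed = np.sin(x) + np.random.normal(0.0, 0.1, x.size)
    modeled = np.sin(x)

    fig, (top, bottom) = plt.subplots(2, 1, sharex=True)
    top.plot(x, observed, ".", label="observed")
    top.plot(x, modeled, "-", label="model")
    top.legend()
    bottom.plot(x, observed - modeled, ".")
    bottom.set_ylabel("residual")
    bottom.set_xlabel("x")
    fig.savefig("residuals.png")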

Workflow Management: This session will start with a crash course on workflow management basics. We will then explore common computing platforms, including Sun Grid Engine, NSF XSEDE high-performance computing resources, the Amazon cloud, and Hadoop, with an emphasis on how workflow systems can help with rapid development of distributed and parallel applications on top of any combination of these platforms. We will then discuss how to track data flow and process executions within these workflows (i.e., provenance tracking), including intermediate results, as a way to make workflow results reproducible. We will end with a lab session on using Kepler to build, package, and share workflows that interact with various computing systems.
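
Kepler itself is a graphical workflow system, so rather than guessing at its interface here, this plain-Python sketch illustrates the provenance idea the session covers: record what ran, on which inputs, producing which outputs (the command and file names are hypothetical):

    import hashlib, json, subprocess, time

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def run_step(cmd, inputs, outputs, log="provenance.json"):
        record = {"cmd": cmd,
                  "start": time.time(),
                  "inputs": {p: sha256(p) for p in inputs}}
        subprocess.run(cmd, check=True)   # execute the workflow step
        record["end"] = time.time()
        record["outputs"] = {p: sha256(p) for p in outputs}
        with open(log, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Hypothetical step: sort raw.txt into sorted.txt.
    run_step(["sort", "raw.txt", "-o", "sorted.txt"],
             inputs=["raw.txt"], outputs=["sorted.txt"])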

One-on-One Consulting: Attendees will have the opportunity to work individually or in small groups directly with SDSC staff. The goal is to help participants overcome the computational challenges and bottlenecks that are limiting the progress of their research projects. We will be available to assist participants with data management, software parallelization, workflow development, and other topics covered in the Summer Institute.