SDSC Thread Graphic Issue 12, March 2007






User Services Director:
Anke Kamrath

Editor:
Subhashini Sivagnanam

Graphics Designer:
Diana Diehl

Application Designer:
Fariba Fana


Featured Story Corner

Star-P at SDSC

— Ilya Mirman, Interactive Supercomputing

The San Diego Supercomputer Center has recently installed Star-P software with the goal of delivering revolutionary results to scientists, engineers, and analysts. Star-P enables them to use high-performance computing resources transparently through familiar desktop tools.

Platform Overview
Star-P software is a client-server parallel-computing platform designed to work with multiple Very High Level Language (VHLL) client applications such as MATLAB, Python, and R, and it has built-in tools to expand VHLL computing capability through the addition of libraries and hardware-based accelerators.

Photo: StarP

Star-P software is a bridge between popular desktop computing tools and the grids, servers, and clusters widely used in technical computing. With Star-P, you can use your favorite desktop simulation tool, with its familiar features, commands, and data types. Standard commands and functions are available and execute transparently in parallel. Existing scripts can be reused to run larger problems in parallel with minimal modification, which substantially reduces the learning curve and dramatically accelerates the development of custom parallel applications.
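
As a rough illustration of this reuse, the sketch below contrasts a serial MATLAB fragment with a minimally modified Star-P version. It follows the *p notation commonly shown in Star-P materials for marking a dimension as distributed; the exact syntax on a given installation should be checked against the Star-P documentation.

    % Serial MATLAB: everything lives in desktop memory
    n = 2000;
    A = rand(n, n);
    B = fft(A);
    x = A \ ones(n, 1);

    % Star-P version (illustrative): tagging a dimension with *p asks Star-P
    % to create the array on the parallel server; later operations on it run
    % there transparently, and the script is otherwise unchanged
    A = rand(n*p, n*p);
    B = fft(A);              % executed in parallel on the server
    x = A \ ones(n*p, 1);    % parallel solve; x stays distributed on the server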

Bringing Together Key Computing Modes
Star-P brings together the three key modes of computing:

  • Serial computing
  • Task-parallel computing
  • Data-parallel computing

Photo: StarP

Data parallel computations are for high-level matrix and vector operations on large data sets. They involve inter-processor communication during the computation. The Star-P data-parallel libraries are high-performance optimized libraries that can be called by the client application to perform compute-intensive operations on large distributed data sets.
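
A minimal sketch of the data-parallel mode, again using the *p notation described above (illustrative only; the specific distributions and calls available depend on the installed Star-P release):

    % Distributed operands are created directly on the server, so the client
    % never holds the full matrices
    n = 8000;
    A = randn(n, n*p);       % column-distributed dense matrix (illustrative)
    B = randn(n, n*p);
    C = A * B';              % data-parallel multiply; requires inter-processor communication
    s = sum(abs(C(:)));      % global reduction over the distributed result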

Task parallelism is a powerful method for carrying out many independent ("embarrassingly parallel") calculations in parallel, such as Monte Carlo simulations or unrolled serial FOR loops. For example, in a medical application involving image processing on multiple brain slices, Star-P can distribute the images across several processors and process them simultaneously. One measure of parallel abstraction is that a program should execute correctly regardless of the number of processors it has access to. With Star-P, there is no need to worry about the number of available processors: Star-P takes care of distributing the data and executing the computations.
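
A sketch of the task-parallel mode along the lines of that example. The ppeval call is Star-P's task-parallel interface, but the argument-passing conventions shown here are simplified, and smooth_slice is a hypothetical stand-in for whatever per-slice routine you supply:

    % 'slices' is a distributed stack of 64 independent images; ppeval applies
    % the supplied function to each slice independently, with no explicit
    % processor count anywhere in the script
    slices = rand(256, 256, 64*p);
    filtered = ppeval('smooth_slice', slices);   % hypothetical per-slice routine

    % smooth_slice.m, an ordinary serial function run on the server workers:
    %   function out = smooth_slice(img)
    %       out = conv2(img, ones(5)/25, 'same');   % any serial MATLAB code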

Although there are no hard and fast rules about when to use each computing mode, the following chart offers a rough guideline.

Photo: StarP

Extensible API
The Star-P Connect library API enables you to extend the functionality of the Star-P compute engine based on your particular application and algorithm requirements. You can plug in existing serial and parallel libraries, access them from desktop tools such as MATLAB and Python, and execute them in task- and data-parallel modes.
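
Conceptually, a routine that has been plugged in this way becomes callable from the client like any other function. The sketch below is purely illustrative: the call name ppinvoke, the library name, and the function name are assumptions standing in for whatever entry points the Star-P Connect documentation defines for your installation.

    % Hypothetical: calling a C/Fortran solver that has been registered with
    % the Star-P server through the Connect API (names are placeholders)
    A = rand(4096*p, 4096*p);
    x = ppinvoke('mysolverlib', 'fast_solve', A, ones(4096*p, 1));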

Code Profiling & Optimization
Star-P debugging, profiling, and monitoring tools enable you to explore your algorithms and application code interactively, to understand where time is being spent, the impact of server calls, data distributions, and so on. By quickly zeroing in on the areas where most of the time is spent, you can determine which portions of your code offer the biggest opportunities for performance improvement.

In this example of one of the built-in utilities, we see a breakdown of the total wall-clock time for a particular code: first spent mostly on the client, then on the server, and then on the network. At the bottom we see the cumulative picture evolve over time.

Photo: StarP
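
Even without the built-in utilities, a rough first cut at such a breakdown can be made with ordinary MATLAB timing. The sketch below uses plain tic/toc rather than Star-P's own profiling commands; ppfront is used here as the call that moves data back to the client, which should be confirmed against your Star-P release.

    % Crude client-side timing of a server computation and of pulling a small
    % result back across the network
    A = rand(4000*p, 4000*p);
    tic; B = A * A;                      t_compute  = toc;   % runs on the server
    tic; blk = ppfront(B(1:100, 1:100)); t_transfer = toc;   % moves a block to the client
    fprintf('compute: %.2f s   transfer: %.2f s\n', t_compute, t_transfer);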

Overall, the key benefit of Star-P is increased productivity: the ability to run bigger simulations, faster, and with less parallel programming effort. The following sections provide some application examples from the field.

10-100X Faster Computations
By transparently leveraging the parallel computing capability, Star-P enables simulations developed in desktop tools to be processed in parallel, dramatically accelerating computation time.

10-100X Larger Data Sets
Using Star-P, desktop application users can work with large, distributed datasets — gigabytes and even terabytes in size — distributed across servers, clusters, and grids.
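
For example, an array far too large for a desktop's memory can be created and manipulated entirely on the server (a sketch only, using the *p notation as above; sizes are illustrative):

    % Roughly 8 GB of double-precision data, created and kept on the server
    n = 32000;
    G = randn(n*p, n);              % never materialized in desktop memory
    colnorm = sqrt(sum(G.^2, 1));   % reduction runs where the data lives
    peak = max(colnorm);            % only a small result needs to travel back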

No Need for C/Fortran/MPI Re-Programming
With Star-P, there is no need to use the low-level languages and constructs of C, Fortran, and MPI to take advantage of high-performance computing resources. Using the Star-P Connect library API, users can leverage library functions written in C or Fortran from the open source community and from commercial vendors.

For More Information:
Interactive Supercomputing web site
Brief demonstration videos
Star-P White Paper Library
SDSC users can register for a Star-P support account and view training videos, download self-guided tutorials, and search the knowledge base for tips and tricks.

For questions regarding obtaining an account or using Star-P in the new SDSC Cluster, please contact Dongju Choi at SDSC's scientific computing group.

Did you know ..?

Technical information about DataStar and AIX, including IBM compilers and error messages, can be found at IBM's Infocenter:
http://publib.boulder.ibm.com/infocenter/pseries - Eva Hocks.