CS505 - Introduction to Parallel Computing
for Computational Scientists/Engineers and Computer Scientists
Overview
Parallel computing has become the dominant technique
for achieving high performance in computational science and engineering
research. Parallel computing systems are now becoming mainstream in commercial
sectors as well, driven by the performance demands of today's
engineering, database, and financial applications. Multiprocessor systems
based on commodity processors (IA32, IA64, G4, Alpha, UltraSparc, Power3, Power4)
are now common and offer excellent performance for the price. However,
parallel computing is relatively new to college curricula; until recently,
it was largely learned on the side (by research scientists) or on the job
(in industry).
This class will briefly discuss the architectures of modern parallel
computers and will teach students how to write parallel programs for them.
The class will be very 'applied' in nature: the emphasis will be on
parallel programming libraries rather than on abstract theoretical
parallel programming concepts or computer engineering details.
Prerequisites
Students must have prior programming experience using either C or Fortran.
Experience using Unix workstations and developing scientific codes is helpful
but not required.
Instructors
Class Schedule and Location
Classes will meet Mondays and Wednesdays, 5:00 PM to 6:15 PM, in BAM 343
at SDSU.
Office Hours
Amit Majumdar: before or after class in Rm 233. Email the instructor to
schedule a meeting.
Topics (tentative; the order of topics may also change)
- Overview of Parallel Computing (Majumdar): parallel computing concepts,
  parallel computer architectures, and standard programming models for
  parallel computers.
- Shared Memory Parallel Programming with OpenMP (Majumdar): OpenMP
  features and syntax, examples of parallelizing serial codes with OpenMP,
  and discussion of performance issues (a short illustrative sketch
  appears after this list).
- Distributed Memory Programming with Message Passing Interface (MPI)
  (Majumdar): MPI features and syntax, examples of parallelizing serial
  codes with MPI, and discussion of performance issues (see the MPI
  sketch after this list).
- Single Processor Optimization (Valafar): overview of microprocessor
  architectures, utilizing microprocessor features (registers, caches,
  etc.) effectively, characteristics and potential bottlenecks that
  determine maximum deliverable performance for code segments, and
  measuring and improving code performance (see the cache-traversal
  sketch after this list).
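To give a flavor of the OpenMP topic above, here is a minimal sketch of
parallelizing a serial loop in C; the array size and the use of a sum
reduction are illustrative assumptions, not course material:

    /* Minimal OpenMP sketch: parallelize a serial loop with a reduction.
     * Compile with, e.g., gcc -fopenmp. Array size is illustrative. */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000
    static double a[N];

    int main(void)
    {
        double sum = 0.0;

        /* Iterations are divided among threads; reduction(+:sum)
         * combines the per-thread partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++) {
            a[i] = (double)i;
            sum += a[i];
        }

        printf("sum = %f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }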
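In the same spirit, a minimal MPI sketch: each process learns its rank
and the total process count, and MPI_Reduce sums the ranks onto process
0. The choice of summing ranks is purely illustrative:

    /* Minimal MPI sketch: rank/size queries plus a reduction.
     * Run with, e.g., mpirun -np 4 ./a.out. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each process contributes its rank; the sum lands on rank 0. */
        int local = rank, total = 0;
        MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks 0..%d = %d\n", size - 1, total);

        MPI_Finalize();
        return 0;
    }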
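Finally, a sketch of one cache-related idea from the single processor
optimization topic: traversing a 2-D array row by row, the order in which
C stores it, so each loaded cache line is fully used before it is
evicted. The matrix size is an illustrative assumption:

    /* Minimal cache sketch: row-major traversal of a C array.
     * Swapping the two loops strides by N doubles per access and
     * typically runs noticeably slower on a cached machine. */
    #include <stdio.h>

    #define N 1024
    static double m[N][N];   /* zero-initialized static storage */

    int main(void)
    {
        double sum = 0.0;

        /* Inner loop walks consecutive memory locations. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j];

        printf("sum = %f\n", sum);
        return 0;
    }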
Grading
- Overview of Parallel Computing: quiz 10%
- Shared Memory Parallel Programming with OpenMP: HW 10%, quiz 10%
- Distributed Memory Programming with Message Passing Interface: HW 10%, quiz 10%
- Single Processor Optimization: (see final exam below)
- Final exam: 50% (35% from Prof. Valafar's part and 15% from Prof.
  Majumdar's part)
- A quiz missed without advance notification or an acceptable excuse will
  not be made up. 'Acceptable' is determined by the instructors.
- Late assignments will be subject to a 20% penalty per day.
- Assignments are due at the beginning of class on the due date.
Lectures, Assignments, and Quizzes
Reference Textbooks
These are reference textbooks for the class. You don't have to purchase
them, but if you would like to continue doing work in this field you may
want to.
- Using MPI, 2nd edition
  Authors: William Gropp, Ewing Lusk, and Anthony Skjellum
- Parallel Programming in OpenMP
  Authors: Rohit Chandra, Leo Dagum, Dave Kohr, Dror Maydan, Jeff
  McDonald, and Ramesh Menon
- A list of web-based references is available online at:
  http://www.sdsc.edu/~majumdar/CS505/references.html
Questions?
Please feel free to contact any of the instructors.