
Integrated Performance Monitoring (IPM)

Integrated Performance Monitoring (IPM) is a tool that gives users a concise summary of the performance and communication characteristics of their codes. IPM is invoked by the user at the time a job is run. By default it prints a short, text-based summary of the code's performance; a more detailed Web page with graphs to help visualize the output can also be generated. Instructions for using IPM at SDSC are given below. For more details about the kind of information IPM collects and how to interpret its output, see the article on IPM in "Thread," the SDSC User Services newsletter.

See the SourceForge page on IPM for more information on how IPM works.

Using IPM at SDSC

IPM is installed on BlueGene and DataStar. Syntax for use is given in the examples below.

IPM Output

Output Formats

For both BlueGene and DataStar, a report summarizing the collected data is printed at the end of your job output. In addition, IPM writes a file whose name contains your username and an IPM-generated number (e.g., nwright.1127175553.874153.0).

To generate a Web page showing the analysis of your code, run the ipm_parse command followed by the filename, e.g.:

ipm_parse nwright.1127175553.874153.0

The address of the Web page will be displayed on the screen once it is generated. For SDSC resources, it should look like this URL: http://www.sdsc.edu/us/tools/top/ipm/samples/ds002.140779.0
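The fields of the IPM output filename can be split apart programmatically. The sketch below is illustrative only: the first field is your username, the second appears to be a Unix timestamp, and the meaning assigned to the remaining IPM-generated fields is an assumption, not documented behavior.

```python
from datetime import datetime, timezone

def parse_ipm_filename(name):
    """Split an IPM output filename into its dot-separated fields.

    Field meanings are assumptions for illustration: the first field
    is the username, the second looks like a Unix timestamp, and the
    remaining fields are identifiers generated by IPM.
    """
    user, stamp, *rest = name.split(".")
    return {
        "user": user,
        "started": datetime.fromtimestamp(int(stamp), tz=timezone.utc),
        "ids": rest,
    }

info = parse_ipm_filename("nwright.1127175553.874153.0")
print(info["user"])          # nwright
print(info["started"].year)  # 2005
```

Decoding the timestamp this way can help you match an output file to the job that produced it when several runs have left files in the same directory.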

Please note that Web pages generated by IPM at SDSC are not retained indefinitely on the SDSC Web server; they are currently purged 30 days after generation. If you need to reference this output from a source that requires permanence, make a copy of the resulting pages and post them on an independent site. Contact consult@sdsc.edu for details and assistance in replicating this data, or if you need access to previous IPM output that has already been taken down.

Interpreting the Output of IPM

IPM provides information about the hardware counters, MPI function timings, and memory usage of a particular code. To help understand this output, whether from the command-line report or from the IPM-generated Web page, refer to the sample output below.


Sample IPM Output

##IPMv0.915#####################################################
#
# command : a.out (completed)
# host    : ds181/0020A7DA4C00_AIX mpi_tasks : 256 on 32 nodes
# start   : 05/09/06/12:40:46      wallclock : 1675.726124 sec
# stop    : 05/09/06/13:08:42      %comm     : 34.42
# gbytes  : 1.96252e+02 total      gflop/sec : 4.26471e+02 total
#
################################################################
# region  : *       [ntasks] =    256
#
#                 [total]        [avg]        [min]        [max]
# entries             256            1            1            1
# wallclock        428888      1675.35      1675.21      1675.73
# user             412918      1612.96      1587.56      1661.64
# system          2886.76      11.2764         3.16        15.89
# mpi              147656       576.78      445.853      647.961
# %comm                        34.4197      26.6113      38.6764
# gflop/sec       426.471       1.6659      1.64202      1.67676
# gbytes          196.252     0.766609     0.636787      1.65507
#
# PM_CYC      6.16095e+14  2.40662e+12  2.36993e+12  2.47706e+12
# PM_FPU_FMA  3.53785e+14  1.38197e+12   1.3622e+12   1.3906e+12
# PM_INST_CMPL7.24051e+14  2.82833e+12   2.7369e+12  2.91607e+12
# PM_FPU_FDIV 1.97955e+10  7.73261e+07  7.61015e+07  9.64191e+07
# PM_FPU1_FIN  1.8417e+14  7.19416e+11   7.0895e+11  7.25173e+11
# PM_LSU_LDF  1.40757e+14  5.49833e+11  5.41382e+11  5.57189e+11
# PM_FPU0_FIN 1.84875e+14  7.22169e+11  7.11701e+11  7.29275e+11
# PM_FPU_STF  8.18233e+12  3.19622e+10  3.07831e+10  3.64164e+10
#
#                  [time]      [calls]       [%mpi]      [%wall]
# MPI_Wait        79136.2  8.78238e+09        53.60        18.45
# MPI_Allreduce   33119.1       372896        22.43         7.72
# MPI_Isend       19857.3  4.43462e+09        13.45         4.63
# MPI_Bcast       6794.61  8.72678e+06         4.60         1.58
# MPI_Irecv        5177.8  4.39119e+09         3.51         1.21
# MPI_Recv        2138.91  4.34352e+07         1.45         0.50
# MPI_Gather      641.012       786432         0.43         0.15
# MPI_Waitall     537.009  1.72204e+07         0.36         0.13
# MPI_Reduce      182.701  3.03765e+06         0.12         0.04
# MPI_Barrier       64.85         9216         0.04         0.02
# MPI_Test        5.67482  1.79996e+06         0.00         0.00
# MPI_Comm_rank  0.452272  3.15962e+06         0.00         0.00
# MPI_Comm_size 0.0135098         7424         0.00         0.00
# MPI_TOTALTIME    147656            0       100.00        34.43
################################################################
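Several of the summary figures in the report above can be reproduced from the raw numbers it contains, which is a useful check when interpreting your own output. The snippet below recomputes %comm and the per-task flop rate from the sample report. The counter combination used for the aggregate flop rate (FPU0_FIN + FPU1_FIN + FPU_FMA − FPU_STF) is an assumption about how IPM derives flops on POWER4, offered only because it closely reproduces the reported value.

```python
# Figures taken from the sample IPM report above (256-task run).
wallclock_total = 428888.0   # [total] wallclock, seconds, summed over tasks
mpi_total = 147656.0         # [total] MPI time, seconds, summed over tasks
gflops_total = 426.471       # aggregate flop rate, gflop/sec
ntasks = 256

# %comm is MPI time as a fraction of total wallclock across all tasks.
pct_comm = 100.0 * mpi_total / wallclock_total
print(f"%comm      = {pct_comm:.2f}")               # ~34.43, as reported

# Per-task average flop rate, as shown in the [avg] column.
print(f"gflop/task = {gflops_total / ntasks:.4f}")  # ~1.6659, as reported

# Assumed POWER4 flop reconstruction from the hardware counters:
#   flops = PM_FPU0_FIN + PM_FPU1_FIN + PM_FPU_FMA - PM_FPU_STF
# (FMA counts a second flop per instruction; stores retire through the
# FPU on POWER4 and are subtracted). Divide by average wallclock.
flops = 1.84875e14 + 1.8417e14 + 3.53785e14 - 8.18233e12
print(f"gflop/sec  = {flops / 1675.35 / 1e9:.1f}")  # ~426.6, close to 426.471
```

Note that a high %comm (here about 34%) means roughly a third of the run was spent in MPI; the per-function table shows that MPI_Wait dominates that time, which typically points at load imbalance or late senders rather than raw bandwidth limits.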
       

If you have any questions, please contact the SDSC Help Desk Consultants.
