SPEC MPI2007 Flag Descriptions for QLogic MPI and the Intel(R) C++ Compiler 10.1

Copyright © 2006 Intel Corporation. All Rights Reserved.

Sections

Selecting one of the following will take you directly to that section:


Optimization Flags


Portability Flags


Compiler Flags


System and Other Tuning Information

Platform settings

One or more of the following settings may have been set. If so, the "General Notes" section of the report will say so, and you can read below to find out more about what these settings mean.

Hardware Prefetch:

This BIOS option allows the enabling/disabling of a processor mechanism to prefetch data into the cache according to a pattern-recognition algorithm.

In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

Adjacent Sector Prefetch:

This BIOS option allows the enabling/disabling of a processor mechanism to fetch the adjacent cache line within a 128-byte sector that contains the data needed due to a cache line miss.

In some cases, setting this option to Disabled may improve performance. Users should only disable this option after performing application benchmarking to verify improved performance in their environment.

Snoop Filter Enabled/Disabled:

This BIOS option enables/disables the Snoop Filter. The Snoop Filter is designed to reduce system bus utilization caused by cache misses. On the Intel 5000X and 5400 chipsets, it is implemented as a cache structure that minimizes unnecessary snoop traffic. When enabled, it can lead to significant memory performance improvements for several workstation applications on suitable memory configurations.

ulimit -s

Sets the maximum stack size to n kbytes, or to unlimited to allow the stack size to grow without limit.
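For illustration, a shell sketch of the setting (the 65536 value here is an arbitrary example, not a value taken from any report):

```shell
# Show the current soft stack-size limit (in kbytes, or "unlimited")
ulimit -s

# Set the soft stack-size limit to 65536 kbytes (64 MB); this succeeds
# as long as the hard limit allows it
ulimit -s 65536
```

The limit applies to the shell and to the processes it launches, so it is normally set before invoking the benchmark run.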

QLogic MPI Library 2.2 options and environment variables

The submit command shown below uses the QLogic MPI mpirun command to launch the MPI processes.

     submit=  
     export PATH=/bin:/usr/bin;
     unset LD_LIBRARY_PATH;
     . $[RUNTIME_ICC_HOME]/bin/iccvars.sh;
     . $[RUNTIME_IFORT_HOME]/bin/ifortvars.sh;
     $[MPI_HOME]/bin/mpirun -mpd -rcfile \$SPEC/mpd_rcfile -disable-mpi-progress-check $command
     

The SPEC config file feature submit is used to launch MPI jobs. This particular submit command uses QLogic MPI's mpirun command to launch the jobs. Before the job is launched, the PATH and LD_LIBRARY_PATH environment variables are set appropriately for the QLogic MPI processes and the Intel Compiler runtime libraries. Flags for the mpirun command are explained below.

mpirun command flags

-mpd

Used after running mpdboot to start a daemon, rather than using the default ssh protocol to start jobs. See the mpdboot(1) man page for more information. None of the other options described below (with the exception of -h) are valid when using this option.
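As an illustrative sketch of the daemon-based startup described above (the hosts-file path, daemon count, process count, and program name are all hypothetical, not taken from the report):

```shell
# Start mpd daemons on the nodes listed in a hosts file (hypothetical path)
mpdboot -n 4 -f /path/to/mpihosts

# Launch through the daemons instead of ssh
mpirun -mpd -np 16 ./my_mpi_app

# Shut the daemons down when finished (standard MPD companion command)
mpdallexit
```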

-rcfile <node-shell-script-name>

Before starting node programs, mpirun checks to see if a file called .mpirunrc exists in the user's home directory. If it exists, it is sourced into the running remote shell. This option overrides the default file that is sourced. .mpirunrc should be used to set paths and other environment variables, such as LD_LIBRARY_PATH. It is typically used to troubleshoot the startup of node programs. It should not contain any interactive commands, but it may contain commands that write to stdout or stderr. Note that the .mpirunrc file is not read by mpirun itself; it is read on each remote node of the cluster.
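As an illustration, a minimal rcfile of the kind sourced on each node might look like the following (the directory paths are hypothetical placeholders):

```shell
# Example rcfile, sourced by the remote shell on each node before the
# node program starts. Non-interactive commands only.
export PATH=/usr/local/mpi/bin:$PATH           # hypothetical MPI install
export LD_LIBRARY_PATH=/opt/intel/lib:$LD_LIBRARY_PATH  # hypothetical compiler libs
```

In the submit command shown earlier, a file of this form is passed explicitly via -rcfile rather than relying on $HOME/.mpirunrc.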

-disable-mpi-progress-check

Quiescence is a condition in which no MPI messages are being sent or received by ANY of the node processes, or in which a ping reply is not received. QLogic MPI supports quiescence detection so that buggy, deadlocked programs can be terminated gracefully. This option disables the MPI communication progress check without disabling the ping reply check.

-np <# of processes>

Use this option to set the number of MPI processes used to run the current arg-set.

-ppn <# of processes>

Use this option to place the indicated number of consecutive MPI processes on every host in a group round-robin fashion. The total number of processes to start is controlled by the -np option, as usual.
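For example, assuming a hypothetical cluster of 4 hosts with 8 cores each (the process counts and program name are illustrative only):

```shell
# 32 MPI ranks in total, placed 8 consecutive ranks per host:
# ranks 0-7 on the first host, 8-15 on the second, and so on
mpirun -np 32 -ppn 8 ./my_mpi_app
```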