Avoiding runhpc
Using the SPEChpc™ 2021 benchmarks while making minimal use of SPEC's tool set

$Id$ Latest: www.spec.org/hpg/hpc2021/Docs/

Contents

Introduction

Environments

Steps

Review one rule

Install

Pick a benchmark

Pick a config file

Fake it

Find the log

Find the build dir

Copy the build dir (triple only)

Build it

Place the binary in the run dir

Copy the run dir

Run it

Save your work

Repeat

Validation

Introduction

This document is for those who prefer to avoid using some of the SPEC-supplied tools, typically because of a need for more direct access to the benchmarks.

If the above describes you, here is a suggested path which should lead quickly to your desired state. This document shows you how to use SPEC's tools for the minimal purpose of just generating work directories, for use as a private sandbox. Note, however, that you cannot do formal, "reportable" runs without using SPEC's toolset.

Caution: Examples below use size=test in order to demonstrate working with a benchmark with its simplest workload. The test workload would be wildly inappropriate for performance work. Once you understand the techniques shown here, use the ref workload. If you are unable to do that (perhaps because you are using a slow simulator), you still should not use test, because it is likely to lead your research in the wrong direction. Various other benchmark test workloads just do a quick check that the binary starts and can open its files, then take the rest of the day off to go get a cup of tea (that is, do almost none of their real work). If you are really unable to simulate the ref workload, a more defensible choice would be to sample traces from ref.

License reminder: Various commands below demonstrate copying benchmarks among systems. These examples assume that all the systems belong to licensed users of SPEChpc 2021. For the SPEChpc license, see www.spec.org/hpg/hpc2021/Docs/licenses/SPEC-License.pdf and for information about all the licensed software in SPEChpc 2021, see SPEChpc 2021 Licenses.

Environments

Three different environments are referenced in this document, using these labels:

unified: The SPEC tools, the compilers, and the benchmark runs are all on a single system.

cross-compile: The SPEC tools and the compilers are on one system; the benchmarks are run on a different system.

triple: Three systems are involved: the SPEC tools are on the first, the compilers are on a second, and the benchmarks are run on a third.

Steps

  1. Review one rule: Please read the rule on research/academic usage. It is understood that the suite may be used in ways other than the formal environment that the tools help to enforce. If you plan to publish your results, you must state how your usage of the suite differs from the standard usage.

    So even if you skip over the tools and the run rules today, you should plan a time to come back and learn them later.

  2. Install: Get through a successful installation, even if it is on a different system than the one that you care about. Yes, we are about to teach you how to mostly bypass the tools, but there will still be some minimal use. So you need a working toolset and a valid installation. If you have troubles with the install procedures described in install-guide-linux.html, please see techsupport.html and we'll try to help you.
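
    For example, a minimal installation might look something like this (a sketch: the mount point /mnt/hpc2021 is hypothetical; substitute the location of your installation media and whatever destination directory you prefer):

    $ cd /mnt/hpc2021
    $ ./install.sh -d $HOME/spec/hpc2021
    $ cd $HOME/spec/hpc2021
    $ source shrc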

  3. Pick a benchmark: Pick a benchmark that will be your starting point.

    Choose one benchmark from the SPEChpc 2021 suite to start with. For example, you might start with 535.weather_t (Fortran) or 505.lbm_t (C). These are two of the shortest benchmarks by lines of code, and therefore relatively easy to understand.
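
    Once you have a working installation and have sourced shrc (as shown in the Fake it step below), you can see the full list of candidates by looking at the benchmark directories:

    $ ls $SPEC/benchspec/HPC
    (one directory per benchmark, such as 505.lbm_t and 535.weather_t)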

  4. Pick a config file: Pick a config file for an environment that resembles your environment. You'll find a variety of config files in the directory $SPEC/config/ on Linux systems or at www.spec.org/hpg/hpc2021 with the submitted SPEChpc 2021 results. Don't worry if the config file you pick doesn't exactly match your environment; you're just looking for a somewhat reasonable starting point.
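
    For example, to see the config files that ship with your installation:

    $ ls $SPEC/config/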

  5. Fake it: Execute a "fake" run to set up run directories, including a build directory for source code, for the benchmark.

    For example, let's suppose that you want to work with 535.weather_t and your environment is at least partially similar to the environment described in the comments for Example_gnu.cfg:

    $ pwd
    /Users/cponder/spec/hpc2021
    $ source shrc
    $ cd config
    $ cp Example_gnu.cfg my_test.cfg 
    $ runhpc --fake --loose --size test --tune base --config my_test --ranks 4 535.weather_t  
    .
    .
    . (lots of stuff goes by)
    .
    .
    
    Success: 1x535.weather_t
    
    The log for this run is in /Users/cponder/spec/hpc2021/result/hpc2021.007.log
    

    This command should report success for the build, run, and validation phases of the test case, but the actual commands are not run; the output is only a report of what would be run according to the config file you supplied.

  6. Find the log: Near the bottom of the output from the previous step, notice the location of the log file for this run -- in the example above, log number 007. The log file contains a record of the commands as reported by the "fake" run. You can find the commands by searching for "%%".
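
    For example, to list the recorded command lines along with their line numbers in the log:

    $ grep -n '%%' $SPEC/result/hpc2021.007.log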

  7. Find the build dir: To find the build directory that was set up in the fake run, you can search for the string build/ in the log:

    $ cd $SPEC/result
    $ grep build/ hpc2021.007.log
    Wrote to makefile '/Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/build/build_base_gnu_mpi_regular.0000/Makefile.deps':
    Wrote to makefile '/Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/build/build_base_gnu_mpi_regular.0000/Makefile.spec':
    $ 

    Or, you can just go directly to the benchmark build directories and look for the most recent one. For example:

    $ go 535.weather build
    /Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/build
    $ ls -gtd build*
    drwxrwxr-x 2 staff 4096 Oct 18 10:54 build_base_gnu_mpi_regular.0000/
    $ 

    In the example above, go is shorthand for getting around the SPEC tree. The ls -gtd command lists the build subdirectories, most recent first. If this is your first time here, only one directory will be listed, as in the example above.

    You can work in this build directory, make source code changes, and try other build commands without affecting the original sources.
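
    For example, before making changes you might keep a pristine copy of a source file (a sketch; miniWeather.F90 is the Fortran source that appears in the build transcript below):

    $ cp miniWeather.F90 miniWeather.F90.orig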

  8. Copy the build dir (triple only): If you are using a unified or cross-compile environment, you can skip to the next step. But if you are using a triple environment, then you will want to package up the build directory with a program such as tar -- a handy copy is in the bin directory of your SPEC installation, as spectar. You can compress it with specxz. Then, you will move the package off to whatever system has compilers.

    For example, you might say something like this:

    $ spectar -cf - build_base_gnu_mpi_regular.0000/ | specxz > mybuild.tar.xz
    $ scp mybuild.tar.xz oscar@somesys:                                  [reminder: copying]
    mybuild.tar.xz                          100%   11KB 181.7KB/s   00:00    
    $  

    Note that the above example assumes that you have versions of xz and tar available on the system that has compilers, which you will use to unpack the compressed tarfile, typically with a command similar to this:

    xz -dc mybuild.tar.xz | tar -xvf -

    If you don't have xz available, you might try bzip2 or gzip on both the sending and receiving systems. If you use some other compression utility, be sure that it does not corrupt the files by destroying line endings, re-wrapping long lines, or otherwise subtracting value.
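
    For example, a sketch of the same transfer using gzip instead of xz:

    $ spectar -cf - build_base_gnu_mpi_regular.0000/ | gzip > mybuild.tar.gz
    $ scp mybuild.tar.gz oscar@somesys:

    and, on the receiving side:

    $ gzip -dc mybuild.tar.gz | tar -xvf -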

  9. Build it: Generate an executable using the build directory. If you are using a unified or cross-compile environment, then you can say commands such as these:

    $ cd build_base_gnu_mpi_regular.0000/
    $ specmake clean
    rm -rf *.o  output6.test.txt
    find . \( -name \*.o -o -name '*.fppized.f*' -o -name '*.i' -o -name '*.mod' \) -print | xargs rm -rf
    rm -rf weather
    rm -rf weather.exe
    rm -rf core
    rm -rf options.err compiler-version.err make.out compiler-version.out options.out
    $ specmake
    /Users/cponder/spec/hpc2021/bin/specperl /Users/cponder/spec/hpc2021/bin/harness/specpp -DSPEC -DNDEBUG -DSPEC_LP64 miniWeather.F90 -o miniWeather.fppized.f90
    mpif90 -c -o miniWeather.fppized.o -Ofast -march=native -ffree-line-length-none miniWeather.fppized.f90
    mpif90       -Ofast -march=native -ffree-line-length-none    -DSPEC_LP64     miniWeather.fppized.o                      -o weather
    

    Note above that the $SPEC environment variable is used to find the SPEChpc common makefile as well as the location of the specperl and specpp utilities to perform the Fortran preprocessing.

    You can also carry out a dry run of the build, which displays the build commands without running them, by adding -n to the specmake command line. You might find it useful to capture the output of specmake -n to a file so that it can easily be edited and used as a script.

    If you are trying to debug a new system, you can prototype changes to Makefile.spec or even to the benchmark sources. The make variables used in Makefile.spec vary according to what is included in the config file you chose above. The most commonly used make variables are described in Section II.A of config.html.
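
    Because specmake is GNU make (see the note below), you can also prototype a change by overriding a make variable on the command line instead of editing any files. A sketch, assuming your config file sets the commonly used OPTIMIZE variable:

    $ specmake clean
    $ specmake OPTIMIZE="-O2 -march=native"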

    If you are using a triple environment, then presumably it's because you don't have specmake working on the system where the compiler resides. But fear not: specmake is just GNU make under another name, so whatever make you have handy on the target system might work fine with the above commands. If not, then you'll need to extract the build commands prior to creating the bundle, create and edit a local build file, and try them on the system.

    $ go 535.weather build
    /Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/build
    $ cd build_base_gnu_mpi_regular.0000
    $ specmake -n > bld.sh 
    $ vi bld.sh  (Edit the build script with your favorite editor)
    $ cat bld.sh 
    mpif90 -c -o miniWeather.fppized.o -Ofast -march=native -ffree-line-length-none -DSPEC -DNDEBUG -DSPEC_LP64 miniWeather.F90
    mpif90       -Ofast -march=native -ffree-line-length-none    -DSPEC_LP64     miniWeather.fppized.o                      -o weather
    $ cd ..
    $ spectar -cf - build_base_gnu_mpi_regular.0000/ | specxz > mybuild.tar.xz
    $ scp mybuild.tar.xz oscar@somesys:     
    $

    Then on the remote system:

    $ tar Jxvf mybuild.tar.xz
    $ cd build_base_gnu_mpi_regular.0000
    $ sh -x bld.sh
    + mpif90 -c -o miniWeather.fppized.o -Ofast -march=native -ffree-line-length-none -DSPEC -DNDEBUG -DSPEC_LP64 miniWeather.F90
    + mpif90 -Ofast -march=native -ffree-line-length-none -DSPEC_LP64 miniWeather.fppized.o -o weather
    

    Note that the edited "bld.sh" script removes the specpp command, adds the define flags to the compile line, and changes the compiled source file from "miniWeather.fppized.f90" back to "miniWeather.F90". It is common practice for Fortran compilers to preprocess source files whose suffix uses an upper-case "F". However, preprocessing is not part of the Fortran standard, so not all Fortran compilers support it. If you are using a Fortran compiler that does not support preprocessing, then you will need either to save the preprocessed Fortran source (*.fppized.f90) prior to creating the bundle, or to install the SPEC tools on the remote system so that specpp is available.
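
    One way to save the preprocessed source (a sketch) is to rerun, on the system where the SPEC tools are installed, the specpp command that specmake recorded in the transcript above, before creating the bundle:

    $ cd build_base_gnu_mpi_regular.0000
    $ specperl $SPEC/bin/harness/specpp -DSPEC -DNDEBUG -DSPEC_LP64 miniWeather.F90 -o miniWeather.fppized.f90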

  10. Find the run directory, and add the binary to it: Using techniques similar to those used to find the build directory, find the run directory established above, and place the binary into it. If you are using a unified or cross-compile environment, you can copy the binary directly into the run directory; if you are using a triple environment, then you'll have to retrieve the binary from the compilation system using whatever program you use to communicate between systems.

    In a unified environment, the commands might look something like this:

    $ go result
    /Users/cponder/spec/hpc2021/result/
    $ grep 'Setting up' hpc2021.007.log
     Setting up 535.weather_t test base gnu_mpi_regular: run_base_test_gnu_mpi_regular.0000
    $ go 535.weather_t run 
    /Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/run/
    $ cd run_base_test_gnu_mpi_regular.0000/
    $ cp ../../build/build_base_gnu_mpi_regular.0000/weather .
    $  

    In the result directory, we search log 007 to find the correct name of the directory, go there, and copy the binary into it.

  11. Copy the run dir: If you are using a unified environment, you can skip this step. Otherwise, you'll need to package up the run directory and transport it to the system where you want to run the benchmark. For example:

    $ go 535.weather_t run
    /Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/run/
    $ spectar cf - run_base_test_gnu_mpi_regular.0000/ | specxz > myrun.tar.xz
    $ scp myrun.tar.xz oscar@mysys: 
    $  

    Note that the above example assumes that you have versions of xz and tar available on the run-time system, which you will use to unpack the compressed tarfile, typically with something like this:

    xz -dc myrun.tar.xz | tar -xvf -

    or, if your tar understands the xz format directly:

    tar Jxvf myrun.tar.xz

    If you don't have xz available, you might try bzip2 or gzip on both the sending and receiving systems. If you use some other compression utility, be sure that it does not corrupt the files by destroying line endings, re-wrapping long lines, or otherwise subtracting value.

  12. Run it: If you are using a unified environment, you can use specinvoke to see the command lines that run the benchmark, and/or capture them to a shell script. You can also run them using judicious(*) cut and paste:

    $ go 535.weather_t run/run_base_test_gnu_mpi_regular.0000 
    /Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/run/run_base_test_gnu_mpi_regular.0000
    $ cp ../../build/build_base_gnu_mpi_regular.0000/weather .
    $ specinvoke -n
    # specinvoke r4356
    #  Invoked as: specinvoke -n
    # timer ticks over every 1000 ns
    # Use another -n on the command line to see chdir commands and env dump
    # Starting run for copy #0
    mpirun -q -n 4 ../run_base_test_gnu_mpi_regular.0000/weather_base.gnu_mpi_regular output6.test.txt 512 256 1000 10 6; 0<&- > weather.1.stdout.out 2>> weather.1.stderr.err
    specinvoke exit: rc=0
    

    (*) Note above that specinvoke expects the name of the weather binary to include additional identifiers (_base.gnu_mpi_regular). We simply drop them in the command that is cut-and-pasted below, because the binary built by hand is just weather.

    $ mpirun -np 4 weather output6.test.txt 512 256 1000 10 6 > weather.out  
    $ cat weather.out
     nx_glob, nz_glob:                  512                  256
     dx,dz:    39.062500000000000        39.062500000000000
     dt:   0.13020833333333331
    $  

    If you are using a cross-compile or triple environment, you can capture the commands to a file and execute that. Be sure to follow the instructions for doing so carefully, noting in particular the items about your environment, in the specinvoke chapter of SPEChpc 2021 Utilities (utility.html).
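
    For example, a sketch of capturing the commands on the system where the tools work, in the run directory, before packaging it up:

    $ specinvoke -n > run.sh
    $ vi run.sh   (fix the binary name as noted above, and delete the trailing "specinvoke exit" line)

    Then, after transporting the run directory, execute the edited script on the run system:

    $ sh run.sh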

    Alternatively, you can extract the run commands from speccmds.cmd.

    $ go 535.weather_t run/run_base_test_gnu_mpi_regular.0000 
    /Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/run/run_base_test_gnu_mpi_regular.0000
    $ tail speccmds.cmd
    -E TERM xterm
    -E UCSUITE HPC
    -E USER cponder
    -E VENDOR unknown
    -E XDG_RUNTIME_DIR /run/user/1025
    -E XDG_SESSION_ID 415
    -N C
    -C /Users/cponder/spec/hpc2021/benchspec/HPC/535.weather_t/run/run_base_test_gnu_mpi_regular.0000
    -o weather.1.stdout.out -e weather.1.stderr.err mpirun -q -n 4 ../run_base_test_gnu_mpi_regular.0000/weather_base.gnu_mpi_regular output6.test.txt 512 256 1000 10 6;
    $  

    speccmds.cmd is the script that specinvoke uses to run the benchmarks. The "-E" options set environment variables; "-o" and "-e" name the stdout and stderr logs. The benchmark invocation itself is the mpirun command at the end of the last line.

  13. Save your work: Important: if you are at all interested in saving your work, move the build/build* and run/run* directories to some safer location. That way, your work areas will not be accidentally deleted the next time someone comes along and uses one of the runhpc cleanup actions.
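
    For example, a sketch (the destination directory is hypothetical):

    $ mkdir -p $HOME/hpc2021-saved
    $ go 535.weather_t build
    $ mv build_base_gnu_mpi_regular.0000 $HOME/hpc2021-saved/
    $ go 535.weather_t run
    $ mv run_base_test_gnu_mpi_regular.0000 $HOME/hpc2021-saved/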

  14. Repeat: Admittedly, the large number of steps that it took to get here may seem like a lot of trouble. But that's why you started with a simple benchmark and the simplest workload (--size test in the fake step). Now that you've got the pattern down, it is hoped that it will be straightforward to repeat the process for the other available workloads.
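
    For example, setting up run directories for the ref workload of the same benchmark requires only a change to the --size argument of the fake run (adjust --ranks to suit your system):

    $ runhpc --fake --loose --size ref --tune base --config my_test --ranks 4 535.weather_t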

    But if you're finding it tedious... then maybe this is an opportunity to sell you on the notion of using runhpc after all, which automates all this tedium. If the reason you came here was because runhpc doesn't work on your brand-new environment, then perhaps you'll want to try to get it built, using the hints in tools-build.html.

Validation

Note that this document has only discussed getting the benchmarks built and running. Presumably at some point you'd like to know whether your system got the correct answer. At that point, you can use specdiff, which is explained in utility.html.
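
A sketch of what that might look like for the test run above, assuming that the reference output for the test workload is stored under the benchmark's data/test/output directory (check your installation for the exact location, and utility.html for the comparison options a benchmark may require):

    $ go 535.weather_t run/run_base_test_gnu_mpi_regular.0000
    $ specdiff $SPEC/benchspec/HPC/535.weather_t/data/test/output/output6.test.txt output6.test.txt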

Avoiding runhpc: Using the SPEChpc™ 2021 benchmarks while making minimal use of SPEC's tool set: Copyright © 2021 Standard Performance Evaluation Corporation (SPEC)