
SPEC CPU2000 Run and Reporting Rules

SPEC Open Systems Group

This document provides guidelines required to build, run, and report on the SPEC CPU2000 benchmarks.

SPEC CPU2000 V1.3A

June 15, 2006: SPEC has discovered that the SPEC CPU2000 V1.3 update, which shipped November 1, 2005, mistakenly included an obsolete version of the run rules.

The correct version of the run rules should have been the same as the SPEC CPU2000 V1.2A rules, not the V1.2 rules.

This file, dated June 15, 2006, contains the V1.2A rules plus minor typo and markup fixes.

SPEC CPU2000 V1.2A

Edit history since V1.2: The following changes are effective for results published on or after July 1, 2002:

SPEC CPU2000 V1.2

Edit history since V1.1:

(To check for possible updates to this document, please see http://www.spec.org/cpu2000/ )


Clicking one of the following will take you to the detailed table of contents for that section:
1. General Philosophy
2. Building SPEC CPU2000
3. Running SPEC CPU2000
4. Results Disclosure
5. Run Rule Exceptions


Detailed Contents

1. General Philosophy
1.1 Objective tests, provided in source code form
1.2 Conventions for optimization
1.3 SPEC may adapt the suites
1.4 Estimates are allowed
1.5 Submission to SPEC is encouraged
2.0 Building SPEC CPU2000
2.0.1 Peak and base builds
2.0.2 Runspec must be used
2.0.3 The runspec build environment
2.0.4 Continuous Build requirement
2.0.5 Changes to the runspec build environment
2.0.6 Cross-compilation allowed
2.0.7 Individual builds allowed
2.0.8 Tester's assertion of equivalence between build types
2.1 General Rules for Selecting Compilation Flags
2.1.1 Cannot use names
2.1.2 Limitations on library substitutions
2.1.3 Feedback directed optimization is allowed
2.1.4 Limitations on size changes
2.1.5 Portability flags
2.2 Base Optimization Rules
2.2.1 Safe
2.2.2 Same for all
2.2.3 Feedback directed optimization is allowed in base
2.2.4 Assertion flags may NOT be used in base
2.2.5 Floating point reordering allowed
2.2.6 Only 4 optimization switches
      Unit of definition
      Delimited lists
      Portability flags in base
      ANSI Compliance
      Feedback invocation in Pass 1 and Pass 2
      Location flags
      Warnings, verbosity, output flags
      Entire compilation system is counted
      Assertion of ANSI compliance
      Hidden switches
      Installation-provided switches
      Switches to declare 64-bit mode
      Cross-module optimization
      Base build environment
2.2.7 Safety and Standards Conformance
3. Running SPEC CPU2000
3.1 System Configuration
3.1.1 File Systems
3.1.2 System State
3.2 Additional Rules for Running SPECrate
3.2.1 Number of copies in peak
3.2.2 Number of copies in base
3.2.3 Single file system
3.3 Continuous Run Requirement
3.4 Run-time environment
3.5 Basepeak
4. Results Disclosure
4.1 Rules regarding availability date and systems not yet shipped
4.1.1 Pre-production software can be used
4.1.2 Software component names
4.1.3 Specifying dates
4.1.4 If dates are not met
4.1.5 Performance changes for pre-production systems
4.2 Configuration Disclosure
4.2.1 System Identification
4.2.2 Hardware Configuration
4.2.3 Software Configuration
4.2.4 Tuning Information
4.3 Test Results Disclosure
4.3.1 Speed Metrics
4.3.2 Throughput Metrics
4.3.3 Performance changes for production systems
4.4 Metric Selection
4.5 Research and Academic usage of CPU2000
4.6 Required Disclosures
4.7 Fair Use
5. Run Rule Exceptions


This document specifies how the benchmarks in the CPU2000 suites are to be run for measuring and publicly reporting performance results. These rules ensure that results generated with the suites are meaningful, comparable to other generated results, and reproducible, with documentation covering the factors pertinent to reproducing the results.

Per the SPEC license agreement, all results publicly disclosed must adhere to the SPEC Run and Reporting Rules, or be clearly marked as estimates.

The following basics are expected:

Each of these points is discussed in further detail below.

Suggestions for improving this run methodology should be made to the SPEC Open Systems Group (OSG) for consideration in future releases.

1. General Philosophy

1.1 Objective tests, provided in source code form

SPEC believes the user community will benefit from an objective series of tests which can serve as a common reference and be considered as part of an evaluation process.

SPEC CPU2000 provides benchmarks in the form of source code, which are compiled according to the rules contained in this document. It is expected that a tester can obtain a copy of the suites, install the hardware, compilers, and other software described in another tester's result disclosure, and reproduce the claimed performance (within a small range to allow for run-to-run variation).

Benchmarks are provided in two suites: an integer suite, known as CINT2000, and a floating point suite, known as CFP2000.

1.2 Conventions for optimization

SPEC is aware of the importance of optimizations in producing the best system performance. SPEC is also aware that it is sometimes hard to draw an exact line between legitimate optimizations that happen to benefit SPEC benchmarks and optimizations that specifically target the SPEC benchmarks. However, with the list below, SPEC wants to increase the awareness of implementers and end users regarding unwanted benchmark-specific optimizations that would be incompatible with SPEC's goal of fair benchmarking.

To ensure that results are relevant to end-users, SPEC expects that the hardware and software implementations used for running the SPEC benchmarks adhere to the following conventions:

In cases where it appears that the above guidelines have not been followed, SPEC may investigate such a claim and request that the offending optimization (e.g. a SPEC-benchmark specific pattern matching) be backed off and the results resubmitted. Or, SPEC may request that the vendor correct the deficiency (e.g. make the optimization more general purpose or correct problems with code generation) before submitting results based on the optimization.

1.3 SPEC may adapt the suites

The SPEC Open Systems Group reserves the right to adapt the CINT2000 and CFP2000 suites as it deems necessary to preserve its goal of fair benchmarking (e.g. remove a benchmark, modify benchmark code or workload, etc). If a change is made to a suite, SPEC will notify the appropriate parties (i.e. members and licensees). SPEC may redesignate the metrics (e.g. changing the metric from SPECfp2000 to SPECfp2000a). In the case that a benchmark is removed, SPEC reserves the right to republish in summary form adapted results for previously published systems, converted to the new metric. In the case of other changes, such a republication may necessitate re-testing and may require support from the original test sponsor.

1.4 Estimates are allowed

SPEC CPU2000 metrics may be estimated. All estimates must be clearly identified as such. Licensees are encouraged to give a rationale or methodology for any estimates, and to publish actual SPEC CPU2000 metrics as soon as possible. SPEC requires that every use of an estimated number be flagged, rather than burying an asterisk at the bottom of a page. For example, say something like this:

      The JumboFast will achieve estimated performance of 
          Model 1   SPECint2000 50 est.
                    SPECfp2000  60 est.
          Model 2   SPECint2000 70 est.
                    SPECfp2000  80 est.

1.5 Submission to SPEC is encouraged

SPEC encourages the submission of results for review by the relevant subcommittee and subsequent publication on SPEC's web site (http://www.spec.org/). SPEC uses a peer-review process prior to publication, in order to improve consistency in the application and interpretation of these run rules.

Submission to SPEC's review process is not required. Testers may publish rule-compliant results independently. No matter where published, all results publicly disclosed must adhere to the SPEC Run and Reporting Rules, or be clearly marked as estimates. (See also rules 4.5 and 4.6, below.)

2.0 Building SPEC CPU2000

SPEC has adopted a set of rules defining how SPEC CPU2000 benchmark suites must be built and run to produce peak and base metrics.

2.0.1 Peak and base builds

"Peak" metrics are produced by building each benchmark in the suite with a set of optimizations individually tailored for that benchmark. The optimizations selected must adhere to the set of general benchmark optimization rules described in section 2.1 below. This may also be referred to as "aggressive compilation".

"Base" metrics are produced by building all the benchmarks in the suite with a common set of optimizations. In addition to the general benchmark optimization rules (section 2.1), base optimizations must adhere to a stricter set of rules described in section 2.2. These additional rules serve to form a "baseline" of recommended performance optimizations for a given system.

2.0.2 Runspec must be used

With the release of SPEC CPU2000 suites, a set of tools based on GNU Make and Perl5 are supplied to build and run the benchmarks. To produce publication-quality results, these SPEC tools must be used. This helps ensure reproducibility of results by requiring that all individual benchmarks in the suite are run in the same way and that a configuration file that defines the optimizations used is available.

The primary tool is called runspec (runspec.bat for Windows NT). It is described in the runspec documentation in the docs subdirectory of the SPEC root directory (${SPEC}/docs/ in a Bourne shell, or %SPEC%\docs\ on NT).

SPEC supplies pre-compiled versions of the tools for a variety of platforms. If a new platform is used, please see ${SPEC}/docs/tools_build.txt for information on how to build the tools and how to obtain approval for them.

For more complex methods of compilation, for example feedback-driven compilation, SPEC has provided hooks in the tools so that such compilation and execution are possible (see the tools documentation for details). Only if, unexpectedly, such compilation and execution are not possible may the test sponsor ask for permission to use performance-neutral alternatives (see section 5).
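As an illustration, a typical build-and-run sequence might look like the following command transcript (the config file name is hypothetical; see the runspec documentation for the authoritative option list):

```
runspec -c myconfig.cfg -a build int      # build the CINT2000 binaries
runspec -c myconfig.cfg -a validate int   # run and validate CINT2000
```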

2.0.3 The runspec build environment

When runspec is used to build the SPEC CPU2000 benchmarks, it must be used in generally available, documented, and supported environments (see section 1), and any aspects of the environment that contribute to performance must be disclosed to SPEC (see section 4).

On occasion, it may be possible to improve run time performance by environmental choices at build time. For example, one might install a performance monitor, turn on an operating system feature such as bigpages, or set an environment variable that causes the cc driver to invoke a faster version of the linker.

It is difficult to draw a precise line between environment settings that are reasonable versus settings that are not. Some settings are obviously not relevant to performance (such as hostname), and SPEC makes no attempt to regulate such settings. But for settings that do have a performance effect, for the sake of clarity, SPEC has chosen that:

Environmental settings that meet 2.0.3 requirement (a), (b), or (c) do not count against the limit of 4 switches unless they violate the rule about hidden switches (see section 2.2.6).

2.0.4 Continuous Build requirement

As described in section 1, it is expected that testers can reproduce other testers' results. In particular, it must be possible for a new tester to compile both the base and peak benchmarks for an entire suite (i.e. CINT2000 or CFP2000) in one execution of runspec, with appropriate command line arguments and an appropriate configuration file, and obtain executable binaries that are (from a performance point of view) equivalent to the binaries used by the original tester.

The simplest and least error-prone way to meet this requirement is for the original tester to take production hardware, production software, a SPEC config file, and the SPEC tools and actually build the benchmarks in a single invocation of runspec on the System Under Test (SUT). But SPEC realizes that there is a cost to benchmarking and would like to address this, for example through the rules that follow regarding cross-compilation and individual builds. However, in all cases, the tester is taken to assert that the compiled executables will exhibit the same performance as if they all had been compiled with a single invocation of runspec (see 2.0.8).

2.0.5 Changes to the runspec build environment

SPEC CPU2000 base binaries must be built using the environment rules of section 2.0.3, and may not rely upon any changes to the environment during the build.

Note 1: base cross compiles using multiple hosts are allowed (2.0.6), but the performance of the resulting binaries is not allowed to depend upon environmental differences among the hosts. It must be possible to build performance-equivalent base binaries with one set of switches (2.2.2), in one execution of runspec (2.0.4), on one host, with one environment (2.0.3).

For a peak build, the environment may be changed, subject to the following constraints:

Note 2: peak cross compiles using multiple hosts are allowed (2.0.6), but the performance of the resulting binaries is not allowed to depend upon environmental differences among the hosts. It must be possible to build performance-equivalent peak binaries with one config file, in one execution of runspec (2.0.4), in the same execution of runspec that built the base binaries, on one host, starting from the environment used for the base build (2.0.3), and changing that environment only through config file hooks (2.0.5).

2.0.6 Cross-compilation allowed

It is permitted to use cross-compilation, that is, a building process where the benchmark executables are built on a system (or systems) that differ(s) from the SUT. The runspec tool must be used on all systems (typically with -a build on the host(s) and -a validate on the SUT).

If all systems belong to the same product family and if the software used to build the executables is available on all systems, this does not need to be documented. In the case of a true cross compilation (e.g. if the software used to build the benchmark executables is not available on the SUT, or the host system provides performance gains via specialized tuning or hardware not on the SUT), the host system(s) and software used for the benchmark building process must be documented in the Notes section. See section 4.

It is permitted to use more than one host in a cross-compilation. If more than one host is used in a cross-compilation, they must be sufficiently equivalent so as not to violate rule 2.0.4. That is, it must be possible to build the entire suite on a single host and obtain binaries that are equivalent to the binaries produced using multiple hosts.

The purpose of allowing multiple hosts is so that testers can save time when recompiling many programs. Multiple hosts may NOT be used in order to gain performance advantages due to environmental differences among the hosts. In fact, the tester must exercise great care to ensure that any environment differences are performance neutral among the hosts, for example by ensuring that each has the same version of the operating system, the same performance software, the same compilers, and the same libraries. The tester should exercise due diligence to ensure that differences that appear to be performance neutral - such as differing MHz or differing memory amounts on the build hosts - are in fact truly neutral.

Multiple hosts may NOT be used in order to work around system or compiler incompatibilities (e.g. compiling the SPECfp2000 C benchmarks on a different OS version than the SPECfp2000 Fortran benchmarks in order to meet the different compilers' respective OS requirements), since that would violate the Continuous Build rule (2.0.4).

2.0.7 Individual builds allowed

It is permitted to build the benchmarks with multiple invocations of runspec, for example during a tuning effort. But, the executables must be built using a consistent set of software. If a change to the software environment is introduced (for example, installing a new version of the C compiler which is expected to improve the performance of one of the floating point benchmarks), then all affected benchmarks must be rebuilt (in this example, all the C benchmarks in the floating point suite).

2.0.8 Tester's assertion of equivalence between build types

The previous four rules (2.0.4 through 2.0.7) may appear to contradict each other, but the key word in 2.0.4 is the word "possible". Consider the following sequence of events:

In this example, the tester is taken to be asserting that the above sequence of events produces binaries that are, from a performance point of view, equivalent to binaries that would have been produced in a single invocation of the tools. If there is some optimization that can only be applied to individual benchmark builds and cannot be applied in a continuous build, the optimization is not allowed.

Rule 2.0.8 is intended to provide some guidance about the kinds of practices that are reasonable, but the ultimate responsibility for result reproducibility lies with the tester. If the tester is uncertain whether a cross-compile or an individual benchmark build is equivalent to a full build on the SUT, then a full build on the SUT is required (or, in the case of a true cross-compile which is documented as such, then a single runspec -a build is required on a single host.) Although full builds add to the cost of benchmarking, in some instances a full build in a single runspec may be the only way to ensure that results will be reproducible.

2.1 General Rules for Selecting Compilation Flags

The following rules apply to compiler flag selection for SPEC CPU2000 Peak and Base Metrics. Additional rules for Base Metrics follow in section 2.2.

2.1.1 Cannot use names

No source file or variable or subroutine name may be used within an optimization flag or compiler option.

Identifiers used in preprocessor directives to select alternative source code are also forbidden, except for a rule-compliant library substitution (2.1.2) or an approved portability flag (2.1.5). For example, if a benchmark source code uses one of:

        #ifdef IDENTIFIER 
        #ifndef IDENTIFIER 
        #if defined IDENTIFIER
        #if !defined IDENTIFIER

to provide alternative source code under the control of a compiler option such as -DIDENTIFIER, such a switch may not be used unless it meets the criteria of 2.1.2 or 2.1.5.

2.1.2 Limitations on library substitutions

Flags which substitute pre-computed (e.g. library-based) routines for routines defined in the benchmark on the basis of the routine's name are not allowed. Exceptions are:

a) the function alloca. It is permitted to use a flag that substitutes the system's builtin_alloca for any C/C++ benchmark. The use of such a flag shall furthermore not count as one of the allowed 4 base switches.

b) the level 1, 2 and 3 BLAS functions in the CFP2000 benchmarks, and the netlib-interface-compliant FFT functions. Such substitution shall only be acceptable in a peak run, not in base.

2.1.3 Feedback directed optimization is allowed.

Only the training input (which is automatically selected by runspec) may be used for the run that generates the feedback data.

For peak runs, optimization with multiple feedback runs is also allowed.

The requirement to use only the train data set at compile time shall not be taken to forbid the use of run-time dynamic optimization tools that would observe the reference execution and dynamically modify the in-memory copy of the benchmark. However, such tools would not be allowed to affect in any way later executions of the same benchmark (for example, when running multiple times in order to determine the median run time). Such tools would also have to be disclosed in the submission of a result, and would have to be used for the entire suite (see section 3.3).
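In a runspec config file, a two-pass feedback build is typically expressed with per-pass flag variables. A minimal sketch follows; the profile flags -prof_gen and -prof_use are hypothetical compiler options, not from any particular compiler:

```
fdo_pre0     = rm -f /tmp/prof/*Counts*   # clean profile data from earlier builds
PASS1_CFLAGS = -prof_gen                  # pass 1: instrument for profiling
PASS2_CFLAGS = -prof_use                  # pass 2: recompile using the training profile
```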

2.1.4 Limitations on size changes

Flags that change a data type size to a size different from the default size of the compilation system are not allowed. Exceptions are: a) C long may be 32 bits or greater; b) pointer sizes may be set to a size different from the default.

2.1.5 Portability Flags

A flag is considered a portability flag if, and only if, one of the following two conditions holds:

(a) The flag is necessary for the successful compilation and correct execution of the benchmark regardless of any or all compilation flags used. That is, if it is possible to build and run the benchmark without this flag, then this flag is not considered a portability flag.

(b) The benchmark is discovered to violate the ANSI standard, and the compilation system needs to be so informed in order to avoid incorrect optimizations.

For example, if a benchmark fails at a high optimization level due to a standard violation, but works when an additional flag is supplied, such as
      -O4 -noansi_alias
then it would be permissible to use -noansi_alias as a portability flag.

Proposed portability flags are subject to scrutiny by the SPEC CPU Subcommittee. The initial submissions for CPU2000 will include a reviewed set of portability flags on several operating systems; later submitters who propose to apply additional portability flags should prepare a justification for their use. If the justification is 2.1.5(b), please include a specific reference to the offending source code module and line number, and a specific reference to the relevant sections of the appropriate ANSI standard.

SPEC always prefers to have benchmarks obey the standard, and SPEC attempts to fix as many violations as possible before release of the suites. But it is recognized that some violations may not be detected until years after a suite is released. In such a case, a portability switch may be the practical solution. Alternatively, the subcommittee may approve a source code fix.

For a given portability problem, the same flag(s) must be applied to all affected benchmarks.

If a library is specified as a portability flag, SPEC may request that the table of contents of the library be included in the disclosure.

2.2 Base Optimization Rules

In addition to the rules listed in section 2.1 above, the selection of optimizations to be used to produce SPEC CPU2000 Base Metrics includes the following:

2.2.1 Safe

The optimizations used are expected to be safe, and it is expected that system or compiler vendors would endorse the general use of these optimizations by customers who seek to achieve good application performance.

2.2.2 Same for all

The same compiler and the same set of optimization flags or options must be used for all benchmarks of a given language within a benchmark suite, except for portability flags (see 2.1.5 above). All flags must be applied in the same order for all benchmarks. The runspec documentation file covers how to set this up with the SPEC tools.

Specifically, benchmarks that are written in Fortran-77 or Fortran-90 may not use a different set of flags or different compiler invocation in a base run. In a peak run, it is permissible to use different compiler commands, as well as different flags, for each benchmark.

2.2.3 Feedback directed optimization is allowed in base.

The allowed steps are:

   PASS1:        compile the program

   Training run: run the program with the train data set

   PASS2:        re-compile the program, or invoke a tool that
                 otherwise adjusts the program, and which uses
                 the observed profile from the training run.

PASS2 is optional. For example, it is conceivable that a daemon might optimize the image automatically based on the training run, without further tester intervention. Such a daemon would have to be noted in the full disclosure to SPEC.

It is acceptable to use the various fdo_ hooks to clean up the results of previous feedback compilations. The preferred hook is fdo_pre0 -- for example:
       fdo_pre0 = rm /tmp/prof/*Counts*
Other than such cleanup, no intermediate processing steps may be performed between the steps listed above. If additional processing steps are required, the optimization is allowed for peak only but not for base.

When a two-pass process is used, the flag(s) that explicitly control(s) the generation or the use of feedback information can be - and usually will be - different in the two compilation passes. For the other flags, one of the two conditions must hold:

  1. The same set of flags are used for both invocations of the compiler/linker. For example:
        PASS1_CFLAGS= -gen_feedback -fast_library -opt1 -opt2 
        PASS2_CFLAGS= -use_feedback -fast_library -opt1 -opt2 
  2. The set of flags in the first invocation are a subset of the flags used in the second. For example:
        PASS1_CFLAGS= -gen_feedback -fast_library
        PASS2_CFLAGS= -use_feedback -fast_library -opt1 -opt2 

2.2.4 Assertion flags may NOT be used in base.

An assertion flag is one that supplies semantic information that the compilation system did not derive from the source statements of the benchmark.

With an assertion flag, the programmer asserts to the compiler that the program has certain nice properties that allow the compiler to apply more aggressive optimization techniques (for example, that there is no aliasing via C pointers). The problem is that there can be legal programs (possibly strange, but still standard-conforming programs) where such a property does not hold. These programs could crash or give incorrect results if an assertion flag is used. This is the reason why such flags are sometimes also called "unsafe flags". Assertion flags should never be applied to a production program without previous careful checks; therefore they are disallowed for base.

2.2.5 Floating point reordering allowed

Base results may use flags which affect the numerical accuracy or sensitivity by reordering floating-point operations based on algebraic identities.

2.2.6 Only 4 optimization switches

Base optimization is further restricted by limiting to four (4) the maximum number of optimization switches that can be applied to create a base result. An example of this would be:
       cc general_opt processor_flag library other_opt
where testers might use a flag for a general optimization, one to specify the architecture, one to specify an optimal library, plus one other optimization flag.
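For instance, a base tuning line of this shape (every flag name below is invented for illustration) would use exactly the allowed four optimization switches:

```
COPTIMIZE = -O4 -arch ev6 -lfastmath -inline_all
```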

The following rules must be followed for selecting and counting optimization flags:

Unit of definition

A flag is defined as a unit of definition to the compilation system. For example, each of the following is defined as a single flag:

    -tp p5
    /link /compress_image

In the last example above, "/link" merely tells the driver to send the flags that follow to the linker. The /compress_image actually tells the linker to do something, and so counts as a single unit of definition. Each action requested of the linker would count as a flag; for example:
       /link /compress_image /static_addressing
would be 2 flags.

Delimited lists

Some compilers allow delimited lists (usually comma or space delimited) behind an initial flag; for purposes of base, each optimization item in the list counts as an optimization toward the limit of four. For example:
        -K inline,unroll,strip_mine

counts as three optimization flags.

Portability flags in base

Portability flags are not counted in the count of four.

[Note: most of the run rule text formerly contained in this section is now contained in section 2.1.5.]

ANSI Compliance

If a compiler flag causes a compiler to operate in an ANSI/ISO mode, one such flag may be used without being counted in the count of four switches, provided that the flag is used for all benchmarks of the given language in the benchmark suite.

Feedback invocation in Pass 1 and Pass 2

Switches for feedback directed optimization follow the same rules (one unit of definition) and count as one of the four optimization flags. Since two passes are allowed for base, the feedback-activating flags in the first and second invocations together count as one flag. For example:

   Pass 1: cc -prof_gather -O9 -go_fast -arch=404
   Pass 2: cc -prof_use    -O9 -go_fast -arch=404

This breaks down into [FDO invocation, optimization level, extra optimization, and an architecture flag] and counts as an acceptable four flags.

Location flags

Pointer or location flags (flags that indicate where to find data) are not included in the four flag definition. For example:
       -prof_dir `pwd`

Warnings, verbosity, output flags

Flags that only suppress warnings (typically -w), flags that only create object files (typically -c), flags that only affect the verbosity level of the compiler driver (typically -v), and flags that only name the output file (typically -o) are not counted as optimization flags.

Entire compilation system is counted

The four flag limit counts all options for all parts of the compilation system, i.e. the entire transformation from SPEC supplied source code to completed executable. The list below is a partial set of the types of flags that would be included in the flag count:

Assertion of ANSI compliance

Rule 2.2.4 shall not be taken to forbid the use of flags that assert that a benchmark complies with one or more aspects of the ANSI standard. For example, suppose that a compiler has an ANSI mode specified by saying cc -relaxed_ansi, which provides the following extensions to the standard:

It would be permissible in a base run to turn one or more of these features off. If the command:
      cc -relaxed_ansi -nointrinsic -noarg_check -noalign_check
were issued, this would be acceptable in a base run and would count as 3 optimization flags (the -relaxed_ansi is considered to be a dialect selection, not an optimization switch; see ANSI Compliance, above).

Hidden switches

It is not permissible to use environment variables or login scripts to defeat the four switch rule. For example, if a system allows the system manager to put the following into /etc/cshrc.global
      alias cc "cc -fast -O4 -unroll 8"
then the system manager has just spent 3 of the 4 allowable optimization flags, the tester has only 1 left to spend, and the full disclosure must document the switches from /etc/cshrc.global. Similarly, an environment variable or login script may not be used to pass hidden switches to other portions of the compilation system, such as pre-processors or the linker.

The behavior that is forbidden here is the hiding of strings that would normally be typed on a command line by typing them somewhere else. If a compilation system can derive more intelligent default settings for switches by its automatic examination of its environment, that behavior is allowed. For example, a compiler driver could freely notice that vm_bigpages=1 in the kernel, and change the default to -bigpages=yes for the cc command, provided, of course, that the change in defaults is documented (see section 1, Philosophy).

Installation-provided switches

A compiler may freely pick up options from a system wide file that is written by default at installation time. For example, suppose that the compiler installation script examines the environment and creates:

     debug_options:   -oldstyle_debugging
     linker_type:     -multi_thread
     machine_options: -architecture_level 3
     memory_options:  -bigpages

If a mechanism such as the above operates by default at installation time with no installer intervention required (other than accepting the defaults), then the flags would NOT be counted in the 4 flag limit. The key points here are that the installer would not deviate from the defaults, and ordinary compiler users are not required to be aware of the mechanism.

Switches to declare 64-bit mode

None of the SPEC CPU2000 benchmarks require a 64 bit address space, since the target memory size is only 256MB. Nevertheless, SPEC would like to encourage the submission of results using 64-bit compilers, because they represent an important new technology in the industry. Therefore, a flag that puts a compiler into 64-bit mode is NOT counted against the 4-flag limit.

During the development of CPU2000, SPEC has tested the benchmarks on 7 different 64-bit platforms and believes that most 64-bit portability problems have been addressed. But it is possible, especially in the larger benchmarks, that some 64-bit source code problems may remain. If further problems are discovered, it would be permissible to specify 64-bit mode for baseline with portability exceptions. The submitter should prepare a statement of the problems found in 64-bit mode, including modules and source line numbers.

For example, suppose that a tester selects 64-bit mode through the compiler switch -lp64, and finds that benchmark 999.clumsy has incorrectly assumed that pointers and ints are the same size. It would be acceptable to submit results to SPEC using -lp64 for every benchmark except 999.clumsy, which would use -lp32. The presence of -lp64 would not count against the 4-flag limit, nor would the use of -lp32 on 999.clumsy.

Cross-module optimization

Frequently, performance is improved via optimizations that work across source modules, for example -ifo, -xcrossfile, or -IPA. Some compilers may require the simultaneous presentation of all source files for inter-file optimization, as in:

      cc -ifo -o a.out file1.c file2.c 

Other compilers may be able to do cross-module optimization even with separate compilation, as in:

      cc -ifo -c -o file1.o file1.c
      cc -ifo -c -o file2.o file2.c
      cc -ifo -o a.out file1.o file2.o

By default, the SPEC tools operate in the latter mode, but they can be switched to the former through the config file option ONESTEP=yes.
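For example, a config file might request the one-step build mode for a single benchmark. The following excerpt is a hedged sketch only; the compiler name, flags, and benchmark chosen are hypothetical and not a recommendation:

```
# Hypothetical SPEC CPU2000 config file excerpt
default=default=default=default:
CC        = cc
COPTIMIZE = -O2 -ifo

176.gcc=default=default=default:
ONESTEP   = yes
```

With ONESTEP=yes in effect, the tools present all of the benchmark's source files to the compiler in a single invocation, as in the first example above.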

Cross-module optimization is allowed in baseline, and is deemed to cost exactly one switch under any of the following conditions:

Base build environment

The system environment must not be manipulated during a build of base. For example, suppose that an environment variable called bigpages can be set to yes or no, and the default is no. The tester must not change the choice during the build of the base binaries. See section 2.0.5.

2.2.7 Safety and Standards Conformance

The requirements that optimizations are expected to be safe, and generate correct code for a class of programs larger than the suite itself (sections 2.2.1 and 1.2), are normally interpreted as requiring that the system, as used in baseline, implement the language correctly. "The language" is defined by the appropriate ANSI/ISO standard (C89, Fortran-90, C++ 98).

The principle of standards conformance is not automatically applied, because SPEC has historically allowed certain exceptions:

  1. Section 2.2.5 allows reordering of arithmetic operands.
  2. SPEC has not insisted on conformance to the C standard in the setting of errno.
  3. SPEC has not dealt with (and does not intend to deal with) language standard violations that are performance neutral for the CPU2000 suite.
  4. When a more recent standard modifies a requirement imposed by an earlier standard, SPEC will also accept systems that adhere to the more recent ANSI/ISO standard.

Otherwise, a deviation from the standard that is not performance neutral, and gives the particular implementation a CPU2000 performance advantage over standard-conforming implementations, is considered an indication that the requirements about "safe" and "correct code" optimizations are probably not met. Such a deviation can be a reason for SPEC to find a result not rule-conforming.

If an optimization causes a SPEC benchmark to fail to validate, and if the relevant portion of this benchmark's code is within the language standard, the failure is taken as additional evidence that an optimization is not safe.

3. Running SPEC CPU2000

3.1 System Configuration

3.1.1 File Systems

SPEC requires the use of a single file system to contain the directory tree for the SPEC CPU2000 suite being run. SPEC allows any type of file system (disk-based, memory-based, NFS, DFS, FAT, NTFS, etc.) to be used. The type of file system must be disclosed in reported results.

3.1.2 System State

The system state (multi-user, single-user, init level N) may be selected by the tester. This state, along with any changes to the default configuration of daemon processes or system tuning parameters, must be documented in the notes section of the results disclosure. (For Windows NT, the system state is normally "Default"; if any services are shut down, e.g. a networking service, they should be listed.)

3.2 Additional Rules for Running SPECrate

3.2.1 Number of copies in peak

For SPECint_rate2000 and SPECfp_rate2000 (peak), the tester is free to choose the number of concurrent copies for each individual benchmark independently of the other benchmarks.

The median value that is used must, for each benchmark, come from at least three runs with the same number of copies. However, this number may be different between benchmarks.

3.2.2 Number of copies in base

For SPECint_rate_base2000 and SPECfp_rate_base2000, the tester must select a single value to use as the number of concurrent copies to be applied to all benchmarks in the suite.

3.2.3 Single file system

The multiple concurrent copies of the benchmark must be executed using data from different directories within the same file system. Each copy of the test must have its own working directory, which is to contain all the files needed for the actual execution of the benchmark, including input files, and all output files when created. The output of each copy of the benchmark must be validated to be the correct output.

Note: In CPU95, the benchmark binary itself was also copied, which inhibited sharing of the text section across multiple users. For CPU2000, the benchmark will be placed in the run directories only once. For example, if swim is executed for six users, there would be six copies of its data but only one copy of the swim executable in the run directories.

3.3 Continuous Run Requirement

All benchmark executions, including the validation steps, contributing to a particular result page must occur continuously, that is, in one execution of runspec.

3.4 Run-time environment

SPEC does not attempt to regulate the run-time environment for the benchmarks, other than to require that the environment be:

For example, if each of the following:

     run level:   single-user 
     OS tuning:   bigpages=yes, cpu_affinity=hard
     file system: in memory

were set prior to the start of runspec, unchanged during the run, described in the submission, and documented and supported by a vendor for general use, then these options could be used in a CPU2000 submission.

Note: Item (a) is intended to forbid all means by which a tester might change the environment. In particular, it is forbidden to change the environment during the run using the config file hooks such as monitor_pre_bench. Those hooks are intended for use when studying the benchmarks, not for actual submissions.

3.5 Basepeak

If a result page will contain both peak and base CFP2000 results, a single runspec invocation must have been used to run both the peak and base executables for each benchmark and their validations. The tools will ensure that the base executables are run first, followed by the peak executables.

It is permitted, but not required, to:

1. Compile in the same runspec invocation as the execution. See rule 2.0.6 regarding cross compilation.

2. Run both the integer suite and the floating point suite in a single invocation of runspec.

4. Results Disclosure

SPEC requires a full disclosure of results and configuration details sufficient to reproduce the results. SPEC also requires that base results be submitted whenever peak results are submitted. If peak results are published outside of the SPEC web site (http://www.spec.org/cpu2000/) in a publicly available medium, the tester must supply base results on request. Results published under non-disclosure, or marked for company-internal or company-confidential use, are not considered "publicly" available.

A full disclosure of results will typically include:

A full disclosure of results should include sufficient information to allow a result to be independently reproduced. If a tester is aware that a configuration choice affects performance, then s/he should document it in the full disclosure.

Note: this rule is not meant to imply that the tester must describe irrelevant details or provide massively redundant information. For example, if the SuperHero Model 1 comes with a write-through cache, and the SuperHero Model 2 comes with a write-back cache, then specifying the model number is sufficient, and no additional steps need to be taken to document the cache protocol. But if the Model 3 is available with both write-through and write-back caches, then a full disclosure must specify which cache is used.

For information on how to submit a result to SPEC, contact the SPEC office. Contact information is maintained at the SPEC web site, http://www.spec.org/.

4.1 Rules regarding availability date and systems not yet shipped

If a tester publishes results for a hardware or software configuration that has not yet shipped,

Note 1: "Generally available" is defined in the SPEC Open Systems Group Policy document, which can be found at http://www.spec.org/osg/policy.html.

Note 2: It is acceptable to test larger configurations than customers are currently ordering, provided that the larger configurations can be ordered and the company is prepared to ship them. For example, if the SuperHero is available in configurations of 1 to 1000 CPUs, but the largest order received to date is for 128 CPUs, the tester would still be at liberty to test a 1000 CPU configuration and publish the result.

4.1.1 Pre-production software can be used

A "pre-production", "alpha", "beta", or other pre-release version of a compiler (or other software) can be used in a test, provided that the performance-related features of the software are committed for inclusion in the final product.

The tester should practice due diligence to ensure that the tests do not use an uncommitted prototype with no particular shipment plans. An example of due diligence would be a memo from the compiler Project Leader which asserts that the tester's version accurately represents the planned product, and that the product will ship on date X.

The final, production version of all components must be generally available within 3 months after first public release of the result.

4.1.2 Software component names

When specifying a software component name in the results disclosure, the component name that should be used is the name that customers are expected to be able to use to order the component, as best as can be determined by the tester. It is understood that sometimes this may not be known with full accuracy; for example, the tester may believe that the component will be called "TurboUnix V5.1.1" and later find out that it has been renamed "TurboUnix V5.2", or even "Nirvana 1.0". In such cases, an editorial request can be made to update the result after publication.

Some testers may wish to also specify the exact identifier of the version actually used in the test (for example, "build 20020604"). Such additional identifiers may aid in later result reproduction, but are not required; the key point is to include the name that customers will be able to use to order the component.

Instead of listing the component name for the production version of a component, a tester may prefer to reference a beta version of a component. This is acceptable, provided that all three of these conditions are met:

  1. The beta is open to all interested parties without restriction. For example, a compiler posted to the web for general users to download, or a software subscription service for developers, would both be acceptable.

  2. The beta is generally announced. A secret test version is not acceptable.

  3. The final product has a committed date for general availability, no greater than 3 months after the first public release of the result.

4.1.3 Specifying dates

The configuration disclosure includes fields for both "Hardware Availability" and "Software Availability". In both cases, the date to be used is the date on which the last component of the respective type becomes generally available.

For software, the date of a beta may be used if it meets the conditions mentioned in the previous section.

4.1.4 If dates are not met

If a software or hardware date changes, but still falls within 3 months of first publication, a result page may be updated on request to SPEC.

If a software or hardware date changes to more than 3 months after first publication, the result is considered Non-Compliant. For procedures regarding Non-Compliant results, see the OSG Policy Document, http://www.spec.org/osg/policy.html.

4.1.5 Performance changes for pre-production systems

SPEC is aware that performance results for pre-production systems may sometimes be subject to change, for example when a last-minute bugfix reduces the final performance.

For results measured on pre-production systems, if the tester becomes aware of something that will reduce production system performance by more than 1.75% on an overall metric (for example, SPECfp_base2000 or SPECfp2000), the tester is required to republish the result, and the original result shall be considered non-compliant.

4.2 Configuration Disclosure

The following sections describe the various elements that make up the disclosure for the system and test configuration used to produce a given test result. The SPEC tools used for the benchmark allow setting this information in the configuration file:

4.2.1 System Identification

4.2.2 Hardware Configuration

4.2.3 Software Configuration

4.2.4 Tuning Information

SPEC is aware that sometimes the spelling of compiler switches, or even the presence of compiler switches, changes between beta releases and final releases. For example, suppose that during a compiler beta the tester specifies:

     f90 -fast -architecture_level 3 -unroll 16

but the tester knows that in the final release the architecture level will be automatically set by -fast, and the compiler driver is going to change to set the default unroll level to 16. In that case, it would be permissible to mention only -fast in the notes section of the full disclosure, and the above command line would be considered to have used only one optimization switch out of the four allowed in base. The tester is expected to exercise due diligence regarding such flag reporting, to ensure that the disclosure correctly records the intended final product. An example of due diligence would be a memo from the compiler Project Leader which promises that the final product will spell the switches as reported. SPEC may request that such a memo be generated and that a copy be provided to SPEC.

4.3 Test Results Disclosure

The actual test results consist of the elapsed times and ratios for the individual benchmarks and the overall SPEC metric produced by running the benchmarks via the SPEC tools. The required use of the SPEC tools ensures that the results generated are based on benchmarks built, run, and validated according to the SPEC run rules. Below is a list of the measurement components for each SPEC CPU2000 suite and metric:

4.3.1 Speed Metrics

o CINT2000 Speed Metrics: 
     SPECint_base2000  (Required Base result)
     SPECint2000       (Optional Peak result)

o CFP2000 Speed Metrics:  
     SPECfp_base2000   (Required Base result)
     SPECfp2000        (Optional Peak result)

The elapsed time in seconds for each of the benchmarks in the CINT2000 or CFP2000 suite is given, and the ratio to the reference machine (Sun Ultra 10) is calculated. The SPECint_base2000 and SPECfp_base2000 metrics are calculated as a geometric mean of the individual ratios, where each ratio is based on the median execution time from an odd number of runs, greater than or equal to 3. All runs of a given benchmark, as executed via the SPEC tools, are required to validate correctly.

The benchmark executables must have been built according to the rules described in section 2 above.
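As an illustration, the speed-metric arithmetic described above (the median of an odd number of runs per benchmark, then a geometric mean of the ratios) can be sketched as follows. The reference and measured times are hypothetical, not actual CPU2000 values; the final scaling by 100 reflects the convention that the reference machine scores 100.

```python
from statistics import median

def spec_ratio(reference_time, run_times):
    """Ratio of the reference time to the median of an odd number (>= 3) of runs."""
    assert len(run_times) >= 3 and len(run_times) % 2 == 1
    return reference_time / median(run_times)

def geometric_mean(values):
    """Geometric mean, as used for the overall metric."""
    product = 1.0
    for v in values:
        product *= v
    return product ** (1.0 / len(values))

# Hypothetical reference and measured times (in seconds) for three benchmarks
ratios = [
    spec_ratio(1400.0, [700.0, 710.0, 705.0]),
    spec_ratio(1800.0, [600.0, 620.0, 610.0]),
    spec_ratio(2600.0, [1300.0, 1310.0, 1305.0]),
]

# Published CPU2000 metrics are scaled so that the reference machine scores 100
overall_metric = 100.0 * geometric_mean(ratios)
```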

4.3.2 Throughput Metrics

o CINT2000 Throughput Metrics:    
        SPECint_rate_base2000 (Required Base result)
        SPECint_rate2000      (Optional Peak result)

o CFP2000 Throughput Metrics:     
        SPECfp_rate_base2000  (Required Base result)
        SPECfp_rate2000       (Optional Peak result)

The throughput metrics are calculated based on the execution of the same base and/or peak benchmark executables as for the speed metrics described above. However, the test sponsor may select the number of concurrent copies of each benchmark to be run. The same number of copies must be used for all benchmarks in a base test; for peak results, the tester is free to select any combination of copies. The number of copies selected is usually a function of the number of CPUs in the system.

The "rate" calculated for each benchmark is:

    number of copies run *
    reference factor for the benchmark *
    number of seconds in an hour /
    elapsed time in seconds

which yields a rate in jobs/hour. The rate metrics are calculated as a geometric mean from the individual SPECrates using the median result from an odd number of runs, greater than or equal to 3 runs. As with the speed metric, all copies of the benchmark during each run are expected to have validated correctly.
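The per-benchmark rate formula above can be written out directly. In this sketch, the copy count, reference factor, and elapsed time are hypothetical values chosen purely for illustration:

```python
def spec_rate(copies, reference_factor, elapsed_seconds):
    """Per-benchmark rate in jobs/hour, per the formula above."""
    seconds_per_hour = 3600
    return copies * reference_factor * seconds_per_hour / elapsed_seconds

# Hypothetical 4-copy run with reference factor 1.16 and a
# median elapsed time of 700 seconds: about 23.9 jobs/hour
rate = spec_rate(copies=4, reference_factor=1.16, elapsed_seconds=700.0)
```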

It is permitted to use the SPEC tools to generate a 1-cpu rate disclosure from a 1-cpu speed run. The reverse is not permitted.

4.3.3 Performance changes for production systems

As mentioned above, performance may sometimes change for pre-production systems; but this is also true of production systems (that is, systems that have already begun shipping). For example, a later revision to the firmware, or a mandatory OS bugfix, might reduce performance.

For production systems, if the tester becomes aware of something that reduces performance by more than 1.75% on an overall metric (for example, SPECfp_base2000 or SPECfp2000), the tester is encouraged but not required to republish the result. In such cases, the original result is not considered non-compliant. The tester is also encouraged, but not required, to include a reference to the change that makes the results different (e.g. "with OS patch 20020604-02").

4.4 Metric Selection

Submission of peak results is considered optional by SPEC, so the tester may choose to submit only base results. Since by definition base results adhere to all the rules that apply to peak results, the tester may choose to refer to these results by either the base or peak metric names (e.g. SPECint_base2000 or SPECint2000).

It is permitted to publish base-only results. Alternatively, the use of the flag basepeak is permitted, as described in section 3.5.

4.5 Research and Academic usage of CPU2000

SPEC encourages use of the CPU2000 suites in academic and research environments. It is understood that experiments in such environments may be conducted in a less formal fashion than that demanded of hardware vendors submitting to the SPEC web site. For example, a research environment may use early prototype hardware that simply cannot be expected to stay up for the length of time required to meet the Continuous Run requirement (see section 3.3), or may use research compilers that are unsupported and are not generally available (see section 1).

Nevertheless, SPEC would like to encourage researchers to obey as many of the run rules as practical, even for informal research. SPEC respectfully suggests that following the rules will improve the clarity, reproducibility, and comparability of research results.

Where the rules cannot be followed, SPEC requires that the deviations from the rules be clearly disclosed, and that any SPEC metrics (such as SPECint2000) be clearly marked as estimated.

It is especially important to clearly distinguish results that do not comply with the run rules when the areas of non-compliance are major, such as not using the reference workload, or only being able to correctly validate a subset of the benchmarks.

4.6 Required Disclosures

If a SPEC CPU2000 licensee publicly discloses a CPU2000 result (for example in a press release, academic paper, magazine article, or public web site), and does not clearly mark the result as an estimate, any SPEC member may request that the rawfile(s) from the run(s) be sent to SPEC. Such results must be made available to all interested members no later than 10 working days after the request.

Any SPEC member may request that the result and its rawfile be reviewed by the appropriate SPEC subcommittee. If the tester does not wish to have the result posted on the SPEC web pages, the result will not be posted.

But when public claims are made about CPU2000 results, whether by vendors or by academic researchers, SPEC reserves the right to take action, for example if the rawfile is not made available, shows substantially different performance from the tester's claim, or shows obvious violations of the run rules.

4.7 Fair Use

Consistency and fairness are guiding principles for SPEC. To help ensure that these principles are sustained, SPEC has adopted guidelines for public use of SPEC CPU2000 benchmark results.

When any organization or individual makes public claims using SPEC CPU2000 benchmark results, SPEC requires that:

[1] Reference is made to the SPEC trademark. Such reference may be included in a notes section with other trademark references (see http://www.spec.org/spec/trademarks.html for all SPEC trademarks and service marks).
[2] The SPEC web site (http://www.spec.org) or a suitable sub-page is noted as the source for more information.
[3] If competitive comparisons are made, the following additional rules apply:
    a. The results compared must use SPEC metrics. Performance comparisons may be based upon any of the following metrics:
       • The overall results: SPECint_base2000, SPECint2000, SPECfp_base2000, SPECfp2000, SPECint_rate_base2000, SPECint_rate2000, SPECfp_rate_base2000, SPECfp_rate2000
       • Individual benchmark SPECratios
       • Median run times of the individual benchmarks
    b. The basis for comparison must be stated. Information from result pages may be used to define a basis for comparing a subset of systems, such as number of CPUs, operating system version, cache size, memory size, compiler version, or compiler optimizations used.
    c. The source of the competitive data must be stated, and the licensee (tester) must be identified or be clearly identifiable from the source.
    d. The date the competitive data was retrieved must be stated.
    e. All data used in comparisons must be publicly available (from SPEC or elsewhere).

The following paragraph is an example of acceptable language when publicly using SPEC benchmarks for competitive comparisons:


SPEC® and SPEC CPU2000® are registered trademarks of the Standard Performance Evaluation Corporation. Competitive benchmark results stated above reflect results published on www.spec.org as of Jan 12, 2001. The comparison presented above is based on the best performing 4-cpu systems currently shipping by Vendor 1, Vendor 2 and Vendor 3. For the latest SPEC CPU2000 benchmark results, visit http://www.spec.org/cpu2000/.

5. Run Rule Exceptions

If, for some reason, the test sponsor cannot run the benchmarks as specified in these rules, the test sponsor can seek SPEC OSG approval for performance-neutral alternatives. No publication may occur without such approval. OSG maintains a Policies and Procedures document that defines the procedures for such exceptions.


Copyright (C) 1999-2001 Standard Performance Evaluation Corporation. All Rights Reserved