
SPECmail2001 User Guide

Version 1.05
Last modified: March 21, 2001


1     Introduction
1.1   Terminology
1.2   Overview
2     How to install SPECmail2001
2.1   Before you begin – a pre-installation checklist
2.1.1 Network Configuration
2.1.2 Directory Services
2.1.3 Mail Domain Names
2.1.4 Disk Space Allocation
2.1.5 Mailbox Creation
2.1.6 Client Configuration
2.1.7 Synchronizing the Clocks of the Servers and Clients
2.2   Setting Up the Server
2.2.1 Installing your Server
2.2.2 Testing your Server
2.2.3 Creating User Accounts on the Server
2.3   Setting Up the Clients
2.3.1 Installing Java
2.3.2 Installing the SPECmail2001 Software
2.4   RC Files
2.4.1 Configuration Parameters
2.4.2 System Under Test (SUT) Description
2.4.3 Trace Parameters
2.4.4 Fixed Parameters
3     How to Run SPECmail2001
3.1   Starting the Load Generators
3.1.1 Starting the Load Generators on UNIX
3.1.2 Starting the Load Generators on MS Windows
3.2   Starting the Mail Sinks
3.3   Starting the Benchmark Manager
3.3.1 Options
4     Results
4.1   Generating a SPECmail2001 HTML Report
4.1.1 In UNIX
4.1.2 In MS Windows
4.2   The HTML Results File
4.2.1 Results Block
4.2.2 Summary Results
4.2.3 System Descriptions
4.2.4 Configuration Diagram
4.2.5 Detailed Results
4.2.6 Notes/Tuning Information
5     Tips and Troubleshooting
5.1   Using Java Version 1.1.8
5.2   Benchmark Manager Only Prints Version Number
5.3   Connecting to Remote Clients Failed: InvalidClassException
5.4   Couldn't Read Config File Error
6     Submitting Results
Appendix A – Configuration Parameters
Appendix B – System Description Parameters
Appendix C – Trace Parameters
Appendix D – Fixed Parameters

1 Introduction

SPECmail2001 is a client server benchmark for measuring mail server performance.

Software on one or more client machines generates a benchmark load for a System Under Test (SUT) and measures the SUT response times. A SUT can be a mail server running on a single system or a cluster of systems.

A SPECmail2001 'run' consists of three benchmark sets simulating different load levels: 80%, 100%, and 120%. The configuration file specifies the number of SPECmail2001 users to simulate at the 100% load level. To produce a valid benchmark result, the mail server must maintain a specified Quality of Service (QoS) at the 100% load level. If it does, the performance of the mail server is reported in SPECmail2001 messages per minute, which is the metric for the SPECmail benchmark.

This document is a practical guide for setting up and running a SPECmail2001 benchmark. This user guide discusses some, but not all, of the rules and restrictions pertaining to SPECmail2001. Before continuing with the benchmark, we strongly recommend you read the complete SPECmail2001 Run and Reporting Rules contained in the kit. For an overview of the benchmark architecture, see the SPECmail2001 Architecture White Paper also contained in the kit.

1.1 Terminology

The System Under Test (SUT) is the hardware and software that makes up the mail server being tested by the benchmark. It includes the hardware system, disks, and network components used, as well as the OS and mail server software. The client machines (clients) running the load generators are not considered part of the SUT.

The Load Generator is the program that creates traffic (POP and SMTP) for the SUT. You can have as many clients running load generators as you wish, but you should have at least two: one client will be the SMTP mail sink and one client will run the benchmark manager.

The SMTP Mail Sink is one or more load generators that receive traffic from the SUT, simulating a remote mail server. During the benchmark run, the load generators generate mail for both internal and external mail recipients. Mail intended for external mail recipients is relayed by the SUT to the SMTP mail sinks.

The Benchmark Manager is the program that reads in the RC file and drives the load generators. You can only have one benchmark manager. The benchmark manager is responsible for starting and stopping the load generator and for collecting results and creating the raw results file. The system that runs the benchmark manager is known as the Prime Client.

1.2 Overview

Below is a quick overview of what needs to be done before running the SPECmail2001 benchmark:

  1. Install and set up your Mail Server according to your Mail Server instructions.
  2. On your Mail Server, create the target number of users and mailboxes.
  3. Configure the client machines to drive the SUT. At least two client machines are recommended.
  4. Determine which clients will serve as the benchmark manager, load generators, and mail sinks.
  5. Install the SPECmail2001 benchmark manager and load generator software on your clients, based on the function the client will serve during the test. Note that a mail sink requires the load generator software.
  6. On the prime client, where the benchmark manager resides, modify the SPECmail_config.rc file to reflect client/SUT configuration, and the number of target users.
  7. Verify that mail can be sent and retrieved by the SUT in accordance with how the SUT is defined in the SPECmail_config.rc file.

To check that the configuration is set up correctly:

  1. Set up and start a short run.

    1. Set up the SPECmail_config.rc file to define your SUT and a small number of users. For instance, set USER_START to 1 and USER_END to 500. This will generate a load for only 500 users.

    2. Include in the SPECmail_config.rc file the following two variables: LOAD_FACTORS and RUN_SECONDS, and set them to 100% and 600 respectively. This will cause the test to only perform at the 100% load level. The test will run for approximately 30 minutes (10 minutes for warm-up, 10 minutes for measurement, and 10 minutes for ramp-down).

    3. Start the load generators and the mail sinks (the clients).

    4. Start the benchmark manager. Note that the clients must have started properly before you can start the benchmark manager.

  2. Verify that the mail sinks are set up properly by manually connecting to them during the short test run. Each mail sink should display a banner indicating that it is a SPECmail2001 program.

  3. After the short test has completed, examine the mail logs on the Mail Server to ensure local/remote mail was delivered, and local mail was retrieved.

  4. Run the reporter program on the results file, to verify that statistics and other benchmark related information is being recorded during the test run.
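
The settings from steps 1.1 and 1.2 above can be collected in SPECmail_config.rc as a fragment like the following. The parameter names come from the steps above; the exact file syntax may differ slightly, so check the sample SPECmail_config.rc shipped in the kit:

```
# Short verification run: 500 users, 100% load level only
USER_START   = 1
USER_END     = 500
LOAD_FACTORS = 100
RUN_SECONDS  = 600
```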

2 How to install SPECmail2001

2.1 Before you begin – a pre-installation checklist

Here is a checklist of items to review before installing the SPECmail2001 benchmark software.

2.1.1 Network Configuration

It is strongly recommended that the benchmark be run in an isolated network, to avoid external network traffic impacting your measured results. 100base-T Ethernet or faster media and switching are recommended for connecting the clients and server. The connections between a load generating client and the SUT must not use a TCP Maximum Segment Size (MSS) greater than 1460 bytes and the SUT value of TIME_WAIT must be at least 60 seconds (see the SPECmail2001 Run and Reporting Rules for more details).

2.1.2 Directory Services

Running directory services is optional, and it is your choice whether to run the directory service on the mail server or on a separate system. In either case, if you are running a directory service it must be reported as part of the SUT. Note that if you are using a directory service, it needs to be populated with the user information before the benchmark can be run (see Creating User Accounts on the Server below).

2.1.3 Mail Domain Names

You must choose at least two domain names, one for local delivery to the mail server and one for remote delivery to be relayed by the mail server. If you are running multiple mail sinks, each will require its own domain. These domain names need to be entered into your internal domain name service, network information service, distributed name service, or other equivalent service for the mail server. Your mail server needs to be configured to relay remote mail to the SMTP mail sinks. The domain names you choose should be unique and isolated from the real world.
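
As an illustration only, a minimal name setup might map one local domain to the SUT and one remote domain to each mail sink client. All hostnames, addresses, and domain names below are invented, and depending on your mail server you may instead need MX records or server-specific routing entries:

```
192.168.10.1    sut.maillocal.test       sut     # local delivery domain
192.168.10.21   sink1.mailremote1.test   sink1   # relayed to first mail sink
192.168.10.22   sink2.mailremote2.test   sink2   # relayed to second mail sink
```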

2.1.4 Disk Space Allocation

Your mail server will need enough disk space to hold the mail store and mail queue based on the SPECmail2001 user load you intend to generate. For a workload simulating 10,000 SPECmail2001 users, you will need ~1,000MB of disk space for the mailboxes. For 100,000 SPECmail2001 users, you will need ~10,000MB of disk space for the mailboxes. You can scale appropriately for higher user counts.

Note that these disk space values are only estimates; the actual amount of disk space required may vary based on your SUT. The benchmark load is designed to maintain a steady state, so the number of messages and the message sizes should remain fairly constant during the benchmark run, but you may need additional space on your SUT for disk fragmentation, log files, and storage overhead.
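
The figures above imply roughly 100KB of mailbox space per SPECmail2001 user (1,000MB for 10,000 users). A quick sketch of that scaling rule, using an assumed example user count:

```shell
# Estimate mailbox disk space from the ~100KB-per-user figure above.
# USERS is an assumed example value; substitute your target user count.
USERS=25000
MB_NEEDED=$((USERS / 10))   # 10,000 users ~ 1,000MB, i.e. users/10 in MB
echo "~${MB_NEEDED}MB of mailbox space for ${USERS} users"
```

Remember that this covers only the mailboxes; leave headroom for the mail queue, logs, and storage overhead as noted above.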

2.1.5 Mailbox Creation

The mailboxes must be created prior to running the benchmark, using the naming convention specified in Creating User Accounts on the Server below. The mailbox contents can be initialized as part of the benchmark run, or initialization can be done independently using the -initonly flag with the benchmark manager.

2.1.6 Client Configuration

The client systems running the benchmark manager, load generators and mail sinks must all use the same version of the Java Runtime Environment (JRE). The load generators require at a minimum JRE v1.1.8; the software has been tested using both v1.1.8 and v1.2.2 on several different platforms. The number of load generators needed will depend on the number of SPECmail2001 users being simulated and the performance of the clients – we recommend that you look at existing posted results for an indication of the number and size of clients you will need to run the benchmark. There is no requirement that all clients be the same type or size.

2.1.7 Synchronizing the Clocks of the Servers and Clients

Synchronization of the clocks on the client and server is not required for the SPECmail2001 benchmark, but it is recommended. Synchronized clocks will simplify evaluating the log files created by the benchmark on the clients.

2.2 Setting Up the Server

2.2.1 Installing your Server

Please see the installation instructions for your version of the mail server. Your installation must present the appearance and behavior of a single logical server for each protocol to all the mail clients (see the SPECmail2001 Run and Reporting Rules for more information).

You will need to set up local and remote domains as set in the configuration file. The local domain is for mail that gets stored by your mail server, the remote domains are for mail that gets forwarded to the mail sinks (SPECmail load generators act as mail sinks during the benchmark run). Note that for some mail servers, both the local and remote domains must resolve (they must be recognized by your system). If the remote domain does not resolve the server will not forward remote mail to the mail sinks.

2.2.2 Testing your Server

Once you have installed and started your mail server, you should perform the following tests on all clients to ensure that the clients can communicate with the mail server. Note that you must set up your domains and at least one user account before attempting these tests:

For SMTP (substitute your own server name, domains, and addresses for the placeholder values):

# telnet <your-server> 25
<the server should greet you with its server name and version number, etc>
helo domain.com
mail from: test001@domain.com
rcpt to: test002@domain.com
data
This is a test for SPECmail2001
.
quit

Then go to your mail server queue repository to check if the mail is queued, or check the mailbox to see if it has been delivered. You can also use the following POP3 commands to check if the mail has been delivered:

# telnet <your-server> 110
<the server should greet you with its server name and version number, etc>
user test002
pass test002-password
stat
list
retr 1
quit

2.2.3 Creating User Accounts on the Server

The user accounts you create on your mail server for the benchmark must be of the form <USERNAME_PREFIX><USER_ID>, for example test1000. The username prefix is defined in the configuration file (see Configuration Parameters below). Each user has a unique user ID, which is a number whose lower and upper bounds are defined in the configuration file.

You can create as many user accounts on the server as you wish; the benchmark will only use the user accounts that are dictated by the username prefix and user IDs specified in the configuration file. It is recommended you create your target maximum number of user accounts and that you initialize all of them, even if you do not intend to use them all initially. This is because the benchmark will maintain the contents of the mailboxes in a steady state, so re-initialization will not be necessary from run to run. So, if you initialize the mailboxes for 100,000 users, even if you only use 50,000 for your first tests you can later change your configuration to use up to 100,000 users without having to re-initialize the contents of the mailboxes.
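
As a sketch of the naming convention, the following shell loop prints the account names for a small USER_ID range. The prefix and bounds are assumed example values chosen to match the test1000 example above:

```shell
# Account name = <USERNAME_PREFIX><USER_ID>, e.g. test1000.
PREFIX=test     # USERNAME_PREFIX (assumed example value)
START=1000      # lower USER_ID bound (assumed)
END=1004        # upper USER_ID bound (assumed)
NAMES=""
for i in $(seq "$START" "$END"); do
  NAMES="$NAMES ${PREFIX}${i}"
done
echo $NAMES     # test1000 test1001 test1002 test1003 test1004
```

You could feed names generated this way to your mail server's account-creation tool; that tool is server-specific and not shown here.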

2.3 Setting Up the Clients

2.3.1 Installing Java

The load generators are Java applications, so your clients will need Java version 1.1.8 or later. The SPEC Mail Server subcommittee suggests you use Java version 1.2.2 or later. If you must use Java version 1.1.8, please see the Troubleshooting section on using Java version 1.1.8. You can obtain Java for your client platform from your platform or Java vendor's web site.

Refer to the vendor web sites for installing and configuring Java on your clients. Java is not necessary on the mail server unless required by your mail server software.

Note that some versions of Java (such as the FreeBSD port and earlier 1.1.8 versions) do not contain a Just-In-Time (JIT) Java bytecode compiler. A JIT compiler significantly reduces the CPU consumption of Java programs. As a result, depending on the speed of the CPU, without a JIT several clients will be required to generate a load for even a relatively small number of users. Since a JIT can yield a 10- to 20-fold speedup over the Java interpreter, you may find that running without a JIT is not practical for producing a SPECmail2001 load.

2.3.2 Installing the SPECmail2001 Software

There are four files you need at a minimum to run the SPECmail2001 benchmark: specmail2001.jar, check.jar, SPECmail_config.rc, and SPECmail_fixed.rc.

The specmail2001.jar and check.jar files must be present on all the client systems that will be running load generators. The RC files should be installed on the prime client (the system running the benchmark manager) along with the specmail2001.jar and check.jar files. The full installation includes additional files, including sample scripts for running the benchmark manager and the load generators.

2.4 RC Files

2.4.1 Configuration Parameters

Parameters for identifying clients and the mail server are contained in the SPECmail_config.rc file. This is where your client host names and port numbers are listed for the benchmark manager to use when controlling the load generators. This is also where you identify the SMTP server, the POP3 server, and the mail sink server names and port number.

At a minimum, to run a test you will need to modify the following parameters:

See Appendix A for a detailed description of the configuration parameters.

* in v1.00 of the benchmark, these attributes were singular (no 'S' on the end).

2.4.2 System Under Test (SUT) Description

The SUT is defined in the SPECmail_config.rc file. This information is used to create a section describing the SUT in the HTML results file. This information is not necessary for running the benchmark, but it is necessary if you intend to publish results. The SPECmail2001 Run and Reporting Rules specify that a published result should contain enough information about the SUT to allow someone else to reproduce the results.

Since a SUT may consist of more than one system, most of the parameters use an array format to allow you to identify multiple systems. For example, the function definition for the first system in your configuration uses the parameter SYSTEM.FUNCTION[0]. Additional systems would use the parameters SYSTEM.FUNCTION[1], SYSTEM.FUNCTION[2], SYSTEM.FUNCTION[3], etc. The main exceptions are SYSTEM.TITLE, which is used to name the SUT, and SYSTEM.CONFIG_DIAG, which is used to indicate a file containing a diagram of the SUT.
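
A sketch of how the array format might look in SPECmail_config.rc. The parameter names SYSTEM.TITLE, SYSTEM.CONFIG_DIAG, and SYSTEM.FUNCTION[n] come from the text above, while the values (and the exact file syntax) are placeholders:

```
SYSTEM.TITLE        = Example Mail Server Configuration
SYSTEM.CONFIG_DIAG  = config-diagram.gif
SYSTEM.FUNCTION[0]  = Mail server
SYSTEM.FUNCTION[1]  = Directory server
```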

A description of the clients running the load generators is not required, but is recommended. The clients must appear in the configuration diagram, so placing the hardware and software descriptions in the SUT description will help when evaluating the results.

See Appendix B for detailed descriptions of the SUT parameters.

2.4.3 Trace Parameters

The SPECmail_config.rc file contains a section with trace variables. These trace variables are commented out initially; if you uncomment one or more of them, the clients will produce trace information while running the benchmark. This trace information is provided for testing and debugging purposes; you can use the traces when initially setting up your mail server or to track down problems if you are having difficulty with your configuration.

As an alternative, you can disable the trace variables by uncommenting them and setting their value to zero. Then, to enable them, you simply change their value to one.

See Appendix C for a detailed description of the trace parameters.

These trace variables will have a performance impact on the clients, so they should be commented out when running the benchmark to produce publishable results.

2.4.4 Fixed Parameters

The SPECmail_fixed.rc file contains the parameters used by the benchmark to create the load generated for the mail server, establish run times, and measure the mail server Quality of Service (QoS) criteria. You should not edit this file. If you would like to alter one of these parameters while testing your mail server, you can override these values by putting them in the SPECmail_config.rc file. Any of these values found in the SPECmail_config.rc file will override the corresponding value in the SPECmail_fixed.rc file, and a note will be entered into the results indicating that a fixed parameter was overridden. You cannot produce publishable results if any of these parameters is altered.
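
For example, while testing you might shorten the measurement interval by overriding RUN_SECONDS (a parameter mentioned in the Overview section) in SPECmail_config.rc. As described above, the override is noted in the results and makes the run non-publishable:

```
# Testing only: overrides the value in SPECmail_fixed.rc
RUN_SECONDS = 600
```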

See Appendix D for a detailed description of the fixed parameters.

When running the benchmark, the benchmark manager checks the SPECmail_fixed.rc file; if it has been altered in any way, the run will be marked non-compliant.

3 How to Run SPECmail2001

3.1 Starting the Load Generators

The benchmark manager controls the benchmark load generators, but the user must start them manually. Scripts are provided for starting the load generators, but you will need to modify them to work in your environment.

3.1.1 Starting the Load Generators on UNIX

If you are using Java 1.2 or later and java is in your path, the simplest way to launch the client would be to use the command:

  # java -classpath specmail2001.jar:check.jar specmailclient

If you are using Java 1.1.8, you will need to include the classes.zip file in your classpath. You can check which version of Java you are using by issuing the following command:

  # java -version

As an alternative, you can put the jar files in your classpath, using one of the following commands:

In Bourne Shell:

  # CLASSPATH=<install dir>/specmail2001.jar:<install dir>/check.jar
  # export CLASSPATH

In C-Shell:

  # setenv CLASSPATH <install dir>/specmail2001.jar:<install dir>/check.jar

Then you can run the client using:

  # java specmailclient

Note that the specmail2001.jar file must be the first file in your classpath.

The default heap size for your Java VM may be insufficient; setting the memory heap size to 64MB (via the -mx64m flag) or larger should provide enough memory for running the load generator. If your client aborts with memory exceptions, try increasing this value.

The load generators have a single flag, -p <port>, which allows you to set the port number the client will listen on for the benchmark manager. The port number given on the command line must match the port number indicated for that client in the configuration parameters file. If the port is not provided, a default port number of 1099 is used. You will need to alter the port number if you are running multiple instances of the load generator on the same client machine or have a port number conflict with other software running on the system.

Thus, an extended version of the command used to start the load generator using a different port number would look like this, assuming your classpath has been set:

  # java -mx64m specmailclient -p 1555

Shell scripts for starting the client are provided with the benchmark; you can modify and use them as well.
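
A minimal launcher sketch in the spirit of the provided shell scripts. The installation directory is an assumption, and the script prints the java command it would run so you can verify the command line before launching for real:

```shell
#!/bin/sh
# Hypothetical load-generator launcher; adjust INSTALL_DIR for your system.
INSTALL_DIR=/opt/specmail2001
PORT=${1:-1099}    # default load-generator port, per the text above

# specmail2001.jar must come first in the classpath.
CLASSPATH="$INSTALL_DIR/specmail2001.jar:$INSTALL_DIR/check.jar"
export CLASSPATH

CMD="java -mx64m specmailclient -p $PORT"
echo "$CMD"        # replace this echo with: exec $CMD
```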

3.1.2 Starting the Load Generators on MS Windows

To run the load generators, it is simplest if the specmail2001.jar and check.jar files are in your classpath. To add the jar files to your classpath in MS Windows, set the CLASSPATH environment variable (on MS Windows 2000, via Control Panel > System > Advanced > Environment Variables; your system might differ slightly).

Note that the specmail2001.jar file must be the first file in your classpath. If you are using Java 1.1.8, you will need to include the classes.zip file in your classpath.

Once your classpath is set, open a command window. In the command window issue the command:

  C:\> java specmailclient

The default heap size for your Java VM may be insufficient; setting the memory heap size to 64MB (via the -mx64m flag) should provide enough memory for running the load generator (assuming that your client has 64MB of RAM). If your client aborts with memory exceptions, try increasing this value.

The load generators have a single flag, -p <port>, which allows you to set the port number the client will listen on for the benchmark manager. When starting on a client, the port number given on the command line must match the port number indicated for that client in the configuration parameters file. If the port is not provided, a default port number of 1099 is used. You will need to alter the port number if you are running multiple instances of the load generator on the same client machine or have a port number conflict with other software running on the system.

Thus, an extended version of the command used to start the load generator using a different port would look like this:

  C:\> java -mx64m specmailclient -p 1555

Another way to start the load generator on MS Windows is to use a batch script. Batch scripts for starting the client are provided with the benchmark; you can modify and use them.

3.2 Starting the Mail Sinks

The mail sinks are started the same way you start a load generator; please see the section above. The specmailclient is used in both cases. A mail sink can also be a load generator, depending on how the configuration file is set up, but this configuration is not recommended unless you have a very powerful client. Note that as of version 1.01, multiple mail sinks are supported.

You will not be able to verify that the mail sinks have started correctly until the benchmark begins the warm-up period. Just prior to the warm-up period, the Benchmark Manager connects to and starts up the mail sinks. After the warm-up period has started, you can verify the mail sinks have started correctly by performing the following:

  telnet <mail sink hostname> <SINK_PORT>

The SINK_PORT is the port number the mail sinks listen on for receiving outbound messages from the benchmark. A SPECmail2001 banner will be displayed after connecting to the mail sink. Typically, mail servers use port 25 to relay outbound messages; therefore, if your configuration uses port 25 as the sink port, ensure that no other process on the mail sink client is already using port 25 (e.g. a sendmail process).

3.3 Starting the Benchmark Manager

Starting the benchmark manager is similar to starting the load generators; the main difference is that you use specmail rather than specmailclient in the java command and provide command line flags to control the behavior of the benchmark. Scripts are provided for running the benchmark manager in both UNIX and MS Windows; you can also refer to them for examples of how to run the benchmark manager in your environment. Note that the provided scripts simply print out the version number of the benchmark manager; you should modify the script to launch the benchmark manager with your preferred options from the list below.

3.3.1 Options

Several options are available when launching the benchmark manager.

So, to initialize a new mail server (after setting up the user accounts), you would run the benchmark manager on UNIX using the following command:

  # java specmail -initonly

In MS Windows:

  C:\> java specmail -initonly

To start a compliant run on UNIX, you would use the following command:

  # java specmail -compliant

In MS Windows:

  C:\> java specmail -compliant

4 Results

After completing a run, the benchmark manager will generate an output.raw file containing the raw results from the benchmark. These results will be in a directory like result-200011031423, where the numbers indicate the date and time the benchmark was started (in this case 2:23pm on November 3rd, 2000). SPECmail2001 includes a utility for generating an HTML results file called the reporter. The reporter is written in Java, so running it is similar to starting the load generators or benchmark manager.
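
The timestamp embedded in the result directory name can be decoded mechanically; a small sketch using the example name from the text:

```shell
# Decode result-YYYYMMDDHHMM into a readable date and time.
DIR=result-200011031423
STAMP=${DIR#result-}     # strips the "result-" prefix -> 200011031423
WHEN=$(echo "$STAMP" | sed 's/^\(....\)\(..\)\(..\)\(..\)\(..\)$/\1-\2-\3 \4:\5/')
echo "$WHEN"             # 2000-11-03 14:23, i.e. 2:23pm on November 3, 2000
```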

4.1 Generating a SPECmail2001 HTML Report

4.1.1 In UNIX

To run the reporter, it is simplest if the specmail2001.jar file is in your classpath. To add the specmail2001.jar file to your classpath, use one of the following commands:

In Bourne Shell:

  # CLASSPATH=<install dir>/specmail2001.jar
  # export CLASSPATH

In C-Shell:

  # setenv CLASSPATH <install dir>/specmail2001.jar

Then cd into the directory where the output.raw file is. You run the reporter using:

  # java specmailreporter <filename>.raw

Where <filename>.raw is the raw file generated by the benchmark. By default this file will be called output.raw. The reporter will generate an HTML file of the form <filename>.html. The reporter also supports a '-d' flag for producing a more detailed report. This report contains additional information you can use when evaluating the performance of your mail server. To generate a detailed report for output.raw, use the command:

  # java specmailreporter -d output.raw

4.1.2 In MS Windows

To run the reporter, it is simplest if the specmail2001.jar file is in your classpath. To add the specmail2001.jar file to your classpath, set the CLASSPATH environment variable (on MS Windows 2000, via Control Panel > System > Advanced > Environment Variables; your system might differ slightly).

Then, open a command ("DOS") window and cd into the directory where the output.raw file is. You run the reporter using:

  C:\> java specmailreporter <filename>.raw

Where <filename>.raw is the raw file generated by the benchmark. By default this file will be called output.raw. The reporter will generate an HTML file of the form <filename>.html. The reporter also supports a '-d' flag for producing a more detailed report. This report contains additional information you can use when evaluating the performance of your mail server. To generate a detailed report for output.raw, use the command:

  C:\> java specmailreporter -d output.raw

4.2 The HTML Results File

The HTML file created by the reporter is designed to be consistent with other SPEC reporting pages. It contains the metric of the benchmark run, a summary of the key results, and a complete system description. The detailed report includes additional information about the benchmark at the three different load levels.

4.2.1 Results Block

The results block contains the key information for the benchmark results. It includes the system name and results. It also includes publishing information such as who produced the results and when the results were created. In the case of an invalid run, a description below the results block will contain a link to the Error section that contains detailed information about why the run did not produce a valid (i.e. publishable) result.

4.2.2 Summary Results

The summary results section contains the Quality of Service (QoS) performance of the mail server on the key functions for the different load levels. These functions are the primary items of concern when testing a mail server using the SPECmail benchmark, and thus the strengths and weaknesses of a mail server can be determined from this table. It includes the following items:

4.2.3 System Descriptions

The system description section contains detailed information about the SUT. This information is extracted from the configuration file, which the benchmarker sets up prior to running the benchmark. It is intended to provide information for viewers to use when evaluating different systems and comparing results. The SPECmail2001 Run and Reporting Rules specify that enough information be provided so that a third party could set up a system and duplicate the results published.

4.2.4 Configuration Diagram

When submitting results to SPEC for publication, you must include an image file with a drawing of your mail server configuration. The HTML file created by the reporter will contain a link to this file, using the file name specified in the configuration file. The file must at a minimum indicate the network connections between the mail server and the load generators. The image must be in GIF, JPEG, or PNG format.

4.2.5 Detailed Results

If the '-d' flag is used when creating the result file, a detailed results section for each load level is included in the HTML file. This detail section includes more information on the summary results (above) and additional performance details recorded by the load generators. For each load level, the detail section includes:

4.2.6 Notes/Tuning Information

The Notes/Tuning Information section contains additional information on the SUT configuration and tuning parameters.

If the results failed validation for some reason, an Errors block will be added to the end of the Notes/Tuning Information section with a detailed description of what caused the failure.

5 Tips and Troubleshooting

Tips and Troubleshooting contains additional information on running the benchmark.

5.1 Using Java Version 1.1.8

The SPEC Mail Server subcommittee suggests you use Java version 1.2.2 or later.  If you must use Java version 1.1.8 to run the load generators and benchmark manager, you should use the command 'jre' instead of 'java'. You will also need to include the classes.zip file in your CLASSPATH.  If you see the following error:

Unable to initialize threads: cannot find class java/lang/Thread
Could not create Java VM

the classes.zip file was not entered into your CLASSPATH correctly.  If you see the following warning message:

WARNING: specmail2001.jar was not the first file in CLASSPATH

the classes.zip file is probably the first file in your CLASSPATH.  The classes.zip file should be the last file listed in your CLASSPATH.

If you are using the batch scripts provided, you will need to modify them to use 'jre' instead of 'java' and include the classes.zip file at the end of the CLASSPATH definition before you attempt to run the scripts.

5.2 Benchmark Manager Only Prints Version Number

The scripts provided to run the benchmark manager use the '-v' flag, which prints out the version number of the code. If you would like to run the benchmark manager using a script, you should modify the provided script to use the appropriate options for your configuration. The first time you run the benchmark, you should use the command:

java specmail -initonly

which will only initialize the mail server's mailboxes.  This initialization takes a significant amount of time, so by default the benchmark assumes the mailboxes have been initialized prior to starting. The benchmark maintains the mailboxes in a steady state during the run, so under normal conditions you will only need to initialize the mailboxes once. You can verify that the mailboxes were initialized correctly using the command:

java specmail -verifyonly

When performing a benchmark run to generate results for publication, you should use the command:

java specmail -compliant

The -compliant flag will cause the benchmark to terminate if an error or warning is encountered during the run.  Without the -compliant flag, warnings will be printed to the console but the benchmark will continue to run.

5.3 Connecting to Remote Clients Failed: InvalidClassException

When launching the benchmark manager and adding hosts, you may see an error containing text similar to:

Error: Name lookup for load generator alt100 exception error
unmarshalling return; nested exception is:
        java.io.InvalidClassException: specmailclient_Stub; Local
class not compatible: Stream classdesc serialVersionUID=xxx local
class serialVersionUID=yyy
...

This error indicates that incompatible versions of the benchmark code are installed on your clients.  All clients (running the benchmark manager, load generators and/or mail sinks) must have the same version of the benchmark software installed and running in order to run the benchmark.

5.4 Couldn't Read Config File Error

When launching the benchmark manager, you may see an error containing text similar to:

Error: couldn't read config file: exception java.lang.NullPointerException
java.lang.NullPointerException
at org.spec.specmail.Configuration.parseList(Configuration.java, Compiled Code)
...

This error indicates that one of the configuration file parameters (specifically, one of the lists) could not be found.  Check your configuration file to make sure that you have not accidentally commented out one of the configuration parameters or misspelled a parameter name.

Note that when updating the benchmark from v1.00 to v1.01, two of the configuration file parameters were changed.  If you are using a configuration file from v1.00 in a version of the benchmark v1.01 or greater, you need to change

SINK_SERVER to SINK_SERVERS (with an 'S' on the end)
and
REMOTE_DOMAIN to REMOTE_DOMAINS (with an 'S' on the end)
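On UNIX systems, the rename can be scripted with sed. The fragment below is a sketch that operates on a scratch file with hypothetical key/value lines; run the same sed against your real SPECmail_config.rc after backing it up:

```shell
# Demonstrate the v1.00 -> v1.01 key rename on a scratch file.
printf 'SINK_SERVER = sink1:2525\nREMOTE_DOMAIN = remote1.example.com\n' > v100_config.rc
sed -e 's/^SINK_SERVER\([^S]\)/SINK_SERVERS\1/' \
    -e 's/^REMOTE_DOMAIN\([^S]\)/REMOTE_DOMAINS\1/' v100_config.rc > v101_config.rc
```

The `[^S]` guard keeps the substitution from touching lines that already use the newer plural names.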

6 Submitting Results

Once you have a successful run, you can submit the results to the SPEC mail server subcommittee for review by mailing the output.raw file and the configuration diagram to submail2001@spec.org. When mailing the files, include them as attachments to the mail message and only submit one result per email message.

Note: The raw file contains the configuration information from the SPECmail_config.rc file. Please edit the RC file with the correct SUT information prior to running the benchmark for submission.

Every submission goes through a two-week review process, starting on a scheduled SPECmail subcommittee conference call. During the review, members of the committee may ask for additional information or clarification of the submission. Once the result has been reviewed and approved by the committee, it is displayed on the SPEC web site at http://www.spec.org/.

Appendix A – Configuration Parameters

Parameter Name

Usage

CLIENTS

A list of clients that will be running the load generators for the benchmark. A client entry consists of the machine name and the port number the benchmark manager will use to connect to the load generator on that client.

SMTP_SERVER
SMTP_PORT

The host name and port number of the SMTP server for your mail server.

POP3_SERVER
POP3_PORT

The host name and port number of the POP3 server for your mail server.

SINK_SERVERS

The hosts that will provide mail sink services during the benchmark. Note that mail sink services are provided by a SPECmail load generator, thus these systems must have the load generator software running.
* - in v1.00 of the benchmark, this attribute was SINK_SERVER.

SINK_PORT

The port number the mail sinks will monitor for mail.  This is the port that your mail server will send outgoing mail to on the 'remote systems.'

LOCAL_DOMAIN

The domain name that represents local mail on the mail server (i.e. mail intended for local recipients that must be stored).

REMOTE_DOMAINS

The domain names that represent remote mail on the mail server (i.e. mail not intended for local recipients that must be forwarded).  Note: the number of remote domains must match the number of SINK_SERVERS defined.
* - in v1.00 of the benchmark, this attribute was REMOTE_DOMAIN.

USERNAME_PREFIX

The prefix to use on the user account names for the mailboxes; user account names are of the form <USERNAME_PREFIX><USER_ID>.

USERNAME_AS_PASSWORD

A flag to indicate that the user account name should also be used as the password (0=no, 1=yes).

USER_PASSWORD

If the account name is not used as the account password, this value will be used as the password for all the user accounts.

USER_START
USER_END

The starting and ending numbers to use for the user account names for the mailboxes; user account names are of the form <USERNAME_PREFIX><USER_ID>.

SMTP_SEND_DELAY_MILLISECONDS

When sending messages to the mailboxes during mailbox initialization, this value indicates how long to wait between messages.

SMTP_INIT_REST_MINUTES

After completing mailbox initialization, the benchmark will ‘rest’ for the specified amount of time to allow the mail server to de-queue all the new messages.

RSL_FILENAME

The RSL file is an ASCII file generated by the benchmark manager when the run is completed that summarizes the raw file data for the run.
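Pulling several of these parameters together, a fragment of SPECmail_config.rc might look like the following. All values are hypothetical, and the exact syntax for list-valued parameters such as CLIENTS, SINK_SERVERS and REMOTE_DOMAINS should be taken from the sample file shipped with the benchmark:

```
SMTP_SERVER = mailhost.example.com
SMTP_PORT = 25
POP3_SERVER = mailhost.example.com
POP3_PORT = 110
LOCAL_DOMAIN = local.example.com
USERNAME_PREFIX = mailuser
USERNAME_AS_PASSWORD = 1
USER_START = 1
USER_END = 10000
```

With this fragment, the benchmark would operate on mailboxes mailuser1 through mailuser10000, each using its account name as its password.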

Appendix B – System Description Parameters

The following parameters are singletons; each should appear only once in the configuration file.

Parameter Name

Usage

PREPARED_BY

The name of the person that ran and submitted this result.

TESTED.BY

The name of the company that ran and submitted this result.

LICENSE.NUM

If the company is a SPEC member, this is the SPEC license number assigned to that company. If the company is not a SPEC member, this is the license number assigned when the SPECmail2001 license was purchased from SPEC.

TEST.DATE

Month and Year of the benchmark run.
This field must be of the form Mon-YYYY (e.g. “Nov-2000”).

NOTES

General notes to appear in the result file. Multiple notes can be entered, so the parameter uses an array notation. For example:

NOTES[0] = “This is the first notes line.”
NOTES[1] = “”
NOTES[2] = “You can even include blank lines.”

SYSTEM.TITLE

The overall name for the SUT. This is used as the title for your results page.

SYSTEM.CONFIG_DIAG

Name of the file containing your configuration diagram.

The following parameters are system description parameters; they are repeated for each system being described as part of the SUT. The file uses an array notation for these parameters; see the provided SPECmail_config.rc file for examples.

Parameter Name

Usage

SYSTEM.FUNCTION

The label that will appear at the very top of the system description box.

SYSTEM.NUM_SYSTEMS

The number of systems of this type used in the test.

SYSTEM.HW_MODEL

This is the hardware model of the system.
Note: this field is a searchable field, so model naming should be consistent with other SPEC submissions when publishing results.

SYSTEM.HW_VENDOR

This is the hardware vendor of the system.
Note: this field is a searchable field, so vendor naming should be consistent with other SPEC submissions when publishing results.

SYSTEM.HW_AVAIL

The date the hardware is/will be shipping and generally available to the public.
This field must be of the form Mon-YYYY (e.g. “Nov-2000”).

SYSTEM.OS

Operating System (including version number)

SYSTEM.FILESYSTEM

The file system used.

SYSTEM.SW_LABEL

The label that appears in the Software section of the system description box. A special value of “JVM” entered into this field causes the SW_NAME to be ignored and the JVM and JIT fields to be used instead (for use with client systems running the load generators).

SYSTEM.SW_NAME

The name and version number of the software under test (this field is ignored on client systems).

SYSTEM.SW_AVAIL

The date the software is/will be shipping and generally available to the public.
This field must be of the form Mon-YYYY (e.g. “Nov-2000”).

SYSTEM.JVM

Java Virtual Machine description and version number. This field is only used if the SYSTEM.SW_LABEL parameter contains “JVM”.

SYSTEM.JIT

Just In Time Java compiler description and version number. This field is only used if the SYSTEM.SW_LABEL parameter contains “JVM”.

SYSTEM.CPU

The type of central processing unit(s) in the system.

SYSTEM.CPUMHZ

The speed (in MHz) of the CPUs.

SYSTEM.CPU_ENABLED

The number of CPUs in the system.

SYSTEM.MEMORY

The amount of physical memory (in Megabytes) in the system. This field should be an integer (do not use “Mb” or “Gb” in this field).

SYSTEM.L1CACHE

The amount of level 1 cache, for instruction (I) and data (D) on each CPU.

SYSTEM.L2CACHE

The amount of level 2 cache on each CPU.

SYSTEM.L3CACHE

The amount of level 3 cache on each CPU. If caches higher than level 3 exist, this value should represent all level 3 cache and higher.

SYSTEM.DISK

Size and Type of disks used by the system. If a complex disk system is used, additional information on the disks should be included in the SYSTEM.NOTES parameters.

SYSTEM.NETWORK

Type of network interface(s) used by the system.

SYSTEM.HW_OTHER

Other hardware in the system that is performance-related.

SYSTEM.NOTES

Additional notes about the system.

Appendix C – Trace Parameters

The trace parameters allow you to have the load generators print information messages to standard out when running the benchmark. If you are going to be activating a trace, you should redirect the output from the load generators to a file when you start them on the clients.

These trace variables will have a performance impact on the clients, so they should be commented out when running the benchmark to produce publishable results.
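For example, assuming the trace parameters follow the same `KEY = value` form as the other configuration parameters (the value shown here is an assumption; check the sample rc files for the exact form), enabling a single POP3 trace might look like:

```
# Enable only while debugging; comment out for publishable runs.
TRACE_POP3 = 1
```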

Parameter Name

Usage

TRACE_EVENTS

Print message for all events in the load generators.

TRACE_POP3

Print message for POP3 events.

TRACE_POP3VRFY

Print message for POP3 verify events. Verify events occur when the mail server is probed to check that the mail boxes contain messages that match the expected size and count distribution.

TRACE_POP3CLEAN

Print message for POP3 clean events. Clean events occur when the benchmark empties the mail boxes prior to initializing them.

TRACE_SMTPINIT

Print message for SMTP initialization events. Initialization events occur when the mailboxes are pre-populated with the benchmark mail distribution.

TRACE_SMTPQOS

Print message for SMTP QoS events. QoS events are generated when the load generators create, send and retrieve mail messages to verify the mail server meets the delivery time criteria.

TRACE_SMTP

Print message for SMTP events.

TRACE_SMTPSINK

Print message for SMTP SINK events. SINK events occur when a load generator, acting as the mail sink, receives messages relayed by the mail server that are addressed to remote users.

TRACE_MISC

Print message for miscellaneous events. This prints messages for activity that does not fit well into any other category.

TRACE_MDIM_KEYS

When tracing a vector array, this trace will print out the key values being searched.

TRACE_CLIENT_MSGS

When processing the client list from the configuration file, this trace will print out the clients being extracted from the list.

TRACE_SETDESTPERCENT

No longer used.

TRACE_SETMSGSIZES

When processing the message size list from the file, this trace will print out the message sizes being extracted from the list.

TRACE_SETMBOXSIZES

When processing the mailbox size distribution list from the file, this trace will print out the items being extracted from the list.

TRACE_CONFIGREAD

When processing items in the configuration file, this trace will print out the items as they are parsed.

TRACE_READFROM

When processing the parameter files, this trace prints a message when each section is complete.

TRACE_SETRECIP_MSGS

When processing the mail recipient distribution list from the file, this trace will print out the items being extracted from the list.

TRACE_MODEM_RATES

When processing the modem rates to use in the load generators, this trace will print out the rates being extracted from the list.

TRACE_CALCRESULT

When running the reporter to generate HTML results, this trace will print out the values being used and the calculations that are performed.

TRACE_GETFORMATTEDRESULT

No longer used.

TRACE_RESULTS

Print message when the load generators record results.

TRACE_MSGSIZE

Print message when the load generators create a new message to send.

TRACE_WORKLOAD

Print message when adding or changing the workloads on the clients.

TRACE_RETRYPOP

Prints warning messages when the load generator is checking the POP retry pool.

TRACE_VERIFY_MBOX_COUNT

When verifying the number of messages in the mailboxes matches the expected distribution, this trace will print out the actual and expected message count distribution in the mailboxes.

TRACE_VERIFY_MSG_SIZE

When verifying the size of the messages in the mailboxes matches the expected distribution, this trace will print out the actual and expected message size distribution in the mailboxes.

Appendix D – Fixed Parameters

The fixed parameters are used by the benchmark to set up the load generators. You should never edit this file. If you would like to alter one of these parameters while testing your mail server, you can override these values by putting them in the SPECmail_config.rc file. Any of these values found in the SPECmail_config.rc file will be used in place of the value located in the SPECmail_fixed.rc file, and a note will be entered into the results indicating that a fixed parameter was overridden.
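For example, to shorten the warm-up period while testing, you could place an override in SPECmail_config.rc (the value here is arbitrary, for illustration only; the override will be noted in the results):

```
# Testing only: overrides WARMUP_SECONDS from SPECmail_fixed.rc.
WARMUP_SECONDS = 60
```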

Parameter Name

Usage

WARMUP_SECONDS

How long to run the benchmark at load before starting to gather statistics.

RUN_SECONDS

How long to run the benchmark at load while gathering statistics.

RAMPDOWN_SECONDS

How long to wait after stopping the gathering of statistics and terminating the load before starting a new test (a.k.a. the cool-down period).

LOAD_FACTORS

The load factors with which we scale the load in a set of consecutive test runs.

MSG_SIZE_DISTRIBUTION

The message size distribution for mail messages sent by the load generators.

MSG_RECP_DISTRIBUTION

The distribution of the number of recipients for mail messages sent by the load generators.

POP_RETRY_BOXES_PERCENT

The percentage of mailboxes set aside for POP retries.

DATA_RATE_DISTRIBUTION

The distribution of speed of simulated local connections.

SLOW_MODEM_DATA_RATE

Characters per second for a 28.8Kbps modem; used for SLOW in DATA_RATE_DISTRIBUTION.

FAST_MODEM_DATA_RATE

Characters per second for a 56.6Kbps modem; used for FAST in DATA_RATE_DISTRIBUTION.

CABLE_MODEM_DATA_RATE

Characters per second for a 2Mbps cable modem; used for CABLE in DATA_RATE_DISTRIBUTION.

PROTOCOL_TIMEOUT_SECONDS

The number of seconds after which a connection is deemed to have timed out.

PROTOCOL_TIEMOUT_PERCENT

The percentage of connections that should NOT time out.

DISCONNECT_PERCENT

The percentage of client connections that will unexpectedly disconnect (i.e. the load generators will not properly terminate the connection).

DELIVERY_TIME_CHECK_PERCENT

The percentage of local SMTP messages that should be checked for delivery time Quality of Service (QoS).

SMTP_DELIVERY_QOS_SECONDS

The number of seconds within which a local SMTP message should be delivered to meet the delivery time QoS limit.

SMTP_DELIVERY_QOS_PERCENT

The percentage of local SMTP messages that need to meet the delivery time QoS limit at 100% load in a valid benchmark run.

SIMPLE_COMMAND_QOS_SECONDS

The number of seconds within which the mail server should respond to a simple mail command.

SIMPLE_COMMAND_QOS_PERCENT

The percentage of simple mail commands that need to pass the QoS requirement at 100% load in a valid benchmark run.

COMPLEX_COMMAND_QOS_SECONDS

The number of seconds within which the mail server should respond to a complex mail command.

COMPLEX_COMMAND_QOS_PERCENT

The percentage of complex mail commands that need to pass the QoS requirement at 100% load in a valid benchmark run.

MSG_DESTINATION_LOCAL_PERCENT

The percentage of the mail generated by local users that is destined for other local users. The rest of the locally originated mail is destined for remote users, and all mail originating remotely is destined for local users. See the White Paper for more information.

POP_CHECKS_PER_DAY

The number (on average) of POP checks made by a user every day.

REPEATED_POP_CHECKS_PER_DAY

The number of POP checks that are 'repeated POP checks' (i.e. additional checks by a user that will (usually) access an empty mailbox).

REPEATED_POP_CHECK_INTERVAL_SECONDS

The interval between repeated POP checks on the same mailbox.

MSG_SENT_PER_DAY

The number (on average) of messages sent by local users every day.

PEAK_LOAD_PERCENT

The percentage of the daily load occurring during the peak hour (this is the load simulated by SPECmail at 100% load factor).

Copyright (c) 2001 Standard Performance Evaluation Corporation