This strips out the extra characters from the submission file so that you can view or work with the original raw file. This is the recommended method for editing a file post-submission because it ensures you are not working with an outdated version of the corresponding raw file and potentially introducing previously corrected errors into the "corrected" submission file.
See Section 6.0 in the `SPECvirt Datacenter 2021 Run and Reporting Rules`_ for instructions on submitting a measurement for SPEC publication.
6.0 Control.config experimental options
=========================================
Use this section to change benchmark and workload parameters for ease of benchmark tuning and for research purposes.
For troubleshooting and tuning, you can edit fixed Control.config parameters such as the run time and which workloads to run. Editing these parameters results in a non-compliant run but can help you find the SUT's maximum load.
Remember to reset these to their defaults before attempting a compliant run. See `Appendix`_ for the list of Control.config parameters.
6.1 Change measurement durations
-------------------------------------------------------
As you debug and tune your environment to achieve maximum load, you might want to reduce measurement time. For example, to run a one-hour measurement, you can set::
  runTime = 3600
  phase2StartTime = 1800
  phase3StartTime = 2400
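Edits like these can also be scripted. The following is a minimal sketch, not part of the benchmark kit: the helper name ``set_params`` and the sample file contents are illustrative, and a real run would operate on ``$CP_BIN/Control.config``.

```python
import re

# Illustrative starting contents (stands in for $CP_BIN/Control.config).
sample = "runTime = 10800\nphase2StartTime = 5400\nphase3StartTime = 7200\n"
with open("Control.config", "w") as f:
    f.write(sample)

def set_params(path, params):
    """Rewrite existing 'key = value' lines in a Control.config-style file."""
    with open(path) as f:
        text = f.read()
    for key, value in params.items():
        # (?m) makes ^/$ match per line, so only the matching entry changes.
        text = re.sub(rf"(?m)^{re.escape(key)} = .*$", f"{key} = {value}", text)
    with open(path, "w") as f:
        f.write(text)

# Shorten the Measurement Interval to one hour, as in the example above.
set_params("Control.config",
           {"runTime": 3600, "phase2StartTime": 1800, "phase3StartTime": 2400})
```

Remember that the harness reads the file as-is, so only keys that already exist are rewritten by this sketch.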
6.2 Disable/Enable workloads
-------------------------------------------------------
You can specify which workloads to run. For example, if you want to run only the BigBench workload, you can set::
  doAIO = 0
  doHammerDB = 0
  doBigBench = 1
6.3 Disable/Enable support file collection
-------------------------------------------------------
After running a measurement, the harness creates an archive of supporting files describing the SUT configuration and measurement parameters. Because gathering this post-measurement data can be time-consuming, you can disable SUT configuration data collection by setting::
  collectSupportFiles = 0
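Before attempting a run with edited values, it can help to sanity-check them. The following is a small sketch under the assumption that Control.config uses simple ``key = value`` lines; ``parse_config`` is a hypothetical helper, not part of the harness.

```python
def parse_config(text):
    """Parse simple 'key = value' lines, skipping comments and blanks."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

# Illustrative contents; a real check would read $CP_BIN/Control.config.
cfg = parse_config("""
# experimental settings
runTime = 3600
phase2StartTime = 1800
phase3StartTime = 2400
""")

# Both phase start times must fall inside the Measurement Interval.
ok = int(cfg["phase2StartTime"]) < int(cfg["phase3StartTime"]) < int(cfg["runTime"])
```

A check like this catches, for example, a ``phase3StartTime`` accidentally left at its default 7200 after shortening ``runTime`` to 3600.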
.. _Appendix:
Appendix - Control.config
=========================================
The following shows the contents of the $CP_BIN/Control.config file::
  ###########################################################################
  #                                                                         #
  # Copyright (c) 2021 Standard Performance Evaluation Corporation (SPEC).  #
  # All rights reserved.                                                    #
  #                                                                         #
  ###########################################################################
  # SPECvirt Datacenter 2021 benchmark control file: 4/3/21
  # numTilesPhase1 : Number of Tiles used for Phase1 Throughput
  # For full Tiles, use the whole number only (e.g., "5", not "5.0")
  # For Partial Tiles, increments of .2 will add workloads in the following order:
  # mail=Tilenum.2 , mail+web=Tilenum.4 ,
  # mail+web+collab=Tilenum.6 , mail+web+collab+HammerDB=Tilenum.8
  # E.g., "numTilesPhase1 = 6.4" means 6 full Tiles plus a partial 7th Tile
  # containing only mail + web workloads.
  numTilesPhase1 = 1
  # numTilesPhase3 : The maximum (total) number of Tiles that will be run on the SUT
  # Partial Tiles are allowed for numTilesPhase3. (*See comments for 'numTilesPhase1' above)
  numTilesPhase3 = 1
  # Virtualization environment running on the SUT (vSphere or RHV)
  virtVendor =
  # The IP, FQDN, and credentials for the management server (e.g., the vCenter or RHV-M server)
  mgmtServerIP =
  mgmtServerHostname =
  mgmtServerURL =
  virtUser =
  virtPassword =
  # The name and location of the certificate for the management server if needed
  virtCert =
  # The hostname(s) and credentials for the servers that will be added to the cluster
  # during phase 2 of the Measurement Interval (MI). Specify one host for every
  # four-node cluster
  offlineHost_1 =
  #offlineHost_2 =
  # Name of the template / appliance to deploy for SUT VMs
  templateName =
  # Name of the template / appliance to deploy for CLIENT VMs
  clientTemplateName =
  # Name of the cluster for SUT VMs
  cluster =
  # Name of the network for SUT VMs
  network =
  # Name of the cluster for CLIENT VMs
  clientCluster =
  # Name of the network for CLIENT VMs
  clientNetwork =
  # Number of host nodes in cluster. Must be a multiple of four
  numHosts = 4
  # Name of the storage pools used for SUT VMs. If multiple storage pools are used
  # for a given workload type, use multiple lines with a single storage pool per line
  # and increment the number in brackets. For example:
  #
  # mailStoragePool[0] = mailPool1
  # mailStoragePool[1] = mailPool2
  # mailStoragePool[2] = mailPool3
  #
  # VMs of the same workload type will be evenly placed across all defined storage pools
  # for that workload, based on the tile number.
  #
  # *Note, if different storage names are listed, they must be accessible by all hosts using the
  # same access method(s)
  mailStoragePool[0] =
  webStoragePool[0] =
  collabStoragePool[0] =
  HDBstoragePool[0] =
  BBstoragePool[0] =
  # Name of the storagePool for CLIENT VMs. If multiple storage pools are used, use
  # multiple lines with a single storage pool per line and increment the number in
  # brackets. For example:
  #
  # clientStoragePool[0] = clientPool1
  # clientStoragePool[1] = clientPool2
  #
  # Client VMs will be evenly distributed across all defined storage pools, based on the tile number.
  clientStoragePool[0] =
  # MAC Address Format - the first 3 sets of hex values (octets) used to prefix
  # the MAC addresses generated for deployments.
  # By default, the prefix is 42:44:49. Change these values if needed
  # (e.g., if your network environment already contains vNICs using these values).
  MACAddressPrefix = 42:44:49:
  # IP address prefix (for example, '172.23.' starts Tile 1 IPs at 172.23.1.1)
  # *Note, the format is ...
  IPAddressPrefix = 172.23.
  # Disk device to be used for secondary data disk on workload VMs. Device is assumed to be in
  # /dev within the VM's guest environment. Partitions and/or filesystems will be configured on
  # this device and any existing data will be overwritten. Examples of vmDataDiskDevice are:
  # "sdb" (default) and "vdb".
  vmDataDiskDevice = sdb
  ################
  # The number of vCPUs assigned to the workload VMs during their deployment
  ################
  # Number of vCPUs assigned to the departmental (mail, web, collab1, collab2) workload VMs.
  # (Default value = 4 vCPUs for each workload)
  vCpuAIO = 4
  # Number of vCPUs assigned to HammerDB Appserver. (Default = 2 vCPUs)
  vCpuHapp = 2
  # Number of vCPUs assigned to HammerDB Database. (Default = 8 vCPUs)
  vCpuHdb = 8
  # Number of vCPUs assigned to BigBench NameNode. (Default = 8 vCPUs)
  vCpuBBnn = 8
  # Number of vCPUs assigned to BigBench DataNode. (Default = 8 vCPUs)
  vCpuBBdn = 8
  # Number of vCPUs assigned to BigBench Database. (Default = 8 vCPUs)
  vCpuBBdb = 8
  # Number of vCPUs assigned to the Client VMs. (Default = 4)
  vCpuClient = 4
  # Delay to allow the "startRun.sh" script (which collects pre-testrun SUT information) to complete before starting the testrun.
  startDelay = 180
  # Delay before starting each workload (*mailDelay will always use zero for first Tile,
  # regardless of value below)
  mailDelay = 30
  webDelay = 30
  collab1Delay = 30
  collab2Delay = 1
  HDBDelay = 30
  bigBenchDelay = 60
  # DelayFactors provide a mechanism to extend the workload deployment delay by an increasing
  # amount per Tile.
  # For example, if "mailDelay = 30" and "mailDelayFactor = 1.2", for the Tile2 mail VM the applied
  # mailDelay would be 36. For the Tile3 mail VM, the mailDelay would be 43 (36 * 1.2 = 43.2, truncated), etc...
  mailDelayFactor = 1.0
  webDelayFactor = 1.0
  collab1DelayFactor = 1.0
  collab2DelayFactor = 1.0
  HDBDelayFactor = 1.0
  bigBenchDelayFactor = 1.0
  # Debug level for CloudPerf director & workload agents (logs reported in /export/home/cp/log
  # ... and /export/home/cp/results/specvirt/)
  # Valid values are: [1..9]
  debugLevel = 3
  # Benchmarkers can create automation scripts that run immediately before or after the
  # test to collect information while the testbed is still in the same configuration
  # as it was during the test. Scripts are located in
  # /export/home/cp/config/workloads/specvirt/HV_Operations/$virtVendor
  # User-specific automation script to run at beginning of test
  initScript = "userInit.sh"
  # User-specific automation script to run at end of test
  exitScript = "userExit.sh"
  ############# Fixed Settings ########################################
  # Changing any of these values will result in a non-compliant test!!!
  #
  # Revision of benchmark template image
  templateVersion = 1.0
  # Duration of the Throughput Measurement Interval
  runTime = 10800
  # Set doAIO = 0 to remove web, mail, and collab workloads from the Throughput MI
  doAIO = 1
  # Set doHammerDB = 0 to remove HammerDB load from the Throughput MI
  doHammerDB = 1
  # Set doBigBench = 0 to remove BigBench load from the Throughput MI
  doBigBench = 1
  phase2StartTime = 5400
  phase3StartTime = 7200
  WORKLOAD_SCORE_REF_VALUE[0] = 1414780
  WORKLOAD_SCORE_REF_VALUE[1] = 978246
  WORKLOAD_SCORE_REF_VALUE[2] = 1476590
  WORKLOAD_SCORE_REF_VALUE[3] = 2576560
  WORKLOAD_SCORE_REF_VALUE[4] = 45
  # Set collectSupportFiles = 0 to not collect the supporting tarball files
  collectSupportFiles = 1
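The per-tile DelayFactor arithmetic described in the comments above can be sketched as follows. This is an illustration, not harness code: the helper name ``tile_delays`` is hypothetical, the truncation to whole seconds is an assumption chosen to reproduce the "43" in the example, and the rule that the first Tile's mailDelay is always zero is not modeled.

```python
def tile_delays(base_delay, factor, num_tiles):
    """Per-tile workload start delays: each successive Tile's delay is the
    previous Tile's delay multiplied by the DelayFactor, truncated to
    whole seconds (assumed rounding behavior)."""
    delays, current = [], float(base_delay)
    for _ in range(num_tiles):
        delays.append(int(current))
        current *= factor
    return delays

# mailDelay = 30, mailDelayFactor = 1.2, three Tiles -> 30, 36, 43
print(tile_delays(30, 1.2, 3))
```

With the default DelayFactor of 1.0, every Tile uses the same base delay.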
Copyright 2021 Standard Performance Evaluation Corporation (SPEC). All rights reserved.
.. _CentOS Project: http://www.centos.org
.. _SPECvirt Datacenter 2021 web site: http://www.spec.org/virt_datacenter2021
.. _SPECvirt Datacenter 2021 Design Overview: http://www.spec.org/virt_datacenter2021/docs/designoverview.html
.. _SPECvirt Datacenter 2021 FAQ: http://www.spec.org/virt_datacenter2021/docs/faq.html
.. _SPECvirt Datacenter 2021 Patches: http://www.spec.org/virt_datacenter2021/docs/patches.html
.. _SPECvirt Datacenter 2021 Run and Reporting Rules: http://www.spec.org/virt_datacenter2021/docs/runrules.html
.. _SPECvirt Datacenter 2021 Technical Support: http://www.spec.org/virt_datacenter2021/docs/techsupport.html
.. _SPECvirt Datacenter 2021 User Guide: http://www.spec.org/virt_datacenter2021/docs/userguide.html
.. _SPECvirt Datacenter 2021 SDKs: http://www.spec.org/virt_datacenter2021/sdk