SPECsfs2008_nfs.v3 Result

NetApp, Inc. : FAS3140 (SATA Disks with Performance Acceleration Module)
SPECsfs2008_nfs.v3 = 40011 Ops/Sec (Overall Response Time = 2.75 msec)


Performance

Throughput (ops/sec)   Response (msec)
4000 0.9
8003 1.2
12008 1.4
16019 1.7
20021 2.0
24036 2.4
28043 3.1
32044 4.1
36090 5.8
40011 10.0
Performance Graph
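The reported Overall Response Time of 2.75 msec can be reproduced from the table above. A minimal sketch, assuming SPEC's method of trapezoidal integration of the response-vs-throughput curve from the origin to peak load, divided by peak throughput:

```python
# Sketch (not SPEC's reference code) of the Overall Response Time
# calculation: area under the response-time curve, integrated by the
# trapezoidal rule from (0, 0) to peak load, divided by peak throughput.
points = [(0, 0.0), (4000, 0.9), (8003, 1.2), (12008, 1.4), (16019, 1.7),
          (20021, 2.0), (24036, 2.4), (28043, 3.1), (32044, 4.1),
          (36090, 5.8), (40011, 10.0)]

area = sum((x2 - x1) * (y1 + y2) / 2
           for (x1, y1), (x2, y2) in zip(points, points[1:]))
ort = area / points[-1][0]
print(round(ort, 2))  # 2.75, matching the reported value
```

Note how the steep rise in response time near saturation (5.8 to 10.0 msec over the last two load points) dominates the area but is divided out by the peak throughput of 40011 ops/sec.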


Product and Test Information

Tested By NetApp, Inc.
Product Name FAS3140 (SATA Disks with Performance Acceleration Module)
Hardware Available July 2008
Software Available January 2009
Date Tested December 2008
SFS License Number 33
Licensee Locations Sunnyvale, CA, USA

The NetApp® FAS3100 represents the midrange of the FAS family of storage systems with NetApp's unified storage architecture. It features three models: the FAS3140, FAS3160, and FAS3170. FAS3100 performance is driven by a 64-bit architecture that uses high-throughput, low-latency links and PCI Express for all internal and external data transfers. It is flexible enough to handle primary and/or secondary storage needs for NAS and/or SAN implementations. The FAS3100 system supports as many as 40 Fibre Channel ports or 36 Ethernet ports, including support for both 8Gb Fibre Channel and 10Gb Ethernet. The FAS3140 scales to a maximum of 420 disk drives and 420TB of capacity. Finally, the FAS3100 boasts an efficient, HA-ready 6U form factor that allows dual controllers to share a chassis, backplane, and common power resources.

The NetApp Performance Acceleration Module provides a new way to optimize performance. It is a combination of hardware and tunable caching software that reduces latency and improves I/O throughput without adding more disk drives. Depending on the storage system, up to five of these 16GB modules can be configured per controller as a unified 80GB read cache in the PCI Express slots. For the FAS3140 controller, up to two modules can be configured as a unified 32GB read cache per controller. The Performance Acceleration Module is optimized to improve the performance of random-read-intensive workloads such as file services.

Configuration Bill of Materials

Item No Qty Type Vendor Model/Name Description
1 2 Storage Controller NetApp FAS3140A-IB-BASE-R5 FAS3140A,IB,ACT-ACT,OS,R5
2 1 Controller Chassis NetApp FAS3140A-CHASSIS-R5-C FAS3140,ACT-ACT,Chassis,AC PS,-C,R5
3 8 Disk Drives w/Shelf NetApp DSX-7.0TB-R5-C Disk Shelf,7.0TB,SATA,-C,R5
4 2 PAM Adapter NetApp X1936A-R5-C ADPT,Perf Acceleration Module I,PCIe,-C,R5
5 2 FC-AL Adapter NetApp X2054B-R6 HBA,FC,4-port,PCIe,4Gb,R6
6 1 Software License NetApp SW-T3C-NFS NFS Software,T3C
7 2 Software License NetApp SW-T3-FLEXSCALE-C Perf Acceleration Module Software,T3,-C

Server Software

OS Name and Version Data ONTAP 7.3.1
Other Software None
Filesystem Software Data ONTAP 7.3.1

Server Tuning

Name Value Description
vol options 'volume' no_atime_update on Disable atime updates (applied to all volumes)

Server Tuning Notes

N/A

Disks and Filesystems

Description Number of Disks Usable Size
500GB SATA 7200RPM Disk Drives 112 39.7 TB
Total 112 39.7 TB
Number of Filesystems 4
Total Exported Capacity 39.7 TB
Filesystem Type WAFL
Filesystem Creation Options Default
Filesystem Config Each filesystem was striped across 28 disks
Fileset Size 4675.9 GB

The storage configuration consisted of 8 shelves, each with 14 disks. Pairs of shelves were daisy-chained such that the outputs of the first shelf in the pair were attached to the inputs of the second shelf in the pair. Each shelf in each pair had two 2Gbit/s FC-AL loop connections, each connected to one of eight FC-AL ports (four integrated on the mainboard, four on the FC-AL HBA) on a different storage controller, creating four possible paths to access any disk in the two-shelf pair. Logic in the shelf controller converts commands and data on the FC-AL loops to SATA, the interface on the disks. Half of the disks in each shelf pair were owned by each storage controller. Disks were organized into four pools or "aggregates", each consisting of 28 disks. Each aggregate was composed of two RAID-DP groups, and each RAID-DP group was composed of 12 data disks and 2 parity disks. Two aggregates were owned by each storage controller. Within each aggregate, a flexible volume (using Data ONTAP FlexVol™ technology) was created to hold the SFS filesystem for that controller. Each volume was striped across all disks in the aggregate where it resided. Each controller owned two volumes/filesystems, but the disks in each aggregate were dual-attached so that, in the event of a fault, they could be managed by the other controller via an alternate loop. A separate flexible volume residing in one of the aggregates owned by each controller held the Data ONTAP operating system and system files.
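The disk accounting above can be checked with simple arithmetic; a sketch using only numbers stated in this report:

```python
# Sketch of the disk accounting described above (illustrative only).
total_disks = 8 * 14            # 8 shelves x 14 SATA disks = 112
aggregates = 4                  # two per controller
disks_per_aggregate = 28
raid_groups_per_aggregate = 2   # RAID-DP: 12 data + 2 parity each
data, parity = 12, 2

assert aggregates * disks_per_aggregate == total_disks
data_disks = aggregates * raid_groups_per_aggregate * data
parity_disks = aggregates * raid_groups_per_aggregate * parity
print(data_disks, parity_disks)  # 96 data disks, 16 parity disks

# The reported 39.7 TB usable across 96 data disks implies roughly
# 414 GB usable per 500 GB disk after right-sizing and reserves.
print(round(39.7 / data_disks * 1000))
```

The 96:16 data-to-parity ratio reflects the double-parity protection noted under "Other System Notes".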

Network Configuration

Item No Network Type Number of Ports Used Notes
1 Jumbo Frame Gigabit Ethernet 4 Integrated 10/100/1000 Ethernet controller

Network Configuration Notes

There were two Gigabit Ethernet network interfaces on each storage controller. The interfaces were configured to use jumbo frames (MTU size of 9000 bytes). All network interfaces were connected to a Cisco 6509 switch, which provided connectivity to the clients.

Benchmark Network

An MTU size of 9000 was set for all connections to the switch. Each load generator was connected to the network via a single 1 GigE port, which was configured with 4 separate IP addresses on separate subnets.

Processing Elements

Item No Qty Type Description Processing Function
1 2 CPU 2.4GHz Dual-Core AMD Opteron(tm) Processor 2216, 512K L2 cache Networking, NFS protocol, WAFL filesystem, RAID/Storage drivers

Processing Element Notes

Each storage controller has one physical processor with two processing cores.

Memory

Description Size in GB Number of Instances Total GB Nonvolatile
Storage controller mainboard memory 4 2 8 V
Performance Acceleration Module memory 16 2 32 V
Storage controller integrated NVRAM module 0.5 2 1 NV
Grand Total Memory Gigabytes     41  

Memory Notes

Each storage controller has main memory that is used for the operating system and for caching filesystem data. A separate, integrated battery-backed RAM module is used to provide stable storage for writes that have not yet been written to disk.

Stable Storage

The WAFL filesystem logs writes and other filesystem-modifying transactions to the integrated NVRAM module. In an active-active configuration, as in the system under test, such transactions are also logged to the NVRAM on the partner storage controller so that, in the event of a storage controller failure, any transactions on the failed controller can be completed by the partner controller. Filesystem-modifying NFS operations are not acknowledged until the storage system has confirmed that the related data are stored in the NVRAM modules of both storage controllers (when both controllers are active). The battery backing the NVRAM ensures that any uncommitted transactions are preserved for at least 72 hours.
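The commit rule described above can be modeled abstractly. A minimal sketch, not NetApp code, with all names hypothetical:

```python
# Illustrative model (not NetApp code) of the commit rule described
# above: a filesystem-modifying operation is acknowledged only after
# its log record is stable in the local battery-backed NVRAM and,
# when the partner is active, mirrored to the partner's NVRAM.
def commit(op, local_nvram, partner_nvram, partner_active=True):
    local_nvram.append(op)        # log locally (battery-backed)
    if partner_active:
        partner_nvram.append(op)  # mirror over the cluster interconnect
    return "ACK"                  # only now is the NFS client answered

local, partner = [], []
assert commit("SETATTR", local, partner) == "ACK"
assert local == partner == ["SETATTR"]
```

The point of the ordering is that a controller failure after the ACK leaves a complete log record on the surviving partner, which can replay the transaction.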

System Under Test Configuration Notes

The system under test consisted of two FAS3140 storage controllers housed in a single 6U chassis and 8 storage shelves, each with 14 500GB SATA disk drives. The two controllers were configured in an active-active cluster using the high-availability cluster software option in conjunction with an InfiniBand cluster interconnect on the backplane of the shared chassis. A Performance Acceleration Module was present in a PCIe expansion slot and enabled with default settings on each storage controller. An FC-AL host bus adapter was present in a PCIe expansion slot on each storage controller. Each storage shelf had one 2Gbit/s FC-AL loop connection to each storage controller, as well as two FC-AL connections daisy-chaining it to another shelf. The system under test was connected to a Gigabit Ethernet switch via 4 network ports (two per storage controller).

Other System Notes

All standard data protection features, including background RAID and media error scrubbing, software validated RAID checksumming, and double disk failure protection via double parity RAID (RAID-DP) were enabled during the test.

Test Environment Bill of Materials

Item No Qty Vendor Model/Name Description
1 25 IBM IBM eServer Bladecenter (14-blade chassis) Server blade with 2GB RAM and Linux operating system
2 1 Cisco 6509 Cisco Catalyst 6509 Ethernet Switch

Load Generators

LG Type Name LG1
BOM Item # 1
Processor Name Dual-Core AMD Opteron Processor 2216 HE
Processor Speed 2.4 GHz
Number of Processors (chips) 1
Number of Cores/Chip 2
Memory Size 2 GB
Operating System RHEL4 kernel 2.6.9-55.0.2.ELsmp
Network Type 1 x Broadcom NetXtreme II Gigabit Ethernet BCM5706 1000Base-SX (A2)

Load Generator (LG) Configuration

Benchmark Parameters

Network Attached Storage Type NFS V3
Number of Load Generators 25
Number of Processes per LG 20
Biod Max Read Setting 8
Biod Max Write Setting 8
Block Size AUTO

Testbed Configuration

LG No LG Type Network Target Filesystems Notes
1..25 LG1 1 /vol/vol1 /vol/vol2 /vol/vol3 /vol/vol4 N/A

Load Generator Configuration Notes

All filesystems were mounted on all clients, which were connected to the same physical and logical network.

Uniform Access Rule Compliance

Each load-generating client hosted 20 processes. The assignment of processes to filesystems and network interfaces was done such that they were evenly divided across all filesystems and network paths to the storage controllers. The filesystem data was striped evenly across all disks and FC-AL loops on the storage backend.
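The even division described above can be sketched as a round-robin assignment. The exact SPEC assignment table is not shown in this report, so the scheme below is illustrative:

```python
# Sketch of an even process-to-filesystem assignment consistent with
# the description above (the actual SPEC assignment is not disclosed).
clients, procs_per_client = 25, 20
filesystems = ["/vol/vol1", "/vol/vol2", "/vol/vol3", "/vol/vol4"]

# Round-robin all 500 processes across the four filesystems.
load = {fs: 0 for fs in filesystems}
for c in range(clients):
    for p in range(procs_per_client):
        fs = filesystems[(c * procs_per_client + p) % len(filesystems)]
        load[fs] += 1

print(load)  # 125 processes per filesystem: 500 divided evenly by 4
```

Because 20 processes per client is a multiple of 4 filesystems, each client individually spreads its load evenly, satisfying the uniform access rule at both the client and aggregate level.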

Other Notes

Other test notes: None.

NetApp is a registered trademark and "Data ONTAP", "FlexVol", and "WAFL" are trademarks of NetApp, Inc. in the United States and other countries. All other trademarks belong to their respective owners and should be treated as such.

Config Diagrams


Generated on Mon Dec 22 16:16:36 2008 by SPECsfs2008 HTML Formatter
Copyright © 1997-2008 Standard Performance Evaluation Corporation

First published at SPEC.org on 02-Feb-2009