SPECstorage(TM) Solution 2020_swbuild Result

Oracle : Oracle ZFS Storage ZS11-2 Eight Drive Enclosure Hybrid Storage System

SPECstorage Solution 2020_swbuild = 1010 Builds (Overall Response Time = 0.93 msec)

===============================================================================

Performance
===========

   Business      Average
    Metric       Latency        Builds        Builds
   (Builds)      (msec)        Ops/Sec        MB/Sec
------------  ------------  ------------  ------------
         101           0.3         50502           443
         202           0.3        101004           845
         303           0.4        151506          1245
         404           0.5        202008          1645
         505           0.5        252510          2046
         606           0.5        303012          2447
         707           0.6        353514          2848
         808           0.7        403902          3247
         909           1.0        454518          3650
        1010           8.8        496577          3992

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------------+
|Oracle ZFS Storage ZS11-2 Eight Drive Enclosure Hybrid Storage System|
+---------------------------------------------------------------------+
Tested by           Oracle
Hardware Available  June 12, 2025
Software Available  June 12, 2025
Date Tested         April 2025
License Number      00073
Licensee Locations  Redwood Shores, CA, USA

The Oracle ZFS Storage ZS11-2 system is a cost-effective, unified storage
system that is ideal for performance-intensive, dynamic workloads. This
enterprise-class storage system offers both NAS and SAN capabilities with
industry-leading Oracle Database integration, in a highly available,
clustered configuration. The Oracle ZFS Storage ZS11-2 provides simplified
configuration, management, and industry-leading storage Analytics. The
performance-optimized platform leverages specialized read and write flash
caching devices in the hybrid storage pool configuration, optimizing
high-performance throughput and latency. The clustered Oracle ZFS Storage
ZS11-2 system scales to 2.3TB of memory per controller, includes 64 CPU
cores per controller, and scales to 20 PB of disk storage. The Oracle ZFS
Storage Appliance delivers excellent value with integrated data services
for file and block-level protocols, with connectivity over 32Gb FC, 200GbE,
100GbE, 40GbE, 25GbE, 10GbE and 1GbE. Data services include five levels of
compression, deduplication, encryption, snapshots, and replication. An
advanced data integrity architecture and four RAID redundancy options
optimized for different workloads provide a strong data protection
foundation.

Solution Under Test Bill of Materials
=====================================

Item
 No  Qty  Type        Vendor  Model/Name    Description
---- ---- ----------- ------- ------------- ----------------------------------
1    2    Storage     Oracle  Oracle ZFS    Oracle ZFS Storage ZS11-2, 2 x
          Controller          Storage       32-core 2.95GHz AMD EPYC 9J15
                              ZS11-2        CPUs, 98304 MB DDR5-6400 DIMMs,
                                            2 x 3.84TB NVMe Samsung boot
                                            drives.
2    48   Memory      Oracle  Oracle ZFS    Oracle ZFS Storage ZS11-2, 24 x
                              Storage       98304 MB (96GB) DDR5-6400 RDIMMs
                              ZS11-2        per controller. Memory is order
                                            configurable; a total of 2304GB
                                            was installed in each storage
                                            controller.
3    8    Storage     Oracle  Oracle        24 drive slot enclosure, SAS-3
          Drive               Storage       connected, with 20 x 14TB Western
          Enclosure           Drive         Digital 7200 RPM SAS-3 disk
                              Enclosure     drives, 2 x Samsung 200GB SAS-3
                              DE3-24C       SSDs, and 2 x Samsung 7.68TB
                                            SAS-3 SSDs. Dual PSU.
4    160  SAS3 HDD    Oracle  WDC W7214A    14TB Western Digital 7200 RPM
                              520ORA014T    SAS-3 disk drive. Drive selection
                                            is order configurable. A total of
                                            160 x 14TB Western Digital 7200
                                            RPM SAS-3 disk drives were
                                            installed across the eight Oracle
                                            Storage Drive Enclosure DE3-24C
                                            enclosures.
5    16   SAS3 SSD    Oracle  Samsung       Samsung 200GB SAS-3 solid state
                              MZILT960HBHQ  disk. Drive selection is order
                                            configurable; a total of 16 x
                                            Samsung 200GB SAS-3 solid state
                                            disks were installed across the
                                            eight Oracle Storage Drive
                                            Enclosure DE3-24C enclosures.
                                            These drives are used as write
                                            accelerators.
6    16   SAS3 SSD    Oracle  Samsung       Samsung 7.68TB SAS-3 solid state
                              MZILT6HALA    disk. Drive selection is order
                                            configurable; a total of 16 x
                                            Samsung 7.68TB SAS-3 solid state
                                            disks were installed across the
                                            eight Oracle Storage Drive
                                            Enclosure DE3-24C enclosures.
                                            These drives are used for the ZFS
                                            L2 Adaptive Replacement Cache.
7    4    Client      Oracle  Oracle        Oracle x9-2 Client Node, 2 x
                              x9-2          Intel Xeon Gold 5318Y 24-core
                                            2.1GHz processors, 512GB RAM,
                                            2 x (2 x 100GbE). Used for
                                            benchmark load generation. One
                                            client also serves as the prime
                                            client.
8    8    OS Drive    Oracle  Micron        Micron 5300 MTFD 240GB M.2 solid
                              5300 MTFD     state disk drive. Two were
                              240GB         installed in each Oracle x9-2
                                            Client Node; one drive in each
                                            client is used as the OS boot
                                            drive.
9    1    Switch      Arista  Arista DCS-   Arista DCS-7060CX-32S
                              7060CX-32S    high-performance, low-latency
                                            100/50/40/25/10 Gb/sec Ethernet
                                            switch.
10   8    Network     Oracle  Dual 100-     Mellanox ConnectX-5 VPI dual port
          Interface           Gigabit       QSFP28 100GbE Ethernet HBA, two
          Card                QSFP28        in each x9-2 Client Node (Oracle
                              Ethernet      Part Number 7364127).
11   4    Network     Oracle  Dual 200-     Mellanox ConnectX-7 VPI dual port
          Interface           Gigabit       QSFP28 200GbE Ethernet HBA, two
          Card                QSFP28        in each ZS11-2 controller (Oracle
                              Ethernet      Part Number 7603663). Can be
                                            ordered together with the Oracle
                                            ZS11-2 controllers.
12   1    Switch      CDW     Netgear       Netgear Gigabit switch, used for
                              GS724Tv4      the management and configuration
                              Gigabit       network only.
                              Switch

Configuration Diagrams
======================

1) storage2020-20250421-00127.config1.png (see SPECstorage Solution 2020
   results webpage)

2) storage2020-20250421-00127.config2.png (see SPECstorage Solution 2020
   results webpage)

Component Software
==================

Item               Name and
 No  Component     Type          Version           Description
---- ------------- ------------- ----------------- --------------------------
1    Oracle ZFS    Storage OS    8.8.81            Oracle ZFS Storage OS
     Storage                                       Firmware.
     Controller
2    Solaris       Workload      Solaris (11.4-    Workload Client Operating
     Client        Client OS     11.4.37.0.1.      System
                                 101.1)

Hardware Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                       Oracle ZFS Storage ZS11-2                      |
+----------------------------------------------------------------------+
Parameter Name   Value           Description
---------------- --------------- ----------------------------------------
MTU              9000            Network Jumbo Frames

+----------------------------------------------------------------------+
|                        Oracle x9-2 Client Node                       |
+----------------------------------------------------------------------+
Parameter Name   Value           Description
---------------- --------------- ----------------------------------------
MTU              9000            Network Jumbo Frames

Hardware Configuration and Tuning Notes
---------------------------------------

The 100GbE and 200GbE Ethernet data ports of the System Under Test are set
to MTU 9000.
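For illustration, jumbo frames on a Solaris 11.4 client would typically be
enabled per datalink with dladm; this is a minimal sketch, and the datalink
name net0 below is a placeholder for the client's 100GbE interface, not the
actual link name used in the test:

    # Set the datalink MTU to 9000 (net0 is a placeholder datalink name)
    dladm set-linkprop -p mtu=9000 net0

    # Verify the effective MTU on the datalink
    dladm show-linkprop -p mtu net0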
Software Configuration and Tuning - Physical
============================================

+----------------------------------------------------------------------+
|                       Oracle x9-2 Client Nodes                       |
+----------------------------------------------------------------------+
Parameter Name   Value           Description
---------------- --------------- ----------------------------------------
vers             3               NFS mount option set to version 3
rsize, wsize     1048576         NFS mount option for the read and write
                                 buffer sizes.
forcedirectio    forcedirectio   NFS mount option set to forcedirectio
rpcmod:clnt_max  4               Increases the number of NFS client
_conns                           connections from the default of 1 to 4.

Software Configuration and Tuning Notes
---------------------------------------

Best-practice settings for the network, the NFS clients, and the Oracle ZFS
Storage systems over 100GbE Ethernet include mounting the Oracle ZFS
Storage shares on the workload clients with forcedirectio and with read and
write NFS buffer sizes of 1048576 bytes each. The Oracle ZFS Storage
controllers' software configuration and tuning are set to factory defaults.

Service SLA Notes
-----------------

None

Storage and Filesystems
=======================

Item                                                         Stable
 No  Description                          Data Protection    Storage  Qty
---- ------------------------------------ ------------------ -------- -----
1    160 x 14TB HDD Oracle ZFS Storage    RAID-10            Yes      160
     ZS11-2 Data Pool Drives
2    16 x 200GB SSD Oracle ZFS Storage    None               Yes      16
     ZS11-2 Log Drives
3    16 x 7.68TB Samsung SAS-3 SSD        None               Yes      16
     ZS11-2 Read Cache Drives
4    3.84TB NVMe Samsung SSD Oracle ZFS   Mirrored           No       4
     Storage ZS11-2 OS Drives
5    223GB Micron 5300 SSD x9-2 Client    None               No       4
     Node OS Drives

Number of Filesystems   2
Total Capacity          930 TiB
Filesystem Type         ZFS

Filesystem Creation Notes
-------------------------

Two ZFS storage pools are created in the System Under Test (one storage
pool and one ZFS filesystem per Oracle ZFS Storage ZS11-2 controller). Each
storage pool is configured with 76 data HDDs, 8 write accelerator SSDs (log
devices), 8 L2 Adaptive Replacement Cache SSDs (read cache), and 4 hot
spare HDDs. The storage pools are configured via the administrative browser
interface, with half of the disk drives, log devices, and cache devices
assigned to each storage controller. Each storage pool's data profile is
set to mirrored (RAID-10) across its 76 data HDDs, and the log and cache
device profiles are set to striped. The log devices in each storage pool
hold the ZFS Intent Log (ZIL) for the pool; the cache devices hold the ZFS
L2 Adaptive Replacement Cache for the pool.

A ZFS filesystem is created in each storage pool and configured with 16 ZFS
filesystem shares. Since each Oracle ZFS Storage ZS11-2 controller has one
storage pool with one ZFS filesystem containing 16 shares, the System Under
Test presents a total of 32 ZFS filesystem shares (32 NFS shares).

Each Oracle ZFS Storage ZS11-2 controller also has 2 internal mirrored
system disk drives that are used only for the controller's NAS operating
system. These drives hold the NAS firmware exclusively and do not cache or
store user data.

Each Oracle x9-2 workload client mounts 4 NFS shares per Oracle ZFS Storage
ZS11-2 controller, for a total of 8 NFS shares per client, over 2 x 100GbE
networks (see the Oracle ZFS Storage ZS11-2 Filesystem and Network
Configuration diagram).
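As an illustrative sketch of the client-side settings described above, a
Solaris workload client would mount a share with options of the following
form; the server name, share path (zs11-a:/export/fs01), and mount point
are placeholders, not the names used in the test:

    # Example NFSv3 mount using the options from this section
    # (zs11-a:/export/fs01 and /mnt/fs01 are placeholder names)
    mount -F nfs -o vers=3,rsize=1048576,wsize=1048576,forcedirectio \
        zs11-a:/export/fs01 /mnt/fs01

    # /etc/system entry that raises the NFS client connection count
    # from the default of 1 to 4 (takes effect after a client reboot)
    set rpcmod:clnt_max_conns = 4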
Storage and Filesystem Notes
----------------------------

All filesystems on both Oracle ZFS Storage ZS11-2 controllers are created
with the database record size set to 128KB. The logbias setting is left at
latency (the default value) for each filesystem. These standard settings
are controlled through the Oracle ZFS Storage administration browser or CLI
interfaces.

Transport Configuration - Physical
==================================

Item                  Number of
 No  Transport Type   Ports Used  Notes
---- ---------------- ----------- -------------------------------------------
1    200GbE Ethernet  8           Each Oracle ZFS Storage ZS11-2 controller
                                  is networked via 4 x 200GbE Ethernet
                                  physical ports for data.
2    1GbE Ethernet    2           Each Oracle ZFS Storage ZS11-2 controller
                                  uses 1 x 1GbE Ethernet physical port for
                                  NAS configuration and management.
3    100GbE Ethernet  8           Each Oracle x9-2 Client Node is networked
                                  via 2 x 100GbE Ethernet physical ports for
                                  data.
4    1GbE Ethernet    4           Each Oracle x9-2 Client Node uses 1 x 1GbE
                                  Ethernet physical port for configuration
                                  and management.

Transport Configuration Notes
-----------------------------

Each Oracle ZFS Storage controller uses 4 x 200GbE Ethernet ports, for a
total of 8 x 200GbE ports. In the event of a controller failure, its IP
addresses will be taken over by the surviving controller. All 200GbE ports
are set to MTU 9000. Each controller and client has 1 x 1GbE port assigned
as its administration interface; these are connected to the 1GbE ports on
the Netgear Gigabit switch. The administration interfaces are used only to
manage the controllers and clients and take no part in the data services of
the System Under Test.

Switches - Physical
===================

                                           Total  Used
Item                                       Port   Port
 No  Switch Name       Switch Type         Count  Count  Notes
---- ----------------- ------------------ ------ ------ ---------------------
1    Arista            100/50/40/25/10    32     16     All ports set for MTU
     DCS-7060CX-32S    Gb/sec Ethernet                  9000. Port count
                       Switch                           based on 100GbE.
2    Netgear Gigabit   10/100/1000        26     6      Only the 1GbE ports
     Switch GS724Tv4   Mb/sec Ethernet                  are used, for
                       Switch                           management and
                                                        configuration of the
                                                        SUT.

Processing Elements - Physical
==============================

Item
 No  Qty  Type  Location        Description                Processing Function
---- ---- ----- --------------- -------------------------- -------------------
1    4    CPU   Oracle ZFS      32-core 2.95GHz AMD EPYC   ZFS, TCP/IP,
                Storage ZS11-2  9J15 CPU (2 per            RAID/Storage
                                controller)                Drivers, NFS
2    8    CPU   Oracle x9-2     24-core 2.1GHz Intel Xeon  TCP/IP, NFS
                Client Node     Gold 5318Y processor (2
                                per client)

Processing Element Notes
------------------------

Each Oracle ZFS Storage ZS11-2 controller contains 2 physical processors,
each with 32 processing cores. Each Oracle x9-2 client contains 2 physical
processors, each with 24 processing cores.

Memory - Physical
=================

                               Size in    Number of
Description                    GiB        Instances  Nonvolatile  Total GiB
------------------------------ ---------- ---------- ------------ ----------
Memory in Oracle ZFS Storage   2304       2          V            4608
ZS11-2 controllers
Memory in Oracle x9-2 clients  512        4          V            2048
Grand Total Memory Gibibytes                                      6656

Memory Notes
------------

The Oracle ZFS Storage controllers' main memory is used for the ZFS
Adaptive Replacement Cache (ARC, a read data cache) as well as for
operating system memory. Oracle x9-2 client memory is not used for storage
or caching on behalf of the Oracle ZFS Storage ZS11-2 controllers; it is
used only by the client OS.
Stable Storage
==============

The Stable Storage requirement is guaranteed by the ZFS Intent Log (ZIL),
which logs writes and other filesystem-changing transactions to stable
storage on write flash accelerator SSDs or HDDs, depending on the
configuration. The System Under Test uses write flash accelerator SSDs.
Writes and other filesystem-changing transactions are not acknowledged
until the data is written to stable storage. The Oracle ZFS Storage
Appliance is an active-active, high-availability cluster; in the event of a
controller failure or power loss, each controller can take over for the
other. The write flash accelerator SSDs and HDDs are located in shared disk
shelves and can be accessed from both controllers via the 16 backend SAS-3
channels, so the remaining active controller can complete any outstanding
transactions using the ZIL. In the event of power loss to both controllers,
the ZIL is used after power is restored to reinstate any writes and other
filesystem changes.

Solution Under Test Configuration Notes
=======================================

The System Under Test is the Oracle ZFS Storage ZS11-2 in an active-active
failover configuration.

Other Solution Notes
====================

None

Dataflow
========

Please reference the System Under Test diagram. The 4 Oracle x9-2 workload
clients are used for benchmark load generation. Each Oracle x9-2 workload
client mounts 8 of the 32 filesystem shares provided by the Oracle ZFS
Storage cluster via NFSv3; sixteen filesystem shares are shared from each
Oracle ZFS Storage controller. Each of the two Oracle ZFS Storage
controllers has 4 x 200GbE Ethernet ports for data service, with all ports
assigned to separate networks. Each Oracle x9-2 workload client has 4 x
100GbE Ethernet ports, of which only 2 are used, mounting 8 NFS shares per
client over two networks. This is shown in the Oracle ZFS Storage ZS11-2
Filesystem and Network Configuration diagram.

Other Notes
===========

Oracle and ZFS are registered trademarks of Oracle Corporation in the U.S.
and/or other countries. Intel and Xeon are registered trademarks of Intel
Corporation in the U.S. and/or other countries.

Other Report Notes
==================

None

===============================================================================

Generated on Mon May 5 17:34:35 2025 by SpecReport
Copyright (C) 2016-2025 Standard Performance Evaluation Corporation