SPECstorage(TM) Solution 2020_eda_blended Result

Microsoft and NetApp Inc.                Azure NetApp Files large volume
                                         breakthrough mode

SPECstorage Solution 2020_eda_blended = 2880 Job_Sets
    (Overall Response Time = 0.51 msec)

===============================================================================

Performance
===========

    Business       Average
     Metric        Latency      Job_Sets      Job_Sets
   (Job_Sets)      (msec)       Ops/Sec        MB/Sec
 ------------  ------------  ------------  ------------
          192           0.3         86405          1394
          384           0.3        172810          2788
          576           0.3        259214          4182
          768           0.3        345607          5575
          960           0.4        432007          6970
         1152           0.4        518429          8365
         1344           0.4        604834          9760
         1536           0.4        691238         11153
         1728           0.4        777643         12546
         1920           0.5        864048         13941
         2112           0.5        950452         15335
         2304           0.7       1036598         16722
         2496           0.6       1123134         18124
         2688           1.1       1209647         19519
         2880           1.6       1296040         20910

===============================================================================

Product and Test Information
============================

+---------------------------------------------------------------+
|       Azure NetApp Files large volume breakthrough mode       |
+---------------------------------------------------------------+

Tested by             Microsoft and NetApp Inc.
Hardware Available    May 2024
Software Available    May 2024
Date Tested           January 2026
License Number        33
Licensee Locations    San Jose, CA USA

Azure NetApp Files is an Azure-native, first-party, enterprise-class,
high-performance file storage service. It provides volumes as a service,
which you can create within a NetApp account and a capacity pool, and share
to clients using SMB and NFS. You can also select service and performance
levels and manage data protection. You can create and manage
high-performance, highly available, and scalable file shares by using the
same protocols and tools that you are familiar with and rely on
on-premises.
Solution Under Test Bill of Materials
=====================================

Item No: 1   Qty: 1    Type: Storage    Vendor: Microsoft
Model/Name:  Azure NetApp Files large volume breakthrough mode
Description: Azure NetApp Files large volumes can support from 50 TiB to 2
             PiB in size. Breakthrough mode enables individual volumes to
             scale beyond 12.5 GiB/s. Volumes can be resized up or down on
             demand, and throughput can be adjusted automatically (based
             on volume size) or manually, depending on the capacity pool
             QoS type.

Item No: 2   Qty: 12   Type: Azure Virtual Machine    Vendor: Microsoft
Model/Name:  Standard_D32_v5
Description: Red Hat Enterprise Linux running on Azure D32s_v5 Virtual
             Machines (32 vCPU, 128 GB Memory, 16 Gbps Networking). The
             Dsv5-series virtual machines offer a combination of vCPUs and
             memory to meet the requirements associated with most
             enterprise workloads.

Item No: 3   Qty: 1    Type: Azure Virtual Machine    Vendor: Microsoft
Model/Name:  Standard_D2_v5
Description: (Prime Client) Red Hat Enterprise Linux running on Azure
             D2s_v5 Virtual Machines (2 vCPU, 8 GB Memory, 12.5 Gbps
             Networking).

Configuration Diagrams
======================

1) storage2020-20260227-00147.config1.png (see SPECstorage Solution 2020
   results webpage)

Component Software
==================

Item No: 1   Name and Component: RHEL 9.5   Type: Operating System
Version:     RHEL 9.5 (Kernel 5.14.0-503.38.1.el9_5.x86_64)
Description: Operating System (OS) for the workload clients

Hardware Configuration and Tuning - Virtual
===========================================

+----------------------------------------------------------------------+
|                       Client Network Settings                        |
+----------------------------------------------------------------------+

Parameter Name   Value     Description
---------------  --------  --------------------------------------------
Accelerated      Enabled   Accelerated Networking enables single root
Networking                 I/O virtualization (SR-IOV) on supported
                           virtual machine (VM) types

+----------------------------------------------------------------------+
|                       Storage Network Settings                       |
+----------------------------------------------------------------------+

Parameter Name    Value     Description
----------------  --------  -------------------------------------------
Network features  Standard  Standard network features enable Azure VNet
                            capabilities such as network security
                            groups, user-defined routes, and others

Hardware Configuration and Tuning Notes
---------------------------------------

None

Software Configuration and Tuning - Virtual
===========================================

+----------------------------------------------------------------------+
|                               Clients                                |
+----------------------------------------------------------------------+

Parameter Name                      Value                     Description
----------------------------------  ------------------------  ---------------------------------------
rsize,wsize                         262144                    NFS mount options for data block size
protocol                            tcp                       NFS mount option for the transfer
                                                              protocol
nfsvers                             3                         NFS mount option for the NFS version
nconnect                            8                         NFS mount option for multiple TCP
                                                              connections
actimeo                             600                       NFS mount option to modify the timeouts
                                                              for attribute caching
nocto                               present (boolean)         NFS mount option to turn off
                                                              close-to-open consistency
noatime                             present (boolean)         NFS mount option to turn off access
                                                              time updates
nofile                              102400                    Maximum number of open files per user
nproc                               10240                     Maximum number of processes per user
sunrpc.tcp_slot_table_entries       128                       Sets the number of (TCP) RPC entries to
                                                              pre-allocate for in-flight RPC requests
net.core.wmem_max                   16777216                  Maximum size of the socket send buffer
net.core.rmem_max                   16777216                  Maximum size of the socket receive
                                                              buffer
net.core.wmem_default               1048576                   Default size in bytes of the socket
                                                              send buffer
net.core.rmem_default               1048576                   Default size in bytes of the socket
                                                              receive buffer
net.ipv4.tcp_rmem                   1048576 8388608 33554432  Minimum, default, and maximum size of
                                                              the TCP receive buffer
net.ipv4.tcp_wmem                   1048576 8388608 33554432  Minimum, default, and maximum size of
                                                              the TCP send buffer
net.core.optmem_max                 4194304                   Maximum ancillary buffer size allowed
                                                              per socket
net.core.somaxconn                  65535                     Maximum TCP backlog an application can
                                                              request
net.ipv4.tcp_mem                    4096 89600 8388608        Memory limits, in 4096-byte pages,
                                                              across all TCP applications: minimum,
                                                              pressure, and maximum
net.ipv4.tcp_window_scaling         1                         Enable TCP window scaling
net.ipv4.tcp_timestamps             0                         Turn off timestamps to reduce
                                                              performance spikes related to timestamp
                                                              generation
net.ipv4.tcp_no_metrics_save        1                         Prevent TCP from caching connection
                                                              metrics on closing connections
net.ipv4.route.flush                1                         Flush the routing cache
net.ipv4.tcp_low_latency            1                         Allow TCP to prefer lower latency
                                                              instead of maximizing network
                                                              throughput
net.ipv4.ip_local_port_range        1024 65000                Defines the local port range used by
                                                              TCP and UDP traffic to choose the local
                                                              port
net.ipv4.tcp_slow_start_after_idle  0                         Congestion window will not be timed out
                                                              after an idle period
net.core.netdev_max_backlog         300000                    Maximum number of packets queued on the
                                                              input side when the interface receives
                                                              packets faster than the kernel can
                                                              process them
net.ipv4.tcp_sack                   0                         Disable TCP selective acknowledgements
net.ipv4.tcp_dsack                  0                         Disable duplicate SACKs
net.ipv4.tcp_fack                   0                         Disable forward acknowledgement
vm.dirty_expire_centisecs           30000                     Defines when dirty data is old enough
                                                              to be eligible for writeout by the
                                                              kernel flusher threads. Unit is 100ths
                                                              of a second.
vm.dirty_writeback_centisecs        30000                     Defines the time interval between
                                                              periodic wake-ups of the kernel threads
                                                              responsible for writing dirty data to
                                                              disk. Unit is 100ths of a second.

Software Configuration and Tuning Notes
---------------------------------------

The client parameters shown above were tuned, for communication between
clients and storage over Azure Virtual Networking, to optimize data
transfer and minimize overhead.

Service SLA Notes
-----------------

See the Service Level Agreement (SLA) for Azure NetApp Files.

Storage and Filesystems
=======================

Item No: 1   Qty: 1   Stable Storage: Stable Storage
Description: Azure NetApp Files large volume, Flexible Service Level, 50
             TiB, 21504 MiB/s
Data Protection:
             Azure NetApp Files Flexible, Standard, Premium, and Ultra
             service levels are built on a fault-tolerant bare-metal
             fleet powered by ONTAP, delivering enterprise-grade
             resilience, and use RAID-DP (Double Parity RAID) to
             safeguard data against disk failures. This mechanism
             distributes parity across multiple disks, enabling seamless
             data recovery even if two disks fail simultaneously. RAID-DP
             has a long-standing presence in the enterprise storage
             industry and is recognized for its proven reliability and
             fault tolerance.

Number of Filesystems    1
Total Capacity           50 TiB
Filesystem Type          Azure NetApp Files large volume

Filesystem Creation Notes
-------------------------

Large volumes were created via the public Azure API using the Azure CLI
tool. The creation commands are documented here:
https://learn.microsoft.com/en-us/cli/azure/netappfiles/volume?view=azure-cli-latest#az-netappfiles-volume-create
Creating the Azure NetApp Files Account:

    az netappfiles account create --account-name [account-name] \
        --resource-group [resource-group] --location [location]
Creating the Azure NetApp Files Capacity Pool:

    az netappfiles pool create --account-name [account-name] \
        --resource-group [resource-group] --location [location] \
        --pool-name [pool-name] --service-level Flexible \
        --size 54975581388800 --custom-throughput-mibps 12800
Creating the Azure NetApp Files Volume:

    az netappfiles volume create --resource-group [resource-group] \
        --account-name [account-name] --location [location] \
        --pool-name [pool-name] --name [volume-name] \
        --usage-threshold 51200 --file-path [mount-point] \
        --protocol-types NFSv3 --vnet [vnet-id] --zones 1 \
        --throughput-mibps 21504 --breakthrough-mode true
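As a unit sanity check on the commands above: the pool --size is given in
bytes, while the volume --usage-threshold is given in GiB; both correspond
to the 50 TiB capacity of the tested volume. A quick arithmetic check
(plain bash, no Azure resources required):

```shell
# Both sizes equal 50 TiB:
#   pool --size:              54975581388800 bytes
#   volume --usage-threshold: 51200 GiB
echo $((50 * 1024 ** 4))      # 54975581388800 (50 TiB in bytes)
echo $((51200 * 1024 ** 3))   # 54975581388800 (51200 GiB in bytes)
```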
Storage and Filesystem Notes
----------------------------

n/a

Transport Configuration - Virtual
=================================

Item No: 1   Transport Type: 16 Gbps Virtual NIC     Number of Ports Used: 12
Notes:       Each workload Linux virtual machine has a single network
             adapter with Accelerated Networking enabled.

Item No: 2   Transport Type: 12.5 Gbps Virtual NIC   Number of Ports Used: 1
Notes:       The Prime client has a single network adapter with
             Accelerated Networking enabled.

Transport Configuration Notes
-----------------------------

The allocated bandwidth of an Azure virtual machine limits egress
(outbound) traffic from the virtual machine. Ingress bandwidth may exceed
16 Gbps, depending on other resources available to the virtual machine
(https://learn.microsoft.com/en-us/azure/virtual-network/virtual-machine-network-throughput).

Switches - Virtual
==================

Item No: 1   Switch Name: Azure Virtual Network - Germany West Central
             Switch Type: Virtual Network
             Total Port Count: 18   Used Port Count: 18
Notes:       The Azure virtual network had 6 connections for the Azure
             NetApp Files storage endpoints and 12 (1 per) RHEL client.
             Azure virtual networks allow up to 65,536 network interface
             cards and private IP addresses per virtual network. This
             Azure VNet was peered to the VNet in Canada Central to
             communicate with the Prime client.

Item No: 2   Switch Name: Azure Virtual Network - Canada Central
             Switch Type: Virtual Network
             Total Port Count: 1    Used Port Count: 1
Notes:       This Azure VNet was peered to all other VNets to allow Prime
             client communication between itself and the workload clients.

Processing Elements - Virtual
=============================

Item No: 1   Qty: 384   Type: vCPU   Location: Azure Cloud
Description: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (32 cores
             allocated to each workload VM)
Processing Function: Client Workload Generator

Item No: 2   Qty: 2     Type: vCPU   Location: Azure Cloud
Description: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (2 cores
             allocated to the Prime VM)
Processing Function: Prime Client

Processing Element Notes
------------------------

n/a

Memory - Virtual
================

                           Size in   Number of
Description                  GiB     Instances   Nonvolatile   Total GiB
-------------------------  --------  ----------  ------------  ----------
Client Workload Generator       128          12  V                   1536
Prime Client                      8           1  V                      8

Grand Total Memory Gibibytes                                         1544

Memory Notes
------------

None

Stable Storage
==============

Azure NetApp Files uses the non-volatile, battery-backed memory of two
independent nodes as a write cache prior to write acknowledgement. This
protects the filesystem from any single point of failure until the data
is de-staged to disk. In the event of an abrupt failure, pending data in
the non-volatile, battery-backed memory is replayed to disk upon
restoration.

Solution Under Test Configuration Notes
=======================================

Each client accessed the Azure NetApp Files large volume over a single
storage endpoint; clients were assigned round robin across the six
available endpoints.
Unlike a general-purpose operating system, Azure NetApp Files does not
provide mechanisms for customers to run third-party code
(https://learn.microsoft.com/en-us/security/benchmark/azure/baselines/azure-netapp-files-security-baseline?toc=/azure/azure-netapp-files/TOC.json#security-profile).
Azure Resource Manager allows only an allow-listed set of operations to be
executed via the Azure APIs
(https://learn.microsoft.com/en-us/azure/azure-netapp-files/control-plane-security).
The underlying Azure infrastructure was patched for Spectre/Meltdown on or
before January 2018
(https://azure.microsoft.com/en-us/blog/securing-azure-customers-from-cpu-vulnerability/
and https://learn.microsoft.com/en-us/azure/virtual-machines/mitigate-spectre).

Other Solution Notes
====================

None

Dataflow
========

12 clients were used to generate the workload, with 1 prime client
running in Canada Central.
Each workload client used one 16 Gbps virtual network adapter, connected
through a single VNet to one Azure NetApp Files endpoint. Each client
mounted the ANF large volume as an NFSv3 filesystem within the same Azure
region.
Breakthrough mode enables six network endpoints per volume. The clients were round robin assigned to one of the six endpoints (2 clients per endpoint).
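That round-robin assignment can be sketched as follows. This is an
illustrative sketch only: all endpoint IP addresses it prints, other than
10.254.171.4, are hypothetical placeholders, not the addresses actually
used in the test.

```shell
# Illustrative round-robin mapping of 12 workload clients onto 6 storage
# endpoints (2 clients per endpoint). Endpoint IPs are hypothetical
# placeholders.
for i in $(seq 0 11); do
    printf 'client%02d -> endpoint 10.254.171.%d\n' $((i + 1)) $((4 + i % 6))
done
```

With 12 clients and 6 endpoints, the modulo wraps exactly twice, so every
endpoint serves exactly two clients.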
The Prime client used virtual network peering to communicate with the
workload clients, which resided outside the virtual network it was
connected to.
Client-to-storage traffic was contained within the virtual network
created for each region/availability zone.

Other Notes
===========

There is 1 mount per client. Example mount commands from one client are
shown below.

/etc/fstab entry:
10.254.171.4:/germany-westcentral-vol /mnt/eda nfs hard,proto=tcp,vers=3,rsize=262144,wsize=262144,nconnect=8,nocto,noatime,actimeo=600 0 0
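With that /etc/fstab entry in place, the volume can be mounted by its
mount point alone; a minimal sketch (assuming the mount point from the
entry above, and a reachable NFS endpoint):

```shell
# Create the mount point and mount it; 'mount /mnt/eda' reads the NFS
# options (vers=3, nconnect=8, nocto, etc.) from the /etc/fstab entry.
sudo mkdir -p /mnt/eda
sudo mount /mnt/eda
```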
mount | grep eda
10.254.171.4:/germany-westcentral-vol on /mnt/eda type nfs (rw,noatime,vers=3,rsize=262144,wsize=262144,namlen=255,acregmin=600,acregmax=600,acdirmin=600,acdirmax=600,hard,nocto,proto=tcp,nconnect=8,timeo=600,retrans=2,sec=sys,mountaddr=10.254.171.4,mountvers=3,mountport=635,mountproto=tcp,local_lock=none,addr=10.254.171.4)

Other Report Notes
==================

None

===============================================================================

Generated on Tue Mar 17 13:19:49 2026 by SpecReport
Copyright (C) 2016-2026 Standard Performance Evaluation Corporation