Introduction

This article presents real-world feedback on recently deployed hyper-converged clusters running Windows Server 2016 Datacenter with Hyper-V and Storage Spaces Direct, built as 2-node clusters for SMB (Small and Medium Business) needs. With this type of configuration, customers gain access to a new level of performance while also limiting investment, administration, and support costs.

Talking about performance, the results are really impressive, even for entry-level clusters running on HPE Gen9 servers.

  • The first configuration is based on Full Flash SSD storage, while the second uses SSD storage plus an NVMe cache.
  • Given the customer's network constraints (no DCB protocol was available on the existing network switches), Chelsio T520-CR dual-port 10 GbE unified adapters, which provide high-performance iWARP RDMA, were used for the storage network (a quick RDMA verification sketch follows this list). I will come back soon with results on Mellanox ConnectX-4 adapters using the RoCE protocol in a future project.

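Before benchmarking, it is worth confirming that RDMA is actually active end to end. Here is a minimal verification sketch using only standard inbox PowerShell cmdlets; no custom names are assumed:

    # Confirm the adapters expose RDMA (iWARP) to the OS
    Get-NetAdapterRdma | Where-Object Enabled

    # Confirm SMB sees the interfaces as RDMA-capable
    Get-SmbClientNetworkInterface | Where-Object RdmaCapable

    # While a storage workload is running, confirm SMB Direct is in use
    Get-SmbMultichannelConnection | Select-Object ServerName, ClientRdmaCapable, ServerRdmaCapable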
Hardware Platform #1: “Full SSD”

Hardware description

The characteristics of this 2-node cluster are listed below:

  • 2 x HPE DL380 Gen9 servers running Windows Server 2016 Datacenter. On each node:
    • 2 x Intel Xeon E5-2699 v3 CPUs / 2.30 GHz / 18 cores / L1 cache 2.3 MB / L2 cache 9 MB / L3 cache 90 MB
    • 256 GB RAM
    • Converged network with 2 x 10 Gb/s links using LACP teaming in Dynamic load-balancing mode for the Hyper-V infrastructure, with VLANs for Management, Live Migration, Cluster, and VM traffic (a configuration sketch follows this list).
    • Dedicated RDMA storage network with 2 x 10 Gb/s links via the Chelsio T520 adapters and a dedicated VLAN.
    • S2D capacity tier (14.4 TB) with 9 x HPE 1.6 TB 6Gb SATA SSDs.
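For reference, here is a minimal PowerShell sketch of how such a converged team can be built. The team, switch, and vNIC names and the VLAN ID are examples, not the customer's actual values:

    # Switch-dependent LACP team with Dynamic load balancing (NIC names are examples)
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

    # Hyper-V switch on top of the team, then one host vNIC per traffic class
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

    # Tag each vNIC with its VLAN (example ID)
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20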

About this S2D configuration

In this configuration, the customer needs performance, but at a limited cost.
So, to meet this requirement, we proposed Full Flash S2D storage based on SSD-only disks. In this configuration, S2D does not need dedicated NVMe disks for an ultra-high-performance cache (a minimal enablement sketch follows the diagram below).

[Figure: SSD-only]
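Enabling S2D on such a cluster takes only a few commands. This is a minimal sketch; the cluster, node, witness, and volume names are examples:

    # Create the cluster without shared storage, then add a witness (required for a 2-node cluster)
    New-Cluster -Name "S2D-CL01" -Node "NODE1","NODE2" -NoStorage
    Set-ClusterQuorum -FileShareWitness "\\witness-server\share"

    # Enable Storage Spaces Direct; with a single media type (SSD only), no cache tier is configured
    Enable-ClusterStorageSpacesDirect

    # On a 2-node cluster, volumes use two-way mirror resiliency
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -Size 2TB -ResiliencySettingName Mirror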

Performance Results with Full Flash SSD

Bench #1 with 4K blocks – 100% Read: 136000 IOPS 

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w0 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport
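For readers new to DiskSpd, here is the same command annotated flag by flag (per the DiskSpd documentation), together with the health report query used to read the cluster-wide numbers:

    # -b4K     4 KiB block size
    # -d60     run for 60 seconds
    # -h       disable software caching and hardware write caching
    # -L       capture latency statistics
    # -o2      2 outstanding I/Os per thread per target
    # -t200    200 threads per target
    # -r       random I/O
    # -w0      write percentage (0 = 100% read; later benches use -w30, -w50, -w70, -w100)
    # -c2000M  create a 2000 MB test file
    Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w0 -c2000M d:\io.dat

    # Cluster-wide view of IOPS, throughput, and latency while the test runs
    Get-StorageSubSystem Cluster* | Get-StorageHealthReport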

[Figure: S2D_Perf_SSD_100R_W0]

Bench #2 with 4K blocks – 70% Read: 106000 IOPS / 30% Write: 45100 IOPS for 151500 total IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w30 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_70R_W30]

Bench #3 with 4K blocks – 50% Read: 64000 IOPS / 50% Write: 62500 IOPS for 126500 total IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w50 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_50R_W50]

Bench #4 with 4K blocks – 30% Read: 32500 IOPS / 70% Write: 72400 IOPS for 105000 total IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w70 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_30R_W70]

Bench #5 with 4K blocks – 100% Write: 90200 total IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w100 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_0R_W100]

Hardware Platform #2: “NVMe cache and SSD capacity tier”

Hardware description

The characteristics of this 2-node cluster are listed below:

  • 2 x HPE DL380 Gen9 servers running Windows Server 2016 Datacenter. On each node:
    • 2 x Intel Xeon E5-2699 v3 CPUs / 2.30 GHz / 18 cores / L1 cache 2.3 MB / L2 cache 9 MB / L3 cache 90 MB
    • 256 GB RAM
    • Converged network with 2 x 10 Gb/s links using LACP teaming in Dynamic load-balancing mode for the Hyper-V infrastructure, with VLANs for Management, Live Migration, Cluster, and VM traffic.
    • Dedicated RDMA storage network with 2 x 10 Gb/s links via the Chelsio T520 adapters and a dedicated VLAN.
    • S2D cache tier (1.6 TB) with 2 x HP 800 GB NVMe MU HH PCIe SSDs.
    • S2D capacity tier (4.8 TB) with 6 x HP 800 GB 6G SATA MU-2 SFF SC SSDs.

About this S2D configuration

Because this customer needs maximum performance for multiple Oracle VMs running Linux, we proposed storage based on SSD disks plus 2 additional NVMe disks per server for high-performance caching (a quick cache verification sketch follows the diagram below).

[Figure: NVME-SSD]
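A quick way to confirm that S2D has claimed the NVMe devices for cache is to look at the Usage property of the physical disks: cache drives report Journal, while capacity drives report Auto-Select. A minimal check:

    # Cache devices (NVMe) should show Usage = Journal
    Get-StorageSubSystem Cluster* | Get-PhysicalDisk |
        Select-Object FriendlyName, MediaType, BusType, Usage |
        Sort-Object Usage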

Performance Results with Full Flash SSD + NVMe cache

Bench #1 with 4K blocks – 100% Read: 177700 IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w0 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_NVMe_100R_W0]

Bench #2 with 4K blocks – 70% Read: 132000 IOPS / 30% Write: 56300 IOPS for 177700 total IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w30 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_NVMe_70R_W30]

Bench #3 with 4K blocks – 50% Read: 73500 IOPS / 50% Write: 70600 IOPS for 144200 total IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w50 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_NVMe_50R_W50]

Bench #4 with 4K blocks – 30% Read: 33400 IOPS / 70% Write: 67500 IOPS for 100000 total IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w70 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_NVMe_30R_W70]

Bench #5 with 4K blocks – 100% Write: 82200 IOPS

  • Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r -w100 -c2000M d:\io.dat
  • Get-StorageSubSystem Cluster* | Get-StorageHealthReport

[Figure: S2D_Perf_SSD_NVMe_0R_W100]

About the DiskSpd stress tool

The Microsoft DiskSpd tool is easy to use for simulating workloads, but note that CrystalDiskMark is another nice tool, also based on the Microsoft DiskSpd code (MIT License), for quickly benchmarking bandwidth and IOPS on storage subsystems. A small scripted sweep over the write ratio follows the screenshot below.

[Figure: CrystalDM]
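To reproduce the five benches above without retyping the command, one possible approach is to loop over the write percentages. The target file and output folder below are examples:

    # Run DiskSpd at 0/30/50/70/100% writes and save each report (C:\Bench must exist)
    $target = "d:\io.dat"
    foreach ($w in 0, 30, 50, 70, 100) {
        & Diskspd.exe -b4K -d60 -h -L -o2 -t200 -r "-w$w" -c2000M $target |
            Out-File -FilePath "C:\Bench\diskspd_w$w.txt"
    }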

Summary: Hyper-V + S2D is really a winning HCI solution!

Really, I encourage you to evaluate Windows Server 2016 storage solutions such as S2D (Storage Spaces Direct), without forgetting SR (Storage Replica)!

When comparing results between the two clusters (Full Flash SSD vs. Full Flash SSD plus NVMe cache), expect roughly a 20% gain across all tests, and certainly more when the platform is heavily loaded.

By moving from VMware or Hyper-V 2012 R2 to Windows Server 2016 Datacenter, it becomes easy to switch from a traditional infrastructure to an attractive HCI solution, because the Hyper-V / S2D tandem offers scalability, fault tolerance, high availability, and very strong performance.

Storage management tasks are simple, and maintenance is minimized because it is largely automated and there are no hardware dependencies.

Right now, it is clear that Software Defined Storage is the future, and the future is now!
So, to learn more:

And stay tuned for updates on Windows Server 2016 and the new Windows Server RS3 features…
Jeff/