Windows 2016
Windows Server 2016 - Storage Spaces Direct - Hyper-converged - Hardware used
Compute
- 2028TP-HC1R
- 4x BPN-ADP-6SATA3P (changed out the 3108 BPNs since Storage Spaces Direct will NOT work with RAID cards)
- Intel Xeon E5-2680 V3
- DDR4 2133MHz
- 4x NICs: Mellanox ConnectX-3 single port
- 4x OS drive: Samsung SSD 256GB SATA
- 4x Cache drive: Sandisk 128GB SATA
- 16x Hard drives: 250GB Seagate SATA hard drive
Networking
- Supermicro SSE-G24-TG4 (1Gbe RJ-45 24 port)
- Supermicro (10Gbe SFP+ 24 port)
Windows Server 2016 - Storage Spaces Direct - Issues/Requirements
Software
- Storage Spaces Direct is a licensed feature set ONLY available in the Datacenter edition, so the Standard and Storage editions CANNOT be used (a quick way to check a node's installed edition is shown after this list).
- You can set up Storage Spaces Direct either with the GUI desktop experience or with Server Core.
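If you are not sure which edition a node is running, a quick check from PowerShell (just a sketch, not part of the original setup) is:
# Shows the installed edition; it needs to report Windows Server 2016 Datacenter for Storage Spaces Direct
(Get-CimInstance Win32_OperatingSystem).Caption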
Compute Hardware
- Do NOT use RAID controllers: the Storage Spaces Direct configuration checks for this and will stop setup on a RAID array. It has to detect the media type (HDD, SSD, NVMe, SATA, SAS) itself to function correctly. You can use standard onboard SATA or a SAS HBA (a quick PowerShell disk check is shown after this list). I learnt this the hard way and had to change the hardware used.
- Minimum disk requirements per node: at least 1 OS disk, 1 caching disk and 1 capacity disk.
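As a sanity check on each node (a rough sketch, not from the original guide), PowerShell can confirm what media and bus types Windows actually sees and whether the disks are eligible for pooling:
# MediaType should show HDD/SSD and BusType should show SATA/SAS/NVMe (not RAID), with CanPool = True for the data disks
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, BusType, CanPool, Size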
Networking
- Requires both 1Gbe and 10Gbe.
- The 10Gbe NICs must support RDMA. This cannot be worked around: if your cards do not support RDMA, find some that do (a quick PowerShell check to confirm RDMA support is shown after this list).
- The demo system used one 1Gbe switch and one 10Gbe switch. For production I would suggest two switches each (10Gbe and 1Gbe) for HA.
- The demo system used Mellanox ConnectX-3 single port network cards, solely down to RDMA support and availability in the lab. Again, I would use dual port network cards for production to allow for redundant links/load balancing.
- The 10Gbe switch must support tagged VLANs if using one switch. If using two, then LACP support will be required to allow for an ISL and port channels across ports A/B.
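Once Windows is installed on a node you can quickly confirm whether the NICs and their drivers actually expose RDMA. This is just a sketch of the check and makes no assumptions about adapter names:
# Lists adapters with their RDMA state; no output means no RDMA-capable NICs were found
Get-NetAdapterRdma
# SMB's view of the same thing; RdmaCapable should be True for the 10Gbe interfaces
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable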
Windows Server 2016 - Storage Spaces Direct - Hyper-converged install
- Install Windows Server 2016 Datacenter, either with the GUI or without. In this example I installed with the GUI, but apparently you can use Server Core without issue. I am not sure if you can use Nano Server. (Storage Spaces Direct is a Datacenter feature; it will not work or be licensed in anything less, for example Standard.)
- Install hardware drivers and fully update the OS. Make sure every node has the same update level, otherwise Cluster Manager will complain. Network drivers will also be required to get RDMA working (Storage Spaces Direct cannot function without RDMA).
- A tagged VLAN will be required for RDMA. For my example I set a VLAN of 12 on the Supermicro switch on every port where it is required, and this is set to tagged. You can do this in the GUI or CLI of the switch.
- Every node will need certain roles and features. These can be installed using:
Install-WindowsFeature -Name File-Services
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
- Add all the nodes you wish to use for Storage Spaces Direct to your Active Directory domain. In my example I made a new 2016 AD server with a new forest, "win16.local", since I could never get adding to the cluster working on "bostonlabs.co.uk". This could be down to bostonlabs being a 2008 R2 AD.
- Verify that the internal drives are online by going to Server Manager > Tools > Computer Management > Disk Management. If any are offline, select the drive, right-click it, and click Online. Alternatively, PowerShell can be used to bring all of the drives in each host online with a single command (sketched below).
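A sketch of that single command (run on each node), assuming every offline disk should come online:
# Bring any offline disks online, leaving disks that are already online alone
Get-Disk | Where-Object IsOffline -Eq $True | Set-Disk -IsOffline $False
# Optionally also clear the read-only flag on any disks that have it set
Get-Disk | Where-Object IsReadOnly -Eq $True | Set-Disk -IsReadOnly $False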
- For the Mellanox NICs used in this solution, we need to enable Data Center Bridging (DCB),
which is required for RDMA. Then we create a policy to establish network Quality of Service (QoS) to ensure that the Software Defined Storage system has enough bandwidth to communicate between the nodes, ensuring resiliency and performance. We also need to disable regular Flow Control (Global Pause) on the Mellanox adapters, since Priority Flow Control (PFC) and Global Pause cannot operate together on the same interface.
Enable Data Center Bridging (required for RDMA)
Install-WindowsFeature -Name Data-Center-Bridging
Configure a QoS policy for SMB Direct
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Turn on flow control for SMB
Enable-NetQosFlowControl -Priority 3
Make sure flow control is off for other traffic
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
Apply a Quality of Service (QoS) policy to the target adapters. In my demo I only used one NIC, named "Mellanox1" in Windows, so to use these commands successfully make sure the name you are stating is actually the one applied to the NIC in Windows
Enable-NetAdapterQos -Name "Mellanox1"(,"examplenic2")
Give SMB Direct a minimum bandwidth of 50%
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Disable flow control (Global Pause) on the physical adapters. If you have two or more ports then just replicate this line and change the name of the NIC targeted
Set-NetAdapterAdvancedProperty -Name "Mellanox1" -RegistryKeyword "*FlowControl" -RegistryValue 0
For an S2D hyperconverged solution, we deploy a SET-enabled Hyper-V switch and add RDMA-enabled host virtual NICs to it for use by Hyper-V. Since many switches won't pass traffic class information on untagged VLAN traffic, we need to make sure that the vNICs using RDMA are on VLANs.
To keep this hyperconverged solution as simple as possible, and since we are using single port 10Gbe NICs, we will pass all traffic on VLAN 12. If you need to segment your network traffic more, for example to isolate VM Live Migration traffic, you can use additional VLANs. The PowerShell commands below perform the SET configuration, enable RDMA, and assign VLANs to the vNICs. These steps are necessary only for configuring a hyperconverged solution. For a disaggregated solution these steps can be skipped, since Hyper-V is not enabled on the S2D storage nodes.
Create a SET-enabled vSwitch supporting multiple uplinks provided by the Mellanox adapters
New-VMSwitch -Name S2DSwitch -NetAdapterName "Mellanox 1", "Mellanox 2" -EnableEmbeddedTeaming $true -AllowManagementOS $false
Add host vNICs to the vSwitch just created. Add another line here for any more SMB vNICs created; you would normally have more than one (e.g. SMB1 and SMB2), but I only used one in this demo
Add-VMNetworkAdapter -SwitchName S2DSwitch -Name SMB1 -ManagementOS
Enable RDMA on the vNICs just created. The second SMB2 is in brackets since I only have one 10Gbe NIC in my demo, so I removed the second SMB from the command when I applied it. If you have more than one SMB vNIC then you will need to add these as shown in the brackets below
Enable-NetAdapterRDMA -Name "vEthernet (SMB1)"(,"vEthernet (SMB2)")
Assign the vNICs to a VLAN. You will need to assign per SMB vNIC, so if you have applied more than one as above then copy the line below and change the adapter name
Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB1 -VlanId 12 -Access -ManagementOS
Now set up networking IPs for the created vEthernet "SMB1" and team the 1Gbe network ports if you have multiple (a PowerShell sketch follows the address examples below). Each node has to have a unique DNS name.
DNS names used in my demo (win16.local is the Active Directory domain name):
Win16-nodea.win16.local
Win16-nodeb.win16.local
Win16-nodec.win16.local
Win16-noded.win16.local
In my example I had 4 nodes, 4x 10Gbe links and 8x 1Gbe links. The DNS and AD server in my example is a VM I created specifically for this.
The 10Gbe network does not need to be externally routable, as long as each Windows storage server can see the others.
1Gbe Example:
Win16-nodea: 1Gbe (2x 1Gbe teamed): 10.7.7.35, 255.0.0.0, Gateway: 10.0.0.3, DNS: 10.7.7.7
Win16-nodeb: 1Gbe (2x 1Gbe teamed): 10.7.7.36, 255.0.0.0, Gateway: 10.0.0.3, DNS: 10.7.7.7
Win16-nodec: 1Gbe (2x 1Gbe teamed): 10.7.7.37, 255.0.0.0, Gateway: 10.0.0.3, DNS: 10.7.7.7
Win16-noded: 1Gbe (2x 1Gbe teamed): 10.7.7.38, 255.0.0.0, Gateway: 10.0.0.3, DNS: 10.7.7.7
10Gbe Example:
Win16-nodea: 10Gbe: 192.168.0.5, 255.255.255.0
Win16-nodeb: 10Gbe: 192.168.0.6, 255.255.255.0
Win16-nodec: 10Gbe: 192.168.0.7, 255.255.255.0
Win16-noded: 10Gbe: 192.168.0.8, 255.255.255.0
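A minimal sketch of applying the node A addressing above from PowerShell. The 1Gbe port names ("1GbE-A", "1GbE-B") and the team name are examples only, so adjust them to whatever Windows calls yours:
# Team the two 1Gbe ports (example member names) and give the team the management address
New-NetLbfoTeam -Name "Team1G" -TeamMembers "1GbE-A", "1GbE-B" -TeamingMode SwitchIndependent
New-NetIPAddress -InterfaceAlias "Team1G" -IPAddress 10.7.7.35 -PrefixLength 8 -DefaultGateway 10.0.0.3
Set-DnsClientServerAddress -InterfaceAlias "Team1G" -ServerAddresses 10.7.7.7
# Storage address on the RDMA vNIC created earlier; no gateway needed, it only has to reach the other nodes
New-NetIPAddress -InterfaceAlias "vEthernet (SMB1)" -IPAddress 192.168.0.5 -PrefixLength 24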
References
Main guide used: https://lenovopress.com/lp0064.pdf