Windows2016


Windows Server 2016 - Storage Spaces Direct - Hyper-converged - Hardware used

Compute:

  • 2028TP-HC1R
  • 4x BPN-ADP-6SATA3P (changed out the 3108 BPNs since Storage Spaces Direct will NOT work with RAID cards)
  • Intel Xeon E5-2680 V3
  • DDR4 2133MHz
  • 4x NICs: Mellanox ConnectX-3 single port
  • 4x OS drive: Samsung SSD 256GB SATA
  • 4x Cache drive: SanDisk 128GB SATA
  • 16x Hard drives: 250GB Seagate SATA hard drive

Networking:

  • Supermicro SSE-G24-TG4 (1GbE RJ-45 24 port)
  • Supermicro (10GbE SFP+ 24 port)

Windows Server 2016 - Storage Spaces Direct - Hyper-converged install

  • Install Windows Server 2016 Datacenter, either with the GUI or without. In this example I installed with the GUI, but apparently Server Core works without issue; I am not sure about Nano Server. (Storage Spaces Direct is a Datacenter feature, so it will not work or be licensed on anything less, for example Standard.)
  • Install hardware drivers and fully update the OS. Make sure every node is at the same update level, otherwise cluster manager will complain. The vendor network drivers are also required to get RDMA working (Storage Spaces Direct cannot function without RDMA).
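As a quick sanity check (this is not one of the original steps), you can confirm the adapters actually report RDMA once the vendor drivers are installed:

Get-NetAdapterRdma | Format-Table Name, InterfaceDescription, Enabled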
  • A tagged VLAN is required for RDMA. In my example I set VLAN 12 on the Supermicro switch, tagged, on every port that needs it. You can do this in the GUI or CLI of the switch.
  • Every node will need certain services. These can be installed using:
Install-WindowsFeature -Name File-Services
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
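If you would rather push these features to all nodes at once from a single machine, something along these lines should work (a sketch only; it assumes PowerShell remoting is available to the nodes and uses the node names from later in this guide):

Invoke-Command -ComputerName Win16-nodea, Win16-nodeb, Win16-nodec, Win16-noded -ScriptBlock {
    Install-WindowsFeature -Name File-Services, Failover-Clustering, Hyper-V -IncludeManagementTools -Restart
}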
  • Add all the nodes you wish to use for Storage Spaces Direct to your Active Directory domain. In my example I made a new 2016 AD server with a new forest, "win16.local", since I could never get the cluster join working on "bostonlabs.co.uk"; this could be down to bostonlabs being a 2008 R2 AD.

  • Verify that the internal drives are online by going to Server Manager > Tools > Computer Management > Disk Management. If any are offline, right-click the drive and click Online. Alternatively, PowerShell can be used to bring all of the drives in each host online with a single command.
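The single command referred to above would be something like the following, run on each node; it simply flips any offline disk back to online:

Get-Disk | Where-Object IsOffline -eq $true | Set-Disk -IsOffline $false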

  • For the Mellanox NICs used in this solution, we need to enable Data Center Bridging (DCB), which is required for RDMA. Then we create a policy to establish network Quality of Service (QoS) to ensure that the Software Defined Storage system has enough bandwidth to communicate between the nodes, ensuring resiliency and performance. We also need to disable regular Flow Control (Global Pause) on the Mellanox adapters, since Priority Flow Control (PFC) and Global Pause cannot operate together on the same interface.


Enable Data Center Bridging (required for RDMA)

Install-WindowsFeature -Name Data-Center-Bridging

Configure a QoS policy for SMB-Direct

New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

Turn on Flow Control for SMB

Enable-NetQosFlowControl -Priority 3

Make sure flow control is off for other traffic

Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

Apply a Quality of Service (QoS) policy to the target adapters. In my demo I only used one NIC, named in Windows as "Mellanox1", so to use these commands successfully make sure the name you are stating is actually the one applied to the NIC in Windows.

Enable-NetAdapterQos -Name "Mellanox1"(,"examplenic2")

Give SMB Direct a minimum bandwidth of 50%

New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

Disable Flow Control on physical adapters. If you have two or more ports then just replicate this line and change the name of the NIC targeted.

Set-NetAdapterAdvancedProperty -Name "Mellanox1" -RegistryKeyword "*FlowControl" -RegistryValue 0

For an S2D hyperconverged solution, we deploy a SET-enabled Hyper-V switch and add RDMA-enabled host virtual NICs to it for use by Hyper-V. Since many switches won't pass traffic class information on untagged vLAN traffic, we need to make sure that the vNICs using RDMA are on vLANs.

To keep this hyperconverged solution as simple as possible, and since we are using a single-port 10GbE NIC, we will pass all traffic on vLAN 12. If you need to segment your network traffic more, for example to isolate VM Live Migration traffic, you can use additional vLANs. The PowerShell commands below perform the SET configuration, enable RDMA, and assign vLANs to the vNICs. These steps are necessary only for configuring a hyperconverged solution.

For a disaggregated solution these steps can be skipped, since Hyper-V is not enabled on the S2D storage nodes.


Create a SET-enabled vSwitch supporting multiple uplinks provided by the Mellanox adapter

New-VMSwitch -Name S2DSwitch -NetAdapterName "Mellanox 1", "Mellanox 2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

Add host vNICs to the vSwitch just created. Add another line here for any more SMB vNICs you create; you would normally create one per physical NIC port.

Add-VMNetworkAdapter -SwitchName S2DSwitch -Name SMB1 -ManagementOS

Enable RDMA on the vNICs just created. The second SMB2 is in brackets since I only have one 10GbE NIC in my demo, so I removed the second SMB from the command when I applied it. If you have more than one SMB vNIC, add them as shown in the brackets below.

Enable-NetAdapterRDMA -Name "vEthernet (SMB1)"(,"vEthernet (SMB2)")

Assign the vNICs to a vLAN. You will need to assign per SMB vNIC, so if you have applied more than one as above, copy the line below and change the adapter name.

Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB1 -VlanId 12 -Access -ManagementOS
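Before moving on it is worth a quick check that the vNIC picked up the vLAN and still reports RDMA, for example:

Get-VMNetworkAdapterVlan -ManagementOS
Get-NetAdapterRdma -Name "vEthernet (SMB1)"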

Now set up the networking IPs for the created vEthernet "SMB1" and team the 1GbE network ports if you have multiple. Each node has to have a unique DNS name.
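As a rough sketch of that step (the IP address, team name and 1GbE adapter names below are placeholders for whatever your hosts actually use):

New-NetIPAddress -InterfaceAlias "vEthernet (SMB1)" -IPAddress 192.168.12.11 -PrefixLength 24
New-NetLbfoTeam -Name "Mgmt-Team" -TeamMembers "1GbE 1", "1GbE 2" -TeamingMode SwitchIndependent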

DNS names used in my demo (win16.local is the Active Directory domain name):

Win16-nodea.win16.local
Win16-nodeb.win16.local 
Win16-nodec.win16.local 
Win16-noded.win16.local

In my example I had 4 nodes, 4x 10GbE links and 8x 1GbE links. The DNS and AD server in my example is a VM I created specifically for this.

The 10GbE network does not need to be externally routable, as long as each Windows storage server can see the others.

Example:

  • Win16-nodea.win16.local