== Windows 2016: Storage Spaces Direct overview ==

* Unrivaled performance. Whether all-flash or hybrid, Storage Spaces Direct easily exceeds 150,000 mixed 4K random IOPS per server with consistent, low latency thanks to its hypervisor-embedded architecture, its built-in read/write cache, and support for cutting-edge NVMe drives mounted directly on the PCIe bus.
* Fault tolerance. Built-in resiliency handles drive, server, or component failures with continuous availability. Larger deployments can also be configured for chassis and rack fault tolerance. When hardware fails, just swap it out; the software heals itself, with no complicated management steps.
* Resource efficiency. Erasure coding delivers up to 2.4x greater storage efficiency, with unique innovations like Local Reconstruction Codes and ReFS real-time tiers to extend these gains to hard disk drives and mixed hot/cold workloads, all while minimizing CPU consumption to give resources back to where they're needed most - the VMs.
* Manageability. Use Storage QoS controls to keep overly busy VMs in check with minimum and maximum per-VM IOPS limits. The Health Service provides continuous built-in monitoring and alerting, and new APIs make it easy to collect rich, cluster-wide performance and capacity metrics.
* Scalability. Go up to 16 servers and over 400 drives, for multiple petabytes of storage per cluster. To scale out, simply add drives or add more servers; Storage Spaces Direct will automatically onboard new drives and begin using them. Storage efficiency and performance improve predictably at scale.

== Windows 2016: Demo installs ==
*[[Windows2016 | Hyper-converged]]
*[[Windows2016 | Converged]]
| + | |||
| + | == S2D user guide == | ||
| + | |||
| + | === Display storage pools and tiers === | ||
| + | |||
| + | Display all known Storage pools and there health | ||
| + | <syntaxhighlight> | ||
| + | Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -autosize | ||
| + | </syntaxhighlight> | ||
| + | Find out what storage tiers have been created with this command: | ||
| + | <syntaxhighlight> | ||
| + | Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -autosize | ||
| + | </syntaxhighlight> | ||
| + | |||
| + | === Where to find the shared pool? === | ||
| + | |||
| + | All the vols that you will create and the shared storage pool itself can be found on every server in the pool at: | ||
| + | |||
| + | C:\ClusterStorage\ | ||
| + | |||
| + | So if you were to create volumes and name these "test1","test2" | ||
| + | |||
| + | C:\ClusterStorage\test1 | ||
| + | |||
| + | C:\ClusterStorage\test2 | ||
| + | |||
=== Volumes ===

There are three types of volume you can create within the storage pool: mirror, parity, and multi-resilient.

{| class="wikitable"
!
! Mirror
! Parity
! Multi-resilient
|-
| Optimized for
| Performance
| Best use of disk space
| Mix of the previous two
|-
| Use case
| All data is hot
| All data is cold
| Mix of hot and cold
|-
| Storage efficiency
| Least (33%)
| Most (50%+)
| Medium (~50%)
|-
| File system
| NTFS or ReFS
| NTFS or ReFS
| ReFS
|-
| Minimum nodes
| 3
| 4
| 4
|}

You can create volumes with different file systems in the same cluster. This is important since each file system supports a different feature set.

[[ ReFS or NTFS | File systems in S2D volumes ]]

==== Volume types supported based on minimum node count ====

With two servers only: two-way mirroring at 50% efficiency, so for 1 TB of data written you need 2 TB of capacity. It can tolerate one side of the mirror failing.

With three servers only: three-way mirroring at 33.3% efficiency, so for 1 TB of data written you need 3 TB of capacity. It can tolerate two failures (server or hard drive). While you can still use two-way mirroring, three-way is best practice for this number of nodes.

Creating a volume without specifying the resiliency type will result in one being assigned automatically based on the number of nodes in the cluster: with only two nodes it will create a two-way mirror volume, and with three nodes a three-way mirror volume.
<syntaxhighlight lang="powershell">
New-Volume -FriendlyName "Volume1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 1TB
</syntaxhighlight>
| + | |||
| + | Command to specify three way: | ||
| + | <syntaxhighlight> | ||
| + | New-Volume -FriendlyName "Volume2" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 1TB -ResiliencySettingName Mirror | ||
| + | </syntaxhighlight> | ||
| + | |||
| + | With Four servers or more: Dual parity provides the same fault tolerance as three-way mirroring but with better storage efficiency. With four servers, its storage efficiency is 50.0% – to store 2 TB of data, you need 4 TB of physical storage capacity in the storage pool. This increases to 66.7% storage efficiency with seven servers, and continues up to 80.0% storage efficiency. The trade off is that parity encoding is more compute-intensive, which can limit its performance. | ||
| + | |||
| + | <syntaxhighlight> | ||
| + | New-Volume -FriendlyName "Volume2" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 1TB -ResiliencySettingName Mirror | ||
| + | New-Volume -FriendlyName "Volume3" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 1TB -ResiliencySettingName Parity | ||
| + | </syntaxhighlight> | ||
==== Which resiliency model should I choose? ====

Mirroring gives the best performance, so if your workload is performance-sensitive then go for mirroring.

If you need the most capacity possible out of the hardware you have provisioned, then the best option is dual parity.

You can mix volumes to best meet your requirements: for example, a mirror volume for your SQL database and dual-parity volumes for the other VMs on the same cluster. As with everything, it really depends on your requirements!
| + | |||
| + | ==== Volumes can be created across media types as well ==== | ||
| + | |||
| + | So S2D will automatically out the box choose SSD as caching drives for the entire storage pool(s) that have been created on your S2D cluster. | ||
| + | |||
| + | However you can specify the storage media that you wish to use per volume. So whether that be performance (SSD) or capacity (HDD). | ||
| + | |||
| + | First you will need to learn what drives the cluster sees and how many. | ||
| + | |||
| + | This command will show you: | ||
| + | |||
| + | <syntaxhighlight> | ||
| + | Get-StorageTier | Select FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy | ||
| + | </syntaxhighlight> | ||
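If you also want to see the underlying physical drives the cluster can use, and how many there are of each media type, something like the following should work (a sketch; the Cluster* wildcard assumes the default clustered storage subsystem name):
<syntaxhighlight lang="powershell">
# Count the drives visible to the cluster, grouped by media type and size
Get-StorageSubSystem Cluster* | Get-PhysicalDisk |
    Group-Object MediaType, Size |
    Select-Object Count, Name
</syntaxhighlight>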
| + | |||
| + | You can within one volume specify the amount of capacity to sit in both tiers: | ||
| + | <syntaxhighlight> | ||
| + | New-Volume -FriendlyName "Volume4" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 300GB, 700GB | ||
| + | </syntaxhighlight> | ||
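To check how a volume's capacity ended up split across the tiers, you can inspect the tiers of that virtual disk. A minimal sketch, reusing the "Volume4" name from the example above:
<syntaxhighlight lang="powershell">
# Show the per-tier sizes for the tiered volume created above
Get-VirtualDisk -FriendlyName "Volume4" | Get-StorageTier |
    FT FriendlyName, MediaType, Size -AutoSize
</syntaxhighlight>
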
==== Number of volumes and maximum size ====

Microsoft suggests a maximum of 32 volumes per cluster.

A volume can be up to 32 TB in size.

Further information can be found at:

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/plan-volumes

=== Managing your S2D cluster ===

==== Elevating your user privileges ====

I found when running some of the following commands that local admin was not enough, and the PowerShell window needed the same access rights as my domain administrator login (in my example this was win16.local\mh).

I achieved this with:
<syntaxhighlight lang="powershell">
Start-Process powershell.exe -Credential "win16.local\mh" -NoNewWindow -ArgumentList "Start-Process powershell.exe -Verb runAs"
</syntaxhighlight>

When you run the above command (the credential should be whatever your AD administrator login is), a pop-up box should appear for you to type in your password. Once you have done this, it should reopen your PowerShell window (it will still say "Administrator: Windows PowerShell" at the top), but you will now have the same rights as the credentials you supplied - in my case "win16.local\mh", which is my domain administrator user name.

https://blogs.technet.microsoft.com/benshy/2012/06/04/using-a-powershell-script-to-run-as-a-different-user-elevate-the-process/

==== Adding Nodes ====

First, much like when you first deployed the cluster, you will need to test the new server to make sure it has all the updates, networking configuration, etc. required to be added correctly to your cluster:
<syntaxhighlight lang="powershell">
Test-Cluster -Node <Node>, <Node>, <Node>, <NewNode> -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
</syntaxhighlight>
Then, once it is confirmed to meet the requirements, add it (if there is a single storage pool, the new node's storage is added automatically; if not, it will need to be added manually):
<syntaxhighlight lang="powershell">
Add-ClusterNode -Name NewNode
</syntaxhighlight>
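You can then confirm the new node has joined and is up, for example:
<syntaxhighlight lang="powershell">
# All cluster nodes and their state - the new node should show as Up
Get-ClusterNode | FT Name, State -AutoSize
</syntaxhighlight>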
| + | |||
| + | Existing volumes cannot be upgraded to a high number of mirrors or dual parity. If you wish to go from say a two-way mirror to three way since you have added another node. Then all data must be copied off the existing vol. | ||
| + | |||
==== Removing Nodes ====

==== Add Drives ====

When adding new drives, make sure they are online and wiped in Disk Management; if they still have old data on them, S2D will not be able to add them to the storage pool. You can check whether they can be pooled with:

<syntaxhighlight lang="powershell">
Get-PhysicalDisk | Select SerialNumber, CanPool, CannotPoolReason
</syntaxhighlight>

If the drives you have added can be pooled and you have a single storage pool, they will be added automatically.

If you have more than one storage pool then you will need to add them yourself (replace the subsystem name below, which is from the original example, with your own cluster's storage subsystem):
<syntaxhighlight lang="powershell">
Add-PhysicalDisk -PhysicalDisks (Get-StorageSubSystem -Name savtstfcspcdir.savilltech.net | Get-PhysicalDisk |? CanPool -ne $false) -StoragePoolFriendlyName S2D*
</syntaxhighlight>

==== Remove Drives? ====

==== Finding drive firmware versions and updating firmware as required ====

To list the firmware versions of all drives in a given 2016 server node:
<syntaxhighlight lang="powershell">
Get-PhysicalDisk | Get-StorageFirmwareInformation
</syntaxhighlight>
Get the firmware from the drive manufacturer and use the command below to flash it. This is just an example, so change -ImagePath to point to the firmware file you downloaded and change the slot number based on which drive is in which slot.
<syntaxhighlight lang="powershell">
Update-StorageFirmware -ImagePath C:\Firmware\J3E160@3.enc -SlotNumber 0
</syntaxhighlight>

For more information see:

https://technet.microsoft.com/en-us/windows-server-docs/storage/update-firmware

==== Extending volumes in size ====

Make sure the storage pool that the volume you wish to extend sits in has enough free capacity.

You cannot make a volume larger than the available capacity of the parent pool.

See: https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/resize-volumes

=== Testing S2D with VMFleet ===

A Microsoft engineer has published a set of PowerShell scripts (VM Fleet) for testing S2D.

It creates a number of worker VMs and places them across all the hosts in the cluster.

It then lets you assign IO jobs to the cluster and see what IOPS you get.

https://blogs.technet.microsoft.com/larryexchange/2016/08/17/leverage-vm-fleet-testing-the-performance-of-storage-space-direct/

https://blogs.technet.microsoft.com/larryexchange/2016/09/11/the-power-of-rdma-in-storage-space-direct-2/