== Windows 2016: Storage Spaces Direct overview ==
* Unrivaled Performance. Whether all-flash or hybrid, Storage Spaces Direct easily exceeds 150,000 mixed 4K random IOPS per server with consistent, low latency thanks to its hypervisor-embedded architecture, its built-in read/write cache, and support for cutting-edge NVMe drives mounted directly on the PCIe bus.
* Fault Tolerance. Built-in resiliency handles drive, server, or component failures with continuous availability. Larger deployments can also be configured for chassis and rack fault tolerance. When hardware fails, just swap it out; the software heals itself, with no complicated management steps.
* Resource Efficiency. Erasure coding delivers up to 2.4x greater storage efficiency, with unique innovations like Local Reconstruction Codes and ReFS real-time tiers to extend these gains to hard disk drives and mixed hot/cold workloads, all while minimizing CPU consumption to give resources back to where they're needed most: the VMs.
* Manageability. Use Storage QoS Controls to keep overly busy VMs in check with minimum and maximum per-VM IOPS limits; a sketch of creating such a policy follows this list. The Health Service provides continuous built-in monitoring and alerting, and new APIs make it easy to collect rich, cluster-wide performance and capacity metrics.
* Scalability. Go up to 16 servers and over 400 drives, for multiple petabytes of storage per cluster. To scale out, simply add drives or add more servers; Storage Spaces Direct will automatically onboard new drives and begin using them. Storage efficiency and performance improve predictably at scale.
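A minimal sketch of those Storage QoS controls, using the standard Windows Server 2016 cmdlets; the policy name "Silver", the IOPS figures, and the VM name "VM01" are placeholder assumptions, not values from this guide:

<syntaxhighlight>
# Create a policy that reserves 500 IOPS and caps 5000 IOPS per virtual disk
# (PolicyType Dedicated applies the limits to each assigned disk individually)
$policy = New-StorageQosPolicy -Name "Silver" -PolicyType Dedicated -MinimumIops 500 -MaximumIops 5000

# Attach the policy to every virtual hard disk of the placeholder VM "VM01"
Get-VM -Name "VM01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
</syntaxhighlight>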
== Windows 2016: Demo installs ==

*[[Windows2016 | Hyper converged]]
*[[Windows2016 | Converged]]
== S2D user guide ==
=== Display storage pools and tiers ===
Display all known storage pools and their health:
<syntaxhighlight>
Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -AutoSize
</syntaxhighlight>
Find out which storage tiers have been created with this command:
<syntaxhighlight>
Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -AutoSize
</syntaxhighlight>
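Not part of the original commands, but often useful alongside them: the standard Get-PhysicalDisk cmdlet shows the individual drives backing the pool, which helps when a pool reports an unhealthy status:

<syntaxhighlight>
# List the physical drives behind the pool, with media type and health state
Get-PhysicalDisk | Sort-Object FriendlyName |
    FT FriendlyName, MediaType, OperationalStatus, HealthStatus, Usage -AutoSize
</syntaxhighlight>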
=== Where to find the shared pool? ===

All the volumes you create, and the shared storage pool itself, can be found on every server in the cluster at:
C:\ClusterStorage\
So if you were to create two volumes named "test1" and "test2", they would appear at:
C:\ClusterStorage\test1
C:\ClusterStorage\test2
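To confirm which volumes are mounted there, you can either list the directory or ask the cluster directly (Get-ClusterSharedVolume is part of the standard FailoverClusters module):

<syntaxhighlight>
# List the mount points under C:\ClusterStorage\
Get-ChildItem C:\ClusterStorage\

# Ask the cluster which Cluster Shared Volumes it knows about
Get-ClusterSharedVolume
</syntaxhighlight>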
=== Volumes ===
There are three types of volume you can create within the storage pool: mirror, parity, and multi-resilient.
{| class="wikitable"
!
! Mirror
! Parity
! Multi-resilient
|-
| Optimized for
| Performance
| Best use of disk capacity
| Mix of the previous two
|-
| Use case
| All data is hot
| All data is cold
| Mix of hot and cold data
|-
| Storage efficiency
| Least (33%)
| Most (50+%)
| Medium (~50%)
|-
| File system
| NTFS or ReFS
| NTFS or ReFS
| ReFS
|-
| Minimum nodes
| 3
| 4
| 4
|}
You can create volumes with different file systems in the same cluster. This is important since each file system supports a different feature set.

[[ ReFS or NTFS | File systems in S2D vols ]]
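As a sketch of creating each volume type with the standard New-Volume cmdlet, assuming the pool name reported above (S2D*), the default tier names Performance and Capacity shown by Get-StorageTier, and placeholder volume names and sizes:

<syntaxhighlight>
# Mirror volume (the pool default resiliency) on ReFS, for hot data
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName "test1" -FileSystem CSVFS_ReFS -Size 1TB

# Parity volume, for cold data
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName "test2" -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -Size 1TB

# Multi-resilient volume: a mirror tier plus a parity (capacity) tier; ReFS only
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName "test3" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 300GB, 700GB
</syntaxhighlight>

New-Volume handles partitioning, formatting, and adding the volume to Cluster Shared Volumes in one step, which is why the volumes then appear under C:\ClusterStorage\ as described above.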
Further information can be found at:
https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/plan-volumes