Windows:2016

From Define Wiki

Windows 2016: Storage Spaces Direct Overview

  • Unrivaled Performance. Whether all-flash or hybrid, Storage Spaces Direct easily exceeds 150,000 mixed 4k random IOPS per server with consistent, low latency thanks to its hypervisor-embedded architecture, its built-in read/write cache, and support for cutting-edge NVMe drives mounted directly on the PCIe bus.
  • Fault Tolerance. Built-in resiliency handles drive, server, or component failures with continuous availability. Larger deployments can also be configured for chassis and rack fault tolerance. When hardware fails, just swap it out; the software heals itself, with no complicated management steps.
  • Resource Efficiency. Erasure coding delivers up to 2.4x greater storage efficiency, with unique innovations like Local Reconstruction Codes and ReFS real-time tiers to extend these gains to hard disk drives and mixed hot/cold workloads, all while minimizing CPU consumption to give resources back to where they're needed most - the VMs.
  • Manageability. Use Storage QoS Controls to keep overly busy VMs in check with minimum and maximum per-VM IOPS limits. The Health Service provides continuous built-in monitoring and alerting, and new APIs make it easy to collect rich, cluster-wide performance and capacity metrics.
  • Scalability. Go up to 16 servers and over 400 drives, for multiple petabytes of storage per cluster. To scale out, simply add drives or add more servers; Storage Spaces Direct will automatically onboard new drives and begin using them. Storage efficiency and performance improve predictably at scale.


https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview

Windows 2016: Demo installs

S2D user guide

Display storage pools and tiers

Display all known storage pools and their health:

Get-StoragePool S2D* | FT FriendlyName, FaultDomainAwarenessDefault, OperationalStatus, HealthStatus -autosize

Find out what storage tiers have been created with this command:

Get-StorageTier | FT FriendlyName, ResiliencySettingName, MediaType, PhysicalDiskRedundancy -autosize

Where to find the shared pool?

All the volumes you create, and the shared storage pool itself, can be found on every server in the cluster at:

C:\ClusterStorage\

So if you were to create volumes named "test1" and "test2", they would appear at:

C:\ClusterStorage\test1

C:\ClusterStorage\test2
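A volume like the ones above could be created with the New-Volume cmdlet, a sketch assuming the pool keeps the default S2D* naming; the friendly name and size are example values, not requirements:

```powershell
# Create a 1 TB ReFS cluster shared volume named "test1" on the S2D pool.
# The volume then surfaces under C:\ClusterStorage\ on every node.
New-Volume -StoragePoolFriendlyName S2D* `
           -FriendlyName test1 `
           -FileSystem CSVFS_ReFS `
           -Size 1TB
```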

Volumes

There are three types of volume you can create within the storage pool: mirror, parity, and multi-resilient.

                     Mirror            Parity            Multi-resilient
Optimized for        Performance       Best use of disk  Mix of the previous two
Use case             All data is hot   All data is cold  Mix of hot and cold
Storage efficiency   Least (33%)       Most (50+%)       Medium (~50%)
File system          NTFS or ReFS      NTFS or ReFS      ReFS
Minimum nodes        3                 4                 4

You can create volumes with different file systems in the same cluster. This is important since each file system supports a different feature set.

File systems in S2D volumes

Volume types supported by minimum node count

With two servers only: Two-way mirroring and 50% efficiency. So for 1 TB of data written you need 2 TB of space. Can tolerate one side of the mirror failing.

With three servers only: Three-way mirroring and 33.3% efficiency. So for 1 TB of data written you need 3 TB of space. Can tolerate two failures (server or hard drive). Two-way mirroring is still possible at this node count, but three-way is best practice.

With Four servers or more: Dual parity provides the same fault tolerance as three-way mirroring but with better storage efficiency. With four servers, its storage efficiency is 50.0% – to store 2 TB of data, you need 4 TB of physical storage capacity in the storage pool. This increases to 66.7% storage efficiency with seven servers, and continues up to 80.0% storage efficiency. The tradeoff is that parity encoding is more compute-intensive, which can limit its performance.
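The capacity arithmetic above can be sketched as follows (a minimal model for illustration; it ignores the pool's reserve capacity and metadata overhead):

```python
def mirror_footprint(data_tb: float, copies: int) -> float:
    """Physical TB consumed by N-way mirroring for data_tb of data."""
    return data_tb * copies

def parity_footprint(data_tb: float, efficiency: float) -> float:
    """Physical TB consumed by parity at a given storage efficiency."""
    return data_tb / efficiency

# Two-way mirror: 1 TB written consumes 2 TB (50% efficient)
print(mirror_footprint(1, 2))    # 2
# Three-way mirror: 1 TB written consumes 3 TB (33.3% efficient)
print(mirror_footprint(1, 3))    # 3
# Dual parity at four servers is 50% efficient: 2 TB consumes 4 TB
print(parity_footprint(2, 0.5))  # 4.0
```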

Which resiliency model should I choose?

Mirroring is the best of the three for performance, so if your workload is performance-sensitive, go for mirroring.

If you require the most capacity possible out of the hardware provisioned, then the best solution is dual parity.

You can mix volumes to best meet your requirements: for example, a mirror volume for your SQL database, and dual-parity volumes for other VMs on the same cluster.
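Mixing resiliency types this way is done per volume at creation time with the -ResiliencySettingName parameter of New-Volume. A sketch, where the volume names and sizes are hypothetical:

```powershell
# Mirror volume for the performance-sensitive SQL workload.
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName SQLData `
           -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 2TB

# Dual-parity volume for capacity-oriented VMs on the same cluster.
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VMArchive `
           -FileSystem CSVFS_ReFS -ResiliencySettingName Parity -Size 4TB
```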

Number of volumes and max sizing

Microsoft suggests a maximum of 32 volumes per cluster.

A volume can be up to 32 TB in size.

Further information can be found:

https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/plan-volumes