User Profile
Steven Ekren
Former Employee
Joined 9 years ago
Recent Discussions
Re: SES is still requested on 17093
Ziad, I wanted to check with you on a couple of things.

1. Did you create the virtual disks after you installed this build, or did the volumes already exist? If they already existed, the redundant copies of data may not have been distributed across nodes. In RS1, SES is used for SAS disks to identify which nodes the disks are connected to. In the RS4/RS5 Insider builds we changed the logic to use the same association method we use for NVMe and SATA disks. So if a virtual disk was created on this system without SES, the data may not have been distributed correctly. If the virtual disk was created after you put on this build, we would expect the extents to be distributed correctly.

2. Check the virtual disk's FaultDomainAwareness property (Get-VirtualDisk | fl *) and verify that the value is "StorageScaleUnit", which means a node. If it's blank, check the storage pool's FaultDomainAwarenessDefault for the same value (if the virtual disk value is blank, it takes the storage pool's default). If either is set to Disk or Enclosure, it will cause the same problem; it should be StorageScaleUnit.

3. Make sure you have a witness for the cluster, either Cloud or file share. Without one, the cluster cannot stay up with one node; with a witness it can.

4. Make sure you have the same number of physical disks in the pool from both nodes. The storage pool requires 50% + 1 disks for its quorum to be satisfied. Being on an active cluster node satisfies the +1, but you need the same number of disks on each node so that either node can fail and the other can keep the pool online.

I hope this helps,
Steven Ekren
Senior Program Manager, Windows Server
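The checks in steps 2 through 4 above can be sketched in PowerShell, run on one of the cluster nodes. This is a hedged outline rather than exact guidance: property names come from the post, but output shapes can vary by build, and the per-node disk count relies on the `-PhysicallyConnected` association, which assumes each disk is attached to exactly one node.

```powershell
# Step 2: verify fault-domain awareness on the virtual disks and the pool
Get-VirtualDisk | Select-Object FriendlyName, FaultDomainAwareness
Get-StoragePool -IsPrimordial $false |
    Select-Object FriendlyName, FaultDomainAwarenessDefault   # expect StorageScaleUnit

# Step 3: verify the cluster has a witness configured
Get-ClusterQuorum | Select-Object QuorumResource              # blank means no witness

# Step 4: count the physical disks connected to each node;
# both nodes should contribute the same number to the pool
Get-StorageNode | ForEach-Object {
    [pscustomobject]@{
        Node  = $_.Name
        Disks = ($_ | Get-PhysicalDisk -PhysicallyConnected).Count
    }
}
```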
Re: Short questions

Q5: Will WS2016 support SATA controllers for S2D?
A5: Yes, but S2D requires SES, so there needs to be a SES controller.

Q7: What is Scoped Spaces?
A7: Scoped Spaces is new functionality in the 1709 release that lets you specify which nodes of an S2D system the data will be placed on when creating a new volume. If you have 8 nodes, you can scope some volumes to 4 nodes and other volumes to a different set of 4. This means that if you have more than 2 failures across nodes, it limits the number of volumes affected.
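As a hedged sketch of how scoping a volume to a subset of nodes might look, assuming the `-StorageFaultDomainsToUse` parameter of `New-Volume` is the mechanism exposed in the 1709 release (the node names, pool name, and sizes here are hypothetical):

```powershell
# Pick four of the eight nodes as the scope for this volume (names are illustrative)
$Scope = Get-StorageFaultDomain -Type StorageScaleUnit |
    Where-Object FriendlyName -In 'Node01','Node02','Node03','Node04'

# Create a volume whose data and redundant copies stay on those four nodes
New-Volume -FriendlyName 'ScopedVol01' -FileSystem CSVFS_ReFS -Size 1TB `
    -StoragePoolFriendlyName 'S2D*' -StorageFaultDomainsToUse $Scope
```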
Re: Storage Space Question

For S2D systems we will support SCM as S2D cache devices backed by either SSD or NVMe as capacity devices, or, when the hardware supports many SCM devices, an all-SCM system. For non-S2D Storage Spaces there are no cache devices, so the SCM devices can be used in tiers.
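A hedged sketch of inspecting which devices play which role, assuming `Get-PhysicalDisk` reports SCM under `MediaType` and that the `Usage` property distinguishes cache devices (Journal) from capacity devices:

```powershell
# Group the pool's disks by media type and usage to see which SCM devices
# act as cache (Usage = Journal) and which devices hold capacity
Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk |
    Group-Object MediaType, Usage |
    Select-Object Name, Count
```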
Re: Parity Space Write Performance

S2D uses dual parity, and if you look at any system and compare mirror to parity, parity is always slower for writes and about the same for reads. When you write to a parity volume, you have to read in the parity, make the write change, recalculate the parity, and write it back. S2D is a distributed system where these writes happen across nodes, so there are network transits for the write actions.

S2D is the only solution I'm aware of that provides mirror-accelerated parity, which allows a volume to take writes in the mirror part of the volume (fast writes) and then rotate the data to parity in the background. We have made enhancements to parity in this fall's 1709 release. On a scale of 0 to 100, with mirror performance at 100, Windows Server 2016 would sit at the lower end of the scale and the 1709 release will be closer to the 100 side. However, due to the calculations required for parity, it will never be as fast as mirror. When comparing parity to mirror on S2D, it's also good to realize that the I/Os transit servers, so network latency and bandwidth also have an impact that you don't see on a stand-alone system where all I/O goes through a local storage bus.
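A mirror-accelerated parity volume, as described above, can be sketched by creating a volume with both a mirror tier and a parity tier. This is a hedged example: the tier friendly names 'Performance' and 'Capacity' and the sizes are illustrative, and it assumes tiers with those names exist in the pool (as created by the S2D setup defaults):

```powershell
# Writes land in the mirror (Performance) tier first, then rotate
# to the parity (Capacity) tier in the background
New-Volume -FriendlyName 'MapVol01' -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName 'S2D*' `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 100GB, 900GB
```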