

Storage Spaces Direct a key option in Windows Server 2016 features


Windows Server 2016 includes new and updated capabilities in Storage Spaces, Storage Replica, Storage Quality of Service, data deduplication and the Resilient File System.

Microsoft has upped the ante in its bid to entice more customers to use software-defined storage with the release of a host of enhanced Windows Server 2016 features.

At the top of the list is Storage Spaces Direct, an update to the Storage Spaces feature that Microsoft shipped with Windows Server 2012. Storage Spaces technology enables users to pool storage drives and create virtual disks from the available capacity. Microsoft said Storage Spaces Direct would allow users to scale up to 16 servers, with more than 400 drives, for multiple petabytes of storage per cluster.

Storage Replica, another new Windows Server 2016 feature, enables synchronous and asynchronous replication. The operating system's Storage Quality of Service addition lets users set policies at the cluster shared volume level and assign them to virtual disks on Hyper-V virtual machines (VMs) to manage and monitor storage performance.

An additional area of storage improvement is data deduplication, where Microsoft added support for larger files and volumes. Microsoft also released a new iteration of its Resilient File System (ReFS), which boosts the performance of VM operations through the introduction of block cloning.

Windows Server 2016 became generally available on Oct. 12, 2016, two years after Microsoft released the first technical preview of the server OS.

In this podcast, Brien Posey, a Microsoft MVP and frequent TechTarget contributor, explains the Windows Server 2016 features and enhancements in Storage Spaces Direct, Storage Replica, Storage Quality of Service, data deduplication and ReFS. Posey takes stock of the main benefits and shortcomings, and discusses the impact the storage features could have on the small and medium-sized businesses he believes most of these features are geared toward.

Transcript - Storage Spaces Direct a key option in Windows Server 2016 features

(This transcript has been shortened and edited for clarity.)

What's new with Storage Spaces Direct in Windows Server 2016?

Brien Posey: Storage Spaces was first rolled out in Windows Server 2012, and continued to exist in 2012 R2. Originally, Storage Spaces was a replacement for the aging disk management console that had been a part of Windows Server for many years.

The main goal behind Storage Spaces was to bring disk tiering to Windows. It allowed you to take a server's local storage, pool that storage together and then carve that storage up on an as-needed basis, so that you could create virtual disks on top of that storage using mirroring, parity and whatever else you needed, without a lot of regard for the underlying storage hardware.
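
[Editor's note: a minimal PowerShell sketch of that classic Storage Spaces workflow, using the built-in Storage module cmdlets. The pool and virtual disk names are hypothetical.]

```powershell
# Find local disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Pool them together (the pool name is illustrative)
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Carve a mirrored virtual disk out of the pooled capacity
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data1" `
    -ResiliencySettingName Mirror -Size 500GB
```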

Storage Spaces Direct builds on that. It brings hyper-converged infrastructure-like behavior. In other words, you can take that storage that exists within a server, and you can scale what's on that storage by adding additional servers that are each equipped with their own local storage. That way, you can achieve scalability and high availability just by adding more nodes.

One of the really great things about this is that it allows you to use commodity hardware. For example, Windows fully supports the use of SATA-based SSDs. And it also eliminates the need for shared storage or cluster shared volumes if you're building a Windows failover cluster.
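
[Editor's note: a rough sketch of how that is set up. Storage Spaces Direct is enabled on top of an ordinary failover cluster created without shared storage; the cluster and node names are hypothetical.]

```powershell
# Validate the nodes for Storage Spaces Direct
Test-Cluster -Node "Node1", "Node2", "Node3", "Node4" `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster with no shared storage, then claim each node's local drives
New-Cluster -Name "S2DCluster" -Node "Node1", "Node2", "Node3", "Node4" -NoStorage
Enable-ClusterStorageSpacesDirect
```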

You mentioned hyper-converged infrastructure. In what ways does Storage Spaces Direct differ from hyper-converged infrastructure?

Posey: The main difference is that, with hyper-converged infrastructure, typically, you purchase a prepackaged solution from a vendor. That solution includes network, compute and storage resources, along with a hypervisor, and typically some sort of management software, all bundled together.

I've seen a number of articles online that refer to Storage Spaces Direct as being hyper-converged infrastructure, but I don't think that's really an accurate assessment. I tend to think of it as being more like do-it-yourself hyper-converged infrastructure because you have a lot more flexibility on the hardware side than you would if you were to purchase a prebundled solution.

With a prebundled solution, you're more or less locked into using the individual components that the vendor allows you to use. With Storage Spaces Direct, you can use pretty much anything that you want, as long as you adhere to the Windows hardware compatibility list.

Another thing is that Windows Server comes bundled with Hyper-V. So, you've got your hypervisor bundled in just like you would with the off-the-shelf hyper-converged infrastructure solution. But it doesn't really come with management software for your virtualization solution beyond just the Hyper-V Manager. For a more comprehensive management solution, you would have to get something like System Center Virtual Machine Manager.

Moving on to another feature, can you tell us how Storage Replica works, and what sorts of benefits it might bring to an IT administrator?

Posey: Microsoft has actually dabbled with storage replication for quite some time. I can't remember if it was Windows Server 2012 or 2012 R2, but Microsoft added a storage replication feature to Hyper-V. And it allowed you to replicate individual virtual hard disks from one Hyper-V server to another.

For example, in my own environment, I've got one production virtual machine that acts as a file server, and I replicate the contents of that file server over to a standby server so that I've got an online spare, should I ever need it.

Now, with the new storage replication feature, Microsoft has essentially eliminated the dependence on Hyper-V. We have a storage replication feature that works at the operating system level rather than at the virtualization level. And like the Hyper-V solution, it's based on SMB 3.0, and your replication can be synchronous or asynchronous.

If you're using synchronous replication, you can replicate between two servers roughly on a metropolitan scale. If you need to replicate content to a server that's across the city, you can do that. If you need longer distance, you can use the asynchronous replication option. And that will get you pretty much wherever you want to go.
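
[Editor's note: a minimal sketch of a server-to-server partnership using the Storage Replica cmdlets. The server, volume and replication group names are hypothetical; switch -ReplicationMode to Synchronous for metropolitan-scale distances.]

```powershell
# The Storage Replica feature must be installed on both servers first:
# Install-WindowsFeature Storage-Replica -IncludeManagementTools -Restart

# Replicate volume D: (with log volume L:) from SRV01 to SRV02
New-SRPartnership -SourceComputerName "SRV01" -SourceRGName "rg01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV02" -DestinationRGName "rg02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -ReplicationMode Asynchronous
```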

Are there any disadvantages or shortcomings to this feature?

Posey: Not that I've really run into yet.

One thing that I will say is that, when Microsoft rolled out the Hyper-V storage replication feature, they had one major problem with it. It worked great for smaller virtual hard disks, but for bigger virtual hard disks, it had a chronic problem where replicas would go out of sync from time to time. Sometimes you were able to fix that. Other times, it couldn't be fixed. You had to break the replicas and completely re-establish synchronization, which meant that you had to start from square one, resynchronizing everything.

From what I can tell, it seems like Microsoft has gotten this problem fixed. I haven't run into any instances of replicas falling out of sync yet. But that doesn't mean that it can't happen. Based on past experience, that's something you should at least be on the lookout for.

How does Storage Quality of Service (QoS) work, and what are the main benefits?

Posey: If you look on Microsoft's website, where they talk about new storage features in Windows Server 2016, they list Storage Quality of Service as being a brand new feature. Actually, Storage QoS existed within Hyper-V in Windows Server 2012 R2. It might have been called something slightly different, but I do recall the words quality of service or QoS being used.

At that time, it was applied on an individual virtual hard disk basis. You could limit the number of IOPS that were dedicated to a particular virtual hard disk, so that if you had something [with] really high demand, it wouldn't deplete all of the IOPS from your entire system. Or, you could use the QoS feature to reserve IOPS if you had something that needed to receive a certain level of IOPS in order to be able to perform correctly.
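
[Editor's note: in Windows Server 2012 R2, those limits and reservations were set directly on each virtual hard disk; a minimal sketch with a hypothetical VM name.]

```powershell
# Reserve 100 IOPS for, and cap at 500 IOPS, every virtual hard disk
# attached to one VM
Get-VMHardDiskDrive -VMName "VM01" |
    Set-VMHardDiskDrive -MinimumIOPS 100 -MaximumIOPS 500
```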

What Microsoft has done with the new storage QoS feature is move it out of Hyper-V and into the operating system. And I think it's really interesting the way that Microsoft's done this.

They're letting you create policies at the cluster shared volume level. And then you can take those policies and apply them to individual Hyper-V virtual disks. On the surface, that sounds like we're doing the exact same thing all over again, just in a slightly different way with the creation of policies.

But the way the groupings work gives you a much higher degree of flexibility, because now you don't have to deal with those virtual hard disks on an individual basis and set a separate policy for every single one. You can lump them in together. That means you can apply limits or reservations to a specific virtual hard disk, or to all of the virtual hard disks within a single virtual machine [VM].

Or, maybe you've got a group of virtual machines that all need to perform the same way. You can take all of the virtual hard disks from those VMs and lump those in together with one policy. So, it's a great way of being able to collectively manage multiple virtual machines that have similar needs.
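
[Editor's note: a sketch of that policy-based model. One policy defined on the cluster can be stamped onto the disks of any number of VMs; the policy and VM names are hypothetical.]

```powershell
# Define one shared policy on the cluster. An Aggregated policy pools its
# IOPS across every disk it is assigned to; Dedicated gives each disk
# its own copy of the limits.
$gold = New-StorageQosPolicy -Name "GoldVMs" `
    -MinimumIops 100 -MaximumIops 1000 -PolicyType Aggregated

# Apply the same policy to every virtual hard disk of a group of VMs
Get-VM -Name "App01", "App02", "App03" |
    Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $gold.PolicyId
```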

Another feature that isn't brand new is data deduplication. What has Microsoft changed with deduplication, and in what ways will this be important for the IT managers who use it?

Posey: In a lot of ways, data deduplication in Windows Server 2016 is very similar to what we had before, in the previous Windows Server release. The biggest change that Microsoft has made is with regard to scalability.

With the first iteration of data deduplication, the deduplication process started to break down or suffer from poor performance once individual files started approaching, roughly, a terabyte. You could start seeing performance slowdowns well before you hit the terabyte mark. With the new version, files up to 1 TB are officially supported, so Microsoft is at least guaranteeing that deduplication is going to work well up to that point.

There were also some volume restrictions with the previous version. Before, data deduplication was only recommended for volumes up to about 10 TB. Again, it wasn't a hard number; it was just a guideline. Now, they're saying that data deduplication is officially supported for volumes up to 64 TB. So we have a huge increase in the volume sizes where you can use this.

Other changes that Microsoft has made are adding support for using data deduplication with Nano Server and providing native support for using deduplication on virtualized backup appliances. One other nice thing that they've done is make it so that data deduplication can be used with rolling cluster operating system upgrades.
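
[Editor's note: enabling deduplication remains a per-volume operation; a minimal sketch, assuming a hypothetical E: volume used as a backup target. The Backup usage type is the one aimed at virtualized backup applications.]

```powershell
# Install the deduplication role service, then enable it per volume
Install-WindowsFeature FS-Data-Deduplication

# Usage types: Default (general file server), HyperV (VDI) or Backup
Enable-DedupVolume -Volume "E:" -UsageType Backup

# Check space savings once the optimization jobs have run
Get-DedupStatus -Volume "E:"
```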

Are there any particular applications that will benefit from the deduplication improvements?

Posey: The first one that comes to mind is backup and recovery. Backup target capacity is always a challenge.

Having that native deduplication capability for virtualized backup appliances could truly be helpful to maximize space within the backup target.

Moving on to the last of the Windows Server 2016 features we're going to talk about, Microsoft made improvements to its Resilient File System. Can you tell us about the changes?

Posey: Just to give you a little bit of background information, Microsoft first announced the Resilient File System for use with Windows Server 2012. Ultimately, it turned out to be one of those file systems that almost nobody is using. It was designed for extreme scalability, and to automatically detect and repair data corruption, which sound like good things.

But the big problem with ReFS is that, even though it was loosely based on NTFS, a lot of the NTFS features that we've all been using for what seems like forever aren't supported in ReFS. So, you don't have things like the Encrypting File System or disk quotas.

With the 2016 version, ReFS is still really lacking. But the big thing that Microsoft has done is improve performance with block cloning. That tends to help a lot with Hyper-V virtual hard disk checkpoint merge operations.

Another big change is that Microsoft [now] supports using ReFS across multiple resiliency tiers. Let's say you set up a volume with Storage Spaces Direct, and it uses multiple storage tiers. Maybe you have a mirrored tier for your high-performance layer and a parity tier for your high-capacity layer.

Now it's supported to use ReFS on something like that, whereas it wasn't officially supported before.
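
[Editor's note: a sketch of that mirror-plus-parity layout on a Storage Spaces Direct pool. The tier names match the defaults Storage Spaces Direct creates; the volume name and sizes are illustrative.]

```powershell
# One ReFS volume spanning a mirrored performance tier and a parity
# capacity tier on a Storage Spaces Direct cluster
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 1TB, 9TB
```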

Do you think ReFS is ready for prime time, so to speak, or is it still a work in progress?

Posey: It depends on the organization's individual needs. My experience with ReFS is that it seems to work pretty well. But you have to be careful with it because, even in the 2016 version, there are a lot of features that just aren't supported.

Assuming that you don't need those features, then ReFS is a great choice. But, if you do need to use things like operating system-level deduplication or encryption or anything like that, then ReFS isn't the file system that you should be using.

Are there any workloads that are well-suited to ReFS?

Posey: Anything that's going to require extreme scalability or high performance might be a good fit for ReFS, assuming that you don't need any of the NTFS capabilities that ReFS lacks. I'm thinking of things like databases.

When you look collectively at what Microsoft is doing in the area of storage with Windows Server 2016, on which types of organizations will the approach have the greatest impact?

Posey: I tend to think that most of these features are really heavily geared towards the SMB market. Take Storage Spaces Direct, for example. In spite of the reputation for being the 'inexpensive solution,' the typical hyper-converged infrastructure [HCI] still costs a lot of money. You can easily spend $150,000 on a four-node hyper-converged system, whereas with something like this, you can roll out some Windows servers on commodity hardware, use commodity storage built locally into the servers and save a lot of money. This is going to open up the door for smaller organizations to have HCI-like benefits without the cost that typically comes with HCI.

The same thing with Storage Replica. All of a sudden, this is going to make it possible for smaller organizations to have a really solid disaster recovery solution without having to spend big bucks to do it.

Are the features easy enough for administrators who might not be storage specialists to use?

Posey: I haven't used every single one of these features yet, but based on what I have seen, I would say absolutely. Just about anybody with basic Windows administrative experience shouldn't have too much trouble deploying them.

Looking ahead, are there any storage areas that Microsoft needs to improve in Windows Server?

Posey: The first improvement that I can think of would be with Storage Replica. I think Microsoft's got a really good thing going here.

But if you take a look back at what they had done previously with Hyper-V in the transition from Server 2012 to 2012 R2, they had a replica feature built into Hyper-V in 2012. It allowed you to replicate a virtual hard disk from one server to another. That was all well and good, but then, in Server 2012 R2, they introduced three-way replication. You could have two replication targets, so you could have multiple off-site copies, or you could have one on-site copy and one off-site copy; however you wanted to set things up. I think Microsoft needs to add similar capabilities to their Storage Replica feature, so that, instead of being limited to having one replica, you can have multiple replicas.
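
[Editor's note: for comparison, this is how Hyper-V's extended replication chains a second target today. Replication is enabled from the primary to a first replica server, then extended from that replica to a third server; all server and VM names are hypothetical.]

```powershell
# On the primary server: replicate the VM to the first replica server
Enable-VMReplication -VMName "VM01" -ReplicaServerName "ReplicaA" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# On ReplicaA: extend replication for the same VM to a second, chained target
Enable-VMReplication -VMName "VM01" -ReplicaServerName "ReplicaB" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
```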

In closing, do you have any advice for IT professionals about the new storage aspect of Windows Server 2016 features?

Posey: My advice would be to spend a little bit of time working with Windows Server 2016. If nothing else, set up a few VMs, maybe on Azure or AWS [Amazon Web Services], so that you don't have to spend a lot of time deploying it.

And just get to know how these features work. Based on what I've seen so far, they seem to work really well, and I think they're going to benefit a lot of people.

