Windows Server 2012 features essential to storage

Storage expert Brien Posey talks about which features of Windows Server 2012 have the biggest influence on storage and virtualization.

Windows Server 2012 has a number of features that affect storage and virtualization, some directly, others indirectly. These features can aid in everything from increasing capacity to protecting data to simplifying the storage infrastructure. In this podcast, storage expert and independent analyst Brien Posey discusses the most significant Windows Server 2012 features that relate to storage and virtualization. Find out what they are by listening to the podcast or reading the transcript below.

Microsoft introduced a number of Windows Server 2012 features that are related to storage. Let's talk about the ones that are particularly significant to a virtual server environment. First up is deduplication: How does it work in Windows Server 2012 and why is it important to virtualization?

Posey: The way that it works is that it's a server-side, post-process deduplication. What that means is that your data is initially stored in an uncompressed format, and then there's a scheduled process that comes along later on to perform the actual deduplication.

In any virtual environment, there's going to be some level of redundancy because a host server probably has multiple virtual machines, and typically administrators like to be consistent with things. So that means a lot of those virtual machines, if not all of them, are going to be running the same operating systems and probably the same patches, and maybe even some of the same applications. So if you're able to do deduplication, then you're able to compress the volume that is being used to store all of those virtual machines, which means that you greatly reduce the storage footprint consumed by your virtual environment.

The most important thing to know about that is that you can actually consume more storage space, at least temporarily, using this type of deduplication than you might if you weren't deduplicating your data at all, because the data has to be initially stored in an uncompressed format and the operating system also needs workspace that it can use to perform the deduplication alongside the uncompressed data. Now, eventually, you do get a lot of that space back because the uncompressed data can be removed once the deduplication process completes, but you do have to have enough space to accommodate both copies of data, at least for a period of time.
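The mechanics are easier to see in a small sketch. The Python below is purely illustrative -- the chunk size, file names and hashing scheme are invented for the example and are not how Windows Server 2012 implements deduplication -- but it shows the post-process idea: files land on disk in full first, and a later pass replaces duplicate chunks with references to a single stored copy.

```python
import hashlib

CHUNK_SIZE = 4  # tiny chunks for illustration; real systems use far larger chunks

def dedupe_pass(files):
    """Post-process pass: chunk each file, keep one copy of each unique
    chunk, and replace file contents with lists of chunk hashes."""
    chunk_store = {}   # hash -> chunk bytes (each unique chunk stored once)
    manifests = {}     # filename -> ordered list of chunk hashes
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(h, chunk)  # duplicate chunks stored only once
            hashes.append(h)
        manifests[name] = hashes
    return chunk_store, manifests

# Two "virtual machines" with largely identical contents
# (same OS, same patches, different applications).
files = {"vm1.vhd": b"WINDOWS2012PATCH1appA",
         "vm2.vhd": b"WINDOWS2012PATCH1appB"}

store, manifests = dedupe_pass(files)
raw = sum(len(d) for d in files.values())
deduped = sum(len(c) for c in store.values())
print(raw, deduped)  # the deduplicated total is smaller than the raw total
```

Note that while the pass runs, both the original files and the growing chunk store exist side by side, which is exactly the temporary extra space the transcript warns about; only once the originals are replaced by their manifests is that space reclaimed.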

What about Resilient File System (ReFS)? What is that and why is it significant in a virtual environment?

Posey: Resilient File System is a new file system from Microsoft, and it's kind of a next-generation alternative to NTFS. Now, ReFS doesn't have a direct impact on virtualization, but it's important for another reason, and that's because ReFS takes steps to preserve the integrity of your data. One example of how that's done: Think about the way that data normally gets written to disk. Suppose that you're updating a file. Well, when you go to update that file in an NTFS environment, the process works by overwriting the existing file. But what happens if the power goes out in the middle of that write operation? Your file is corrupted, because you've overwritten part of that file with the new file, but that transaction was incomplete. What ReFS does differently is that when a write operation occurs -- assuming we're talking about a file update -- it performs that write operation to a different part of the disk. That way, if the power goes out in the middle of the write operation, you haven't lost the original copy of the file; it's still there and hasn't been overwritten. So in a virtual environment, this doesn't necessarily directly impact the virtualization process; but it does help to improve the overall reliability of the underlying file system, particularly in times of power failures, system crashes and things like that.
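The same write-elsewhere-then-swap idea can be demonstrated at the application level. The sketch below is a generic copy-on-write update in Python, not ReFS's actual on-disk mechanism (ReFS does this inside the file system, per allocation, with checksums); the function name and temp-file approach are this example's own.

```python
import os
import tempfile

def safe_update(path, new_data):
    """Copy-on-write style update: write the new contents to a different
    location first, then atomically swap it into place. A crash before
    the swap leaves the original file untouched."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_data)
            f.flush()
            os.fsync(f.fileno())  # make sure the new copy is really on disk
        os.replace(tmp, path)     # atomic rename; old contents survive any earlier failure
    except BaseException:
        os.remove(tmp)            # clean up the partial copy; original is intact
        raise
```

Contrast this with opening the file in write mode and overwriting it in place: there, a failure mid-write leaves a half-old, half-new file, which is exactly the corruption scenario described above.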

Storage Spaces is another Windows Server 2012 feature. What do administrators of virtual servers need to know about it?

Posey: Windows Storage Spaces is really interesting because it's a replacement for the old-school disk management console that's been with Windows since Windows NT server way back when. Incidentally, the disk management console does still exist, but Microsoft prefers that you use Windows Storage Spaces. The way that that works is it allows you to aggregate the server's physical storage and then abstract that so you can virtualize that storage by creating virtual hard disks that reside across multiple physical drives. So the neat thing about that is Microsoft has designed this in a way that allows you to implement spare drives; you can implement mirroring and parity redundancy -- all of these different things that you can do to improve the performance of the storage, but also to protect your data, which is critically important in a virtual environment where you're juggling multiple virtualized workloads.

There's also support for Server Message Block 3.0. Windows Server 2008 supported an earlier version of SMB. What does SMB 3.0 support bring to the table?

Posey: Previously, with Microsoft Hyper-V server, if you wanted to create a clustered environment, then your only option was to build what is called a "Cluster Shared Volume," which means that you were using centralized storage to hold all the virtual machines. But unfortunately, this made clustering unattainable for [most] smaller organizations, just because of the cost of building a Cluster Shared Volume and connecting all the cluster nodes to it. So what SMB 3.0 does is it actually allows you to store virtual machines on a file server as opposed to having to build a Cluster Shared Volume. This greatly brings down the storage cost, and it allows organizations to build clusters using centralized storage, but without having to use a Cluster Shared Volume.

Do you have any points of caution about Windows Server 2012 and storage that anyone involved in virtualization should know about?

Posey: The biggest point of caution that I would recommend is if you're using local storage for Windows Server 2012, make sure you use Windows Storage Spaces to provision that, as opposed to using the disk management console like you were probably used to with older versions of Windows.

Also, if you're going to be using a Hyper-V environment, Microsoft does give you the option of storing virtual machines either on a Cluster Shared Volume (as you did in the past), or you can use SMB like we talked about a few minutes ago, or you can store virtual machines locally, without the need for any sort of shared storage, and still do live migrations from one host server to another. Now, having said that, the use of local storage and SMB storage for live migrations -- those are solutions that are only geared toward smaller environments, and bandwidth implications make them impractical for large environments that have a lot of virtual machines. So, the preferred method of using storage with Hyper-V to facilitate live migrations is still to use a Cluster Shared Volume, as you have in the past.

This was last published in December 2012
