Microsoft expert Brien Posey delves into the technical aspects of a Hyper-V deployment and tells storage managers how to make the best private cloud strategy decisions when working with Hyper-V, focusing on scalability, access and storage classification.
Hyper-V users are increasingly finding ways to leverage the technology, as the product itself has improved. However, the most common misconception surrounding Hyper-V storage is that a SAN provides instant storage scalability. While a SAN does facilitate scalability, the real key to achieving good storage scalability for Hyper-V servers is the way you manage the storage hardware.
This might seem intuitive, but organizations using Hyper-V often experience growing pains related to storage. This is a result of not thinking about long-term storage scalability at the outset. Consider this common scenario: An organization has already deployed VMware and decides to try out Hyper-V because it has the potential to greatly decrease licensing costs. In this situation, the organization would typically start with a small Hyper-V deployment consisting of a few Hyper-V servers.
In time, the organization might choose to increase the size of its Hyper-V deployment. At that point, the administrator may find that the management burden increases as the size of the Hyper-V deployment grows.
The problem with this approach of starting small and expanding later is that scalability is often ignored during the early phases of a Hyper-V deployment.
Hyper-V deployments require private cloud approach
In a small Hyper-V deployment, host servers are usually managed individually. Each Hyper-V host is connected to storage (DAS, SAN, iSCSI or Server Message Block [SMB] 3.0-based NAS), then virtual machines (VMs) are created on a specific host server. This approach works well in smaller deployments, but in larger deployments the Hyper-V hosts and available storage (whatever it may be) are typically treated as a pool of resources that can be collectively managed and allocated on an as-needed basis.
Although Hyper-V Manager can manage multiple hosts, its multi-host capabilities are extremely limited. The only way to treat compute, storage and networking as a pool of resources that can be freely allocated anywhere within the Hyper-V deployment is to use System Center Virtual Machine Manager (SCVMM).
It's possible to use SCVMM 2012 R2 to configure your Hyper-V deployment as a private cloud. Doing so allows storage, compute and network resources to be centrally managed and allocated. This approach has a huge impact on scalability because hardware is treated as a pooled resource and additional hardware can be added whenever needed.
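The pooled-resource idea can be sketched with a toy model. This is plain Python, not SCVMM's object model, and every name and number in it is invented for illustration; the point is that once hardware is pooled, scaling out is just adding capacity, not re-architecting individual hosts.

```python
# Illustrative model (not SCVMM's API) of treating hosts and storage
# as one pool that can be allocated on an as-needed basis.

class ResourcePool:
    def __init__(self):
        self.capacity = {"compute_cores": 0, "storage_gb": 0}
        self.allocated = {"compute_cores": 0, "storage_gb": 0}

    def add_hardware(self, compute_cores=0, storage_gb=0):
        """Scaling out amounts to adding capacity to the pool."""
        self.capacity["compute_cores"] += compute_cores
        self.capacity["storage_gb"] += storage_gb

    def allocate_vm(self, cores, storage_gb):
        """Carve a VM out of whatever free capacity the pool has."""
        if (self.allocated["compute_cores"] + cores > self.capacity["compute_cores"]
                or self.allocated["storage_gb"] + storage_gb > self.capacity["storage_gb"]):
            return False  # pool exhausted -- add hardware, don't redesign
        self.allocated["compute_cores"] += cores
        self.allocated["storage_gb"] += storage_gb
        return True

pool = ResourcePool()
pool.add_hardware(compute_cores=32, storage_gb=2000)  # initial hosts + SAN
pool.allocate_vm(cores=4, storage_gb=200)             # provision a VM
pool.add_hardware(compute_cores=16, storage_gb=1000)  # scale out later
```

The VM being provisioned never cares which physical host or array it lands on; that indirection is what makes later growth painless.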
As administrators plan to bring production workloads into a Hyper-V environment, they may quickly realize that different VMs have different storage needs. For example, a virtualized SQL Server will likely require storage that can deliver a high volume of IOPS, whereas a simple virtualized Dynamic Host Configuration Protocol server can probably make use of inexpensive, low-performance JBOD storage.
On the surface, it would seem that lumping available storage resources into a collective pool might be a bad idea because all the storage would be treated equally. However, there are a few ways you can differentiate among the various storage types.
Storage classifications ease administrator role
One way to differentiate among storage types is to use storage classification. System Center 2012 and System Center 2012 R2 allow administrators to classify storage devices based on their I/O characteristics. In other words, an administrator can use the software to define several classes of storage based on the underlying storage hardware.
It's also become very common for organizations to take a three-tier approach to storage (gold, silver and bronze classes, for example), but SCVMM doesn't limit you to defining only three storage classifications. You can define as many classes as the underlying I/O characteristics warrant, with the naming convention left to the administrator.
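As a rough illustration of what classification amounts to -- the tier names, IOPS thresholds and device names below are invented for the example, not SCVMM defaults -- each device's measured I/O characteristics map to an administrator-chosen class:

```python
# Hypothetical sketch of classifying pooled storage devices by their
# I/O characteristics. Thresholds and names are arbitrary examples.

def classify(device_iops, tiers):
    """Return the first tier whose minimum IOPS the device meets."""
    for name, min_iops in tiers:  # tiers sorted fastest-first
        if device_iops >= min_iops:
            return name
    return "unclassified"

tiers = [("gold", 50_000), ("silver", 10_000), ("bronze", 0)]

devices = {"ssd-array": 80_000, "sas-san": 20_000, "jbod": 500}
classified = {dev: classify(iops, tiers) for dev, iops in devices.items()}
```

Once every device in the pool carries a class label, a high-IOPS SQL Server workload and a low-priority DHCP server can draw from the same pool without competing for the same hardware.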
As previously discussed, one of the most effective ways of getting a handle on Hyper-V storage is to adopt a private cloud model. Doing so allows you to pool your storage resources and to classify the various storage types so that they aren't all treated equally.
That strategy can be very effective, but it can cause other problems if left unchecked. Remember, one of the major goals behind the adoption of a private cloud is to improve scalability. Even though most people tend to think of scalability in terms of hardware and software, sometimes it's the administrative burden that limits scalability. This can be especially true in Hyper-V environments, where a single administrator may be tasked with creating and provisioning all the organization's VMs. The good news is that in a typical deployment, an administrator creates a library of template VMs that adhere to the organization's policies. At that point, the administrator can grant some people permission to create their own VMs as needed. However, it's a process that must be closely monitored.
The first thing an administrator must do to ensure users don't add unnecessary complexity to their Hyper-V deployments is to limit the types of storage an authorized user can consume when creating VMs. When an authorized user creates a VM, they do so through a web interface called App Controller. This interface allows the user to provision new VMs based on the predefined templates they've been given access to.
But it's important to know that it's possible to bind a storage classification to a VM template; this way, when a VM is created from a template, the appropriate type of storage is used. That storage classification is part of any VM's hardware profile.
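A toy model of that binding (hypothetical names, not the SCVMM object model) shows why it matters: because the classification lives in the template's hardware profile, storage placement is constrained automatically, with no decision left to the user.

```python
# Illustrative model of binding a storage classification to a VM
# template, so VMs created from it land on the right class of storage.

STORAGE_POOLS = {
    "gold": ["ssd-array"],
    "bronze": ["jbod-1", "jbod-2"],
}

class VMTemplate:
    def __init__(self, name, storage_classification):
        self.name = name
        # The classification is part of the template's hardware profile.
        self.storage_classification = storage_classification

def create_vm(template):
    """Pick a device only from the template's bound classification."""
    candidates = STORAGE_POOLS[template.storage_classification]
    return {"template": template.name, "placed_on": candidates[0]}

sql_template = VMTemplate("sql-server", "gold")
dhcp_template = VMTemplate("dhcp-server", "bronze")
vm = create_vm(sql_template)  # lands on gold-class storage
```

The user who self-provisions from `sql_template` through App Controller never touches -- and never misuses -- the expensive tier directly.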
The importance of user roles
A user role can span one or more SCVMM-based private clouds, and essentially determines the actions role members are allowed to perform. And, yes, it's possible to bind quotas to user roles.
There are two separate storage quotas that can be bound to each user role. The first is a role-level quota. There are typically multiple user accounts assigned to each user role. Therefore, the role-level quota applies collectively to all members of the user role. For example, you can use this quota to ensure that all of the role members combined never consume more than a specific amount of storage.
The second type of quota is a member-level quota. As the name implies, this quota applies to each role member individually; it prevents any single user from consuming an excessive amount of storage.
It's worth noting that the quota mechanism isn't storage-exclusive. It can be used to apply quotas to such things as virtual CPU, memory consumption or the number of VMs that can be created.
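The two quota levels can be sketched as follows. This is a hypothetical model with invented users and limits, not SCVMM's actual quota API, but it shows how a single allocation request must clear both the member-level and the role-level ceiling:

```python
# Sketch of role-level vs. member-level quotas. The mechanism
# generalizes to any resource type (storage, vCPU, memory, VM count).

ROLE_QUOTA = {"storage_gb": 600}     # all role members combined
MEMBER_QUOTA = {"storage_gb": 300}   # each role member individually

usage_by_member = {
    "alice": {"storage_gb": 250},
    "bob": {"storage_gb": 300},
}

def can_allocate(member, resource, amount):
    """Allow the request only if both quota levels are respected."""
    member_used = usage_by_member.get(member, {}).get(resource, 0)
    role_used = sum(u.get(resource, 0) for u in usage_by_member.values())
    return (member_used + amount <= MEMBER_QUOTA[resource]
            and role_used + amount <= ROLE_QUOTA[resource])

can_allocate("alice", "storage_gb", 40)   # within both quotas
can_allocate("alice", "storage_gb", 60)   # blocked: member quota exceeded
can_allocate("carol", "storage_gb", 60)   # blocked: role quota exceeded
```

Note the third case: a brand-new role member with zero usage can still be refused, because the role-level quota counts everyone's consumption collectively.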
Storage pros familiar with private clouds will recognize the above strategies and tips. It's becoming increasingly common for organizations to take a private cloud approach to Hyper-V because it decentralizes VM management and places some management tasks into the hands of authorized users -- while also limiting their power so it's not abused.
About the author:
Brien Posey is a Microsoft MVP with two decades of IT experience. Before becoming a freelance technical writer, he worked as CIO for a national chain of hospitals and health care facilities. He has also served as a network administrator for some of the nation's largest insurance companies and for the Department of Defense at Fort Knox.