Best Practices: Viewing virtualization from every angle

Virtualization can be a tricky technology for storage managers who need to apply traditional standards while navigating new obstacles. But it's prudent to embrace it now so the storage team can enjoy the same benefits that the systems and applications teams have realized.

Storage teams with lots of activity on the systems virtualization side face an interesting challenge: How should they deal with virtual hosts? Should they be treated as individual servers or as applications running on host systems? And what's the best way to ensure that virtual hosts don't cause a tilt in the delicate IO balance? In this article I outline key considerations for storage architects and administrators who design and implement virtualized systems.

Before we start, let's review the different kinds of virtualized systems. Virtualization can exist at the physical layer or the logical layer. Physical layer virtualization lets system resources be dynamically assigned to operating systems. Logical layer virtualization puts a host operating system, known as a hypervisor, on a single physical box. Hypervisors come in two flavors: type-1, or bare-metal architecture, and type-2, or hosted architecture. A type-1 hypervisor runs as the primary operating system on the physical box; a type-2 hypervisor runs as an application or shell on top of an already running operating system. Operating systems running on the hypervisor are called guest or virtual operating systems. Regardless of the type of virtualization in play, the challenges from the storage side are similar.

Virtualization isn't just VMware
When we talk about virtualization, the first name that comes to mind is VMware. While VMware is the leader in this space with its ESX Server, it isn't alone. AIX shops will more than likely be familiar with IBM's logical partitions (commonly known as LPARs), which can run AIX and Linux. There's also Microsoft with Virtual Server and Virtual PC, and Sun with its Solaris Zones. And the open-source movement isn't far behind with Xen.

When people talk about server virtualization, ask for specifics. What's the rationale behind their selection? More importantly, is it supported in your SAN/storage environment with minimal or no changes, or are costly modifications or additions necessary to make it work?

A new take on sharing IO
While it's true that in most cases connectivity to the guest operating system runs directly or indirectly through the hypervisor, don't assume that's always the case. In some systems, a guest operating system can have access to its own independent host bus adapters (HBAs). In those situations, you have to work with the systems teams to figure out which guests have direct access and which share IO via the hypervisor.

Then there's the issue of how LUNs are presented and mapped from the hypervisor to the guest. For example, IBM has introduced Virtual I/O (VIO), in which the hypervisor (or VIO server) presents LUNs directly to the LPAR via a virtual SCSI adapter; the VIO server knows nothing about how the LPAR uses those LUNs. Another approach is to create logical volumes on the VIO server and present those to the LPAR instead of raw disks. This not only gives you visibility and control over how IO is distributed but, by way of the logical volume manager (LVM), it also minimizes performance bottlenecks. So you have two ways to do it. The first is simpler, but it carries more administrative overhead because disk numbers can change in AIX for any number of reasons. The second, using LVM, lets an underlying layer that controls the disk IDs handle all of the presentation.
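To make the contrast concrete, here's a minimal planning sketch in Python. The hdisk names, LPAR name and vhost adapter are hypothetical, and the mkvdev/mkvg/mklv command strings it prints are only meant to illustrate the shape of a VIO server mapping; check the exact syntax against your VIOS documentation before using anything like it.

```python
# Sketch: two ways a VIO server can present storage to an LPAR.
# All device and adapter names below are hypothetical examples.

def plan_raw_passthrough(hdisks, vhost, lpar):
    """Option 1: map whole physical disks (raw hdisks) straight through.
    Simple, but the LPAR inherits whatever hdisk numbering the VIO server sees."""
    return [
        f"mkvdev -vdev {disk} -vadapter {vhost} -dev {lpar}_vtd{i}"  # illustrative syntax
        for i, disk in enumerate(hdisks)
    ]

def plan_lvm_backed(vg, hdisks, volumes, vhost):
    """Option 2: carve logical volumes from a volume group on the VIO server.
    The LVM layer owns the disk IDs, so the LPAR-visible devices stay stable."""
    cmds = [f"mkvg -vg {vg} " + " ".join(hdisks)]            # illustrative syntax
    for name, size in volumes:
        cmds.append(f"mklv -lv {name} {vg} {size}")          # illustrative syntax
        cmds.append(f"mkvdev -vdev {name} -vadapter {vhost} -dev {name}_vtd")
    return cmds

if __name__ == "__main__":
    print("\n".join(plan_raw_passthrough(["hdisk4", "hdisk5"], "vhost0", "lpar1")))
    print("\n".join(plan_lvm_backed("datavg", ["hdisk4", "hdisk5"],
                                    [("lpar1_lv", "50G")], "vhost0")))
```

Either way, document the mapping; the second option's value is precisely that the LPAR-visible devices are decoupled from the VIO server's hdisk numbering.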

To me, how LUNs are presented and IO is shared between the virtual servers is a critical design issue. This is where the most attention should be paid to create a healthy and scalable virtual environment.

Boot from SAN (with reminders)
The concept of boot from SAN used to be fairly simple: rather than using local disks for the boot image, make a Fibre Channel or iSCSI LUN the boot disk. If you run a tight and disciplined operation, there's value in booting your systems from SAN. Indeed, a key benefit SANs bring to systems virtualization is the ability to create a huge, high-performance repository for storing all of the virtual server boot images. Fair enough. But I have two reminders.

The first is that when you aggregate multiple boot images onto a single large LUN, every virtual server running from that LUN contends for its performance. Take care to maintain a healthy ratio of boot images to LUNs. Instead of creating a single large LUN, you may be better off creating several smaller LUNs and spreading the virtual server images across them, keeping the ratio low and well balanced.
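As a back-of-the-envelope illustration, a few lines of Python show how many LUNs you'd need and how the images would spread. The cap of eight images per LUN is an assumption for the example, not a recommendation; the right number depends on your arrays and workloads.

```python
import math

def spread_boot_images(num_images, max_per_lun):
    """Return how many boot images land on each LUN while keeping
    the images-per-LUN ratio at or below max_per_lun."""
    luns = math.ceil(num_images / max_per_lun)
    base, extra = divmod(num_images, luns)
    # Distribute the remainder so no LUN carries more than one extra image.
    return [base + (1 if i < extra else 0) for i in range(luns)]

# Hypothetical example: virtual server boot images, no more than 8 per LUN.
print(spread_boot_images(40, 8))   # -> [8, 8, 8, 8, 8]
print(spread_boot_images(45, 8))   # -> [8, 8, 8, 7, 7, 7]
```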

The second reminder is that you still need to decide if the hypervisor itself boots from SAN or not. The answer to this depends on whether you follow boot from SAN as a standard practice for the other nonvirtualized hosts in your environment. If you're moving to a fully virtualized environment, it may make sense to go the route of booting the hypervisor from SAN as well.

Treat your virtual servers as "independent apps"
Virtual servers should be treated as independent entities when storage resources are allocated. You wouldn't want an IO-intensive Oracle database to share the same file system as an IO-intensive Web server on a single host. In the same manner, take care that a virtual server running Oracle and one running a Web server don't share storage resources or, if they must, that there are clear separations between them.

This means keeping your data and boot images separate. It's a bad idea to have everything for all virtual servers on a single resource, whether it's a LUN or a file system.

It also isn't wise to create large metaLUNs (each vendor calls them something different) and share them among all of the virtual servers. I see a lot of VMware environments where LUNs of 600GB and larger are presented to a single hypervisor; that doesn't make sense to me. From a storage provisioning perspective, virtual servers and hypervisors should be treated traditionally, and all of the standard storage practices should apply to them.

Be careful of that host mode
Host connectivity settings are sometimes misinterpreted or ignored when it comes to virtualized servers. Many vendors are trying to address this. For example, some newer arrays now include a "VMware" host mode to be set only if you're running VMware ESX Server. If the arrays in your environment don't have such a mode, check with your vendor for the recommended settings. Take care that settings clustered hypervisors depend on, such as SCSI reservations, are configured in advance to prevent any hiccups during failover. Keep in mind that standard multipathing software may not work on hypervisors or virtual systems the same way it works on standard operating systems. Some virtualization vendors bundle their own path management software, which eliminates the need for a third-party add-on.

Virtualization and iPod revolutions
A parallel could be drawn between virtualization and the iPod. For every iPod sold, there are plenty of auxiliary technologies sold (from docking stations to boom boxes) that are designed to make the experience better. Virtualization appears to be headed that way.

One technology I'd like to highlight is N_Port ID Virtualization (NPIV). It's fairly new; IBM pioneered it with virtualization on its System z9. NPIV allows a single Fibre Channel port to register multiple World Wide Port Names (WWPNs) with the fabric name server, and each registered WWPN is assigned its own N_Port ID. In effect, a single physical HBA port can appear as multiple WWPNs in the FC fabric, and you can create and map a WWPN for each virtual server. From a LUN masking perspective, that means each virtual server can have its own set of unique LUNs on the same storage port. Most major virtualization vendors support it.
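As a rough illustration of why per-virtual-server WWPNs simplify masking, here's a short Python sketch. The virtual server names, WWPN values and LUN numbers are made up, and real virtual WWPNs are assigned by the hypervisor or HBA rather than hand-picked like this; the point is only that each guest ends up with its own identity to mask LUNs against.

```python
# Sketch: with NPIV, each virtual server gets its own WWPN behind one physical
# HBA port, so LUN masking can be done per guest. All values are hypothetical.

physical_port_wwpn = "10:00:00:00:c9:aa:bb:01"   # the real HBA port (example value)

# Virtual WWPNs as the fabric would see them after NPIV registration (example values).
virtual_servers = {
    "vm-oracle": "c0:50:76:00:00:00:00:01",
    "vm-web":    "c0:50:76:00:00:00:00:02",
    "vm-mail":   "c0:50:76:00:00:00:00:03",
}

# LUN masking is keyed on the virtual WWPN instead of the shared physical port.
masking = {
    "vm-oracle": [0, 1, 2],   # database LUNs
    "vm-web":    [3],         # web content LUN
    "vm-mail":   [4, 5],      # mail store LUNs
}

for vm, wwpn in virtual_servers.items():
    luns = ", ".join(str(lun) for lun in masking[vm])
    print(f"{vm}: WWPN {wwpn} (behind {physical_port_wwpn}) -> LUNs {luns}")
```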

Remember, virtualization is your friend and it's here to stay. It's prudent to embrace it now, and in the right way, so the benefits aren't felt by just the systems and applications teams but also by the storage crew.

This was first published in December 2007
