Virtualization can be a tricky technology for storage managers who need to apply traditional methods while navigating new obstacles.
Before we start, let's review the different kinds of virtualized systems. Virtualization can exist at the physical layer or the logical layer. Physical layer virtualization dynamically assigns system resources to operating systems. Logical layer virtualization runs a host operating system (also known as a hypervisor) on a single physical box, and it comes in two flavors: a type-1 or bare-metal architecture hypervisor, and a type-2 or hosted architecture hypervisor. With a type-1 hypervisor, the hypervisor runs as the primary operating system on the physical box; with a type-2 hypervisor, it runs as an application or shell on top of an already running operating system. Operating systems running on the hypervisor are then called guest or virtual operating systems. Regardless of the type of virtualization in play, the challenges from the storage side are similar.
Virtualization isn't just VMware
When we talk about virtualization, the first name that comes to mind is VMware. While VMware is the leader in this space with its ESX Server, it isn't alone. AIX shops will more than likely be familiar with IBM's logical partitions (commonly known as LPARs), which can run AIX and Linux. There are also Microsoft's Virtual Server and Virtual PC, and Sun with its Solaris Zones. And the open source movement isn't far behind with Xen.
When people talk about server virtualization, ask for specifics. What's the rationale behind their selection? More importantly, is it supported in your SAN/storage environment with minimal or no changes, or are costly modifications or additions necessary to make it work?
A new take on sharing IO
Then there's the issue of how LUNs are presented and mapped from the hypervisor to the guest. For example, IBM has introduced VIO, in which the hypervisor (or VIO server) presents LUNs directly to the LPAR via a virtual SCSI adapter. The VIO server knows nothing about how the LPAR uses these LUNs. Alternatively, you can create logical volumes on the VIO server and present those to the LPAR instead of raw disks. This not only gives you visibility and control over how IO is distributed but, through the logical volume manager (LVM), it helps minimize performance bottlenecks. So you have two ways to do it. The first way is simpler but results in more administrative overhead because disk numbers can change in AIX for any number of reasons. The LVM approach lets an underlying layer control disk IDs and all of the presentation.
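As a rough sketch of the two approaches from the VIO server's restricted shell (the disk, volume group, and adapter names here are hypothetical; check your own lsdev and lspv output before mapping anything):

```shell
# Approach 1: present a raw physical disk straight through to the LPAR.
# The VIO server has no view into how the LPAR uses it.
mkvdev -vdev hdisk5 -vadapter vhost0 -dev vtscsi0

# Approach 2: carve a logical volume out of a VIOS volume group and
# present that to the LPAR instead of the raw disk.
mkvg -vg clients_vg hdisk5          # volume group on the backing disk
mklv -lv lpar1_lv clients_vg 20G    # 20GB logical volume for the LPAR
mkvdev -vdev lpar1_lv -vadapter vhost0 -dev vtscsi1

# Verify what is mapped on the virtual SCSI adapter
lsmap -vadapter vhost0
```

With the second approach, the LPAR's disk identity is anchored to the logical volume name rather than to an hdisk number that can shift after reconfiguration.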
To me, how LUNs are presented and IO is shared between the virtual servers is a critical design issue. This is where the most attention should be paid to create a healthy and scalable virtual environment.
Boot from SAN (with reminders)
The concept of boot from SAN used to be fairly simple: rather than using local disks for the boot image, make a Fibre Channel or iSCSI LUN the boot disk. If you run a tight and disciplined operation, there's value in booting your systems from SAN. Indeed, a key benefit SANs bring to systems virtualization is the ability to create a huge, high-performance repository for storing all of the virtual server boot images. Fair enough. But I have two reminders.
The first is that when you aggregate multiple boot images onto a single large LUN, that LUN's performance comes under pressure from all of the virtual servers running on it. Take care to maintain a healthy boot-image-to-LUN ratio. Instead of creating a single large LUN, you may be better off creating several smaller LUNs and spreading the virtual server images across them, thus maintaining a lower, well-balanced ratio.
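To make the ratio concrete, here's a minimal sketch; the per-LUN ceiling of five images and the 20GB image size are illustrative assumptions, not a vendor recommendation:

```python
import math

def plan_boot_luns(num_images: int, max_images_per_lun: int,
                   image_size_gb: int) -> tuple[int, int]:
    """Plan how many boot LUNs keep the image-to-LUN ratio at or
    below a chosen ceiling, and how big each LUN needs to be."""
    luns_needed = math.ceil(num_images / max_images_per_lun)
    lun_size_gb = max_images_per_lun * image_size_gb
    return luns_needed, lun_size_gb

# 40 virtual server boot images, at most 5 per LUN, 20GB each:
# eight 100GB LUNs instead of one 800GB LUN carrying all 40 images.
luns, size_gb = plan_boot_luns(40, 5, 20)
```

The point of the exercise is that total capacity stays the same; what changes is how many boot images contend for any one LUN.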
The second reminder is that you still need to decide if the hypervisor itself boots from SAN or not. The answer to this depends on whether you follow boot from SAN as a standard practice for the other nonvirtualized hosts in your environment. If you're moving to a fully virtualized environment, it may make sense to go the route of booting the hypervisor from SAN as well.
Treat your virtual servers as "independent apps"
This means keeping your data and boot images separate. It's a bad idea to have everything for all virtual servers on a single resource, whether it's a LUN or a file system.
It also isn't wise to create and share large metaLUNs (each vendor calls them something different) among all of the virtual servers. I see a lot of VMware environments where LUNs of 600GB and higher are presented to a single hypervisor. This doesn't make sense to me. Virtual servers and hypervisors should be treated in a traditional sense from a storage provisioning perspective, and all of the standard storage practices should apply to them.
Be careful of that host mode
Host connectivity is sometimes misinterpreted or ignored when it comes to virtualized servers. Many vendors are trying to address this. For example, some of the newer arrays now contain a "VMware" mode to be set only if you're running VMware ESX Server. If the arrays in your environment don't have such a mode, check with your vendor to learn the recommended settings. Take care to ensure that specific settings, such as the SCSI reservations used to implement clustered hypervisors, are configured in advance to prevent any hiccups during failover. Keep in mind that standard multipathing software may not work on hypervisors or virtual systems in the same manner it works on standard operating platforms. Some virtualization vendors bundle their own path management software, which eliminates the need for a third-party add-on.
Virtualization and iPod revolutions
One technology I'd like to highlight is N_Port ID Virtualization (NPIV). It's fairly new and was pioneered by IBM for virtualization on its System z9. NPIV allows a single FC port to register more than one World Wide Port Name (WWPN) with the fabric name server; each registered WWPN is assigned its own unique N_Port ID. A single physical HBA port can thus appear as multiple WWPNs in the FC fabric, which lets you create and map a WWPN for each virtual server. From a LUN masking perspective, each virtual server can then have its own set of unique LUNs on the same storage port. Most major virtualization vendors support it.
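A minimal sketch of what NPIV changes on the masking side: the array grants LUNs to each virtual server's own WWPN rather than to the shared physical port. All WWPNs, server names, and LUN IDs below are fabricated for illustration:

```python
# One physical N_Port on the HBA; NPIV lets multiple WWPNs log in
# through it, each receiving its own N_Port ID from the fabric.
PHYSICAL_PORT_WWPN = "10:00:00:00:c9:00:00:01"

# Virtual WWPNs assigned per virtual server via NPIV (hypothetical).
virtual_wwpns = {
    "vm-web01": "c0:50:76:00:00:00:00:01",
    "vm-db01":  "c0:50:76:00:00:00:00:02",
}

# Array-side LUN masking keyed on the virtual WWPN, not the shared
# physical port, so each virtual server sees only its own LUNs.
lun_masking = {
    "c0:50:76:00:00:00:00:01": [0, 1],       # boot + data for vm-web01
    "c0:50:76:00:00:00:00:02": [0, 1, 2],    # vm-db01's own set
}

def visible_luns(vm: str) -> list[int]:
    """LUNs a virtual server sees through its own virtual WWPN."""
    return lun_masking[virtual_wwpns[vm]]
```

Without NPIV, both virtual servers would log in as the single physical WWPN and the array could not tell them apart for masking purposes.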
Remember, virtualization is your friend and it's here to stay. It's prudent to embrace it now, and in the right way, so the benefits aren't felt by just the systems and applications teams but also by the storage crew.