Guide to LUN configuration and virtualisation
A comprehensive collection of articles, videos and more, hand-picked by our editors
Deploying block storage for a VMware environment brings a number of benefits compared with file storage. Among them is improved performance, stemming from protocol offloads that free the CPU for other tasks and, in the case of a Fibre Channel SAN, from a dedicated high-speed network. However, block storage for VMware has its drawbacks, such as cost and the complexity of LUN management. In this video from a Storage Decisions seminar, storage expert Howard Marks discusses VMware SAN configuration issues that could cause problems in your environment. Read the transcript below or watch the video.
How does VMware talk to block storage?
VMware can support up to 256 Fibre Channel or iSCSI LUNs. (LUN means logical volume. In the SCSI world, a SCSI disk has a logical unit number, and when we started building storage area networks, somehow "volume" was too difficult a concept or too big a word to use, so we started calling them LUNs. ... LUN stands for logical unit number. So whenever you see LUN, just say, "Yes, a disk -- a virtual disk.")
One VMware server can talk to up to 256 disks. It includes basic multipathing, [which means] one server can talk through multiple network ports to the same disk. This is good because if a switch fails or if somebody unplugs a cable that they shouldn't have … it will fail over. That also means you have more bandwidth, so for those very busy systems, you can load balance across multiple links. Although one plus one does not equal two; one plus one equals about 1.6 [because] the load balancing is not 100% efficient.
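The "one plus one equals about 1.6" point above can be sketched as a quick calculation. This is an illustrative Python sketch, not VMware code; the function name and the ~60% scaling factor for each extra path are assumptions derived from the talk's rule of thumb.

```python
# Illustrative sketch (not VMware code): effective bandwidth across
# load-balanced paths, treating the talk's "one plus one equals about
# 1.6" as a ~60% contribution from each additional path.
def effective_bandwidth(link_gbps, num_paths, extra_path_factor=0.6):
    """First path runs at full speed; each extra path adds ~60% of a link."""
    if num_paths < 1:
        raise ValueError("need at least one path")
    return link_gbps * (1 + (num_paths - 1) * extra_path_factor)

# Two 8 Gb/s Fibre Channel paths behave like roughly 12.8 Gb/s, not 16.
print(effective_bandwidth(8, 2))
```

The point of the sketch is simply that adding paths buys redundancy first and bandwidth second, and the bandwidth gain is sublinear.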
In VMFS [Virtual Machine File System], a clustered file system, multiple servers can access … one logical disk at the same time. That works because each virtual server is a VMDK [Virtual Machine Disk Format] file, and only one host at a time accesses a given VMDK. So arbitration is managed by [rules that say], "This virtual server belongs to you, and that virtual server belongs to me."
But whenever anything occurred that required changes to the file system metadata -- when a new VMDK got created, when a snapshot got created, when a VMDK's size changed -- [that] required coordination among all the hosts accessing the shared disk. That meant one host had to tell all the others, "I am going to reserve this whole disk for the time it takes me to make these changes so that we do not step on each other's toes."
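The reservation behavior described above can be modeled as a whole-LUN lock. This is an illustrative model, not VMware code; the class and method names are hypothetical, and a `threading.Lock` stands in for the SCSI reservation that blocks every other host during a metadata change.

```python
import threading

# Illustrative model (not VMware code): pre-VAAI VMFS serializes
# metadata changes by reserving the whole shared LUN, so every other
# host must wait out the reservation window.
class SharedLun:
    def __init__(self):
        self._reservation = threading.Lock()  # stands in for a whole-LUN SCSI reservation
        self.metadata_ops = []

    def metadata_change(self, host, op):
        # e.g. creating a VMDK, taking a snapshot, growing a VMDK
        with self._reservation:  # all other hosts block until released
            self.metadata_ops.append((host, op))

lun = SharedLun()
lun.metadata_change("esx-01", "create snapshot")
lun.metadata_change("esx-02", "grow vmdk")
print(lun.metadata_ops)
```

The design point the model captures is that the lock's scope is the entire disk, not the one VMDK being changed -- which is why frequent metadata operations hurt everyone sharing the data store.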
The time it takes to make changes is a second or two, but if you have more than 10 or 20 virtual machines in a single VMFS data store, and you do things like backups that generate snapshots in the middle of the day, you might see some performance impact from that. So, the rule of thumb for VMware without VAAI [vStorage APIs for Array Integration] is that you shouldn't have more than about eight virtual servers in a single data store on a SAN system. … VMFS can support multiple extents -- that is, one file system can span multiple logical disks.
Most VMware administrators avoid doing that, because it didn't work so well in VMware 2 and VMware 3, and we have long memories. In VMware 4, one extent is limited to 2 TB; in VMware 5, that gets extended to 64 TB. Raw device mappings -- that is, a path where a virtual server has direct access to a disk -- can be up to [64 TB], but raw device mappings have some limitations.
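The eight-VMs-per-data-store rule of thumb above reduces to a quick sizing calculation. This is a back-of-the-envelope sketch; the function name is hypothetical and the figure of eight comes from the talk's pre-VAAI guidance, not from any VMware maximum.

```python
import math

# Rough sizing sketch for the pre-VAAI rule of thumb: keep no more
# than about eight busy virtual servers in one VMFS data store.
def datastores_needed(vm_count, vms_per_datastore=8):
    """Minimum number of data stores to stay within the rule of thumb."""
    return math.ceil(vm_count / vms_per_datastore)

# 100 virtual machines at 8 per data store -> 13 data stores.
print(datastores_needed(100))
```

With VAAI offloading the locking to the array, the per-data-store count can go much higher, which is exactly why the talk qualifies the rule as "without VAAI."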