Not too long ago, the idea of storing a server's operating system on the SAN was treated with skepticism. Without a high-performance Fibre Channel SAN and experienced support professionals, few organizations felt it was worth the risk of deploying boot from SAN.
Now, with the growing adoption of iSCSI storage, some businesses moving to networked storage are reconsidering having their servers boot from shared storage. Although 10 Gigabit Ethernet iSCSI arrays are not yet widely available, iSCSI is closing the speed gap with Fibre Channel systems. And because Ethernet knowledge is common among IT professionals, iSCSI is easier to set up and maintain. So there are some compelling reasons to implement boot from SAN in an iSCSI array environment.
First, let's look at the benefits. An operating system, plus any applications with files installed on the server's boot volume, is unlikely to consume much more than 10 GB to 15 GB of space. Yet the smallest disk offered on many server vendors' websites is 36 GB. Across the dozen or so servers a small business may run, that purchased-but-unused capacity can add up to more space than many businesses have available on their networked storage.
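To make the waste concrete, here is a minimal sketch of that math in Python. The figures (a dozen servers, a 36 GB smallest disk, roughly 15 GB actually used) come from the article; the assumption that each server's mirrored pair yields 36 GB of usable space is mine.

```python
# Hypothetical stranded-capacity math for a small server fleet.
SERVERS = 12
SMALLEST_DISK_GB = 36   # smallest disk many server vendors offer
USED_GB = 15            # OS plus boot-volume application files

# Assumption: each server runs a RAID 1 pair, so usable space
# equals one disk's capacity.
usable_per_server = SMALLEST_DISK_GB
stranded_per_server = usable_per_server - USED_GB
total_stranded = stranded_per_server * SERVERS

print(f"Unused per server: {stranded_per_server} GB")
print(f"Unused across {SERVERS} servers: {total_stranded} GB")
```

With these figures, over 250 GB of purchased capacity sits idle, more than the total networked storage some small shops own.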
From a power-consumption perspective, having all those underused disks spinning in servers is difficult to justify. Floor space is an issue, too. With boot from SAN, it may be possible to replace physically larger servers -- sometimes 3U or 4U in height -- with diskless 1U servers that have comparable processing and memory capabilities.
You do need a little additional infrastructure in place or, at the very least, a slight change to your existing technology. To find its boot volume, a server must first get an IP address and then be told where that boot volume lives. One way to do this today is with a product from emBoot Inc., which sets up a DHCP server on your storage network.
All servers available today include network cards that can be "PXE booted"; that is, they can get an IP address and boot instructions from the network rather than from any locally held information. Because it's highly recommended that the storage network sit on a separate physical network, or at least a separate VLAN, placing this DHCP server on the storage LAN presents no conflicts with the DHCP service used for your workstations.
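As one illustration of what such a DHCP answer has to carry, here is a hypothetical ISC dhcpd fragment for a storage VLAN, using the iPXE-style root-path convention to point a diskless server at its iSCSI target. Every address, filename and IQN below is invented, and emBoot's product ships its own DHCP service with its own configuration; this is only a sketch of the moving parts.

```conf
# Hypothetical dhcpd.conf fragment for the storage VLAN (all values invented).
subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.100 10.10.10.199;
  next-server 10.10.10.5;        # TFTP server holding the network boot program
  filename "undionly.kpxe";      # boot loader sent to the PXE NIC

  # Option 17 (root-path) tells the booted client where its boot LUN lives.
  # iPXE syntax: iscsi:<target IP>:<protocol>:<port>:<LUN>:<target IQN>
  # (empty fields fall back to defaults)
  option root-path "iscsi:10.10.10.20::::iqn.2007-01.com.example:server01-boot";
}
```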
Armed with its own IP address and the IP address of the storage platform it needs to talk to, the server presents itself and hands over its iSCSI qualified name (IQN), which has been predefined and configured on the array. (The IQN plays the role a worldwide name does in Fibre Channel.) The SAN hands over all the logical unit numbers (LUNs) the server needs, in the correct order, and the server starts the operating system in the normal way. From that point, there's no functional difference between a server with a local set of disks and one that has none.
On board the SAN, something clever can happen as well. In many cases, servers load identical versions of the OS; for application servers (Exchange servers, for example), even the application files may be identical. This allows even greater disk-use efficiency. Here's how it can work: the storage and Windows administrators work together to create a boot LUN with a Windows installation that meets the company's needs. The Windows administrator then prepares the disk for duplication (with Sysprep, for example) and shuts the server down. The storage administrator then effectively clones that LUN 10 or more times.
These cloned images initially take up zero space; they have read access to the original LUN and look like a full, independent set of data to the server that owns them. Each cloned LUN is presented to its server, and the server boots as normal. As files are added (when IIS, Exchange, SQL Server and the like are installed), the cloned LUNs start to grow. Rather than twenty 10 GB LUNs, you end up with one 10 GB LUN and 19 LUNs of just 2 GB or 3 GB each. That's a considerable savings, especially when you consider that each server started with a pair of 36 GB disks (72 GB of raw capacity) and is now down to roughly 3 GB of space used on the array.
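The thin-clone arithmetic above can be sketched as follows. The LUN counts and sizes are the article's illustrative figures; the assumption of a flat 3 GB of unique data per clone (the upper end of the article's range) is mine.

```python
# Hypothetical space comparison: full LUN copies vs. thin clones.
SERVERS = 20
BASE_LUN_GB = 10       # the "golden" Windows boot image
CLONE_DELTA_GB = 3     # unique data each clone accumulates over time

# Without cloning, every server gets a full copy of the boot image.
full_copies = SERVERS * BASE_LUN_GB

# With thin clones, the array stores one full image plus each clone's delta.
thin_clones = BASE_LUN_GB + (SERVERS - 1) * CLONE_DELTA_GB

print(f"Full copies: {full_copies} GB")
print(f"Thin clones: {thin_clones} GB")
print(f"Saved on the array: {full_copies - thin_clones} GB")
```

Even with generous per-clone growth, the array stores roughly a third of what full copies would require.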
So then, what have you ended up with? You have a rack full of servers that are generating a lot less heat because they don't have internal disks. You may even be able to eliminate entire racks of servers. In addition, you have maximized your investment in the disk space you have purchased; rather than dozens of half-used RAID 1 pairs, you have a consolidated set of disks that are used more efficiently.
About the author: Mark Arnold, MCSE+M, Microsoft MVP, is Principal Consultant with LMA Consulting LLC, a Philadelphia, PA-based private messaging and storage consultancy. Mark assists customers in designs of SAN-based Exchange implementations. You can contact him at firstname.lastname@example.org.