This article can also be found in the Premium Editorial Download "Storage magazine: Hot tips for buying storage technology."
Blade servers can lower storage costs
Blade servers are becoming popular as a way to centralize server computing resources and share common components. In that sense, blade servers are similar to disk subsystems, where the devices all benefit from common power and packaging.
Naturally, the discussion about saving money on storage area network (SAN) costs and eliminating unnecessary ports for server farms raises the question of whether blades can also share storage and SAN connections, further reducing costs. As with all computer configuration topics, the answer is: "it depends."
For starters, it's advisable to take boot drives off blade servers to make the blades as reliable as possible. There's no need to add the cost of mirrored disk drives to a server blade. That means the blades should be able to use network boot technology such as Intel's Preboot Execution Environment (PXE). If network boot can be used, then all the blades can share a common boot image, which could be a set of high-availability mirrored disk drives or a memory/flash memory disk. Network access to the boot image is made through the network connections integrated in the backplane of the blade server.
Similarly, to reduce costs further and increase reliability, SAN connections can also be integrated into a blade server's backplane, and integrated with an internal SAN switch. For example, IBM blade servers include an integrated 16-port Fibre Channel switch that communicates to optional host bus adapter (HBA) modules in the blade server cards. It's possible to connect to an external switch or director using dual connections for reliability from the embedded switch.
If the blade server package is done well, the management of all this can be straightforward, including the setup of boot configurations and switch fencing (zoning or virtual SANs). Using blade servers doesn't eliminate the need for HBAs and switches, so the number of ports isn't really decreased beyond those of a single-connection SAN, but the management and ease of integration are likely to pay for themselves many times over in the life cycle of the blade server.
Shrinking SAN costs
With PC server hardware costing less than $5,000, it's hard to justify a 40% to 50% SAN tax for dual connections. Instead, it makes much more sense to consider using inexpensive spare systems. Using an N+1 approach where a single spare server provides redundancy for 20 or so production systems, it's possible to have fairly inexpensive data redundancy. If you lose a server or a SAN connection, the spare can step in and do the work. There's no immediate and automatic failover, but these servers aren't usually shouldering critical applications.
If you ask several people what the cost of a SAN connection is, you'll get many different answers. There are list prices, street prices and even eBay prices. For our calculations, a cost of $1,500 per connection is used. That figure was chosen because it's a conservative number that doesn't overstate the cost of a SAN connection. The calculation assumes small, inexpensive Fibre Channel (FC) switches priced at approximately $500 per port and an HBA price of $1,000.
Let's crunch the numbers: If you save $1,500 per server by not using redundant connections on 20 servers, that amounts to $30,000. Then, if you install a spare server that costs $6,500 ($5,000 + $1,500 for HBA and switch ports), the amount of money saved is $23,500, compared to outfitting 20 dual-connected systems.
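The arithmetic above can be sketched in a few lines of Python. The dollar figures are the article's own 2004-era assumptions, not vendor quotes:

```python
# Cost assumptions from the text (illustrative only).
SERVER_COST = 5_000           # inexpensive PC server
SAN_CONNECTION_COST = 1_500   # ~$500 switch port + ~$1,000 HBA

servers = 20
# Savings from dropping the second (redundant) SAN connection on each server.
savings = servers * SAN_CONNECTION_COST          # $30,000
# One N+1 spare server, outfitted with its own single SAN connection.
spare_cost = SERVER_COST + SAN_CONNECTION_COST   # $6,500
net_savings = savings - spare_cost
print(net_savings)  # 23500
```

Adjusting the per-connection cost or the number of servers shows how quickly the savings scale with farm size.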
With single connections, a SAN still provides flexible access to data resources, superior scalability and centralized management control. Not only that, but backing up these systems over a SAN is a thousand times easier than backing them up over the LAN. There's no reason why single-connection SANs can't be as much a part of the SAN infrastructure as a large director-class implementation. Single-connection SANs are just targeted to a different set of requirements.
The main problem to watch for in a single-connection SAN is a switch failure. Obviously, if a switch fails none of the systems connected through it will be able to access their storage. So, the goal of the topology for the single-connection SAN is to reduce the overhead needed to accommodate the loss of a single switch. In other words, use eight-port switches. With eight-port switches, you'll have more switches to manage, but the number of connections to manage per switch is limited.
For example, assume an eight-port switch has six systems connected to it with two ports left over to connect to storage subsystems or other switches. If a switch fails, it will be necessary to reconnect the servers from the failed switch to any six available ports. In other words, you need to reserve six ports that you can use at a moment's notice. One way to guarantee there are ports available without running into other access control problems is to have a spare eight-port switch ready to take the place of a failed switch. Keep in mind that this isn't necessarily just a matter of reconnecting cables; keep configuration and zoning information available for all production switches. Fortunately, an eight-port switch is much easier to configure than a 32-, 64- or 128-port switch.
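The six-servers-per-switch layout described above implies a simple sizing rule. A rough helper, assuming the article's eight-port switches with two ports reserved for storage or interswitch links, might look like this:

```python
import math

PORTS_PER_SWITCH = 8
UPLINK_PORTS = 2                                 # reserved for storage or ISLs
SERVER_PORTS = PORTS_PER_SWITCH - UPLINK_PORTS   # 6 server connections each

def switches_needed(servers: int, cold_spare: bool = True) -> int:
    """Edge switches for a single-connection SAN, optionally plus one cold spare."""
    count = math.ceil(servers / SERVER_PORTS)
    return count + (1 if cold_spare else 0)

print(switches_needed(40))  # 7 edge switches + 1 spare = 8
```

The cold spare carries no traffic; it exists so a failed switch's six servers can be recabled immediately without hunting for free ports.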
Cranking through the dollar wheel again, let's assume this time that there are 40 production servers, two spare servers and a spare switch. Saving $1,500 per server on 40 systems by avoiding dual connections equals $60,000. To offset that, there are two spare servers at $6,500 each and a spare eight-port switch at $6,000 (calculated here at eight multiplied by $750) for an alternative redundancy cost of $19,000, resulting in a total SAN cost reduction of $41,000. There are many ways to spin these numbers, but the key variables to consider are the number of spare systems and the cost of the spare switch. Users with older 16-port switches that no longer have a place in their primary SAN could redeploy them at very low costs.
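The second scenario's numbers can be checked the same way, again using the article's assumed prices:

```python
# 40 production servers, two N+1 spares, one spare eight-port switch.
SERVER_COST = 5_000
SAN_CONNECTION_COST = 1_500
SPARE_SWITCH_COST = 8 * 750                        # $6,000

servers = 40
savings = servers * SAN_CONNECTION_COST            # $60,000
spare_servers = 2 * (SERVER_COST + SAN_CONNECTION_COST)  # $13,000
redundancy_cost = spare_servers + SPARE_SWITCH_COST      # $19,000
print(savings - redundancy_cost)  # 41000
```

As the article notes, the number of spare systems and the price of the spare switch are the variables worth experimenting with; a redeployed 16-port switch could drop the redundancy cost close to the spare servers alone.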
This is an environment where a core-edge topology makes a lot of sense. Connecting the eight-port switches to existing corporate SAN switches allows PC servers to use storage resources on existing storage subsystems, where they can be centrally managed. While systems connect to switches over a single-SAN connection, there are dual connections for interswitch links (ISLs), or links connecting to storage. This provides redundant protection on the paths carrying data for all systems connected to the switch. The performance of this design should be more than adequate; in fact, there's an abundance of bandwidth for server connections.
However, the single-connection SAN doesn't have to be connected to another SAN to be effective. Medium-sized businesses without SANs that can't afford a fully redundant SAN could build a single-connection SAN for much less money. With no existing SAN storage subsystem, the design would need to include a way to connect to storage. This could either be done through one or two additional switches functioning as backbone switches or by connecting the eight-port switches directly to a multiported storage subsystem. Additional switches certainly skew the cost calculations here, but they also provide room for expansion, including such niceties as connecting to centralized tape backup equipment.
Of course, there are many more things to look at to make network storage more affordable than the price of the technology. But by keeping a realistic focus on the availability needs of the application and by leveraging system redundancy techniques, it's possible to significantly cut the cost of SAN components, extending the benefits of the SAN to many more systems.
This was first published in March 2004