Over the last few months, few technologies have grabbed the interest and imagination of storage industry analysts the way blade servers have.
These rack-mount chassis integrate many small servers, each mounted on a separate blade within a single framework. Typically, each server blade holds one or two CPUs, some memory and some disk storage, with all of the blades sharing I/O devices built into the host chassis.
The purported benefits of blade servers over standard rack-mounted servers include lower acquisition cost, easier management, greater server density and better power management.
At first glance, many environments could gain from these improvements, not least High Performance Computing (HPC) labs, in both research establishments and enterprise data centers, which to date have built their server clusters from large racks filled with standard servers.
What has interested me, however, is that in these environments and many others, where blade server technology would appear to be a sure winner, there has so far been little to no adoption.
My curiosity piqued by this lack of adoption, I have been speaking to administrators who currently deploy rack-mounted servers to determine why this is the case.
First, I was surprised to learn that the cost of a blade server, despite its limited functionality, is often as high as that of a fully configured rack-mounted server with similar specifications. Once the additional cost of the blade server chassis is factored in, the overall acquisition cost exceeds that of a comparable set of rack-mounted servers.
Second, in a blade server the I/O paths are shared, limiting the amount of peripheral I/O that can take place, such as disk I/O or server-to-server network communication. This is not a problem when communication stays within the blade chassis, but in real environments that is rarely the case: enterprise and HPC server counts easily grow into the dozens or hundreds, while most blade chassis hold only a single-digit number of servers. Power consumption is also reported to be roughly on par with that of rack-mounted servers.
System administrators I spoke with cited additional reasons why blade servers did not make sense for their environments. Blades cannot be retired and replaced in the same way regular rack-mounted servers can, and flexibility is lost in how servers can be interconnected. In HPC environments, where the layout and design of the server interconnect is paramount, this was seen as a serious issue. A potential lack of upgradability was also cited.
Given all of these restrictions, increased server density remains the sole driver for blade server adoption. Weighed against the loss of flexibility a blade chassis imposes, and the absence of any real cost savings, even that advantage becomes less attractive. Until blade servers overcome these limitations, it is exceedingly difficult to see how they will expand beyond a niche sale.
Copyright 2003, Blue Arc Corporation.