Midrange products are hot. What's your take on why growth seems to be happening in that area?

Rich Lechner, vice president of storage systems, IBM: We are seeing growth in midrange products in the market at large. For one thing, SMBs [small and midsized businesses] are growing at about twice the rate of the market at large. SMBs -- which we characterize as companies with fewer than 1,000 employees -- are going to be more interested in midrange products. Also, as people begin to deploy a tiered storage environment, midrange storage becomes very attractive for nearline storage and data retention, just as tape would be very attractive for long-term retention. Another driver is the rapid growth of open systems on the server side --
From a user standpoint, where's the line, currently, between high-end and midrange equipment? What features or functionality can they only get in a high-end system?

Lechner: That's a great question in the sense that there has been this great divide between enterprise-class and midrange storage. The differences between the two have generally been in terms of redundancy and availability characteristics. The replication and copy services in the enterprise class have not generally been available in midrange storage offerings. The hardware reliability has typically not been as high -- we're approaching a field availability record of 5 "9's" -- that's 99.999% availability -- with the [high-end] Shark, and approaching 6 "9's" of availability with the DS8000 [array]. 5 "9's" is what we've been delivering for years with the mainframe -- kind of the gold standard in the industry. Those capabilities aren't generally seen in the midrange. Typically, with a midrange array like EMC's Clariion, you'll see 4 "9's" at the most, because you won't see multipathing to each individual drive.
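The "N nines" figures quoted above translate directly into permitted annual downtime: each additional nine cuts the downtime budget tenfold. A quick back-of-the-envelope sketch of that arithmetic (the function name is ours, used purely for illustration):

```python
# Annual downtime implied by "N nines" of availability.
# e.g. 5 nines = 99.999% available, i.e. at most ~5 minutes down per year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes_per_year(nines: int) -> float:
    """Downtime per year allowed by an availability of `nines` nines."""
    availability = 1 - 10 ** -nines   # 5 nines -> 0.99999
    return (1 - availability) * MINUTES_PER_YEAR

for n in (4, 5, 6):
    print(f"{n} nines: {downtime_minutes_per_year(n):.2f} min/year")
```

So the gap between a midrange array's four nines and the mainframe-class five nines is roughly 53 minutes versus 5 minutes of downtime a year.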
With our DS6000 [midrange array], though, there's no single point of failure anywhere in the device, but with your typical midrange storage device you'll find multiple single points of failure … Historically in the industry, vendors have provided a completely different set of tools and capabilities with the high end than with midrange devices. So you could say we have significantly blurred the boundary between the two domains. From IBM's point of view, then, today the difference is in vertical scalability -- how much storage can you put into a single footprint?
What about IBM's deal to resell NetApp boxes? Can you clarify which NetApp products IBM plans to sell and when?

Lechner: Our intention is to rebrand and resell pretty much the entire NetApp [Network Appliance Inc.] portfolio. We'll be introducing the first of those offerings later this month. We haven't announced the dates yet, but products will probably be shipping in the second half of this year and the first quarter of next year. It'll include not only NAS offerings, from the entry point all the way up across NetApp's portfolio, but also virtualization technologies. We believe those complement our own technologies, SVC [SAN Volume Controller] and SFS [SAN File System], very well.
How so?

Lechner: Our virtualization offerings provide virtualization at the block level and, with SFS, at the file level. What NetApp provides is virtualization for NAS devices, and that complements our block-based virtualization capabilities. That covers the broad spectrum of network types that our customers may have.

Charlie Andrews, director of storage marketing, IBM: As we integrate more NAS- and IP-structured, file-based products into our portfolio, we want to extend the virtualization capabilities that we have across the entire portfolio and make a stronger overall offering.
Lechner: The strength of a virtualization strategy from any vendor is the breadth and depth of its coverage. What I mean by that is -- we believe it's critically important that we help clients virtualize their entire infrastructure across all layers, and that we support as broad a set of devices as possible in the marketplace, because we assume that our clients have highly diversified environments. That diversity can manifest itself in many places. One manifestation is in the different types of physical devices the customer may have chosen; they may also have a wide variety of host environments. The value of a virtualization strategy is really founded on just how wide an environment it supports and how deep into a fabric it can go. The reason we think there's good synergy with NetApp is that they allow us to cover a part of the client's fabric we hadn't been able to get to before and to virtualize it.
It seems like IBM may be "dipping its toe in the water," so to speak, with NetApp. If there's any traction with reselling these products, would it make sense for IBM to acquire the company?

Lechner: I don't think so. It's not our strategy to acquire them. We are partners with NetApp just as we partner with other vendors in the industry, such as Cisco, Brocade and McData -- sharing innovations and making sure our products work together for the client -- but we're not discussing or even considering any sort of acquisition.