On the management front:
Here is where it gets really interesting. Everyone wants to own this space, because it is where vendors can differentiate themselves. Vendors are taking two approaches. One is for a hardware vendor to try to write all the code needed to own the space itself: software that manages everyone's storage, switches, virtualization engines, and servers, all tied into one cohesive console. This also means the vendor must continually update that code whenever any other vendor changes something, and it requires cooperation from all competitors. The goal is virtuous, but the task is Herculean.
The other approach is for hardware vendors to write an API for managing the device that conforms to industry-standard protocols (such as CIM, SNMP, and XML) and let the framework vendors like OpenView, Veritas, Unicenter, BMC, and Tivoli tie into that API so their devices can be managed directly from the framework console. This approach lets the hardware vendors build the interface from a device perspective and lets the software vendors use that code to get a complete picture of the entire enterprise. Any customer could then use a single console for management all the way from the application down through the stack to the bits on the disk, with solid reporting and error management for every device in the stack.
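As a rough sketch of the second approach, a device might expose its status as standards-based XML that any framework console can consume. The element names and schema below are purely illustrative, not from any real vendor's API:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML status report a device's management API might return.
# The schema here is made up for illustration only.
DEVICE_STATUS_XML = """\
<device vendor="ExampleCorp" type="storage-array">
  <volume id="vol01" capacity_gb="500" state="online"/>
  <volume id="vol02" capacity_gb="250" state="degraded"/>
</device>
"""

def report_faults(xml_text):
    """Return the ids of volumes that are not online.

    A framework console could poll many devices this way and surface
    the faults in a single enterprise-wide view.
    """
    root = ET.fromstring(xml_text)
    return [v.get("id") for v in root.iter("volume")
            if v.get("state") != "online"]

print(report_faults(DEVICE_STATUS_XML))  # ['vol02']
```

Because the interface is a standard protocol rather than vendor-specific code, the framework vendor only parses the agreed-upon format; it does not need updating every time a device's internals change.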
On the virtualization front:
Here is where I think we will see serious consolidation over the next 18 months, and where I also see the most exciting activity in the storage industry. Numerous startups have appeared over the last year or two with cool widgets in this area. Most offer an in-band virtualization solution using industry-standard servers running the virtualization software. This allows many diverse storage arrays to be tied together into a virtual pool, then sliced up and handed out to application server clients over Fibre Channel or IP connections. Some solutions bundle remote mirroring and/or replication software. This method operates more at the storage level, or you could even consider it "halfway" between the fabric and the storage, since all I/O must pass through the servers running the virtualization software. The vendor names in this space are too many to mention, and we have already seen some consolidation: some of these startups are being bought by the larger server companies, while others are still out there and doing well. I expect further consolidation in this segment as the market decides who the winners will be.
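The pooling idea can be sketched in a few lines. This is a toy model, with made-up array names, that aggregates capacity from heterogeneous arrays and carves it into virtual LUNs; a real in-band product would also sit in the data path and remap every I/O:

```python
class VirtualPool:
    """Toy in-band virtualization layer: pools capacity from diverse
    storage arrays and slices it into virtual LUNs for clients.
    Illustrative only; array names and sizes are assumptions."""

    def __init__(self, arrays):
        # arrays: dict mapping array name -> free capacity in GB
        self.free = dict(arrays)
        self.luns = {}

    def total_free(self):
        return sum(self.free.values())

    def carve_lun(self, lun_id, size_gb):
        """Allocate size_gb for a LUN, possibly spanning several arrays."""
        if size_gb > self.total_free():
            raise ValueError("not enough pooled capacity")
        extents, remaining = [], size_gb
        for name in list(self.free):
            if remaining == 0:
                break
            take = min(self.free[name], remaining)
            if take:
                self.free[name] -= take
                extents.append((name, take))
                remaining -= take
        self.luns[lun_id] = extents
        return extents

# A 120 GB LUN spans two smaller arrays transparently to the client.
pool = VirtualPool({"array_a": 100, "array_b": 50})
print(pool.carve_lun("lun0", 120))  # [('array_a', 100), ('array_b', 20)]
```

The point of the exercise: the client sees one 120 GB LUN even though no single physical array could have provided it, which is exactly the value of pooling diverse arrays.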
The harder virtualization method, and the one where standards need to be in place before it takes off, is the out-of-band approach; SNIA is currently considering standards in this area. Here, IP carries the metadata server (control) traffic, out of band from the actual I/O taking place in the SAN. Out-of-band virtualization can scale to the enterprise level and can even be used to manage "global" SANs. It fits the "utility" model for storage: any server simply gets what it needs when it is plugged into the network and the SAN, coming close to true "plug and play" for servers on the storage side. Out-of-band virtualization is implemented at the fabric level, and there are competing standards here as well.
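The control/data split can be sketched as follows. In this toy model (names and layout are assumptions, not from the SNIA drafts), a host asks the metadata server over IP where a virtual block lives, then performs the actual I/O directly over the SAN:

```python
class MetadataServer:
    """Toy out-of-band control path. Hosts query this server over IP
    for block mappings; the data I/O itself never passes through it.
    Mapping table and names are illustrative assumptions."""

    def __init__(self):
        # virtual LUN -> ordered extents of (physical array, start, length)
        self.map = {"vlun0": [("array_a", 0, 1000), ("array_b", 0, 500)]}

    def resolve(self, vlun, block):
        """Translate a virtual block address to a physical location."""
        offset = block
        for array, start, length in self.map[vlun]:
            if offset < length:
                return (array, start + offset)
            offset -= length
        raise ValueError("virtual block out of range")

mds = MetadataServer()
# Block 1200 falls past array_a's 1000-block extent, into array_b.
print(mds.resolve("vlun0", 1200))  # ('array_b', 200)
```

Because only small mapping lookups traverse the control path, the approach avoids the in-band bottleneck of funneling all I/O through virtualization servers, which is why it can scale to enterprise and global SANs.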