As storage technology has evolved, storage management has grown increasingly sophisticated, and sometimes problematic given the vast selection of products and vendors.
The weakest area of storage management may be the process of determining requirements, evaluating new technology and designing an effective solution based on the findings.
The following are a few points worth considering when searching for new storage solutions or proceeding with a storage design:
Have a clear understanding of SAN/NAS components (e.g. storage servers, tape libraries, Fibre Channel directors and switches, iSCSI gateways, SAN data gateways). What can and can't they do? Components are still platform dependent, and you'll struggle with interoperability across vendors regardless of what the marketing specs say. Even a highly advanced component will perform poorly in a badly designed solution.
Know what storage virtualization really means. What can and can't it do? Where is it feasible? Will it work as the vendor claims, regardless of the underlying components? For example, if platform A is not supported by storage server B, could DataCore "virtually" fill that gap to fully utilize the underlying storage as they claim? Storage virtualization should not be considered a solution for interoperability in a storage network.
Decide what disaster recovery (DR) plan fits best in the current workflow. A simple but effective setup is most likely the least costly and most efficient, regardless of what the vendors claim. Unless a comprehensive end-to-end test is performed and validated, there is no guarantee the total solution will work as designed, especially a sophisticated one. Small, isolated tests are not sufficient.
Account for the five 9's and high availability. Because thoroughly testing these features is costly, especially in a large environment or with a "hot" DR site, they are commonly assumed to work as specified. As a result, problems or conflicts in the various failover scenarios may not be discovered until after a real failure.
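Even before any testing, availability claims can be sanity-checked on paper. The sketch below shows the standard series/parallel availability arithmetic; the component availability figures are illustrative assumptions, not vendor data.

```python
# Sanity-check availability claims before trusting "five 9's" marketing.
# The 99.9% per-component figures below are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def downtime_hours(availability: float) -> float:
    """Expected downtime per year for a given availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

def serial(*avail: float) -> float:
    """Components in series: all must be up (e.g. host -> switch -> array)."""
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail: float) -> float:
    """Redundant components: at least one must be up (e.g. dual fabrics)."""
    down = 1.0
    for x in avail:
        down *= (1 - x)
    return 1 - down

# A single path through HBA, switch and array, each at 99.9%:
single_path = serial(0.999, 0.999, 0.999)

# Two independent paths of the same chain:
dual_path = parallel(single_path, single_path)

print(f"single path: {single_path:.5f}, {downtime_hours(single_path):.1f} h/yr")
print(f"dual path:   {dual_path:.7f}, {downtime_hours(dual_path):.2f} h/yr")
```

Note that three 99.9% components in series already fall to roughly 99.7%, over a full day of downtime per year, which is why a single unduplicated component can quietly break a five 9's promise.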
For marketing info and benchmarks, ask whether those numbers are practical. A common mistake in solution design is to use these best-case numbers to calculate the performance of a new setup. For example, depending on the connecting topology and the driving software, the effective data rate of an LTO drive can vary from 15GB/hr to 100GB/hr.
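The gap between spec-sheet and observed rates translates directly into the backup window. A minimal sketch, in which the dataset size, drive count and both rates are hypothetical figures chosen for illustration:

```python
# Backup-window estimate: the same job sized with marketing vs. observed rates.
# Dataset size, drive count and rates are hypothetical, for illustration only.

dataset_gb = 2000          # nightly backup set, in GB (assumed)
drives = 4                 # LTO drives streaming in parallel (assumed)

marketing_rate_gb_per_hr = 100   # vendor spec-sheet figure
observed_rate_gb_per_hr = 15     # worst case over a congested network

def window_hours(size_gb: float, rate_gb_per_hr: float, n_drives: int) -> float:
    """Hours to move size_gb at a per-drive rate, assuming drives scale linearly."""
    return size_gb / (rate_gb_per_hr * n_drives)

best = window_hours(dataset_gb, marketing_rate_gb_per_hr, drives)
worst = window_hours(dataset_gb, observed_rate_gb_per_hr, drives)

print(f"spec-sheet window: {best:.1f} h")   # 5.0 h
print(f"observed window:   {worst:.1f} h")  # 33.3 h
```

Sizing with the spec-sheet number suggests the job fits comfortably overnight; sizing with the observed number shows it would not finish in a day, which is the kind of discrepancy a design review should catch before purchase.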
Stay focused on the requirements while weighing what's really needed, what's affordable and what the sales rep is trying to push. Every alternative has its own trade-offs, and the real cost is not only what you pay now, but also what you'll pay in the long run.
For any new implementation discussion, include both in-house and vendor technical staff to validate the gathered information. Whenever possible, set budget factors aside until the solution options are determined or narrowed down.
About the author: Giao Tran is currently a technical consultant at Sirius Computer Solutions, an IBM business partner. His technical areas are in IBM pSeries (RS/6000s), HACMP, SAN/NAS and TSM. You can reach him via e-mail at Giao.Tran@siriuscom.com.