These can be two separate technical issues, each with its own set of problems.

One is the physical consolidation of the storage infrastructure using technology such as storage area networking (SAN), which by itself does not address data sharing. For the most part, a SAN allows one or more servers to have storage provisioned from one or more storage arrays over a transport that is faster and more reliable than IP for block data. This has a number of advantages in performance, scalability, manageability, backup and availability.

The other is true data consolidation: using some means to enable direct access to data from two or more client systems. One approach is to have one or more of the attached servers serve data in a NAS or file-server configuration, using protocols such as NFS or CIFS over standard Ethernet/IP networks or another transport such as InfiniBand or Myrinet. As a side note, using a high-speed transport to accelerate file server performance is not just the realm of compute clusters. In my opinion, it is a very interesting way to store data over a specialized network faster than it could be saved locally, even to Fibre Channel, which is great for high-end media environments such as video editing or people working with massive files. Another approach is some type of shared file system, where the data still moves over the storage area network but some or all of the servers access the same volumes of physical storage simultaneously, allowing all participating peers to access the files at SAN speeds while enjoying the other benefits of SAN storage.

In either case, the pitfalls of a SAN consolidation solution are similar, because both approaches have similar infrastructure requirements. Here are a few:

1. Data consolidation usually involves making changes to existing production systems that are vital to mission-critical business.
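When weighing a file-serving approach against local or SAN-attached storage, it helps to measure actual write throughput at each candidate mount point rather than rely on transport specs. Below is a minimal sketch of such a probe; the mount points named in the comments are hypothetical placeholders, not paths from this article.

```python
import os
import time

def write_throughput_mb_s(path, size_mb=64, block_kb=1024):
    """Write size_mb of zeros to a temp file under `path` and return MB/s."""
    block = b"\0" * (block_kb * 1024)
    blocks = (size_mb * 1024) // block_kb
    fname = os.path.join(path, "throughput_probe.tmp")
    start = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to the device, not just the page cache
    elapsed = time.perf_counter() - start
    os.remove(fname)
    return size_mb / elapsed

if __name__ == "__main__":
    # Compare candidate consolidation targets, for example:
    #   local disk:  /var/tmp
    #   NFS mount:   /mnt/nfs        (hypothetical)
    #   SAN volume:  /mnt/san_vol01  (hypothetical)
    for mount in ["/var/tmp"]:
        print(f"{mount}: {write_throughput_mb_s(mount, size_mb=16):.1f} MB/s")
```

A probe like this run against a local disk, an NFS mount and a SAN volume gives concrete numbers for the trade-off discussed above; a single sequential write is only a rough indicator, so real evaluations should also test the application's actual I/O pattern.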
This is true whether you are a Fortune 500 company or a small business that relies on a few file, Web or application servers. When planning, think through not only the logistical aspects of the implementation (such as production windows where changes will have minimal impact) and the human resources involved (such as system administration experience), but also the financial picture over at least the first one or two years of the project:

a. What are the maintenance costs for the hardware (storage, switches, HBAs, storage management software, licenses, etc.)?

b. What will the upgrade and expansion roadmap look like in terms of handling load, and what will it cost to implement?

c. Failing to make educated decisions and making political ones instead. There is a lot of good technology out there. Identify your requirements and then find a solution that meets them, not the other way around.

2. Storage experience of the IT team and/or implementation team (in-house or otherwise). If experienced system admins, DBAs, network engineers and application people are left out of the planning process, you can end up with a solution that doesn't meet the requirements of critical applications. Storage vendors can bring in application gurus who can address the specific issues of a given application.

This answer continues in Part 2.
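The maintenance and expansion questions in items (a) and (b) above can be organized into a simple two-year cost model. The sketch below shows the structure of such a model; every line item and dollar figure is a hypothetical placeholder, not a vendor quote.

```python
# Hypothetical first-two-years cost model for a SAN consolidation project.

ANNUAL_MAINTENANCE = {          # item (a): recurring support/maintenance
    "storage_array": 18_000,
    "fc_switches": 6_000,
    "hbas": 1_500,
    "mgmt_software_licenses": 9_500,
}

YEAR1_CAPITAL = 120_000         # initial hardware purchase and installation
YEAR2_EXPANSION = 35_000        # item (b): planned capacity upgrade

def two_year_tco():
    """Return (year 1 cost, year 2 cost, two-year total)."""
    maintenance = sum(ANNUAL_MAINTENANCE.values())
    year1 = YEAR1_CAPITAL + maintenance
    year2 = YEAR2_EXPANSION + maintenance
    return year1, year2, year1 + year2

if __name__ == "__main__":
    y1, y2, total = two_year_tco()
    print(f"Year 1: ${y1:,}  Year 2: ${y2:,}  Two-year total: ${total:,}")
```

Even a rough model like this forces the maintenance, license and expansion costs onto the table during planning, rather than discovering them after the hardware is installed.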