When setting up a tiered-storage system using low-cost disk to reduce or contain costs, the last thing you want to do is add complexity in the form of new technology, additional storage management activities, policies and procedures. Keep in mind that complexity increases costs: even if the technology you are evaluating costs less, what matters is the total sum of all the technologies involved and their associated costs to implement and manage on a day-to-day, long-term basis. In other words, you need to consider a horizontal, holistic TCO that goes beyond the simple TCO of a single vertical solution.
Be leery of deploying low-cost storage, such as SATA, just because it is cheaper than more robust storage products. Again, it's not just the initial acquisition cost that must be considered: You also have to factor in what it will cost to integrate the new technology into your environment, plus the ongoing cost of identifying and migrating data to make use of the lower-cost storage. Lower cost does not always mean the best value, so be cautious of adopting lower-cost storage just because it is cheaper; have a good reason, as well as targeted candidate data to place on that storage. Think in terms of effective storage usage instead of the cheapest storage. Effective storage can be measured in terms of performance (latency, bandwidth), capacity (how much you can actually use), reliability and applicability to your specific needs.
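To make the horizontal-TCO point concrete, here is a minimal sketch of the kind of back-of-the-envelope comparison worth doing before buying. All figures and cost categories are illustrative assumptions, not vendor data:

```python
# Hypothetical TCO sketch: compare "cheap" vs. "robust" storage over a
# multi-year horizon.  Every number below is an assumed, illustrative figure.

def total_cost(acquisition, integration, annual_mgmt, annual_migration, years):
    """Total cost of ownership: up-front costs plus recurring annual costs."""
    return acquisition + integration + years * (annual_mgmt + annual_migration)

# Low-cost SATA: cheap to buy, but assumed higher integration and
# ongoing data-migration/management effort.
sata = total_cost(acquisition=50_000, integration=30_000,
                  annual_mgmt=20_000, annual_migration=15_000, years=3)

# Robust FC storage: higher purchase price, assumed lower ongoing overhead.
fc = total_cost(acquisition=120_000, integration=10_000,
                annual_mgmt=10_000, annual_migration=0, years=3)

print(f"3-year TCO -- SATA: ${sata:,}  FC: ${fc:,}")
```

With these assumed numbers the "cheaper" SATA option ends up costing more over three years, which is exactly the trap the horizontal TCO view is meant to expose.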
To elaborate, do you have a plan for what data and/or applications will be placed onto different tiers of storage? Are you looking at just storage media, or are you also considering tiered access and tiered protection as part of your tiered-storage strategy and plan? Do you have any SRM-type tools (e.g., TekTools or AppIQ) or homegrown tools (e.g., scripts or Excel) that allow you to identify candidate data based upon how often the data is accessed, who is accessing it, when it is accessed, how it is accessed (read or write), its size and so forth?
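A homegrown script of the kind mentioned above can be as simple as walking a directory tree and flagging files that have not been accessed recently. This is a minimal sketch; the path and idle threshold are assumptions, and a real SRM tool would also track who accesses the data and the read/write mix:

```python
# Homegrown-script sketch: flag files not accessed in the last N days
# as candidates for migration to a lower tier of storage.
import os
import time

def find_candidates(root, days_idle=90):
    """Yield (path, size_bytes, days_since_access) for idle files."""
    cutoff = time.time() - days_idle * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_atime < cutoff:
                idle_days = (time.time() - st.st_atime) / 86400
                yield path, st.st_size, round(idle_days)

if __name__ == "__main__":
    # "/data" is a placeholder root; point this at your own file systems.
    for path, size, idle in find_candidates("/data", days_idle=90):
        print(f"{path}\t{size} bytes\tidle {idle} days")
```

Note that access times (`st_atime`) are only meaningful if the file system actually records them; some systems mount with access-time updates disabled for performance.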
Part of the process of identifying what data to place on what tier of storage is understanding your application service requirements, including performance and availability. From a performance standpoint, is the data accessed by a single I/O stream or multiple I/O streams? Is it read or write data? Is it small or large I/O, is it random or sequential, and what are the response-time requirements? Is the I/O pattern cyclical, based on time of day, week, month or season, or is it constant? From an availability perspective, what level of data protection is needed? Does the data need to be replicated to a secondary storage device, locally or remotely? What RAID level is required, and what are the backup and data retention requirements? Can the application and data be taken offline for backup? Are files open continually? What are your recovery time objective (RTO) and recovery point objective (RPO) requirements for the different tiers and classes of data?
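The answers to questions like these can be captured as a simple tiering policy. The sketch below is illustrative only: the tier names, thresholds and rules are assumptions meant to show the kind of mapping from service requirements to tiers that a plan should make explicit.

```python
# Illustrative policy sketch: map an application's service requirements
# (response time, RPO, replication need) to an assumed storage tier.

def pick_tier(resp_ms, rpo_hours, replicated):
    """Choose a tier from response-time, RPO and replication requirements."""
    if resp_ms <= 5 or rpo_hours == 0:
        return "tier1-fc"        # low latency or zero data loss: robust FC
    if resp_ms <= 20 or replicated:
        return "tier2-nearline"  # moderate needs: near-line FC or SAS
    return "tier3-sata"          # relaxed SLAs: low-cost SATA

print(pick_tier(resp_ms=3, rpo_hours=0, replicated=True))    # tier1-fc
print(pick_tier(resp_ms=50, rpo_hours=24, replicated=False)) # tier3-sata
```

Even a toy policy like this forces the useful discussion: which applications genuinely need the top tier, and which can tolerate the lower-cost one.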
Understand how SATA, or any new technology, including Serial Attached SCSI (SAS) or low-cost, near-line Fibre Channel (aka FATA) technology, will integrate into your environment. Can it co-exist in or with existing storage systems? Can it co-exist in a separate storage system yet utilize the same management tools and utilities you currently use? What are the availability and configuration options, and do they meet your specific needs?
You might find that some bargains and lower-cost items cost you more in the long run.
The bottom line is this: Have a strategy and a plan, and work from them, instead of just jumping in and buying storage because it is cheaper.
About the author: Greg Schulz is a senior analyst with the independent storage analysis firm, The Evaluator Group Inc. Greg has 25 years of IT experience as a consultant, end user, storage and storage networking vendor, and industry analyst. In addition, Greg is the author and illustrator of "Resilient Storage Networks". Greg holds both a computer science and software engineering degree from the University of St. Thomas.