Buying storage is an ongoing process that may be periodic in some environments and seemingly continuous in others. There are several primary reasons given for storage purchases:
- New application deployments that require storing significant amounts of data.
- New performance capabilities or features are needed on storage systems to optimize environments such as server virtualization.
- A technology transition is necessary to replace systems that have reached the end of their economic life.
- Additional capacity is required to handle the demand to store more information.
A common decision point in purchasing storage for all of these reasons is how much capacity to buy. There may be different types or classes of storage that segment the purchase, but the question of how much remains. Finding the answer to this is more complicated than it seems. It starts with evaluating the requirements.
In working with many IT operations on capacity planning, I’ve seen quite a variety of approaches to coming up with the amount of capacity involved in storage purchases. The different methods range from elaborate capacity planning models to taking what is asked for by application and business owners and multiplying by 10.
One reason a multiplier is used for deciding on the amount of storage to purchase is that capacity demands continue to increase faster than expected, and failure to meet storage demand immediately has negative consequences. Another reason is budgetary uncertainty within companies: there is a feeling that it may be more difficult to get funding for a purchase in the future. IT buyers are not sure when they will get another chance to purchase storage because of a potential spending "freeze."
The information typically provided for the amount of storage required may prove inaccurate. An example of this comes from the deployment of a new application that stores information in a database. The systems analyst may have determined that the database will ultimately need 20 TB. The database administrator then requests 100 TB, allowing extra capacity for testing and a buffer in case the systems analyst has underestimated the need. The storage admin may double that request for primary capacity to 200 TB and then add another 200 TB for a backup-to-disk target. Now the purchase for a 20 TB primary need has expanded to 10 times that for primary and an equal amount for data protection.
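This inflation chain is easy to see as arithmetic. The sketch below is purely illustrative (the function name and multiplier parameters are invented for this example, not part of the article); the default multipliers are the values from the scenario above.

```python
# Illustrative sketch of how a capacity estimate inflates at each hand-off.
# The multipliers are the article's example values: the DBA multiplies the
# analyst's estimate by 5, the storage admin doubles it, then matches it
# for backup-to-disk.
def inflated_purchase(analyst_estimate_tb: float,
                      dba_multiplier: float = 5.0,
                      admin_multiplier: float = 2.0) -> dict:
    """Return the capacity (in TB) requested at each stage."""
    dba_request = analyst_estimate_tb * dba_multiplier   # 20 TB -> 100 TB
    primary = dba_request * admin_multiplier             # 100 TB -> 200 TB
    backup = primary                                     # equal amount for protection
    return {
        "analyst": analyst_estimate_tb,
        "dba": dba_request,
        "primary": primary,
        "backup": backup,
        "total": primary + backup,
    }

print(inflated_purchase(20))
# A 20 TB estimate becomes a 400 TB purchase: 10x for primary
# plus an equal amount for data protection.
```

Each stage's buffer is individually defensible, but because the multipliers compound, the final purchase bears little relation to the original estimate.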
It is becoming rarer for organizations to upgrade existing storage systems. There are several reasons for this, but the main ones are that organizations don't want to extend old technology, or their depreciation models make it easier to move to new systems.
In general, storage capacity always gets used, for one reason or another. Managing it effectively requires effort and discipline. Unfortunately, it is not the most efficient process, given the tools required and the time commitment involved. And no truly revolutionary change to improve the situation seems to be in the adoption phase.
The goal for companies is to never run out of required storage capacity. Mostly, the prediction of how much to acquire is based on hard-won experience. The best practice is to purchase storage proactively, not when desperate for capacity. The other guiding principle in purchasing storage is to keep up with technology changes and make transitions that take advantage of new developments.
(Randy Kerns is Senior Strategist at Evaluator Group, an IT analyst firm.)