This article can also be found in the Premium Editorial Download "Storage magazine: What you need to know about data storage provisioning."
Provisioning storage is mostly a manual job. But you can still tune your storage for greater performance and higher disk utilization.
Fifty years after IBM Corp. invented the disk drive, storage provisioning remains a complicated, mostly manual, labor-intensive task. Each vendor has its own proprietary tools and recommended approach, and the task keeps getting harder as enterprise storage environments grow more complex.
Storage provisioning is the process of logically carving up physical disk space to meet an organization's needs for storage capacity, performance, security and efficiency. It encompasses assigning servers/hosts to appropriate storage, specifying network paths between hosts and storage, and masking LUNs and zoning the network so that only the right servers have access. The entire task involves dozens of individual steps.
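The logical sequence behind those steps can be sketched in a few lines of code. This is a toy model only: the class names, functions and WWPN values below are illustrative assumptions, not any vendor's actual provisioning API. It shows the three moves the paragraph describes: carve a LUN, mask it to a host, and zone the fabric path.

```python
# Hypothetical sketch of the provisioning sequence -- not a real vendor API.
from dataclasses import dataclass, field

@dataclass
class Lun:
    name: str
    size_gb: int
    raid_level: str
    masked_hosts: set = field(default_factory=set)  # WWPNs allowed to see this LUN

@dataclass
class Fabric:
    zones: dict = field(default_factory=dict)  # zone name -> set of member WWPNs

    def create_zone(self, zone_name, wwpns):
        # Zoning restricts which ports may communicate across the SAN.
        self.zones[zone_name] = set(wwpns)

def provision(array, fabric, lun_name, size_gb, raid_level, host_wwpn, target_wwpn):
    # Step 1: carve physical capacity into a logical unit (LUN).
    lun = Lun(lun_name, size_gb, raid_level)
    array.append(lun)
    # Step 2: LUN masking -- only the named host may access this LUN.
    lun.masked_hosts.add(host_wwpn)
    # Step 3: zoning -- place the host and array ports in a shared zone.
    fabric.create_zone(f"{lun_name}_zone", [host_wwpn, target_wwpn])
    return lun

# Example run with made-up identifiers.
array = []
fabric = Fabric()
lun = provision(array, fabric, "oracle_data01", 500, "RAID-5",
                host_wwpn="10:00:00:00:c9:2a:b1:01",
                target_wwpn="50:06:01:60:41:e0:2b:42")
```

In practice, each of these steps hides many sub-steps and vendor-specific screens, which is why the article counts the full task in dozens of manual operations.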
[Sidebar: Storage provisioning pitfalls]
When done correctly, provisioning results in storage that performs well and scales nondisruptively. When it isn't done correctly, application performance can degrade, security can be compromised, scalability is inhibited and capacity utilization drops, wasting storage space (see "Storage provisioning pitfalls," at right).
Complicating the provisioning challenge are today's widely heterogeneous enterprise storage environments: more storage, additional arrays, different types of array devices, multiple storage tiers, more RAID levels, increasing complexity, and a lack of best-practice provisioning guidelines or standards. "Some day there will have to be a standard way to deal with storage provisioning, but there isn't now," says Rhoda Phillips, research manager, storage software at IDC, Framingham, MA. And, she adds, that day won't be coming anytime soon.
Without a set of best practices for provisioning, storage managers are left to hammer out what works best for their organization based on application requirements and their particular mix of storage technology. For most, it's a manual trial-and-error process in which the lessons learned last time may no longer apply when a new array, large-capacity disk drive or new tier of storage is thrown into the mix.
This was first published in October 2006