My February 2003 column ("Utilization: It's probably worse than you think"), which discussed measuring storage utilization, generated the most feedback I've received so far in my year of writing for Storage. I can't say I was surprised, though. Storage utilization is a part of every storage manager's life, with users complaining about having too little space and management demanding users make do with what they have. Even with growing disks and rapidly shrinking per-GB prices, there's always too much storage installed and yet too little available for use.
No matter how much storage is available, it never seems to be enough. Storage resource management software is touted as a tool to hunt down inappropriate storage uses, which shows that managers are already asking these questions: How can I match value with expenditures? How can I improve usage of storage resources? How do I know if I'm doing a good job?
Measuring quality: value at risk
I recently presented metrics for measuring the utilization of storage space. For each link in the chain--from array to volume manager to application--three measurements of space can be made: raw, usable and used. Some measurement must also be made of the quality or value of the data stored.
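As a minimal sketch of this measurement model (the class, names and figures here are my own illustration, not from any particular tool), each link in the chain can report its raw, usable and used capacity, with per-link utilization computed as used divided by usable:

```python
# Hypothetical sketch: each link in the storage chain -- array, volume
# manager, file system -- reports raw, usable and used capacity.
from dataclasses import dataclass

@dataclass
class StorageLayer:
    name: str
    raw_gb: float     # total capacity installed at this link
    usable_gb: float  # capacity available after RAID/formatting overhead
    used_gb: float    # capacity actually consumed by the link above

    def utilization(self) -> float:
        """Fraction of usable space actually consumed at this link."""
        return self.used_gb / self.usable_gb

# Illustrative chain for a single host (figures are made up for the example)
chain = [
    StorageLayer("array",          raw_gb=600, usable_gb=500, used_gb=375),
    StorageLayer("volume manager", raw_gb=375, usable_gb=375, used_gb=240),
    StorageLayer("file system",    raw_gb=240, usable_gb=240, used_gb=93),
]

for layer in chain:
    print(f"{layer.name}: {layer.utilization():.0%} of usable space consumed")
```

Measuring each link separately, rather than just the file system, is what exposes where capacity is stranded.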
This value question is critical: Data must be classified according to the value at risk if it's lost, and placed accordingly. High-value data deserves the expensive protection offered by enterprise storage systems, extra mirroring, replication and careful management. Less-valuable data is more common, and shouldn't get the same level of protection. Typically, though, data is given varying degrees of protection with little regard to its actual value to the business. This happens partly because the value of data changes over time: Today's research data could become tomorrow's key product. In an optimized environment, the resources used to protect data are aligned with its value. In other words, the value at risk if data is lost dictates the effort and cost spent on protecting it.
This model of value at risk affects the utilization equation. A high-priced storage solution full of low-value data isn't effectively utilized, whereas a low-protection storage device might pose a real risk to the business. Right now, the only way to determine if your utilization is in line with the classification of your data is to perform an audit manually. Once a plan for appropriate protection is in place, storage resources can be reassigned for more effective utilization.
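The manual audit amounts to comparing each data set's current protection against what its value class calls for. A toy sketch of that comparison (the tier names, protection labels and inventory below are all hypothetical):

```python
# Hypothetical audit sketch: flag data sets whose protection doesn't
# match the protection tier implied by their value at risk.
PROTECTION_TIERS = {
    "high":   {"mirroring", "offsite replication", "enterprise array"},
    "medium": {"RAID-5", "nightly backup"},
    "low":    {"RAID-5"},
}

def audit(datasets):
    """Yield (name, value, current, required) for each mismatch."""
    for name, value, protection in datasets:
        required = PROTECTION_TIERS[value]
        if protection != required:
            yield name, value, protection, required

# Illustrative inventory: one under-protected, one over-protected, one aligned
inventory = [
    ("customer orders", "high", {"RAID-5"}),
    ("web server logs", "low",  {"mirroring", "offsite replication",
                                 "enterprise array"}),
    ("payroll",         "high", {"mirroring", "offsite replication",
                                 "enterprise array"}),
]

for name, value, got, want in audit(inventory):
    print(f"{name}: value={value}, has {sorted(got)}, needs {sorted(want)}")
```

Both directions of mismatch matter: the under-protected set is a business risk, and the over-protected one is wasted spend.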
[Chart: Average host storage utilization by OS. Source: GlassHouse Technologies]
Connectivity: the key constraint
The No. 1 factor restraining better utilization--both in terms of space and value--is limited connectivity. Today's advanced storage systems offer numerous Fibre Channel (FC) ports for host connectivity, and FC switches can be used to increase the fan-out ratio between storage and hosts. Network-attached storage (NAS) over Ethernet offers an even larger fan-out, potentially at a lower cost, if file-based storage is acceptable. In addition, iSCSI promises to extend these benefits to block storage soon.
The trouble is that most storage environments aren't using advanced storage systems. Most installed storage is low-end direct-attached disk with little or no ability to be shared by multiple systems, usually purchased with its attached server, directly from the server vendor or reseller. Priced per byte, these systems seem unbeatable. For the cost of a FC host bus adapter (HBA), servers can be configured with plenty of internal RAID-protected storage. And because booting from a SAN is still not widely practiced, servers need internal disk.
So, why not use internal or direct-attached storage for systems? It can't be optimally utilized because it can be neither shared nor well protected. Sure, RAID-5 keeps a disk failure from bringing down an application, but it can't checkpoint data at different times, replicate it offsite for disaster recovery or mirror it for offline backup. There are host-based software solutions for these requirements, but dispersed applications are difficult to manage in a large environment.
Cost is falling as well, with enterprise storage currently fetching $50 per GB, including connectivity and software. Cheap internal disks used by server manufacturers still cost $5 to $6 per GB--not the often-quoted $1 per GB price point of consumer IDE drives. Add in the cost of a decent RAID card, disk chassis and UPS, and this rises to more than $10 per GB. The typically poor utilization of internal and direct-attached storage can easily double or triple this effective price. Once the potential business value of data is added to the analysis, the cost benefit of internal storage is entirely erased.
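The arithmetic behind "double or triple" is simply the loaded price divided by the fraction of storage actually used. A quick check, using the figures above as assumptions:

```python
# Worked version of the cost comparison above (figures are the article's
# illustrative 2003 prices, not current market data).
def effective_cost_per_used_gb(cost_per_raw_gb: float,
                               utilization: float) -> float:
    """Cost per GB actually consumed, given fractional utilization."""
    return cost_per_raw_gb / utilization

internal_loaded = 10.0   # $/GB internal disk after RAID card, chassis, UPS
enterprise      = 50.0   # $/GB enterprise storage, incl. connectivity

# At around 40% utilization, internal storage's loaded cost roughly
# 2.5x's to about $25-26 per GB actually used.
print(effective_cost_per_used_gb(internal_loaded, 0.39))
```

At very low utilization, say 20%, the same loaded $10/GB reaches $50 per used GB, matching the enterprise price while delivering none of the protection.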
Most organizations are placing only operating systems, application executables and other software on internal storage, with all other data going to shared storage to enable appropriate management and protection. This strategy may appear more costly on a system-by-system basis, but the entire environment benefits from the ability to manage storage appropriately. Plus, storage utilization will increase, both in terms of the storage space used and the alignment of data value with storage cost.
The truth about utilization
"What's normal?" This is one of the most common questions I'm asked about storage utilization. Of course, normal isn't necessarily optimal, but it's a valid question, if only to provide the context for improvements. I normally see per-system file system utilization between 30% and 40%, but I've never been sure that this was true. Although I've heard the same average from many sources, I've never found a real source with concrete data. So, I pulled together as much real-world data as I could get to find out the truth.
I collected statistical data on more than 750 production AIX, HP-UX, Solaris, and Windows systems from more than a dozen large and small corporations. This data included the size and occupancy of file systems, the number and size of physical disks, and server hardware details. Of course, these numbers came from companies concerned enough about storage to bring in an external consultant to analyze their usage patterns.
And, lest you think these were lesser machines, the average system in the data set had six CPUs, 3.3GB of RAM, five FC ports and 325GB of storage. Plus, the file system utilization of this data set was actually much higher than we had expected. With this information, I can provide a more concrete answer to some of the questions I get, such as:
- What's the average file system utilization for a server? Across all four operating systems, the average file system utilization was 39%. Solaris and Windows tied for the lowest average at 27%, while HP boasted a 53% average and AIX led the pack at an astonishing 60%.
- How much storage is left unused in volume managers? The average system that used a volume manager had 77GB of storage available in the volume manager, but unassigned to a logical volume. This is about 36% of the total storage contained in volume groups.
- How much storage is typically left unused in storage arrays? Of the 20 or so arrays we looked at, we identified about 25% of the storage that wasn't assigned to a host.
The upshot of these measurements is this: The average site assigned just 75% of its storage to hosts, left 36% of that unavailable to applications and used only 39% of what was usable. So a typical host might have 500GB of external storage, 375GB in volume groups, 240GB in file systems and just 93GB used. And as mentioned earlier, these were production hosts at sites proactive enough to ask for a storage utilization analysis.
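The compounding in that summary can be checked with a few lines, using the three average ratios straight from the measurements above:

```python
# End-to-end utilization: multiply the three average ratios measured above.
assigned = 0.75   # fraction of array storage assigned to hosts
to_fs    = 0.64   # fraction of volume-group storage in file systems (1 - 36%)
used     = 0.39   # fraction of file system space actually occupied

end_to_end = assigned * to_fs * used
print(f"{end_to_end:.1%} of purchased storage actually holds data")

# The same ratios applied to the 500GB example host:
external_gb = 500
vg_gb   = external_gb * assigned   # 375 GB in volume groups
fs_gb   = vg_gb * to_fs            # 240 GB in file systems
used_gb = fs_gb * used             # 93.6 GB -- the "just 93GB" above
print(vg_gb, fs_gb, used_gb)
```

In other words, less than a fifth of purchased capacity at these sites was holding data.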
- Why the differences by operating systems? Windows administrators typically format their available storage on setup and leave it empty until it's needed. HP and AIX administrators have a history of using volume managers to provision storage only as needed. Surprisingly, no Windows host in our sample used a volume manager, while no AIX or HP storage was configured without one. Solaris systems fell somewhere in the middle: About 25% of Solaris storage was presented without a volume manager--even on systems using one for other file systems--with no visible storage left untouched.
- How do your utilization metrics compare? Let's expand this data set beyond the baseline established here. Write me at firstname.lastname@example.org, and I'll help you collect and summarize the utilization data for your environment. In a few months, I'll update these numbers to reflect the wider world of Storage readers, and we'll all achieve a higher level of understanding on this critical issue.