Nobody wants to pay for something they’re not using, but enterprise data storage managers do it all the time. The inflexible nature of disk storage purchasing and provisioning leads to shockingly low levels of capacity utilization. Improving the efficiency of storage has been a persistent theme of the industry and a goal for most storage professionals for a decade, but only thin provisioning technology has delivered tangible, real-world benefits.
The concept of thin provisioning may be simple to comprehend, but it’s a complex technology to implement effectively. If an array only allocates storage capacity that contains data, it can store far more data than one that allocates all remaining (and unnecessary) “white space.” But storage arrays are quite a few steps removed from the applications that store and use data, and no standard communication mechanism gives them insight into which data is or isn’t being used.
Storage vendors have taken a wide variety of approaches to address this issue, but the most effective mechanisms are difficult to implement in existing storage arrays. That’s why next-generation storage systems, often from smaller companies, have included effective storage thin provisioning technology for some time, while industry stalwarts may only now be adding this capability.
Understanding “allocate on write” thin provisioning
Traditional storage provisioning maintains a one-to-one map between internal disk drives and the capacity used by servers. In the world of block storage, a server would “see” a fixed-size drive, volume or LUN and every bit of that capacity would exist on hard disk drives residing in the storage array. The 100 GB C: drive in a Windows server, for example, would access 100 GB of reserved RAID-protected capacity on a few disk drives in a storage array.
The simplest implementation of thin provisioning is a straightforward evolution of this approach. Storage capacity is aggregated into “pools” of same-sized pages, which are then allocated to servers on demand rather than on initial creation. In our example, the 100 GB C: drive might contain only 10 GB of files, and this space alone would be mapped to 10 GB of capacity in the array. As new files are written, the array would pull additional capacity from the free pool and assign it to that server.
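The allocation mechanics described above can be illustrated with a minimal sketch. The class and method names here are hypothetical, and the page counts are scaled down for readability; a real array tracks millions of pages with far more sophisticated metadata.

```python
class ThinPool:
    """Toy allocate-on-write pool: physical pages are assigned to a
    volume only when a logical page is first written."""

    def __init__(self, total_pages):
        self.free = list(range(total_pages))  # pool of physical pages
        self.maps = {}  # volume name -> {logical page: physical page}

    def create_volume(self, name, size_pages):
        # Creating a "100 GB" volume costs nothing up front;
        # only an empty mapping table exists.
        self.maps[name] = {}

    def write(self, name, logical_page):
        table = self.maps[name]
        if logical_page not in table:
            # First write to this page: pull capacity from the free pool.
            if not self.free:
                raise RuntimeError("over-committed: free pool exhausted")
            table[logical_page] = self.free.pop()

    def used_pages(self, name):
        return len(self.maps[name])
```

Note that `write` never returns a page to `self.free` — which is exactly the "thin for a time" problem discussed below: allocation is easy, reclamation is the hard part.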
This type of “allocate-on-write” thin provisioning is fairly widespread today. Most midrange and enterprise storage arrays, and some smaller devices, include this capability either natively or as an added-cost option. But there are issues with this approach.
One obvious pitfall is that such systems are only thin for a time. Most file systems use “clear” space for new files to avoid fragmentation; deleted content is simply marked unused at the file system layer rather than zeroed out or otherwise freed up at the storage array. These systems will eventually gobble up their entire allocation of storage even without much additional data being written. This not only reduces the efficiency of the system but risks “over-commit” issues, where the array can no longer meet its allocation commitments and write operations come to a halt.
This isn’t to say that thin provisioning is useless without thin reclamation, but the long-term benefit of the technology is reduced. And since most storage managers assume that thin storage will stay thin, effective reclamation of unused space is rapidly becoming a requirement.
The thin reclamation challenge
The tough part of thin provisioning technology is reclaiming unused capacity rather than correctly allocating it. Returning no-longer-used capacity to the free pool is the key differentiator among thin provisioning implementations, and the industry is still very much in a state of flux in this regard.
The root cause of the thin reclamation challenge is a lack of communication between applications and data storage systems. File systems aren't generally thin-aware, and no mechanism exists to report when capacity is no longer needed. The key to effective thin provisioning is discovering opportunities to reclaim unused capacity; there are essentially two ways to accomplish this:
- The storage array can snoop the data it receives and stores, and attempt to deduce when opportunity exists to reclaim capacity
- The server can be modified to send signals to the array, notifying it when capacity is no longer used
The first option is difficult to achieve but can be very effective, since operating system vendors don't seem eager to add thin-enhancing features to their file systems. Products like Data Robotics Inc.'s Drobo storage systems snoop on certain known partition and file system types to determine which disk blocks are unused and then reclaim them for future use. But that approach is extremely difficult in practice given the huge number of operating systems, applications and volume managers in use.
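The snooping approach can be sketched in miniature. The function below assumes a hypothetical file system whose allocation bitmap uses one bit per block (bit set means "in use"); a real implementation such as Drobo's must first locate and parse actual partition tables and per-file-system metadata, which is where the difficulty lies.

```python
def reclaimable_blocks(bitmap: bytes, total_blocks: int):
    """Given a hypothetical allocation bitmap (1 bit per block,
    set bit = in use), return block numbers the array could reclaim.

    Illustrative only: real arrays must recognize each partition and
    file-system format before they can find a bitmap like this.
    """
    free = []
    for blk in range(total_blocks):
        byte, bit = divmod(blk, 8)
        if not (bitmap[byte] >> bit) & 1:
            free.append(blk)
    return free
```

The fragility is apparent even here: a file system with a different bitmap layout, or a volume manager sitting between the file system and the LUN, would make this parsing silently wrong.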
Therefore, the key topic in enterprise thin provisioning typically involves the latter approach: improving the communication mechanism between the server and storage systems.
Zero page reclaim
Perhaps the best-known thin-enabling technology is zero page reclaim. It works something like this: The storage array divides storage capacity into "pages" and allocates them to store data as needed. If a page contains only zeroes, it can be "reclaimed" into the free-capacity pool. Any future read requests will simply result in zeroes, while any writes will trigger another page being allocated. Of course, no technology is as simple as that.
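The semantics just described can be captured in a small sketch, assuming an idealized array that detects all-zero pages inline at write time (class and method names are hypothetical):

```python
PAGE = 4096
ZERO_PAGE = bytes(PAGE)  # a page full of zeroes

class ZeroReclaimVolume:
    """Toy volume with inline zero page reclaim: writing a page of
    zeroes releases its backing store; reading an unbacked page
    simply returns zeroes."""

    def __init__(self):
        self.backing = {}  # logical page -> bytes actually stored

    def write(self, page_no, data):
        assert len(data) == PAGE
        if data == ZERO_PAGE:
            self.backing.pop(page_no, None)  # reclaim: nothing to store
        else:
            self.backing[page_no] = data

    def read(self, page_no):
        # Unbacked pages read back as zeroes, as if they were stored.
        return self.backing.get(page_no, ZERO_PAGE)

    def allocated(self):
        return len(self.backing)
```

A subsequent non-zero write to a reclaimed page simply triggers a fresh allocation, exactly as the text describes.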
Actually writing all those zeroes can be problematic, however. It takes just as much CPU and I/O effort to write a 0 as a 1, and inefficiency in these areas is just as much a concern for servers and storage systems as storage capacity. The T10 Technical Committee on SCSI Storage Interfaces specified the WRITE SAME command to address this: a host sends a single block of zeroes and the target repeats it across a whole range, eliminating most of that I/O. The command's UNMAP bit (often informally called the "discard" bit) goes a step further, notifying the array that it need not store the resulting zeroes at all.
Most storage arrays aren't yet capable of detecting whole pages of zeroes on write. Instead, they write them to disk and a "scrubbing" process later detects and discards the zeroed pages; until then, those pages count as used capacity. The scrub can run on an automated schedule or be manually initiated by an administrator. And some arrays only detect zeroed pages during a mirror or migration, further reducing capacity efficiency.
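The deferred-scrub behavior contrasts with inline detection, and a minimal sketch makes the difference concrete (hypothetical names again; real scrubbers run incrementally and throttle themselves against foreground I/O):

```python
PAGE = 4096

class ScrubbedVolume:
    """Toy array that stores every page on write, even all-zero ones,
    and relies on a later scrub pass to reclaim zeroed pages."""

    def __init__(self):
        self.backing = {}  # logical page -> stored bytes

    def write(self, page_no, data):
        # Zeroes are stored just like any other data at write time.
        self.backing[page_no] = bytes(data)

    def scrub(self):
        """Scheduled or manually initiated pass that finds and
        discards all-zero pages. Returns the number reclaimed."""
        zeroed = [p for p, d in self.backing.items() if d == bytes(PAGE)]
        for p in zeroed:
            del self.backing[p]
        return len(zeroed)
```

Between the zero-write and the scrub pass, capacity utilization looks worse than it really is — which is why scrub scheduling matters in practice.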
Even if an array has a feature-complete zero page reclaim capability, it will only be functional if zeroes are actually written. The server must be instructed to write zeroes where capacity is no longer needed, and that's not the typical default behavior. Most operating systems need a command, like Windows' "sdelete -c" or a utility such as NetApp's SnapDrive, to make this happen, and these are only run occasionally.
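What a free-space zeroing tool does under the covers can be sketched simply: create a file, fill it with zeroes until the file system runs out of space, then delete it, leaving the freed blocks zeroed for the array to reclaim. This is a hypothetical illustration of the technique, not a reproduction of any vendor's tool; the `max_chunks` cap exists only so the demonstration doesn't actually fill a disk.

```python
import os

def zero_free_space(directory, chunk=1 << 20, max_chunks=None):
    """Crude sketch of host-side free-space zeroing: write zero-filled
    chunks into a temporary file until the file system is full (or
    max_chunks is reached), then delete the file. Returns bytes written.

    A zero-page-reclaim array can then release the zeroed capacity.
    """
    path = os.path.join(directory, "zerofill.tmp")
    written = 0
    try:
        with open(path, "wb") as f:
            while max_chunks is None or written < max_chunks:
                try:
                    f.write(bytes(chunk))
                except OSError:  # disk full: stop filling
                    break
                written += 1
            f.flush()
            os.fsync(f.fileno())
    finally:
        if os.path.exists(path):
            os.remove(path)
    return written * chunk
```

Because the tool must be run deliberately and briefly consumes all free space, it is exactly the kind of occasional, manual step the paragraph above describes.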
Some platforms are further along. VMware ESX zeroes out new VMFS space, and the "eagerzeroedthick" virtual disk format zeroes every block when a disk is created. Although certain compatibility issues remain, notably with vMotion, ESX is becoming increasingly thin-aware. The vStorage APIs for Array Integration (VAAI), added in ESX 4.1, include native "block zeroing" support for certain storage systems: ESX uses a plug-in, either a special-purpose one or the generic T10 WRITE SAME support, to signal an array that VMFS capacity is no longer needed.
Symantec Corp. is also leading the charge to support thin provisioning. The Veritas Thin Reclamation API, found in the Veritas Storage Foundation product, includes broad support for most major storage arrays. It uses a variety of communication mechanisms to release unneeded capacity, and is fully integrated with the VxFS file system and volume manager. Storage Foundation also includes the SmartMove migration facility, which assists thin arrays by only transferring blocks containing data.
Thin awareness in other systems is coming more slowly. Another standard command, ATA TRIM, is intended for solid-state storage, but it could also carry thin reclamation signals, along with its SCSI cousin, UNMAP. Microsoft Windows and Linux now support TRIM, and could therefore add thin provisioning support in the future as well. They could also modify the way storage is allocated and released in their file systems.
Thin provisioning challenges exist, but the technology provides many benefits. It's one of the few technologies that can improve real-world storage utilization even when the core issue isn't technology related. Indeed, the ability of thin provisioning to mask poor storage forecasting and allocation processes contributed to the negative image many, including me, had of it. But as the technology improves and thin reclamation becomes more automated, this technology will become a standard component in the enterprise storage arsenal.
BIO: Stephen Foskett is an independent consultant and author specializing in enterprise storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT thought leaders, and organizes their Tech Field Day events. He can be found online at GestaltIT.com, FoskettS.net and on Twitter at @SFoskett.
This article was previously published in Storage magazine.
This was first published in May 2011