Achieve data storage capacity efficiency with virtual storage, SRM

Storage efficiency means optimizing infrastructure to gain the most value from a fixed quantity of resources over a given period of time. Since you only have so much data storage capacity and budget, the trick is to figure out how to spend the budget to obtain the greatest allocation and utilization efficiency of the capacity you have.

Allocation efficiency refers to the placement of data into storage in a manner that neither oversubscribes nor underutilizes the available resources. It also refers to the efficiency with which resources are allocated to workloads when requested -- part of the agility idea currently being bandied about as a description of effective IT. It refers, in part, to inventory management -- optimized allocation efficiency entails just-in-time acquisition and delivery of storage hardware resources to meet storage requirements. That's a fancy way of saying that because the cost per gigabyte (GB) of most storage media falls significantly on a year-over-year basis, it's generally more economical to buy storage capacity when it's needed rather than before it's needed, when it will likely cost more per GB.
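To make the economics concrete, here's a small sketch comparing an up-front purchase with just-in-time buying. The capacity figures and the 25% annual decline in cost per GB are assumptions chosen purely for illustration, not numbers from any vendor's price list:

# Illustrative only: compare buying all capacity up front vs. buying each
# year's capacity in the year it's needed, assuming media gets cheaper annually.

def just_in_time_cost(gb_needed_per_year, price_per_gb_today, annual_decline):
    total = 0.0
    price = price_per_gb_today
    for gb in gb_needed_per_year:
        total += gb * price
        price *= (1 - annual_decline)   # assumed year-over-year price drop
    return total

demand = [100, 100, 100]                # GB needed in years 1-3 (assumed)
up_front = sum(demand) * 0.10           # buy all 300 GB now at $0.10/GB (assumed)
deferred = just_in_time_cost(demand, 0.10, 0.25)

print(f"Up front:     ${up_front:.2f}")
print(f"Just in time: ${deferred:.2f}")  # lower total under the assumed decline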

Utilization efficiency refers to something quite different. It's a qualitative assessment of the distribution of data across an infrastructure -- an audit that reveals whether the right data is hosted on the right storage, based on a number of criteria of more or less equal importance. Optimal placement puts data on devices that respond to access requests appropriately -- with speed and latency suited to the criticality of the data and its frequency of access -- and that expose the data to appropriate levels, types and frequencies of data protection services: RAID, encryption, continuous data protection, mirroring, replication, tape backup and so on. There's also a cost component to optimization. Since different kinds of storage have different associated media and management costs, data should be placed on the least expensive storage that still meets the aforementioned access and protection requirements.

Allocation and utilization efficiency both require effective monitoring and management to be realized. You can't allocate data storage capacity effectively if you don't understand what the workload requires or how much capacity you have in inventory. In general, we all do a pretty poor job of analyzing capacity demands or characterizing the typical storage requirements of applications and end users -- mainly because there is a dearth of tools for performing such assessments. Instead, we rely on software vendors to provide best practices for provisioning storage resources, augmented by administrator experience.

A major SQL database vendor, for example, suggests that the storage allocated to a new implementation of its database software equal the capacity required to store the application code, multiplied by four. The efficiency of this rule-of-thumb approach may or may not be very high, but it's the guidance the vendor provides. A seasoned database administrator (DBA) may have a better feel, based on experience, for the amount of storage required behind an application, but inexactitude remains a problem when defining capacity requirements. DBAs tend to ask for more capacity than they require so they can avoid the hassle of having to request additional capacity again a short time later. As a result, a lot of storage capacity may be allocated yet unused at any given time.
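To illustrate the multiplier with hypothetical numbers: if the application code for a new database implementation occupies 25 GB, this rule of thumb would have the administrator provision roughly 100 GB of storage behind it.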

Application-aware storage and thin provisioning are two answers the storage industry offers to this dilemma. Application-aware storage, which comes in many flavors and interpretations, assesses application input and output on an ongoing basis to discern trends or activity patterns that can provide more granular guidance on application capacity demands.

Thin provisioning leverages similar technology to gather data for use by a forecasting algorithm. This enables the storage system to provide capacity to an application from a shared pool or reserve in time to meet capacity demand. Simultaneously, the thin provisioning engine monitors the reserve capacity and, based on consumption forecasts, notifies system administrators well in advance of the need to purchase more capacity.
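As a rough sketch of the kind of forecast a thin provisioning engine might produce -- the linear growth model, sample data and 90-day purchasing lead time below are simplifying assumptions, not a description of any vendor's implementation:

# Rough sketch: project when a shared thin-provisioning pool will be exhausted
# by fitting a simple linear trend to recent daily consumption samples.

def days_until_exhaustion(samples_gb, pool_size_gb):
    """samples_gb: consumed capacity measured once per day, oldest first."""
    if len(samples_gb) < 2:
        return None
    daily_growth = (samples_gb[-1] - samples_gb[0]) / (len(samples_gb) - 1)
    if daily_growth <= 0:
        return None                      # flat or shrinking -- nothing to forecast
    return (pool_size_gb - samples_gb[-1]) / daily_growth

consumed = [400, 412, 425, 441, 460]     # GB used over the last five days (assumed)
days = days_until_exhaustion(consumed, pool_size_gb=600)
if days is not None and days < 90:       # 90-day purchasing lead time (assumed)
    print(f"Pool forecast to fill in ~{days:.0f} days -- time to order capacity")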

Unfortunately, some hardware vendors have implemented application-aware and thin provisioning technologies directly on the controller of an array, where they provide services only to the trays of disk drives or other media inside that particular storage array cabinet. If you run out of space on a thin provisioning array, you'll need to deploy another. But the original thin provisioning engine doesn't extend to the new array's disk, so you'll eventually manage multiple, independently operating thin provisioning arrays -- which undermines much of the value of thin provisioning.

Virtualized storage, SRM help with data storage capacity allocation

An alternative is to virtualize all storage to establish an abstraction -- a control or service layer -- over the hardware layer, enabling the thin provisioning service, as well as other services, to be shared or used across a growing hardware pool of storage capacity. From an allocation efficiency perspective, virtualizing storage makes sense because it enables a cross-platform storage services management layer that requires fewer administrators to manage a growing amount of storage capacity.

Storage virtualization also sets the stage for improved capacity utilization efficiency, which, as noted above, involves placing the right kind of data onto the right kind of storage based on data characteristics, access and update frequency, media costs and so on. With storage virtualization, various types of storage can be aggregated into virtual pools from which target volumes are created and presented to applications. So you can, with minimal effort, create a tiered storage environment from all the virtualized storage, with each tier providing a different mix of capacity, performance and protection services appropriate to a certain type of data. Associating a particular storage volume with a particular workload provides a bucket for the initial capture of data and for the application of scripts or policies that specify the conditions for migrating data to less expensive, slower performing tiers over time as data re-reference rates fall.
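As a simplified illustration of the kind of policy such a script might apply -- the tier names, age thresholds and sample files are hypothetical, chosen only to show the mechanics:

# Simplified, hypothetical age-based tiering policy: as re-reference rates fall,
# data is earmarked for migration to cheaper, slower tiers.

TIER_POLICY = [
    ("tier1_flash",   7),       # accessed within the last 7 days stays on flash
    ("tier2_sas",    90),       # touched within 90 days sits on midrange disk
    ("tier3_archive", None),    # everything older falls through to archive
]

def target_tier(days_since_last_access):
    for tier, max_age in TIER_POLICY:
        if max_age is None or days_since_last_access <= max_age:
            return tier

# Sample files with assumed days since last access
for name, age in [("orders.db", 2), ("q3_report.pdf", 45), ("2011_logs.tgz", 900)]:
    print(f"{name}: migrate to {target_tier(age)}")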

Storage virtualization is a key technology for realizing capacity allocation and utilization efficiency. But it isn't the whole story: Storage virtualization products tend to provide great tools for storage services management, but offer little in the way of storage resource management (SRM) -- the monitoring and management of the health and configuration of the underlying physical storage infrastructure. For that, you need a separate SRM capability.

All storage arrays come with element management capabilities -- on-board tools for configuration management and status monitoring. A few offer a utility that enables several arrays of the same model to be monitored from a single management dashboard. However, no hardware vendor has found it to be in its best interest to manage another vendor's gear or to allow its gear to be readily managed by another vendor. Even work on shared standards, such as the Storage Networking Industry Association's Storage Management Initiative Specification (SMI-S), has fallen prey to the proprietary interests of vendors, many of whom have chosen not to implement the standard management technology on their kits.

In the absence of a widely adopted, open standards-based approach for cross-platform hardware monitoring and management, many consumers simply use the element managers provided with each box they buy. Storage management then becomes a bit like surfing the Web -- traveling from the status page served up by one array controller to the next, visiting and checking each system in turn. This process is inherently inefficient, and it limits the ability to leverage useful services or to respond proactively to burgeoning infrastructure problems in an optimized way.

The quick solution is to purchase an SRM package -- there are many on the market, such as IBM Tivoli Storage Manager, Symantec Storage Foundation and the SRM features in CommVault Simpana -- and establish it as a company standard. That way, you can add another storage hardware purchasing criterion -- the ability to manage the prospective gear with the chosen SRM tool -- that helps ensure coherent infrastructure monitoring and management.

Ideally, the world of storage would simply embrace the World Wide Web Consortium's RESTful management approach, pioneered in storage by X-IO, on all storage infrastructure components. Until that happens, we may need to settle for proprietary resource management software and proprietary storage virtualization software to get our infrastructure instrumented to deliver storage efficiency.
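If REST-style management were broadly adopted, an administrator could poll any compliant device with ordinary HTTP tooling. Here's a minimal sketch assuming a hypothetical array that publishes a JSON volume listing at /api/v1/volumes -- the host, path, fields and credentials are invented for illustration and don't correspond to any real product's API:

# Minimal sketch of polling a hypothetical RESTful storage management endpoint.
import requests

resp = requests.get(
    "https://array.example.com/api/v1/volumes",
    auth=("monitor", "secret"),      # placeholder credentials
    timeout=10,
)
resp.raise_for_status()

for vol in resp.json():              # assumed: a list of {"name", "size_gb", "used_gb"}
    pct_used = 100.0 * vol["used_gb"] / vol["size_gb"]
    if pct_used > 85:                # alert threshold chosen arbitrarily
        print(f"Volume {vol['name']} is {pct_used:.0f}% full")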


This was first published in March 2014
