Change that stands the test of time: Best Practices

Charting a new IT course means boldly going forward--with a plan.


Change is in the air. Whether it's national politics or IT infrastructure trends, people are yearning for new and creative solutions to their problems. In many ways, this desire for change appears to stem from a feeling that we've lost control and need to remedy the situation.

In the storage world, this trouble can be traced to what we call "hypergrowth." This growth has placed a considerable strain on the management of storage infrastructures, and the problem is now exacerbated by the following trends:

  • New categories of data: Rich media and other storage-hungry data types are becoming commonplace.


  • New application categories: As firms strive to establish competitive advantage, new application categories (many of them under the banner of Web 2.0 initiatives) are springing up. These can introduce additional service-level requirements and consumption challenges.


  • Increased service-level demands from traditional business processes.


When staking out a future technology direction that can sustain an organization through the next decade, the challenge is to embrace the right kind of change. Like steering a ship, getting from point A to point B requires navigation skills and constant course correction.

Among the biggest challenges is avoiding a design that will need major revisions in three years. Given the rapid changes in technology, there's a fine line between embracing innovative technologies and avoiding the bleeding edge. So how does one design a long-term architecture? Let's examine the process and speculate on likely technology directions.


Planning for change
It's easy to get caught up in the technology evaluation process, but before venturing too far down the path of technology selection, establishing requirements and objectives is paramount. An awareness of technology options is healthy, but it's also important to resist the temptation to select the technology first and then rationalize an architecture around it.

At a minimum, your checklist should include the following:

Scalability: By scalability, I'm not referring solely to storage capacity. It's important to plan for growth, but other attributes of system scalability are also critical. Performance scalability, in terms of the ability to increase I/O in a predictable and consistent manner, is an important design factor. Likewise, the ability of advanced features, such as mirroring, replication and load balancing, to perform optimally even with maximum capacities and mixed workloads is often overlooked when the emphasis is primarily on the number of terabytes or petabytes.

Resiliency: System availability and recoverability requirements continue to become more stringent, but any system is only as reliable as the weakest link in the overall chain that delivers it. Appropriately matching the resiliency features of storage with application capabilities and other infrastructure components, as well as establishing standard availability and recoverability policies on an organization-wide basis, are prerequisites of an effective design.

Serviceability and support: While not as sexy as new technology bells and whistles, service and support can be a make-or-break feature. Adopting advanced technology is great until something goes wrong and help is required. What are the organizational expectations in this area? Factors to consider include response time, geographical coverage and level of vendor involvement. While larger, more established vendors may have the edge in this regard, some users report that smaller, up-and-coming vendors offer advantages such as more personalized attention and faster problem escalation.

Manageability: In an era of hypergrowth, the ease with which an environment can be configured, monitored and otherwise administered becomes increasingly important. An understanding of key processes, such as provisioning and change management, factors significantly into design and product selection. Other considerations include organizational factors like data center distribution and growing needs for remote management.
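
One way to keep these criteria front and center during evaluation is to capture them as a structured checklist that vendor proposals are scored against. The sketch below is a minimal, hypothetical Python example; every attribute name and target value is an illustrative assumption, not a recommendation or any vendor's specification.

    # Hypothetical planning checklist; all targets are illustrative assumptions.
    requirements = {
        "scalability": {
            "usable_capacity_tb": 200,          # planned three- to five-year growth
            "peak_iops": 50_000,                # predictable performance scaling
            "features_at_full_scale": ["mirroring", "replication", "load balancing"],
        },
        "resiliency": {
            "availability_pct": 99.99,
            "recovery_time_objective_min": 60,
        },
        "serviceability": {
            "support_response_hours": 4,
            "onsite_coverage": ["primary data center", "remote sites"],
        },
        "manageability": {
            "remote_management": True,
            "provisioning_turnaround_hours": 24,
        },
    }

    def unmet(requirements: dict, proposal: dict) -> list:
        """Return the checklist items a vendor proposal does not address at all."""
        return [
            f"{area}.{item}"
            for area, items in requirements.items()
            for item in items
            if item not in proposal.get(area, {})
        ]

However the checklist is recorded, the point is the same: the requirements exist before the technology shortlist does.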


The end of an era?
For more than five years, storage startups have been striving to advance the adoption of virtualized storage in a myriad of forms: SAN-based virtualization switches and appliances, internally virtualized storage arrays, virtualized NAS, and grid- or cluster-based systems. While these offerings have received largely positive notices from analysts and made inroads in select areas, traditional storage systems have steadfastly remained the platform of choice in mainstream business computing.

To be fair, enterprise-class storage offerings from nearly all of the leading storage vendors have managed to evolve, in terms of performance as well as advanced functionality, to meet and even anticipate customer needs. They have successfully incorporated virtualization technology (e.g., Hitachi Data Systems' USP), integrated multiple performance tiers (e.g., solid-state and SATA storage), enhanced replication capabilities (e.g., EMC's SRDF family) and added connectivity options. It's safe to say that for this class of storage, future-state technology directions are well mapped and well balanced in terms of stability and innovation.

The situation is a little less clear at the midrange level. Evidence continues to mount that the venerable dual-controller midrange storage array is showing its age and will likely evolve into or be replaced by newer, highly virtualized designs promising near-enterprise system functionality at affordable price points.

Among the indicators of the need for change are:

  • Unacceptably long rebuild times for RAID sets built from high-capacity disks (a back-of-the-envelope calculation follows this list)


  • A desire for "inside the box" multitier configurations capable of supporting workloads with diverse performance, access and connectivity characteristics


  • The ubiquitous deployment of server virtualization, with its demand for faster provisioning, easier reconfiguration and simpler data relocation


  • A demand for more robust advanced features, including more versatile replication options and cross-array consistency
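
To see why rebuild times top that list, a back-of-the-envelope calculation is enough. The drive capacity and rebuild rates below are illustrative assumptions about high-capacity SATA drives of this era, not measurements of any particular array.

    def rebuild_hours(capacity_gb: float, rebuild_mb_per_s: float) -> float:
        """Minimum time to rewrite an entire replacement drive at a sustained rate."""
        return capacity_gb * 1024 / rebuild_mb_per_s / 3600

    # Assumed: a 1 TB SATA drive rebuilt at 50 MB/s with no competing host I/O.
    print(f"{rebuild_hours(1000, 50):.1f} hours")   # roughly 5.7 hours, best case

    # Under production load the array typically throttles the rebuild; at 10 MB/s
    # the same drive takes more than a day, during which a single-parity RAID set
    # is exposed to a second failure.
    print(f"{rebuild_hours(1000, 10):.1f} hours")   # roughly 28 hours

As drive capacities climb faster than rebuild rates, those exposure windows only get wider, which is exactly the pressure pushing midrange designs toward faster-recovering layouts.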



The future is now
What might a future midrange environment look like? By looking at the available offerings, we can identify those technologies whose acceptance is likely to broaden.

In the midrange, there exists a spectrum of storage offerings characterized at one end by the classic midrange array exemplified by proven systems like the EMC Clariion and Hitachi AMS families. At the other end of the spectrum are grid- or cluster-based platforms that can range from in-house designs at companies like Google and MySpace, to software-based clusters targeting the scientific and high-performance computing markets, to innovative hardware offerings from IBM's XIV, Isilon, Xiotech and others. Between these very diverse designs is a range of products that leverage virtualization and offer varying degrees of innovation in one or more areas: performance scalability, improved reliability, fast rebuild and recovery, ease of management and so on.

Storage service levels will likely require some rethinking for designs extending out a decade or longer. Given the growing complexity and interdependency among applications, as well as the need for continuous operations, recovery time objectives for nearly all applications will likely shrink dramatically. This means that some form of replication will likely be a given at all service levels (in much the same way that "everything" is backed up today).

With improved availability and recovery common across the board, the primary service-level differentiator will become performance; from a storage-tier perspective, design considerations will emphasize various combinations of performance, aggregate connectivity and capacity. Technologies such as solid-state storage will also play an important role. At the risk of oversimplifying, a future tiered-storage model could very well consist of the following:

    Tier 0: Solid state for very high transaction rate data

    Tier 1: Fibre Channel/SAS for transaction-oriented data

    Tier 2: High-capacity SATA for everything else

In some cases, Tier 1 could even disappear, resulting in a two-tiered model.
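
As a rough illustration of how such a model might be applied, the sketch below assigns a workload to a tier by its transaction rate. The IOPS thresholds and the two-tier fallback are assumptions made purely for illustration; real placement decisions would also weigh capacity, connectivity and cost.

    def assign_tier(iops: float, two_tier_model: bool = False) -> str:
        """Map a workload's transaction rate to a storage tier.

        The IOPS thresholds are illustrative assumptions, not vendor guidance.
        """
        if iops >= 20_000:
            return "Tier 0: solid state"
        if not two_tier_model and iops >= 2_000:
            return "Tier 1: Fibre Channel/SAS"
        return "Tier 2: high-capacity SATA"

    print(assign_tier(30_000))                       # Tier 0: solid state
    print(assign_tier(5_000))                        # Tier 1: Fibre Channel/SAS
    print(assign_tier(5_000, two_tier_model=True))   # Tier 2: high-capacity SATA

In the two-tier variant, the middle band simply collapses into its neighbors, and the line between solid state and SATA becomes largely a cost decision.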

While somewhat daunting, an abundance of storage choices is ultimately a very good thing. We're embarking on a period of significant market segmentation, with vendors creating offerings to target price points and specific feature-set combinations for various audiences. This means we should be better able to tailor product selection to a specific combination of attributes. That just might result in the kind of change that will let us manage all that hypergrowth.

This was first published in July 2008
