Charting a new IT course means boldly going forward, with a plan.
In the storage world, much of today's management trouble can be traced to what we call "hypergrowth." This growth has placed a considerable strain on the management of storage infrastructures, and the problem is only exacerbated by broader industry trends.
Among the biggest challenges is avoiding a design that will need major revisions in three years. Given the rapid changes in technology, there's a fine line between embracing innovative technologies and avoiding the bleeding edge. So how does one design a long-term architecture? Let's examine the process and speculate on likely technology directions.
Planning for change
At a minimum, your checklist should include the following:
Scalability: By scalability, I'm not referring solely to storage capacity. It's important to plan for growth, but other attributes of system scalability are also critical. Performance scalability, in terms of the ability to increase I/O in a predictable and consistent manner, is an important design factor. Likewise, the ability of advanced features, such as mirroring, replication and load balancing, to perform optimally even with maximum capacities and mixed workloads is often overlooked when the emphasis is primarily on the number of terabytes or petabytes.
Resiliency: System availability and recoverability requirements continue to become more stringent, but a system is only as reliable as the weakest link in the overall chain. Appropriately matching the resiliency features of storage with application capabilities and other infrastructure components, as well as establishing standard availability and recoverability policies on an organization-wide basis, are prerequisites of an effective design.
Serviceability and support: While not as sexy as new technology bells and whistles, service and support can be a make-or-break feature. Adopting advanced technology is great until something goes wrong and help is required. What are the organizational expectations in this area? Factors to consider include response time, geographical coverage and level of vendor involvement. While larger, more established vendors may have the edge in this regard, some users report that smaller, up-and-coming vendors offer advantages such as more personalized attention and faster problem escalation.
Manageability: In an era of hypergrowth, the ease with which an environment can be configured, monitored and otherwise administered becomes increasingly important. An understanding of key processes, such as provisioning and change management, factors significantly into design and product selection. Other considerations include organizational factors such as data center distribution and growing needs for remote management.
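The performance-scalability point in the checklist above can be made concrete with a quick back-of-the-envelope check: does delivered I/O keep pace with capacity as the system grows? The sketch below is purely illustrative; the function names and IOPS/terabyte figures are assumptions, not vendor data.

```python
# Hypothetical sketch: sanity-check that a planned configuration scales
# performance in step with capacity. All figures are illustrative.

def iops_per_tb(total_iops, capacity_tb):
    """Performance density: delivered IOPS per terabyte of usable capacity."""
    return total_iops / capacity_tb

def scales_predictably(configs, tolerance=0.25):
    """Return True if IOPS/TB stays within `tolerance` of the baseline
    as the configuration grows, i.e. performance keeps pace with capacity."""
    baseline = iops_per_tb(*configs[0])
    return all(
        abs(iops_per_tb(iops, tb) - baseline) <= tolerance * baseline
        for iops, tb in configs[1:]
    )

# (total IOPS, usable TB) at successive growth stages -- illustrative only
growth_plan = [(20_000, 50), (38_000, 100), (72_000, 200)]
print(scales_predictably(growth_plan))
```

A check like this, run against vendor-supplied or benchmarked numbers at each planned capacity point, is one way to catch designs whose advanced features degrade at maximum capacity before they reach production.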
The end of an era?
To be fair, enterprise-class storage offerings from nearly all of the leading storage vendors have managed to evolve, in terms of performance as well as advanced functionality, to meet and even anticipate customer needs. They have successfully incorporated virtualization technology (e.g., Hitachi Data Systems' USP), integrated multiple performance tiers (e.g., solid-state and SATA storage), enhanced replication capabilities (e.g., EMC's SRDF family) and added connectivity options. It's safe to say that for this class of storage, future-state technology directions are well mapped and well balanced in terms of stability and innovation.
The situation is a little less clear at the midrange level. Evidence continues to mount that the venerable dual-controller midrange storage array is showing its age and will likely evolve into or be replaced by newer, highly virtualized designs promising near-enterprise system functionality at affordable price points.
A number of indicators point to the need for this change.
The future is now
In the midrange, there exists a spectrum of storage offerings characterized at one end by the classic midrange array exemplified by proven systems like the EMC Clariion and Hitachi AMS families. At the other end of the spectrum are grid- or cluster-based platforms that can range from in-house designs at companies like Google and MySpace, to software-based clusters targeting the scientific and high-performance computing markets, to innovative hardware offerings from IBM's XIV, Isilon, Xiotech and others. Between these very diverse designs is a range of products that leverage virtualization and offer varying degrees of innovation in one or more areas: performance scalability, improved reliability, fast rebuild and recovery, ease of management and so on.
Storage service levels will likely require some rethinking for designs extending out a decade or longer. Given the growing complexity and interdependency among applications, as well as the need for continuous operations, recovery time objectives for nearly all applications will likely shrink dramatically. This means that some form of replication will likely be a given at all service levels (in much the same way that "everything" is backed up today).
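If replication really does become a given at every service level, the remaining differentiator in a service catalog is recovery speed. The sketch below illustrates that idea; the level names, replication modes and RTO figures are hypothetical, not drawn from any vendor's offering.

```python
# Hypothetical service catalog: replication assumed at every level,
# with recovery time objective (RTO) as the remaining differentiator.
# Names and figures are illustrative only.

SERVICE_LEVELS = {
    "platinum": {"replication": "synchronous",  "rto_minutes": 5},
    "gold":     {"replication": "asynchronous", "rto_minutes": 60},
    "silver":   {"replication": "asynchronous", "rto_minutes": 240},
}

def meets_rto(level, required_minutes):
    """Check whether a service level satisfies an application's RTO need."""
    return SERVICE_LEVELS[level]["rto_minutes"] <= required_minutes

# Every level replicates; only recovery speed differs.
print(meets_rto("gold", 120))
```

The point of a table like this is that no level is unreplicated, mirroring the way "everything" gets backed up today; applications are matched to levels purely by how fast they must come back.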
With improved availability and recovery common across the board, the primary service-level differentiator will become performance. From a storage-tier perspective, design considerations will emphasize various combinations of performance, aggregate connectivity and capacity, and technologies such as solid-state storage will also play an important role. At the risk of oversimplifying, a future tiered-storage model could very well consist of the following:
Tier 1: Fibre Channel/SAS for transaction-oriented data
Tier 2: High-capacity SATA for everything else
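A two-tier model like the one above reduces data placement to a single question: is the workload transaction-oriented or not? The sketch below shows one naive way to express that policy; the tier labels, dataset names and IOPS threshold are all assumptions for illustration.

```python
# Hypothetical placement policy for the two-tier model above:
# transaction-oriented (high I/O rate) data lands on FC/SAS,
# everything else on high-capacity SATA. Threshold is illustrative.

HOT_IOPS_THRESHOLD = 500  # I/Os per second per dataset -- assumed cutoff

def place_dataset(name, observed_iops):
    """Assign a dataset to a storage tier based on its I/O rate."""
    tier = "tier1-fc-sas" if observed_iops >= HOT_IOPS_THRESHOLD else "tier2-sata"
    return (name, tier)

datasets = [("oltp-db", 4_200), ("file-share", 80), ("email-archive", 12)]
placements = [place_dataset(name, iops) for name, iops in datasets]
print(placements)
```

In practice the decision would rest on measured access patterns rather than a single static threshold, but the simplicity of the policy is exactly what makes a two-tier model attractive.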
While somewhat daunting, an abundance of storage choices is ultimately a very good thing. We're embarking on a period of significant market segmentation, with vendors creating offerings to target price points and specific feature-set combinations for various audiences. This means we should be better able to tailor product selection to a specific combination of attributes. That just might result in the kind of change that will let us manage all that hypergrowth.