Private storage clouds might seem like a rehash of old technology and even older ideas, but there are significant potential benefits once you cut through the hype. Here's what you need to know to get started.
Metaphors for cloud storage may be overused, but we can still relate to the notion that clouds obscure vision and can be either beneficial or turbulent. Both conditions can certainly apply to private cloud storage. Although a lot of the hype around private cloud storage promises all the benefits of a public cloud behind a firewall, private cloud storage really boils down to a new name for utility storage.
Utility storage suffered from its association with selective outsourcing in the post dot-com bust period, even though it's just about simple, certain availability. The name "utility storage" also lacks cachet -- it sounds more like a place to stash your garden tools than a sleek, sexy storage array. "Utility" just doesn't sound as cool as "cloud."
What public cloud tells us about private cloud
The name change doesn't alter the fundamentals: a storage infrastructure designed to deliver better service levels with less effort and at lower cost. And you can't deny the benefits an IT organization can gain from adopting private cloud concepts, regardless of what label is currently popular. To some extent, though, private cloud marketing rides on the coattails of public cloud. One must also acknowledge that storage is only part of the cloud solution, whether it's public or private. Server virtualization, in particular, enables cloud computing of any sort. Nevertheless, a solid data storage strategy is critical to the success of a cloud deployment.
To put private cloud storage into perspective, consider the benefits of public cloud storage:
- Availability. Capacity should be available for immediate provisioning, always on and include swift, certain recovery.
- Quality of service. Service levels should be clearly described and aligned with a services catalog. Tangible metrics should define what users can expect in terms of response time, recovery time and uptime.
- Cost certainty. The per-unit cost of storage in a cloud environment is usually available according to a price list. Users pay for what's actually used, not what's provisioned or the high-water mark, depending upon the service-level agreement.
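To make the pay-per-use distinction concrete, here's a minimal sketch comparing what a business unit might be billed under a used-capacity, high-water-mark or provisioned-capacity model. The rates and usage figures are hypothetical, not drawn from any real price list:

```python
# Hypothetical usage samples for one business unit, in terabytes.
# All rates and figures are illustrative only.
sampled_used_tb = [12.0, 12.5, 14.0, 13.2, 15.8, 15.1]  # periodic usage samples
provisioned_tb = 25.0   # capacity allocated up front
rate_per_tb = 100.0     # $/TB per billing period

# Three common billing bases an SLA might specify:
bill_used = (sum(sampled_used_tb) / len(sampled_used_tb)) * rate_per_tb  # average actual use
bill_high_water = max(sampled_used_tb) * rate_per_tb                     # peak usage in period
bill_provisioned = provisioned_tb * rate_per_tb                          # allocation, used or not

print(f"used:        ${bill_used:,.2f}")
print(f"high-water:  ${bill_high_water:,.2f}")
print(f"provisioned: ${bill_provisioned:,.2f}")
```

As the numbers show, billing on actual use is the cheapest basis for the consumer and provisioned capacity the most expensive, which is why the SLA's definition of "what's used" matters.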
Looking at that list, there are definitely some significant benefits, but benefits to whom? Therein lies the first key difference between public and private cloud storage. In a public cloud, these benefits accrue to the user and the contracting organization as one and the same. Users get all the application support benefits, while the organization gets cost certainty and perhaps a lower cost than that of maintaining an internal infrastructure. But in a private cloud environment, only the application user gains these benefits. The IT organization must provide what's essentially platform-as-a-service functionality. While the business unit may gain cost certainty, pay-as-you-go chargeback and concrete service-level agreements (SLAs), none of these benefits flows to the IT organization, which must still procure and manage just as much storage, establish the monitoring systems and implement disciplined cost accounting.
In addition, none of the benefits specifies "the latest storage array," the "fastest disk drive" or "10 Gb Ethernet." In fact, there are no technical specifications. Public cloud benefits are all oriented around better operations: service levels, cost control and responsiveness. But storage vendors generally don't sell better operations; they sell hardware and software. So what exactly are vendors selling with respect to private cloud storage? Surely, it must be more than a hardware upgrade and a fanciful idea. Fortunately, the answer is "yes." It can be more than hardware and a vision, but only if it's all put into the proper context and environment.
Some vendors emphasize the need for scalability and flexibility as requirements for a cloud architecture. Systems that offer a lower cost model contribute to the attractiveness of a cloud scenario. But nearly all vendors claim those attributes, so the definition isn't very helpful. Moreover, a hardware architecture alone doesn't define a cloud implementation because cloud is ultimately more process than product.
Process maturity required for cloud
Many IT advisory organizations have developed business process maturity models, all of which look more or less the same. Your favorite search engine can locate a few in a couple of seconds. They usually describe five levels of maturity similar to the following:
- Level 1. Ad hoc and tactical, where few processes are defined or documented.
- Level 2. Repeatable, where processes are defined and documented but may vary between functional areas even for similar tasks.
- Level 3. Processes are documented and standardized across the organization and include performance metrics.
- Level 4. Process metrics are routinely gathered, correlated to business operations and disseminated to stakeholders.
- Level 5. Continuous process improvement is enabled by quantitative feedback; proactive capabilities are implemented.
Within the context of private cloud storage, organizational process maturity is clearly a prerequisite to a successful private cloud implementation. Firms should attain a Level 3 capability at a minimum before considering a private cloud storage implementation. The reasons for standardized processes relate to standardized infrastructure, which will be discussed shortly. If your firm doesn't legitimately have a Level 3 maturity, improving processes to that level is the first step to take before embarking on the road to private cloud storage.
Developing a private storage cloud architecture
The organizational benefits from a cloud implementation flow from the discipline and standardization demanded by a cloud architecture. These include better control, optimized utilization, simplified infrastructure architecture and enterprise-wide management practices.
A key characteristic of private cloud storage is a standardized infrastructure, which is sometimes referred to as a reference architecture. Some may argue that a standardized infrastructure is necessary for standardizing procedures, and there's some merit to that argument. However, backup and recovery, provisioning, monitoring and other storage management tasks can be standardized across disparate platforms.
Although a reference architecture can be single-vendor, most are not. A reference architecture is merely a specification of the systems and configurations the organization will support. This will include versions of software and firmware to ensure the technology components are consistent across the organization. For most organizations, storage consolidation will play a key role in evolving to a reference architecture. Because of business acquisitions, business unit autonomy or simply circumstance, organizations often have more variety in systems than can be economically or technologically justified. A private cloud storage initiative is a good opportunity to pare extraneous systems from the data center or to at least prevent them from expanding into other areas.
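Since a reference architecture is essentially a supported-configuration specification, enforcing it reduces to checking deployed systems against the spec. The sketch below illustrates the idea; the system names, roles and version numbers are invented for illustration, not drawn from any vendor's documentation:

```python
# Hypothetical reference architecture: the models and firmware versions the
# organization has agreed to support. All names here are invented examples.
REFERENCE_ARCHITECTURE = {
    "block_storage": {"model": "ArrayX-9000", "firmware": "5.2.1"},
    "nas":           {"model": "Filer-4200",  "firmware": "8.1.3"},
}

def check_compliance(inventory):
    """Return the names of deployed systems that drift from the reference spec."""
    exceptions = []
    for system in inventory:
        ref = REFERENCE_ARCHITECTURE.get(system["role"])
        if (ref is None
                or system["model"] != ref["model"]
                or system["firmware"] != ref["firmware"]):
            exceptions.append(system["name"])
    return exceptions

inventory = [
    {"name": "dc1-san01", "role": "block_storage",
     "model": "ArrayX-9000", "firmware": "5.2.1"},
    {"name": "dc2-nas07", "role": "nas",
     "model": "Filer-4200", "firmware": "7.9.0"},  # stale firmware
]
print(check_compliance(inventory))  # → ['dc2-nas07']
```

A real configuration-management tool does this at scale, but the principle is the same: the reference architecture is data, and drift from it is measurable.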
Private storage cloud building blocks
While IT organizations can develop a reference architecture for any combination of systems, they can also use preconfigured systems such as NetApp's FlexPod. FlexPod is a prequalified and preconfigured system consisting of VMware components, Cisco Unified Computing System blade servers, Nexus switching components and NetApp FAS storage. It's probably most appropriate for new application deployments or technology refreshes because it's a fresh start from existing systems and doesn't incorporate storage from other vendors. It also simplifies technical support, as all three vendors implicitly support the configuration and keep firmware levels in sync.
For organizations that want to consolidate existing diverse systems into a private storage cloud, Hitachi Data Systems' Virtual Storage Platform (VSP) storage controller allows a wide variety of arrays from other vendors to directly attach to it. This offers the benefits of heterogeneous virtualization as well as standardized tool sets (Hitachi's) across the different arrays. This approach can function as a transitional step from diverse systems to standardized configurations without losing the current equipment investment.
The software side of the storage cloud
Standardization can also be facilitated at the software level. For example, Symantec Corp. offers a software stack that can be used to bring commonality to multiple hardware systems. Its Storage Foundation product has file system, volume manager and data movement products across operating platforms. Symantec's Veritas Operations Manager and recently announced (though not yet released) Veritas Operations Manager Advanced purport to provide a single point of management across the virtual server and storage environments, including storage resource management (SRM) functionality for visibility and reporting about the storage environment. The reporting and measurement functionality of SRM apps, among other things, allows firms to determine the cost of storage delivery. This facilitates chargeback for services, which is essential to controlling costs. Some companies won't actually enforce a chargeback, but rather use the charge function to establish the relationship between cost and delivery, and to illustrate it to the IT group, users and management.
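The chargeback arithmetic an SRM tool automates is straightforward: measured consumption per business unit, multiplied by a published rate per tier. A minimal sketch, with hypothetical tier rates and usage numbers:

```python
# Hypothetical per-tier rates, $/GB per month. Real rates come from the
# organization's services catalog.
TIER_RATES = {"tier1": 5.00, "tier2": 2.50, "tier3": 0.75}

# Measured consumption per business unit, in GB, as an SRM tool might report it.
usage_by_unit = {
    "finance":   {"tier1": 800, "tier2": 2000, "tier3": 10000},
    "marketing": {"tier1": 100, "tier2": 5000, "tier3": 4000},
}

def chargeback(usage):
    """Compute each unit's monthly charge from per-tier consumption."""
    return {unit: sum(TIER_RATES[tier] * gb for tier, gb in tiers.items())
            for unit, tiers in usage.items()}

for unit, cost in chargeback(usage_by_unit).items():
    print(f"{unit:10s} ${cost:,.2f}")
```

Even where the charge is never invoiced, a report like this makes the cost of each unit's consumption visible, which is the point of "showback."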
F5 Networks Inc., perhaps best known for its IP load balancers, also offers a slightly different take on integrating existing infrastructure into a cloud. F5 takes a more app-oriented perspective. Its Dynamic Services Architecture uses appliances to provide data classification services that help ensure data is located appropriately so it can be delivered at the required service level. These appliances dynamically move data to the appropriate storage tier or device.
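The general mechanism behind classification-driven placement can be sketched generically (this is not F5's actual product API; the SLA labels and tier names are invented): classify each dataset by its required service level, then map that classification to a tier.

```python
# Hypothetical mapping from service-level classification to storage tier.
# A real appliance would apply policies and move data continuously.
SLA_TO_TIER = {
    "response_ms<=5":  "flash",     # latency-sensitive transactional data
    "response_ms<=50": "sas",       # general-purpose workloads
    "archive":         "nearline",  # retention-only data
}

def place(datasets):
    """Map each dataset's service-level classification to a target tier."""
    return {name: SLA_TO_TIER[sla] for name, sla in datasets.items()}

placements = place({"orders-db": "response_ms<=5",
                    "mail-archive": "archive"})
print(placements)
```

The value of the approach is that placement follows the application's stated service level rather than whichever array happened to have free capacity.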
Indeed, application classification is critical to appropriate deployment of cloud services. Part of having mature operational procedures is maintaining an application catalog for the organization that includes service-level requirements and delivery specifications. This is necessary for any cloud deployment because some applications are more appropriate for a public cloud than a private cloud.
The clearest way to segregate applications appropriately is to consider their strategic importance to the organization. Applications can be classified as either commodity or high value. Commodity applications are important but offer no competitive advantage in the marketplace, while high-value apps offer such an advantage.
To get a better picture of the difference, consider backup and recovery. Every company needs data protection, but it doesn't result in a competitive advantage in the marketplace; organizations with great backup and recovery processes won't be able to charge higher prices for their products or use their backup prowess to increase demand for the company's products. Thus, backup and recovery, once it has met the threshold of viability, is an operation where costs should be minimized. This makes it an ideal target for public cloud services. Email and contact management are two other examples of necessary but non-strategic applications.
In contrast, strategic applications differentiate a company from its competitors. Examples could be unique manufacturing processes and product design systems. In those cases, the systems may depend upon unique devices, specialized configurations of devices or operating software not commonly found in public cloud deployments. For those technological reasons, strategic apps aren't candidates for cloud outsourcing. Moreover, secure systems, such as those related to defense or other classified environments, couldn't be deployed externally. Nevertheless, they can benefit from standardization and improved operational processes and, therefore, private cloud configurations.
EMC Corp. can offer a standardized infrastructure for private cloud deployments, anchored by its Symmetrix VMAX architecture. In addition, the company has a unique filtering model for classifying applications. The filtering model is applied by EMC's consulting arm to assist organizations in their private storage cloud transformations.
This filtering model specifies an economic filter, a trust filter and a functional filter. For example, applications have economic parameters, trust requirements and functional requirements that may be improved or hindered by a specific cloud architecture. By mapping applications to the results of the filter analysis, EMC classifies applications as being best suited to a private cloud, public cloud or hybrid cloud, or simply left in a legacy environment. No organization will migrate all its applications to a cloud environment, and EMC's methodology is a useful way to classify applications and set priorities.
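The three-filter idea can be illustrated with a simple decision sketch. To be clear, the criteria and ordering below are invented for illustration; they are not EMC's actual methodology, which is applied by its consulting arm:

```python
# Hypothetical application of economic, trust and functional filters.
# The attribute names and decision rules are invented examples.
def classify(app):
    # Trust filter: classified or regulated data can't leave the firewall,
    # but can still benefit from private cloud standardization.
    if app["classified_data"]:
        return "private cloud"
    # Functional filter: dependencies on unique hardware or uncommon
    # operating software rule out shared cloud platforms entirely.
    if app["specialized_hardware"]:
        return "legacy"
    # Economic filter: commodity apps that are cheaper to run externally.
    if app["public_cloud_cheaper"]:
        return "public cloud"
    return "hybrid cloud"

backup = {"classified_data": False, "specialized_hardware": False,
          "public_cloud_cheaper": True}
design = {"classified_data": True, "specialized_hardware": True,
          "public_cloud_cheaper": False}
print(classify(backup))  # → public cloud
print(classify(design))  # → private cloud
```

Running every application through a filter like this yields the kind of prioritized migration map the article describes: some apps go public, some private, and some stay put.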
Hype, but hope too
The hype surrounding the private cloud can make it seem like a unique industry development with heretofore unachievable benefits, and vendor marketing makes it sound as if success is only a purchase order away. Transforming data center storage systems into private storage clouds begins with disciplined processes based on a standard operating platform. Does that qualify as utility storage, cloud storage or just better infrastructure? Call it what you want; users care about better service, not labels.
BIO: Phil Goodwin is a storage consultant and freelance writer.