| You can conquer IT infrastructure challenges by rethinking ownership and functional processes.
However, a current of change is propelling us away from business-as-usual systems. Server virtualization and related technologies are causing organizations to reconsider not only IT infrastructure design and architecture, but also the lines of demarcation between functional responsibilities and the operational processes that span them. For example, many of the benefits associated with server virtualization depend on a storage infrastructure optimized to support it; without proper planning, however, an unintended consequence might be a substantial increase in storage consumption and a decrease in utilization.
A prime example of the interdependence of servers and storage is the provisioning process. The ability to provision a server in a virtualized environment shrinks from weeks to days or even hours. However, if the requisite storage still takes weeks to provision, the net benefit to the organization may be lost. In the pre-virtualization days, server and storage provisioning times tended to be roughly equivalent and were therefore reasonably in sync. With the adoption of server virtualization, storage now becomes a major bottleneck. How do we address this? Anyone who has dealt with IT performance issues knows that one way to deal with bottlenecks is to create buffers. If we decide to overpurchase and overallocate storage to keep ahead of server provisioning demands, we can, in a sense, mitigate the problem, but at a substantial cost when it comes to efficiency. When we consider that storage is often overallocated because of limited forecasting ability, we run a significant risk of making a bad situation worse.
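The efficiency cost of that buffering strategy can be sketched with some simple arithmetic. The capacity figures below are hypothetical, chosen only to illustrate how a "just-in-case" buffer drags utilization down:

```python
# Illustrative sketch (hypothetical numbers): the efficiency cost of
# buffering storage ahead of fast VM provisioning.

def utilization(provisioned_tb: float, consumed_tb: float) -> float:
    """Fraction of provisioned capacity actually holding data."""
    return consumed_tb / provisioned_tb

# Pre-virtualization: storage allocated roughly in sync with demand.
in_sync = utilization(provisioned_tb=100, consumed_tb=70)

# Post-virtualization "just-in-case" buffer: overpurchase and
# overallocate to stay ahead of hours-long VM provisioning cycles.
buffered = utilization(provisioned_tb=180, consumed_tb=70)

print(f"in-sync utilization:  {in_sync:.0%}")   # 70%
print(f"buffered utilization: {buffered:.0%}")  # 39%
```

The same consumed data now sits on nearly twice the provisioned capacity, which is the "bad situation worse" scenario the article describes.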
How is this addressed in other supply-chain models? Consider a manufacturing facility with multiple assembly lines where various subsystems are built and must come together at precisely the right time. Many years ago, these firms adopted a just-in-time approach as the most efficient way to meet their needs. In contrast, IT has operated on a just-in-case basis, forfeiting efficiency for serviceability. Server virtualization disrupts that model and will drive related functions to modify their approaches.
| Rethinking process and ownership
Avoiding gross levels of inefficiency requires bringing the storage provisioning function back into balance. This entails a two-pronged effort: considering the operational impact of virtualization on storage, along with the changes needed in terms of process and possibly responsibility/ownership; and identifying areas where newer technologies may be of assistance.
Virtualization technologies from vendors such as VMware introduce an additional layer of storage management that, in effect, causes storage admins to cede some control over how storage is allocated, monitored and managed. The diagram titled "VMware storage provisioning steps" (below) outlines the provisioning process for a VMware environment, starting with the initial storage selection and LUN creation by the storage admin.
What stands out in this diagram is the additional management layer between the storage and the (now virtual) server. The storage admin hands off large disk volumes to the VMware admin to be used as VMFS containers, but the real disk allocation and server assignment responsibilities rest with the VMware admin.
The VMware admin provisions virtual machines (typically based on standardized configuration templates) and assigns standard-sized virtual disks (VMDKs) in accordance with the template definitions. There may be several templates based on defined server configuration profiles. This enables cookie-cutter provisioning and allows for some customization.
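The template-driven step can be modeled in a few lines. The template names, sizes and the customization knob below are hypothetical; a real environment would use VMware's own tooling, but the sketch shows why provisioning becomes "cookie-cutter" with limited per-VM variation:

```python
# Minimal model of template-based ("cookie-cutter") VM provisioning.
# Template names and sizes are hypothetical.
from dataclasses import dataclass

@dataclass
class Template:
    name: str
    vcpus: int
    memory_gb: int
    vmdk_gb: int          # standard-sized virtual disk per the template

@dataclass
class VirtualMachine:
    name: str
    template: Template
    extra_vmdk_gb: int = 0   # limited per-VM customization

TEMPLATES = {
    "web": Template("web", vcpus=2, memory_gb=4,  vmdk_gb=40),
    "db":  Template("db",  vcpus=4, memory_gb=16, vmdk_gb=200),
}

def provision(vm_name: str, template_name: str, extra_vmdk_gb: int = 0) -> VirtualMachine:
    """Stamp out a VM from a standard configuration profile."""
    return VirtualMachine(vm_name, TEMPLATES[template_name], extra_vmdk_gb)

vm = provision("app01", "web", extra_vmdk_gb=10)
# Total disk drawn from the VMFS datastore for this VM:
print(vm.template.vmdk_gb + vm.extra_vmdk_gb)  # 50
```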
Finally, the server admin configures and assigns the volume. However, what's often missing is end-to-end visibility into this storage, and into how efficiently it's being allocated and utilized at each level.
This can have implications for storage management with regard to tiers, data protection requirements and storage efficiency. Policies regarding the allocation of storage to apps based on service level and business value can become muddied or lost in this process and must be revised.
To maintain efficiency, operational standards are required. Many organizations are in various stages of establishing ITIL-like frameworks. Appropriate metrics are critical not only at each administrative layer, but across the intersections between them. Storage management is continually under pressure to contain costs, which to a large extent means controlling growth. Without visibility into the upper levels of the supply chain, this becomes an impossible task. It should be noted that the coordination issue affects not only provisioning, but most other data management functions as well, notably backup and disaster recovery (DR).
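The need for metrics across layer intersections can be made concrete. In the sketch below the layer names and capacity figures are hypothetical; the point is that end-to-end efficiency is the product of per-layer utilizations, so waste at any one layer is invisible unless every hand-off is measured:

```python
# Hypothetical capacity hand-offs along the storage supply chain.
# layer: (capacity handed down in TB, capacity actually used by the next layer)
layers = {
    "array -> VMFS datastores":  (100.0, 80.0),
    "VMFS  -> VMDK files":       (80.0, 60.0),
    "VMDK  -> guest filesystem": (60.0, 30.0),
}

end_to_end = 1.0
for name, (given, used) in layers.items():
    u = used / given
    end_to_end *= u
    print(f"{name}: {u:.0%}")

# Each layer looks tolerable in isolation; the composite does not.
print(f"end-to-end utilization: {end_to_end:.0%}")  # 30%
```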
| Technology to the rescue?
Server virtualization has initiated a stampede of products hoping to address these new management challenges, ranging from performance management and security to cost allocation and enhanced automation.
In terms of storage, more recent advances are likely to play an expanding role. Thin provisioning has been attracting attention for several years and is being embraced by a growing number of vendors in the SAN and NAS realms. While not appropriate for all data or application profiles, it does hold significant promise for virtualized server environments where pre-allocation of large data stores is proving to be inefficient. Thin provisioning can provide a twofold benefit: storage efficiency can be better controlled, and the inherent process of allocating storage from a common pool is often simpler and faster than traditional LUN allocation. This type of oversubscription demands first-rate monitoring and management practices to avoid serious disruptions due to overconsumption.
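The monitoring discipline thin provisioning demands comes down to one distinction: logical allocations may safely exceed the pool, but physical consumption may not. A minimal sketch, with hypothetical LUN names, sizes and a 70% warning threshold:

```python
# Thin-provisioned LUNs oversubscribed against a common pool.
# All names and sizes are hypothetical.
POOL_PHYSICAL_GB = 1000

# LUN: (logical size presented, capacity actually written)
thin_luns = {
    "vmfs-01": (600, 250),
    "vmfs-02": (600, 300),
    "vmfs-03": (400, 200),
}

logical = sum(size for size, _ in thin_luns.values())
written = sum(used for _, used in thin_luns.values())

oversubscription = logical / POOL_PHYSICAL_GB   # committed vs. physical
pool_used = written / POOL_PHYSICAL_GB          # actual consumption

print(f"oversubscription ratio: {oversubscription:.1f}x")
print(f"pool consumption: {pool_used:.0%}")
if pool_used > 0.70:
    # The pool, not any single LUN, is the exhaustion risk.
    print("WARNING: pool nearing exhaustion; add capacity or migrate LUNs")
```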
In the realm of secondary storage, such as disk-based backup, data deduplication technology will have particular value for virtualized environments. As disk-based approaches involving techniques like snapshots and proxy servers become more accepted, the value proposition for deduplication grows and will become a standard for secondary data storage in virtualized environments.
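The reason deduplication pays off so well for virtualized environments is that backed-up VM images share large runs of identical blocks (OS files, templates). A minimal content-hashing sketch, not any vendor's actual implementation:

```python
# Block-level deduplication in miniature: identical blocks are stored
# once, keyed by content hash, and referenced thereafter.
import hashlib

def dedup_store(blocks: list[bytes]) -> tuple[dict[str, bytes], list[str]]:
    """Return (unique-block store keyed by SHA-256, per-block reference list)."""
    store: dict[str, bytes] = {}
    refs: list[str] = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # store payload only on first sight
        refs.append(digest)
    return store, refs

# Backups of VMs built from a common template repeat the base blocks.
blocks = [b"os-base", b"app-A", b"os-base", b"app-B", b"os-base"]
store, refs = dedup_store(blocks)
print(f"{len(blocks)} logical blocks -> {len(store)} unique stored")  # 5 -> 3
```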
Meanwhile, the need for management tools that provide end-to-end configuration, change management and monitoring represents a huge opportunity for vendors. In SAN environments, tools that offer end-to-end visibility are entering the market and this need will be further addressed as N_Port ID Virtualization adoption broadens. Storage vendors are enhancing their management capabilities to better support virtualized servers, with some even integrating management functions into VMware's VirtualCenter. At the other end, server configuration, change and patch management tools will likely expand and improve to encompass storage in the future.
Server virtualization is a disruption to traditional infrastructure practices that extends well beyond the server realm. For this reason, a server virtualization project must be approached as an overall IT infrastructure redesign, in terms of both what must be delivered and how well those cross-domain interdependencies are understood.