Server-side storage and virtual SANs are getting their share of attention. Some server hypervisor vendors seem to be suggesting that legacy technologies such as SAN and network-attached storage (NAS) are simply too rigid and inflexible to service the requirements of next-generation agile computing, which is enabled by server virtualization. However, if the end goal is greater storage efficiency, it might make sense to review what you have and determine whether there is some way to make it more serviceable, even under new workload requirements.
A lot of today's storage pitches -- what I like to call "marketecture" -- suggest that the best way to make storage infrastructure efficient is to return to a simpler model of direct-attached storage (DAS): storage rigs directly attached to -- or internally mounted within -- server chassis. According to server hypervisor vendors, this server-side model, combined with centralized value-added storage software that applies services to storage shares in a manner appropriate to the hosted workload, is the next evolution in storage topology.
At first glance, the goals of this software-defined storage model (the combination of server-side topology and value-added storage services) are admirable. By creating a centralized storage software service, you can take a lot of the expense out of storage rigs by removing the associated array-centric software licensing fees. Server-side storage arrays simply replicate data among themselves to pre-stage and synchronize the data that might be needed if an application migrates from one physical host to another. That way, with replicated data and services in place, workloads can move from machine to machine without configuration changes at the storage layer or the application layer to adjust resources to demand.
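The replication scheme just described can be sketched in a few lines of Python. This is a toy model with illustrative names, not any vendor's actual API: every write fans out to every physical host that might later run the workload, so the data is already local when the VM migrates -- at the cost of one full copy per candidate host.

```python
class ServerSideNode:
    """One server chassis with internal or direct-attached storage (hypothetical model)."""
    def __init__(self, name):
        self.name = name
        self.local_copy = {}          # block address -> data

    def write_local(self, addr, data):
        self.local_copy[addr] = data

def replicated_write(nodes, addr, data):
    """Synchronously fan a write out to all candidate hosts."""
    for node in nodes:
        node.write_local(addr, data)

# Three hosts that might each end up running the workload.
hosts = [ServerSideNode("esx%d" % i) for i in range(1, 4)]
replicated_write(hosts, addr=0, data=b"workload block")

# Each host now holds its own copy, so a VM can move to any of them
# without storage reconfiguration -- but the data exists three times.
assert all(h.local_copy[0] == b"workload block" for h in hosts)
```

The N-way copy is what makes the server-side model simple to migrate within, and also what makes it capacity-hungry compared with a single shared volume.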
In its simplest manifestation, this means no time-consuming manual procedures are needed to reconfigure paths between guest applications -- instantiated as virtual machines (VMs) riding on physical hosts -- and the physical storage hardware or logical volumes that store workload data and perhaps the VM "disks" themselves (VMDKs or virtual hard disks). This enables faster movement of guest apps from one physical host to another, whether as a function of server load balancing or performance optimization, or in response to hardware or software failures.
Freewheeling application cut and paste is just the beginning of the benefits, advocates say. Software-defined storage also means there are no extra steps to provision new storage to a guest application when needed, to ensure the proper services are associated with the new storage (data protection services, thin provisioning, deduplication and so on), or to change the parameters and processes for managing storage with each configuration change. These things would all be enabled in the brave new world of server-attached, software-defined storage in a way they never were in legacy SAN or NAS, according to evangelists.
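The provisioning claim boils down to policy-driven service association: the storage software, not the administrator, attaches the right services when a volume is carved out. A minimal sketch of the idea, with made-up policy names and fields (no real product's interface is implied):

```python
# Hypothetical service policies: each maps a policy name to the set of
# value-added services the paragraph above mentions.
POLICIES = {
    "gold":   {"replication": True,  "thin_provisioned": True,  "dedupe": True},
    "bronze": {"replication": False, "thin_provisioned": True,  "dedupe": False},
}

def provision_volume(name, size_gb, policy="bronze"):
    """Return a volume record with its service set pre-associated,
    so no follow-up step is needed to attach services by hand."""
    services = POLICIES[policy]
    return {"name": name, "size_gb": size_gb, **services}

vol = provision_volume("vm042-data", 100, policy="gold")
assert vol["replication"] and vol["dedupe"]
```

Whether this policy engine lives in a hypervisor-vendor stack or in a storage virtualization layer is exactly the debate the rest of the column takes up.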
Four benefits of creating virtual volumes
Truth be told, many aspects of current SAN and NAS architecture contribute to today's storage inefficiencies, but a debate still needs to be had about the best way to resolve them. It's true that vendors of SAN and NAS gear tend to market products that dilute the cost savings achieved at the disk layer -- the falling price per gigabyte of disk drives -- preferring to increase earnings by joining proprietary value-added software at the hip to proprietary array controllers. But it is by no means certain that this cost problem is addressed by returning to simplistic DAS topologies with centralized value-added software functionality.
Turning to server-side hardware topologies would seem to be a step backward in evolutionary terms, given that data storage technology has steadily moved from isolated islands of storage to bigger and bigger shared pools of storage resources. An alternative to dismantling shared storage is to virtualize the capacity of all existing SAN storage and present it to virtualized and non-virtualized applications in the form of virtual volumes. There are four potential benefits to such an approach.
1. Replication. By virtualizing your storage capacity, virtual volumes can be created that move with guest applications from physical server to physical server. This eliminates the need for multiple replication processes -- in the case of virtualized workloads -- to pre-instantiate data behind individual servers that might host the application. It also provides the capability to support the needs of both virtualized x86 workloads and the 25% or so of non-virtualized revenue-generating apps that are expected to persist through 2016 and beyond. The storage virtualization engine can be provided as software running on a commodity server (for example, DataCore Software's SANsymphony-V) or as an appliance (such as IBM SAN Volume Controller). Either way, the cost to deploy is far less than the cost to replace all existing storage.
2. Configuration. As a virtualized workload moves from one host to another, the virtual volume containing its data remains in one place, while new routes for application storage I/O are recalculated, rebalanced and applied to the new topology transparently. No reconfiguration steps are required to reconnect the rehosted app to its data storage service.
3. Shared services. Value-added storage services are no longer hosted on array controllers, where they create isolated islands of functionality; instead, they are shared on a software-based uber-controller. So, virtual volumes associated with a given workload can be assigned data protection, performance management, capacity management and security services as appropriate for the workload -- services that remain continuously applied even if the application changes hosts.
4. Hardware longevity. Storage virtualization enables practical benefits, including keeping older storage gear in service for longer periods of time, deferring the cost of upgrades until absolutely necessary, incorporating used gear into infrastructure, sharing expensive silicon storage assets (flash storage) more efficiently among workloads, and creating pools of storage between which data can be tiered over time.
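The first three benefits above share one mechanism, which a simplified model (illustrative names only, not the interface of SANsymphony-V, SAN Volume Controller or any other product) can make concrete: the data and its services live in one virtual volume behind the virtualization engine, and when the workload moves, only the I/O path mapping is recalculated -- nothing is copied and no services are reassigned.

```python
class VirtualVolume:
    """A virtual volume presented by the virtualization engine (toy model)."""
    def __init__(self, name, backend, services):
        self.name = name
        self.backend = backend        # physical array actually holding the data
        self.services = services      # e.g. {"snapshots", "replication"}

class VirtualizationEngine:
    def __init__(self):
        self.paths = {}               # (vm, volume name) -> host currently doing I/O

    def map_io_path(self, vm, volume, host):
        self.paths[(vm, volume.name)] = host

    def migrate(self, vm, volume, new_host):
        # Recalculate the route only; volume placement and its
        # assigned services are untouched.
        self.map_io_path(vm, volume, new_host)

engine = VirtualizationEngine()
vol = VirtualVolume("vm042-data", backend="legacy-san-1", services={"snapshots"})
engine.map_io_path("vm042", vol, host="esx1")
engine.migrate("vm042", vol, new_host="esx2")

# The app follows a new path, but the data never moved and its
# services stayed attached.
assert engine.paths[("vm042", "vm042-data")] == "esx2"
assert vol.backend == "legacy-san-1" and "snapshots" in vol.services
```

Contrast this with the server-side model, where the same migration requires the data to already exist on esx2.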
Despite all of these potential advantages, storage virtualization has been deliberately excluded from the definition of software-defined storage by some keepers of the new lexicon.
Since the front office views efficiency in terms of return on investment (ROI), it makes sense for storage efficiency to begin with the acquisition of the right kind of storage at the best possible price. Even used equipment might make a good infrastructure addition if you ensure the gear is legal and well maintained. Another obvious must is making sure the equipment is set up and configured properly for maximum uptime, resiliency and throughput. Less obvious is why existing shared storage should be ripped and replaced, especially when storage virtualization might meet all the goals of software-defined storage without requiring the expenditure of budget dollars on new infrastructure.
Storage inefficiency is contributing to mounting costs that are making the front-office bean counters look for their cost-cutting tools. Only a business-savvy approach to addressing the root causes of storage inefficiency will yield a workable and sustainable strategy for deriving the best performance and utilization efficiency from your storage, and demonstrating ROI that meets front-office expectations.