ORLANDO, Fla. -- Making storage virtualization as efficient and useful as server virtualization was a key topic Tuesday among storage administrators and vendors during the opening day of Storage Networking World.
Scott Siegfried, director of IT at Denver-based kidney dialysis provider DaVita, presented a case study about his company's "virtualization evolution." DaVita virtualizes servers with VMware and uses IBM's SAN Volume Controller (SVC) storage virtualization system, but Siegfried said he still lacks the ability to diagnose performance issues because of poor visibility between his virtual machines (VMs) and virtual storage.
DaVita has about 4,200 servers and 1 PB of storage on its storage area network (SAN), Siegfried said. He said 70% of the Windows-based servers and about 65% of Unix-based servers are virtual. For storage, DaVita has two eight-node and two four-node SVC clusters in front of IBM Storwize V7000, DS8300 and DS8800 SAN arrays.
Siegfried said he uses SVC to quickly provision storage capacity on servers when creating VMs. "We deliver a significant number of IOPS through the SVCs," he said. "The other benefit is we can use the SVCs to migrate between tiers of storage."
The need to dynamically allocate capacity is critical as the company's storage grows between 30% and 40% year-over-year. But DaVita still has problems diagnosing performance issues within the VMs and the SAN. Virtualization on the server and SAN level has created multiple levels of abstraction, making it difficult to pinpoint what could be causing an application performance problem.
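To see why that growth rate makes dynamic provisioning urgent, a quick back-of-the-envelope calculation helps: compounding at the article's stated 30% to 40% per year, a 1 PB SAN doubles in about three years. The sketch below is purely illustrative, not DaVita's actual capacity-planning tool.

```python
# Hypothetical illustration: at 30-40% annual growth, project how
# quickly a 1 PB SAN doubles. Numbers come from the article; the
# function itself is an assumption for illustration only.

def years_to_reach(start_pb: float, target_pb: float, annual_growth: float) -> int:
    """Count whole years until capacity demand meets or exceeds target_pb."""
    years = 0
    capacity = start_pb
    while capacity < target_pb:
        capacity *= 1 + annual_growth
        years += 1
    return years

if __name__ == "__main__":
    for growth in (0.30, 0.40):
        print(f"At {growth:.0%} growth, 1 PB doubles in "
              f"{years_to_reach(1.0, 2.0, growth)} years")  # 3 years either way
```

At either end of the stated range, capacity demand doubles within three years, which is why manual, array-by-array provisioning becomes a bottleneck.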
Other challenges lie inside the SAN fabric itself: because VMs generate higher IOPS, the SAN needed more ports, Siegfried said.
Storage virtualization reincarnated
Independent of Siegfried's session, a vendor panel discussed how storage virtualization can deliver value comparable to server virtualization. Representatives from VMware, DataCore Software, Hitachi Data Systems (HDS) and IBM suggested vendors stop thinking about storage as hardware and focus on software, the way server vendors do with hypervisors.
Not surprisingly, they used the terms "storage hypervisor" and "software-defined storage" to make their points.
"Nobody buys million-dollar servers anymore," said Mark Davis, the former Virsto Software CEO who became VMware's vice president of storage when VMware acquired Virsto in February. "They now use a virtualization layer or a software abstraction layer, so the underlying hardware is [not the concern]. The idea of a storage hypervisor as the abstraction layer is what is needed for storage, just like it is for servers. I think in the last year we have begun thinking of storage as a software problem rather than a hardware problem."
Ron Riffe, IBM's business line manager for storage software, said server virtualization has transformed the market and "it is moving in the same direction for storage largely because of economics." Data growth continues to drive hardware costs upward, so it's inevitable that storage virtualization technology will become mainstream, he said.
However, some attendees pointed out that storage virtualization has been around for at least 15 years and it still has not taken off the way server virtualization did. Integrator Per Sedihn, chief technology officer at Sweden-based Proact IT Group, said it is easier to virtualize servers than storage.
Server virtualization requires slicing up CPU and memory in standard Intel-based devices and creating separate logical entities that applications can run on, while turning array resources into a logical layer for virtual storage is more difficult. Even if disk capacity is virtualized into pools of storage, adding capacity remains a manual hardware-centric process.
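The abstraction Sedihn describes can be sketched in a few lines: a pool aggregates capacity from physical arrays and carves virtual volumes out of it, but growing the pool still means racking and registering new hardware. This is a minimal, hypothetical model; the class and method names are illustrative and not any vendor's API (SVC, DataCore, or otherwise).

```python
# Hypothetical sketch of a "storage hypervisor" abstraction: virtual
# volumes are provisioned from a pool that aggregates physical arrays.
# All names here are assumptions for illustration, not a real API.

from dataclasses import dataclass, field

@dataclass
class Array:
    name: str
    capacity_gb: int
    used_gb: int = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

@dataclass
class StoragePool:
    arrays: list = field(default_factory=list)
    volumes: dict = field(default_factory=dict)

    def add_array(self, array: Array) -> None:
        # Growing the pool remains a hardware-centric step: someone must
        # rack, zone and register the new array before capacity appears.
        self.arrays.append(array)

    def create_volume(self, name: str, size_gb: int) -> None:
        """Provision a virtual volume from whichever array has free space."""
        for array in self.arrays:
            if array.free_gb >= size_gb:
                array.used_gb += size_gb
                self.volumes[name] = (array.name, size_gb)
                return
        raise RuntimeError("Pool exhausted: add another array")

pool = StoragePool()
pool.add_array(Array("array-1", capacity_gb=100))
pool.create_volume("vm-datastore-01", 60)
pool.create_volume("vm-datastore-02", 30)
```

The `create_volume` call hides which array backs the volume, which is the logical-layer benefit; the `add_array` call is the manual, hardware-centric step Sedihn argues has not been virtualized away.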
"It hasn't taken off," Sedihn said. "You still have to manage each array. You are taking hardware and making it logical, and that is much harder." Storage virtualization has not really evolved beyond data migration, he said.
VMware's Davis agreed with that. "You're right. It's like you're buying VMware just for vMotion," he said. "We need to solve much bigger problems than migrating workloads. We need to commoditize storage hardware. Otherwise, we are centralizing storage and then creating another management layer."
DataCore CEO George Teixeira disagreed that storage virtualization hasn't taken off, saying his company has 10,000 customers. Companies can buy DataCore software with a midrange EMC VNX array and get everything a high-end VPLEX does at a fraction of the price, he said.
But Teixeira agreed that the industry-wide storage mindset needs to move from hardware to software. "When they think about servers, they think, 'Do I want VMware or [Microsoft] Hyper-V?' When they think about storage, people still think of hardware," he said. "Our objective is to take software and have it do everything storage arrays do, so the hardware doesn't matter."