Virtualization user bounces back from failure

After its first virtualization project failed, a Midwestern manufacturing firm found the benefits of the technology compelling enough to search for another vendor and try again.

IT people who get burned by an emerging technology rarely submit themselves to more of the same. But for one survivor of a virtualization project gone wrong, the benefits of the technology were still compelling enough to warrant giving it another shot.

Approximately three years ago, Todd Wyman, a senior Unix administrator at a Midwestern manufacturing company, began virtualizing his firm's 30 terabytes of Hitachi Data Systems Inc. (HDS) and Hewlett-Packard Co. disk using DataCore Software Inc.'s product. Approximately one year into the project, the environment became "unstable." While Wyman and DataCore differ on the causes of the instability (Wyman blames DataCore's in-band architecture, while DataCore claims Wyman's company hadn't properly configured its redundancy), things were bad enough that the DataCore servers were yanked.

Wyman and his co-workers worked for about six weeks to restore the data center to its original, nonvirtualized state. Shortly thereafter, they started looking for another virtualization platform. "We missed things like single pane-of-glass management, being able to use open source disk, the snapshotting, etc.," Wyman said.

After a nine-month evaluation, the company settled on StoreAge Networking Technologies' Storage Virtualization Manager. It isn't as "feature-rich" as FalconStor Software Inc.'s IPStor, the other product the company evaluated, Wyman said, but his team was more comfortable with its architecture, in which the virtualization server sits out-of-band while actual data travels directly between the server and the storage device. Wyman and his colleagues use StoreAge's multiMirror to replicate to a disaster recovery site, and make heavy use of multiCopy, a snapshot implementation, particularly for making backup and test copies of Oracle databases.
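The in-band/out-of-band distinction at issue here can be sketched in a few lines. This is a conceptual illustration only, not vendor code; all class and method names are invented for the example. The point is where the appliance sits: in-band, every read passes through it; out-of-band, it only resolves virtual-to-physical mappings and the host talks to the array directly.

```python
class Array:
    """Stands in for a physical disk array, addressed by physical LBA."""
    def __init__(self):
        self.blocks = {}

    def read(self, physical_lba):
        return self.blocks.get(physical_lba, b"\x00")

    def write(self, physical_lba, data):
        self.blocks[physical_lba] = data


class InBandVirtualizer:
    """Sits in the data path: every I/O flows through the appliance."""
    def __init__(self, array, mapping):
        self.array, self.mapping = array, mapping

    def read(self, virtual_lba):
        # The appliance itself performs the backend read.
        return self.array.read(self.mapping[virtual_lba])


class OutOfBandVirtualizer:
    """Answers only mapping queries; data moves host <-> array directly."""
    def __init__(self, mapping):
        self.mapping = mapping

    def resolve(self, virtual_lba):
        return self.mapping[virtual_lba]


array = Array()
array.write(7, b"payload")
mapping = {0: 7}  # virtual LBA 0 lives at physical LBA 7

# In-band: one call, but the appliance is a potential bottleneck
# and failure point for every I/O.
inband = InBandVirtualizer(array, mapping)
assert inband.read(0) == b"payload"

# Out-of-band: the host asks for the mapping, then reads the array itself.
oob = OutOfBandVirtualizer(mapping)
assert array.read(oob.resolve(0)) == b"payload"
```

The trade-off the article describes falls out of the sketch: an out-of-band failure interrupts new mapping lookups, while an in-band failure interrupts the data path itself.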

Even today, the potential failure of a virtualization system is still a huge concern for storage administrators. "I'm an old Unix guy; I had enough trouble putting my data on a SAN, but then you ask me to put a Windows box in the middle ... It was a horse pill to swallow," said John Parrish, associate vice president in charge of terminal technology at the Dallas/Fort Worth International Airport Board, which uses DataCore to replicate application data between two coproduction sites on the airport campus. Data is continuously updated from a variety of feeds and powers applications like flight information terminals and baggage handling. "If it went down, I'd be looking for another job," he said.

Parrish considered in-band virtualization only because the replication features he needed weren't available for his midrange HDS arrays, and he took the plunge only after extensive in-house testing. "Knock wood, it has not faltered once," he said.

Slowly but surely, people's concerns about virtualization are being allayed, according to George Teixeira, DataCore's president and CEO. When it comes to reliability, virtualization has to follow the same rules as the rest of the infrastructure. "If you want high availability, you need two pipes to everything," he said. If you skimp and use, for example, only a single controller and it fails, "you're in deep yogurt."
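The "two pipes to everything" rule is just multipathing: with two independent paths, a single controller failure is survivable; with one, the same failure loses all access. A minimal sketch, with invented names standing in for real multipath drivers:

```python
class Controller:
    """Stands in for one path (HBA, switch port, array controller)."""
    def __init__(self, healthy=True):
        self.healthy = healthy

    def read(self, lba):
        if not self.healthy:
            raise IOError("controller failed")
        return f"data@{lba}"


def multipath_read(paths, lba):
    """Try each path in turn; succeed as long as any one path survives."""
    for controller in paths:
        try:
            return controller.read(lba)
        except IOError:
            continue
    raise IOError("all paths failed")


primary, secondary = Controller(healthy=False), Controller()

# Dual-path setup rides through the primary's failure...
assert multipath_read([primary, secondary], 42) == "data@42"

# ...while a single-path setup with the same failed controller does not.
try:
    multipath_read([primary], 42)
except IOError as exc:
    print(exc)  # prints: all paths failed
```

In production this failover lives in the multipath driver and the virtualization layer's redundant nodes, not in application code; the sketch only shows why a single skimped component defeats the whole chain.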
