Feature

The case for network smarts


In the past couple of years, widespread successful storage area network (SAN) deployments have proven the value of networked storage. But they've also made it clear that a heavy management burden and substantial cost won't be cured just by putting lots of disk behind a fabric. Now the same vendors that sold you a SAN to solve your management and cost issues are peddling a new technology to solve the management and cost issues that have resulted from trying to solve your management and cost issues.

So, what's the latest cure-all? Network-based storage intelligence, and everybody's doing it.


Is it smart to cache in the network?
Both DataCore, Ft. Lauderdale, FL, and Melville, NY-based FalconStor employ network caching as a way of improving performance, albeit differently. For FalconStor, caching comes by way of a solid-state disk used to store frequently accessed files. DataCore, meanwhile, uses the system's internal RAM cache to speed up the I/O performance of the back-end array.

"It's not just that we're minimizing latency," says Calvin Hsu, DataCore product marketing manager, "we're actually increasing the performance of the array."

In an independent evaluation, the Evaluator Group found that a DataCore SANsymphony implementation could deliver 400,000 I/Os with a 100% cache hit rate, numbers that "far exceed any single box storage system published results encountered so far," the analysts wrote.
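To make the caching idea concrete, here is a minimal sketch in Python of a network-side read cache. It is a generic illustration, not DataCore's or FalconStor's actual design: frequently requested blocks are served from fast memory (RAM or solid-state disk), and only misses touch the back-end array.

    from collections import OrderedDict

    class NetworkReadCache:
        # Generic LRU read cache sitting between hosts and a back-end array.
        # Illustrative only; real products also handle writes, failover and
        # cache coherence.
        def __init__(self, backing_store, capacity=1024):
            self.backing_store = backing_store  # dict standing in for the array
            self.capacity = capacity            # blocks held in fast memory
            self.cache = OrderedDict()          # block_id -> data, in LRU order

        def read(self, block_id):
            if block_id in self.cache:
                self.cache.move_to_end(block_id)  # hit: served without touching the array
                return self.cache[block_id]
            data = self.backing_store[block_id]   # miss: one I/O reaches the array
            self.cache[block_id] = data
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)    # evict the least recently used block
            return data

The more blocks the cache absorbs, the fewer I/Os the array ever sees, which is the basis for DataCore's claim of increasing, not just masking, array performance.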

However, whether for political or technical reasons, larger subsystem and switch vendors seem to think that caching from the network is ill-advised. Perhaps not surprisingly, EMC's Mark Lewis, executive VP, chief technology officer, says that caching from the network is out, as is putting RAID functions in the network.

"You'd never want to put [RAID] into the network because you'd lose performance, and then you'd have to add caching and then ... you'd have a RAID array," says Lewis, adding that "it's easier to make a RAID array into a switch than a switch into a RAID array."

Similarly, Scott Gready, director of virtualization, HP storage software group, advocates "keeping caching closer to the disk drives, in the array controller."

Others worry about scalability and reliability. If you have more than one device on the network, you now need to worry about keeping the cache consistent between the devices, says Dave Stevens, Brocade director of business development and strategic alliances. He also worries about what happens if the network device goes down before it has time to flush its cache to disk: "You lose all your I/Os."
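Stevens' worry is easy to model. In the hypothetical sketch below, writes are acknowledged the moment they land in the network device's memory; anything not yet flushed when the device dies is simply gone.

    class WriteBackCache:
        # Hypothetical write-back cache: fast acknowledgments, but acked
        # data is not durable until flush() copies it to disk.
        def __init__(self, disk):
            self.disk = disk   # dict standing in for the back-end array
            self.dirty = {}    # acknowledged writes not yet on disk

        def write(self, block_id, data):
            self.dirty[block_id] = data   # host sees an immediate ack
            return "ack"

        def flush(self):
            self.disk.update(self.dirty)  # only now is the data durable
            self.dirty.clear()

        def crash_before_flush(self):
            lost = list(self.dirty)       # every acked-but-unflushed I/O is lost
            self.dirty.clear()
            return lost

Array controllers typically close this window with mirrored, battery-backed cache; the question Stevens raises is whether a device sitting in the fabric can make the same guarantee.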

The idea behind moving intelligence into the storage network is simple: Get it off the servers and the arrays, where you're bound by proprietary operating systems, and you gain the flexibility to mix and match storage environments, or, as is more often the case, to integrate incompatible storage systems.

Take copy services such as Network Appliance's Snap family of point-in-time copy and replication functions. Much loved by users, the Snap family nonetheless requires that the target of any Snap function also be a NetApp array. Meanwhile, if you run storage services--volume management, for example--from the host, you're limited to providing services for that one host. In environments with a large number of hosts, that can be a real management headache.

Then, there's the fact that many network-based services rely on virtualization engines, which provide additional benefits in their own right, including improved utilization, increased ease in provisioning and insulation from the details of an underlying storage device.
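A virtualization engine, reduced to its essence, is a mapping layer. The sketch below is hypothetical and far simpler than any shipping product, but it shows the core trick: hosts address one virtual volume while the engine resolves each I/O to whatever array happens to hold the data, so back-end devices can be mixed or swapped without the host noticing.

    class VirtualVolume:
        # Hypothetical mapping layer: virtual extents -> (array, lun, offset).
        EXTENT_SIZE = 1 << 20  # 1MB extents, an arbitrary choice for the sketch

        def __init__(self):
            self.extent_map = {}  # extent number -> (array, lun, physical offset)

        def provision(self, extent_no, array, lun, offset):
            self.extent_map[extent_no] = (array, lun, offset)

        def resolve(self, virtual_offset):
            extent_no, within = divmod(virtual_offset, self.EXTENT_SIZE)
            array, lun, base = self.extent_map[extent_no]
            return array, lun, base + within  # where the real I/O goes

        def migrate(self, extent_no, new_array, new_lun, new_offset):
            # After copying the data, retargeting the extent is one map
            # update; the host's view of the volume never changes.
            self.extent_map[extent_no] = (new_array, new_lun, new_offset)

Improved utilization, easier provisioning and painless migration all fall out of owning that map.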

While centralizing storage services is a solid concept, the devil will be in the details of your particular environment. And you'll have to sort out competing models of network-based services. Finally, only time will tell if the price will be right.

Killer apps for network services
Better utilization, simpler backup and streamlined management are among the many benefits promised by network intelligence, but by far the most common reason storage managers deploy network-based storage software today is copy services: data replication, snapshots, cloning or simply data migration.

In today's world of array-based replication--such as EMC's Symmetrix Remote Data Facility (SRDF) or Peer-to-Peer Remote Copy (PPRC) on IBM's Enterprise Storage Server (a.k.a. "Shark")--users who need these features often buy more expensive storage hardware than they really need, says Rich Napolitano, Sun vice president of data services platform and a founder of Pirus Networks, which Sun acquired last summer. Because copy services must run on both the initiator and the target, users are typically locked into one particular brand.

One early adopter of network-based copy services is Rod Lucero, chief architect at Conseco Finance Corp. in St. Paul, MN, who was recently put in charge of migrating the company's data center from an AS/400 environment to open systems. After finding that EMC's SRDF didn't meet his performance and cost requirements, Conseco moved to "a poor man's replication," i.e., database dumps and Unix remote copy functions. Servers quickly got bogged down, though, and database administrators petitioned Lucero "to give them their CPUs back."
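The stopgap is easy to picture. In this hypothetical Python sketch (the specific dump and copy tools are assumptions; Lucero didn't name them), both steps run on the production server, which is exactly why the DBAs revolted:

    import subprocess

    def poor_mans_replication(db_name, dump_path, remote_host, remote_path):
        # "Database dumps and Unix remote copy functions": both commands
        # consume production CPU, memory and I/O bandwidth.
        subprocess.run(["pg_dump", db_name, "-f", dump_path], check=True)               # assumed dump tool
        subprocess.run(["scp", dump_path, f"{remote_host}:{remote_path}"], check=True)  # assumed copy tool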

So, Lucero set out to find a way to do data replication that wouldn't run on an array or impact production servers. He decided on DataCore's SANsymphony, which he uses to replicate asynchronously between data centers, across EMC and Hitachi Data Systems arrays.
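The appeal of the network-based alternative is that the replication work moves off the production path. Here is a toy model of asynchronous replication, illustrative only and not SANsymphony's implementation: writes complete at local speed, and a background task ships them to the remote site.

    from collections import deque

    class AsyncReplicator:
        # Toy model: the host is acknowledged after the local write;
        # shipping to the remote site happens later, off the critical path.
        def __init__(self, local_disk, remote_disk):
            self.local = local_disk      # dicts standing in for the two arrays
            self.remote = remote_disk
            self.pending = deque()       # writes applied locally, not yet shipped

        def write(self, block_id, data):
            self.local[block_id] = data            # completes at local-disk speed
            self.pending.append((block_id, data))  # queued for the remote site

        def ship_pending(self):
            while self.pending:                    # runs in the background
                block_id, data = self.pending.popleft()
                self.remote[block_id] = data       # WAN latency paid here, not by the host

Because the engine sits in the network and speaks plain block I/O to both sides, the EMC and Hitachi arrays at either end don't need to know about each other.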

Even stalwarts of array-based replication are offering the ability to copy data between dissimilar systems. IBM's recently announced SAN Volume Controller (SVC) replicates between high-end Shark and midrange FAStT arrays, says Bruce Hillsberg, director of storage software strategy and technology, IBM Systems Group. EMC's recently released SAN Copy performs similar functions between Symmetrix and CLARiiON.

But contrast the limited versatility of those efforts with Hewlett-Packard's Continuous Access Storage Appliance (CASA), which, in terms of capacity, supports about 90% of the arrays in production, estimates HP's Gready. And because of the underlying virtualization engines that power network-based storage services, storage managers are also unearthing other benefits.

This was first published in June 2003
