The case for network smarts

Let's face it: SANs as they currently exist deliver only about half of what you might hope for in the way of efficiency and optimal utilization. The best bet for delivering the other half is network-based storage intelligence. You'll have to get past the magic-wand claims for this latest panacea from storage vendors, though. And not every incarnation of smart switches or appliances is going to be right for you.

In the past couple of years, widespread successful storage area network (SAN) deployments have proven the value of networked storage. But they've also made it clear that a heavy management burden and substantial cost won't be cured just by putting lots of disk behind a fabric. Now the same vendors that sold you a SAN to solve your management and cost issues are peddling a new technology to solve the management and cost issues that have resulted from trying to solve your management and cost issues.

So, what's the latest cure-all? Network-based storage intelligence, and everybody's doing it.

Is it smart to cache in the network?
Both DataCore, Ft. Lauderdale, FL, and Melville, NY-based FalconStor employ network caching as a way of improving performance, albeit differently. For FalconStor, caching comes by way of a solid-state disk used to store frequently accessed files. DataCore, meanwhile, uses the system's internal RAM cache to speed up the I/O performance of the back-end array.
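
For readers who want a concrete picture of what a network-resident read cache does, here's a minimal Python sketch, assuming a simple LRU eviction policy. It's illustrative only--the class and its methods are hypothetical stand-ins, not DataCore's or FalconStor's actual design.

```python
from collections import OrderedDict

class NetworkReadCache:
    """Minimal LRU read cache sitting between hosts and a back-end array.

    A hypothetical sketch of the general technique; real products add
    write caching, prefetch and cache-coherency logic.
    """
    def __init__(self, capacity_blocks, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read  # function: block address -> data
        self.cache = OrderedDict()        # block address -> data, in LRU order

    def read(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)   # hit: served from RAM or solid state
            return self.cache[lba]
        data = self.backend_read(lba)     # miss: fetch from the array
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used block
        return data
```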

"It's not just that we're minimizing latency," says Calvin Hsu, DataCore product marketing manager, "we're actually increasing the performance of the array."

In an independent evaluation, The Evaluator Group found that a DataCore SANsymphony implementation could deliver 400,000 I/Os with a 100% cache hit rate--numbers that "far exceed any single box storage system published results encountered so far," the analysts wrote.

However, whether for political or technical reasons, larger subsystem and switch vendors seem to think that caching from the network is ill-advised. Perhaps not surprisingly, EMC's Mark Lewis, executive VP and chief technology officer, says that caching from the network is out, as is putting RAID functions in the network.

"You'd never want to put [RAID] into the network because you'd lose performance, and then you'd have to add caching and then ... you'd have a RAID array," says Lewis, adding that "it's easier to make a RAID array into a switch than a switch into a RAID array."

Similarly, Scott Gready, director of virtualization, HP storage software group, advocates "keeping caching closer to the disk drives, in the array controller."

Others worry about scalability and reliability. If you have more than one device on the network, you now need to worry about keeping cache consistent between the devices, says Dave Stevens, Brocade director of business development and strategic alliances. He also worries about what happens if the network device goes down before it has time to flush the cache to disk: "You lose all your I/Os."

The idea behind moving intelligence into the storage network is simple: Get it off the servers and the arrays, where you're bound by proprietary operating systems, and you gain the flexibility to mix and match storage environments--or, as is more often the case, to integrate incompatible storage systems.

Take copy services such as Network Appliance's Snap family of point-in-time copy and replication functions. Much loved by users, the Snap family nonetheless requires that the target of any Snap function also be a NetApp array. Meanwhile, if you run storage services--volume management, for example--from the host, you're limited to providing services for that one host. In environments with a large number of hosts, that can be a real management headache.

Then there's the fact that many network-based services rely on virtualization engines, which provide additional benefits in their own right, including improved utilization, easier provisioning and insulation from the details of the underlying storage devices.

While centralizing storage services is a solid concept, the devil will be in the details of your particular environment. And you'll have to sort out competing models of network-based services. Finally, only time will tell if the price will be right.

Killer apps for network services
Better utilization, simpler backup and streamlined management are among the many benefits promised by network intelligence, but by far the most common reason storage managers deploy network-based storage software today is for copy services such as data replication, snapshots, cloning or simply data migration.

In today's world of array-based replication--such as EMC's Symmetrix Remote Data Facility (SRDF) or Peer-to-Peer Remote Copy (PPRC) on the IBM Enterprise Storage Server (a.k.a. "Shark")--users that need these features often buy more expensive storage hardware than they really need, says Rich Napolitano, Sun vice president of data services platform and a founder of Pirus Networks, which Sun acquired last summer. Because copy services must run on both the initiator and the target, users are typically locked into one particular brand.

One early adopter of network-based copy services is Rod Lucero, chief architect at Conseco Finance Corp. in St. Paul, MN, who was recently put in charge of migrating the company's data center from an AS/400 environment to open systems. After Lucero found that EMC's SRDF didn't meet his performance and cost requirements, Conseco moved to "a poor man's replication": database dumps and Unix remote copy functions. Servers quickly got bogged down, though, and database administrators petitioned Lucero "to give them their CPUs back."

So, Lucero set out to find a way to do data replication that wouldn't run on an array or impact production servers. He decided on DataCore's SANsymphony, which he uses to replicate asynchronously between data centers, across EMC and Hitachi Data Systems arrays.

Even stalwarts of array-based replication are offering the ability to copy data between dissimilar systems. IBM's recently announced SAN Volume Controller (SVC) replicates between high-end Shark and midrange FAStT arrays, says Bruce Hillsberg, director of storage software strategy and technology, IBM Systems Group. EMC's recently released SAN Copy performs similar functions between Symmetrix and CLARiiON.

But contrast the limited versatility of those efforts with Hewlett-Packard's Continuous Access Storage Appliance (CASA), which--in terms of capacity--supports about 90% of arrays in production, estimates Scott Gready, director of virtualization for HP's storage software group. And because of the underlying virtualization engines that power network-based storage services, storage managers are also unearthing other benefits.

Will NAS move to the intelligent switch?
Data replication, backup, volume management--these are just some of the storage services slated to move into the network. But what about that most commonplace of storage services--file services, e.g., NFS and CIFS support?

These days, the mainstream approach to file services is network-attached storage (NAS) appliances and NAS heads, which will probably remain the case for the foreseeable future, says Jeff Hornung, vice president of marketing and business development at Spinnaker Networks, Pittsburgh, PA, a start-up offering NAS and NAS head products.

It's conceivable that over the next couple of years, NAS heads will come as add-on blades within an intelligent switch vendor's chassis, he says. That may provide benefits in terms of space savings, but introduces the downside of becoming locked into a switch vendor's platform for NAS capabilities. Later on, we could see tighter integration with intelligent switch platforms, but NAS, "as a file service, sits at a level above" blocks, and as such wouldn't benefit much from port-level processing.

"When you think about moving intelligence into the network," Hornung says, "you only want to do what makes sense, that is, what brings a significant benefit to IT."

At least one major OEM agrees with Hornung's sentiments. "There are people that want to load these platforms with everything but the kitchen sink--here's an appliance that's a dessert topping and a floor wax," says HP's Gready, who is in charge of the CASA line. But while "it'd be quite easy for us to load file services on," Gready says, "we just haven't seen a demand for it."

But that doesn't mean there isn't a role for network-based services in the NAS world. Rainfinity, San Jose, CA, for example, sells a data migration appliance called RainStorage that allows NAS administrators to do the unthinkable: move data between NAS devices during the day, without disrupting end users, says Jack Norris, Rainfinity vice president of marketing. How does it work? The RainStorage appliance simply sits out-of-band on a VPN until an administrator initiates a data migration, at which point it moves in-band, managing the copy functions and transparently redirecting end-user file requests. When the data movement is done, RainStorage moves back out-of-band until the next time an administrator wants to migrate data.
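
To make the in-band/out-of-band dance more concrete, here's a minimal Python sketch of the flow Norris describes. The class, its states and the copy_file callback are hypothetical stand-ins, not Rainfinity's implementation.

```python
from enum import Enum

class Mode(Enum):
    OUT_OF_BAND = 1   # appliance idle, outside the data path
    IN_BAND = 2       # appliance intercepting and redirecting file requests

class MigrationAppliance:
    """Hypothetical sketch of an out-of-band appliance that steps
    in-band only for the duration of a migration."""
    def __init__(self):
        self.mode = Mode.OUT_OF_BAND
        self.redirects = {}           # old NAS path -> new NAS path

    def migrate(self, src, dst, copy_file):
        self.mode = Mode.IN_BAND      # step into the data path
        self.redirects[src] = dst     # begin redirecting client requests
        copy_file(src, dst)           # copy data while users keep working
        self.mode = Mode.OUT_OF_BAND  # step back out when the copy is done

    def route(self, path):
        """Transparently redirect requests for data that is being moved."""
        if self.mode is Mode.IN_BAND and path in self.redirects:
            return self.redirects[path]
        return path
```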

Relief for backup
Copy services also open the door to better backup. Rather than bringing down an application in order to back it up, you take a snapshot of it, which you can then use as the basis of your backups.
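
In pseudocode terms, the pattern is simple. The following Python sketch uses hypothetical take_snapshot/backup/delete_snapshot callbacks as stand-ins for whatever copy service the array, host or network device actually provides.

```python
def backup_via_snapshot(volume, take_snapshot, backup, delete_snapshot):
    """Back up a live volume from a point-in-time snapshot, so the
    application never has to come down. All three callbacks are
    hypothetical stand-ins for a real copy service."""
    snap = take_snapshot(volume)   # near-instant point-in-time copy
    try:
        backup(snap)               # application keeps running against volume
    finally:
        delete_snapshot(snap)      # release the snapshot when finished
```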

According to John Webster, senior analyst and founder of the Nashua, NH-based Data Mobility Group, the advent of network-based storage services could finally result in "the long-awaited, vaunted" serverless backup.

It could also mean substantially faster disk-to-disk backup. Virtual tape software vendor Alacritus, Pleasanton, CA, has ported its software, Securitus, to the Brocade SilkWorm Fabric Application Platform (SilkWorm Fabric AP), for an anticipated 3X boost in performance (from 500MB/s to 1.5GB/s) over running it on a PC, says Don Trimmer, Alacritus co-founder and chief strategic officer. HP recently demonstrated a 3TB/hr backup, "but they threw a couple million dollars in hardware at it," Trimmer says. In contrast, Alacritus expects to achieve 5TB/hr backup rates "with $150,000 worth of hardware."

Another big bonus is improved utilization. Conseco's Lucero, who pools 70TB across various EMC arrays, has seen utilization skyrocket since installing SANsymphony approximately a year ago. For example, by assigning three servers with a total of 3TB of capacity to SANsymphony, Lucero was able to "buy back" 1.3TB of captive capacity. At $0.08/MB, Lucero figures he saved his company over $100,000 with that example alone.
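
The arithmetic behind that figure is easy to check:

```python
# Back-of-the-envelope check of Lucero's savings: 1.3TB reclaimed at $0.08/MB.
reclaimed_tb = 1.3
cost_per_mb = 0.08
savings = reclaimed_tb * 1024 * 1024 * cost_per_mb   # TB -> MB, then price it
print(f"${savings:,.0f}")   # about $109,000 -- "over $100,000" holds up
```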

Virtualization can also keep utilization rates high by automatically provisioning capacity to applications. SANsymphony, for example, can assign applications capacity in granular 128MB chunks as needed.
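
Conceptually, this kind of allocate-on-demand (thin provisioning) looks something like the Python sketch below. The class is a hypothetical illustration of the technique, not DataCore's implementation.

```python
CHUNK_MB = 128   # allocation granularity cited in the article

class ThinVolume:
    """Hypothetical allocate-on-write volume drawing on a shared pool."""
    def __init__(self, pool):
        self.pool = pool      # shared list of free physical 128MB chunks
        self.mapping = {}     # virtual chunk index -> physical chunk

    def write(self, offset_mb):
        index = offset_mb // CHUNK_MB
        if index not in self.mapping:
            # Physical capacity is consumed only on first write, which is
            # what keeps pool-wide utilization high.
            self.mapping[index] = self.pool.pop()
        return self.mapping[index]   # physical chunk to write to

# Usage: volumes can promise far more than the pool physically holds.
pool = list(range(100))        # 100 free chunks = 12.8GB of real disk
vol = ThinVolume(pool)
vol.write(0); vol.write(300)   # only 2 chunks actually allocated
```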

Appliance or switch?
If network intelligence is a consensus choice, the package it should be delivered in is not. Your options revolve around the in-band PC appliance model--FalconStor's IPStor, DataCore's SANsymphony, IBM's SVC and HP's CASA--and the out-of-band intelligent switch, as exemplified by Brocade's SilkWorm Fabric AP, Cisco's MDS 9000 and Maxxan's MXV320: technology that's thus far unproven, but that has many proponents. Certainly, the appliance model has the longest track record and the most traction in the marketplace, with many happy customers.

James Dyches, director of computer operations for IT distributor Bell MicroProducts, San Jose, CA, built a new data center in Montgomery, AL, two years ago around FalconStor's IPStor. Previously, the company's main data center was housed in San Jose, "but we had no backup, no disaster recovery--no nothing." Working with FalconStor, Dyches built the new data center around an active/active pair of FalconStor servers virtualizing two Rorke Data Fibre Channel (FC) arrays with a total of about 500GB of data, largely e-mail and end-user files. The data is then replicated over ATM to the San Jose data center. How does he like it? "I couldn't be happier with my access times, I couldn't be happier with my uptime, I am completely contented," Dyches says.

At the same time, there are some users who aren't about to take the risk of putting those appliances in front of enterprise class arrays.

HP CASA user Mark Deck, director of infrastructure technology at National Medical Health Card Systems Inc. (NMHC), a pharmacy benefit manager in Port Washington, NY, is one of them. In addition to midrange SAN equipment, NMHC also owns an HP XP256 array. "As much as I'd love to" virtualize the XP256 with CASA, "it could be a bottleneck," he says. Instead, he'll wait for CASA to run on Brocade's SilkWorm Fabric AP, as promised by the two companies this March.

Why does he think the PC-based CASA would be a bottleneck for the XP256? All the hosts currently connected to NMHC's CASA box are Wintel boxes, and "they only run at half a gigabit"--you can connect a lot of servers that slow before you saturate a virtualization appliance. The same can't be said of NMHC's large HP-UX servers, though.

Similarly, Roy Singh, an independent consultant specializing in data center design and implementation, is all for the concept of network-based virtualization, but has reservations about the PC platform approach.

"Virtualization in a network environment would make EMC and HDS [arrays] look more homogeneous, which would make my life much easier," says Singh. But "performance is my thing," he says. For that, you need to "elevate [virtualization] to another level, closer to the port."

In contrast to virtualization engines running on PCs, intelligent switch platforms are purpose-built devices with processors at every port. And whereas existing virtualization engines run in the data path, or "in-band," intelligent switch platforms also offload some processing onto an out-of-band control path.

For example, Brocade's SilkWorm Fabric AP is built around the so-called XPath architecture, which separates the data and control paths. Nearly all traffic (approximately 95%) travels over the data path, including SCSI reads and writes, block-based copy and mirror operations, and virtualized I/O translation frames. The control path, meanwhile, handles less frequent traffic--for example, volume configuration and placement, error handling and recovery, and security.

According to Brocade's Stevens: "Applications tend to be bi-modal--that is, 90% of the code is performance-sensitive, but not very computationally complex, and 10% is computationally complex, but not very performance-sensitive." In Brocade's architecture, I/Os get pushed down to dedicated port-level processors, while non-performance-sensitive data gets crunched by a general-purpose processor. This approach has yielded approximately 60,000 I/Os per port and near wire-speed throughput of 195MB/s.
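
A rough Python sketch of that bi-modal split, with hypothetical frame types and handlers standing in for Brocade's actual firmware:

```python
# Frequent, performance-sensitive operations stay on the per-port fast path;
# rare, complex events are punted to a general-purpose control processor.
FAST_PATH = {"scsi_read", "scsi_write", "block_copy", "mirror_write"}

def dispatch(frame_type, frame):
    if frame_type in FAST_PATH:
        return port_processor(frame)    # ~90-95% of traffic, near wire speed
    return control_processor(frame)     # volume config, errors, security

def port_processor(frame):
    return ("fast path", frame)      # stand-in for dedicated per-port silicon

def control_processor(frame):
    return ("control path", frame)   # stand-in for the general-purpose CPU
```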

The number of intelligent switch ports you'll need in addition to basic FC SAN ports is a function of the port performance, and the I/O and throughput needs of the hosts you want to virtualize. Given Brocade's preliminary performance specifications, five hosts generating a total of 50,000 I/Os could be pooled with a single SilkWorm Fabric AP port.
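
As a sizing rule of thumb, that works out to total host I/O divided by per-port capacity, rounded up. A quick sketch using the preliminary 60,000 I/Os-per-port figure quoted above:

```python
import math

def ports_needed(host_iops, per_port_iops=60_000):
    """Rough intelligent-switch port count: total I/O over port capacity."""
    return math.ceil(sum(host_iops) / per_port_iops)

# Five hosts generating 10,000 I/Os each fit on a single
# SilkWorm Fabric AP port:
print(ports_needed([10_000] * 5))   # -> 1
```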

Bottom line: easier management
Ultimately there's one very powerful argument for running storage services in the network: to simplify management.

Just take a look at managing licenses for your volume management software. Assuming price is not a factor, "as an administrator, would you rather manage licenses for 50 volume managers running on the host, or just one in the network?" asks Jeff Silva, VP, strategic planning, Maxxan.

Centralized management is one benefit that isn't lost on Chas Peterson, director of hosting at Sprint, which is evaluating Sun's 6300 family: the 6120 arrays, as well as the 6320 management station, both precursors to the first standalone product based on Pirus technology (the 6920), due out in Q3 of this year. As a hosting provider, Sprint can dedicate a 6120 array to an individual hosting customer yet still manage all the arrays centrally--a combination Peterson appreciates.

And it's not just that management is simpler with network-based services--it's downright easy, says NMHC's Deck. The virtualization provided by CASA "has done to storage management what Windows did to servers--click here, click there and you're done," Deck says.

"With 15 minutes of training, I was carving up LUNs," Deck says. "For me, who doesn't do much hands-on work anymore, it was pretty amazing."

NMHC is using CASA to virtualize its midrange SAN array, the HP Virtual Array 7400, which Deck says translates directly into lower total cost of ownership, "because I don't need to have a genius managing this thing--I can teach pretty much anyone to manage it."

Improved management is also what Scott Thomas, director of discovery IS at pharmaceutical firm AstraZeneca R&D Boston, Waltham, MA, hopes to get out of Sun's N1 data platform. Currently, AstraZeneca has 19TB of capacity on Sun T3 arrays, connected to Sun (QLogic) switches.

"Supporting a fast-paced scientific research environment requires us to continually configure storage and computer systems for new projects and experiments, which is very costly," Thomas says. "By centralizing storage provisioning and improving our ability to provision storage capacity, Sun's virtualization may allow us to reduce the cost of support."

But all this performance comes at a cost. Brocade's Stevens estimates that a SilkWorm Fabric AP port will cost about 2.5 times as much as a basic FC port, but Randy Kerns, a partner at The Evaluator Group, Greenwood Village, CO, expects the premium to be more like 5X.

But that's just the price of the port--not the price of the software running on top of it, and it's not clear how that software will be priced. For example, if you ran Veritas' volume manager on Cisco's switch, who would you make the check out to? Cisco? Veritas? A combination of both? And how would the software be licensed--as a site license, according to the total number of virtualized hosts, or by total capacity? For the time being, "there are a variety of different routes to market," says Kris Hagerman, Veritas senior vice president of strategic alliances.

No matter how low the price per port goes on an intelligent switch, it will be tough for it to drop as low as a PC. Conseco's Lucero runs DataCore software on $6,000 PCs with 6GB of cache, and he, for one, won't be buying an intelligent switch. "I'm going to spend a lot less money" on appliance-based software, "and get just as much out of it."

So, when will all this goodness arrive? Maxxan's MXV320 is in customer trials, but that's about as close as an intelligent switch has gotten to market. Most vendors talked vaguely about Q3 or Q4 of this year, but among analysts, that's seen as optimistic. "Nobody's going to make a dime off of intelligent switches this year," says The Evaluator Group's Kerns.

Furthermore, ramp up will be slow, and "vendors are at very different levels of sophistication and delivery," Kerns says. That in turn will "create a lot of confusion and disillusionment among potential customers." For the time being at least, "the appliance model is easier for people to swallow."

History suggests that's not the end of the story. Back in the mid-'90s, for example, when the Web was just coming onto the scene, IT managers found that their Web servers couldn't handle the load, recalls Peter Wang, CTO at iSCSI storage vendor Intransa, San Jose, CA. That led to the development of appliance-based load-balancing software from the likes of Alteon and ArrowPoint--later acquired by Nortel and Cisco, respectively--and its eventual integration into high-end chassis switches.

The current trend of moving storage services into the network is therefore "something that all networks go through," says Mark Davis, senior VP of marketing at Milpitas, CA-based Candera, a start-up developing a so-called network storage controller. "It happened to the telephone network, it happened to IP--it's the natural evolution of the architecture."

This was first published in June 2003
