CIOs focus on hardware costs when thinking about TCO


For organizations looking to buy and design NAS for the first time, Rhonda Gass, Dell's VP of storage systems development, has four tips for end users.
First, be flexible with NAS solutions. Look to deploy solutions that integrate easily into your existing network infrastructure and let you use what you already have. If possible, choose solutions that are as close to plug-and-play as you can get. For enterprises expecting rapid growth, the solution should include a roadmap for connecting the NAS to a storage area network (SAN).
Second, buy NAS solutions that scale. Many NAS appliances let you add internal storage, but ideally, look for solutions that can put a SAN on the back end.
Third, design your NAS for maximum uptime and availability. Choose a solution that integrates with your disaster recovery plan and gives you options for disk mirroring as well as disk-to-disk backup before data moves off to tape for archival.
Finally, choose a NAS solution that's cost efficient. The solution should fit your pocketbook, but it should also be simple for your staff to deploy and manage.

Managing the complexity
As this management complexity has become apparent--especially in enterprise environments--a number of tools and approaches for managing or reducing it are emerging. How effective each of these tools and approaches is depends on what an organization's current NAS environment looks like.

Needless to say, the more of one vendor's NAS equipment an organization owns, the easier it is to get a handle on managing that environment. NetApp's Data Fabric Manager (DFM)--which lets administrators manage hundreds of NetApp products from a single DFM console--can recognize and manage NetApp filers several product generations back. This ability to manage both current and legacy NetApp products gives an organization that has standardized on NetApp filers a significant management advantage over one running a mixed-vendor NAS environment.
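To make the single-console idea concrete, here is a minimal Python sketch of one script polling several filers for capacity. The FilerClient class, its get_capacity() method and the host names are hypothetical placeholders for illustration only; they are not NetApp DFM or ONTAP interfaces.

```python
# Hypothetical sketch of single-console management: one script polls
# many filers for capacity. FilerClient and get_capacity() are invented
# placeholders, not NetApp DFM or ONTAP APIs.

class FilerClient:
    def __init__(self, host):
        self.host = host

    def get_capacity(self):
        # Placeholder: a real tool would query the filer over SNMP or a
        # vendor API. Here we just return dummy numbers.
        return {"used_gb": 0, "total_gb": 0}

def report(filer_hosts):
    # One loop, one console: every filer shows up in the same report.
    for host in filer_hosts:
        cap = FilerClient(host).get_capacity()
        print(f"{host}: {cap['used_gb']}/{cap['total_gb']} GB used")

if __name__ == "__main__":
    report(["filer01.example.com", "filer02.example.com"])
```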

Another advantage NetApp offers over its competitors comes in the backup space. While the Network Data Management Protocol (NDMP) provides backup help, NetApp's SnapVault software can migrate data snapshots between two of its filers. This option gives organizations a 24-hour window to do backups: one set of data can remain in production while the snapped copy gets backed up to tape. NetApp also integrates well with backup products such as Veritas' NetBackup and Tivoli Storage Manager, which use NDMP to back up snapshots from filers to tape. Since the backup is taken from a snapshot, there's no risk of backing up open files, and the amount of time the production application needs to be offline is greatly reduced.
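As an illustration of the snapshot-then-backup workflow described above, the following Python sketch takes a point-in-time snapshot and then sends that frozen copy to tape while the live volume stays in production. Every function and device name here is a hypothetical stand-in; none of this represents SnapVault, NDMP, NetBackup or Tivoli Storage Manager commands.

```python
# Conceptual sketch of snapshot-based backup: take a point-in-time
# snapshot, then copy the frozen data to tape while production keeps
# writing to the live volume. All functions are hypothetical placeholders.
import datetime

def create_snapshot(volume):
    # A real filer creates the snapshot itself; we just generate a name.
    name = f"{volume}.snap.{datetime.datetime.now():%Y%m%d%H%M%S}"
    print(f"snapshot created: {name}")
    return name

def backup_to_tape(snapshot_name, tape_device):
    # Stand-in for an NDMP-driven dump of the snapshot to tape.
    print(f"backing up {snapshot_name} to {tape_device}")

def nightly_backup(volume, tape_device="/dev/nst0"):
    snap = create_snapshot(volume)     # production volume stays online
    backup_to_tape(snap, tape_device)  # snapshot is read-only, so open
                                       # files are not a concern

nightly_backup("vol_home")
```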

Dell offers a different vision of NAS management.
NetApp powers its filers with its proprietary ONTAP OS; Dell, by contrast, bases its NAS appliances on Microsoft's Windows 2000 Server Appliance Kit (SAK). Dell's NAS product manager, Marc Padovani, believes this approach more accurately reflects the mindset of cost-conscious organizations. Dell sees more organizations moving toward a Windows-based environment and is aligning its storage infrastructure strategy with that movement. By using NAS appliances based on Microsoft's SAK, organizations can use their existing Microsoft-trained and certified staff to manage Dell's appliances as just another node on the Windows network. By Dell's own admission, this ability to manage any Windows-based NAS appliance could possibly extend to other vendors' Windows-based appliances, such as Hewlett-Packard's or IBM's, though Dell hasn't extensively tested that in its labs.

Mark Nagaitis, HP's director of product marketing for its infrastructure and NAS division, says most enterprise shops today have both Windows 2000 and Unix variants on their raised floors. Following its recent merger with Compaq, HP now offers NAS filers with either a Windows 2000 or a Unix kernel underneath. The company sees that choice as important in performance-sensitive environments: Nagaitis says internal HP tests show that Unix-based NAS appliances perform better when used with Unix servers, and Windows NT-based NAS appliances show similar performance gains when used with Windows clients.

NAS heads
HP, however, believes the real key to NAS management lies not in NAS appliances but in NAS heads that act as front ends to back-end SANs. A NAS head provides file access to a virtually unlimited pool of back-end storage, with every request filtered through the head. By coupling this model with NAS heads optimized for either Windows or Unix environments, HP may be primed to offer the storage utility model so many companies are looking for.

EMC also primarily uses NAS gateways as front ends to back-end SANs, for many of the same reasons. It looks to scale storage on the back end while presenting one logical front end for NAS file services. Currently, EMC distinguishes itself by offering its Celerra HighRoad software with this deployment. Upon receiving a file request, HighRoad intelligently determines the fastest route back to the user, whether that's via Fibre Channel (FC) over the SAN or back through the NAS device over IP.
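The routing decision HighRoad makes can be pictured with a small Python sketch like the one below, which picks a transport path per request. The 64MB cutoff, the path names and the logic are invented for illustration and don't reflect Celerra HighRoad's actual internals.

```python
# Hedged sketch of multi-path file delivery: pick a transport per request.
# Thresholds and labels are assumptions, not Celerra HighRoad internals.

def choose_path(file_size_bytes, fc_available):
    # Assumption: large transfers favor the Fibre Channel SAN path when a
    # SAN connection exists; small requests go straight back over IP.
    LARGE_FILE = 64 * 1024 * 1024  # 64MB, arbitrary cutoff for this sketch
    if fc_available and file_size_bytes >= LARGE_FILE:
        return "fibre-channel (SAN)"
    return "ip (NAS device)"

# Small request, large request with FC, large request without FC.
for size, fc in [(4096, True), (512 * 1024 * 1024, True), (512 * 1024 * 1024, False)]:
    print(size, "->", choose_path(size, fc))
```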

With the growing use of NAS heads fronting back-end SANs, why not just use Windows file servers in place of NAS heads? There are two reasons. First, Windows file servers require the purchase of Client Access Licenses (CALs) for every client accessing the storage; NAS appliances based on Microsoft's SAK don't need those CALs. Second, appliance developers can tune a Windows-based appliance for optimal file-serving performance, saving end users the time and expense of figuring that out themselves.

Steve Terlizzi, vice president of marketing for CA-based Z-force Inc., argues that each of the above approaches to reducing NAS complexity requires standardizing on one vendor's product line, which also means locking yourself into that vendor's pricing model. Z-force plans to offer a file switch in the first quarter of 2003 that aggregates NAS appliances, no matter who makes them. Terlizzi claims Z-force's technology increases utilization and performance while spreading the workload across NAS filers. At a recent storage trade show, Z-force demonstrated 12 of its file switches acting as one logical unit and managing 47TB of storage, with a diverse group of NAS vendors, including Dell, Iomega and Xtore, making up the back end of the storage pool. This may be just the technology that shops with NAS arrays from multiple vendors are looking for to consolidate and manage their existing NAS environments.
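To show what aggregation across heterogeneous filers might look like in principle, here is a hedged Python sketch in which a "file switch" hashes each file path to one of several back-end filers so clients see a single namespace. The hashing scheme and the filer names are assumptions made for illustration, not Z-force's design.

```python
# Illustrative sketch of aggregation: a "file switch" maps each file path
# to one of several back-end filers so clients see a single namespace.
# The hashing scheme and filer names are hypothetical, not Z-force's design.
import hashlib

BACKEND_FILERS = ["dell-nas-01", "iomega-nas-02", "xtore-nas-03"]

def pick_filer(path):
    # Hash the path so the same file always lands on the same filer,
    # spreading the overall workload across the pool.
    digest = hashlib.md5(path.encode()).hexdigest()
    return BACKEND_FILERS[int(digest, 16) % len(BACKEND_FILERS)]

for p in ["/projects/design.doc", "/home/alice/notes.txt", "/archive/2002/q4.zip"]:
    print(p, "->", pick_filer(p))
```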

Despite the promises these new management offerings from traditional and emerging vendors hold, other questions linger. Will NAS and SAN technologies converge, or won't they? And what is this file-based vs. block-based argument all about? After all, why can't SAN and NAS just get along?

This was first published in February 2003
