|Roll your own NAS cluster|
Sometimes it's difficult to find a good out-of-the-box network-attached storage (NAS) clustering solution. That's the situation that Todd Moore from Dynamic Graphics Group found himself in. "We needed a cluster of NAS servers that could share with CIFS, NFS and HTTP, provide both availability and scalability and would integrate with our legacy applications," says Moore. He wasn't satisfied with available out-of-the-box products, so his team rolled their own, leveraging PolyServe's Matrix Server, a SAN file system.
Dynamic Graphics' solution consists of a four-node Linux cluster sharing 11TB of StorageTek storage area network (SAN) storage with Matrix Server. Two nodes act as NAS servers using open-source Samba software, while another serves a legacy application. The last is a dedicated backup and administration host. Matrix Server allows applications on all four hosts to see the same storage at the same time--at SAN speeds--and share it with a variety of protocols.
At first, Moore had reservations about the system. He was skeptical about mounting the same LUNs on four separate servers at the same time, and also about the difficulty of implementing the PolyServe software. In fact, neither PolyServe nor StorageTek had ever implemented the architecture Dynamic Graphics wanted. But after a customized demonstration Moore felt like "PolyServe's biggest customer." Technicians from both suppliers kept an open mind and worked together to make it happen.
"It really does work," says Moore. "Thinking outside the box allowed us to have a system that does exactly what we wanted it to do. We couldn't have had that otherwise."
Many businesses have long used NAS arrays such as the NetApp Filer and EMC Corp. Celerra. But the capacity and performance limits of single NAS systems have pushed users to deploy multiple servers, either integrated NAS arrays with dedicated storage or NAS heads that sit in front of SAN storage. This has led to an undesirable proliferation of network-accessible file systems, or namespaces, within an environment. NAS aggregation allows multiple NAS servers to be combined and presented to clients as a single large file system, hiding the physical NAS devices behind it. There are numerous benefits to this concept, including having a single namespace with nearly unlimited capacity, as well as redundancy and performance.
SAN file systems, on the other hand, allow multiple SAN-attached servers to read and write to the same file system on the same LUNs. In other words, every server "sees" the storage and the file system on top of it as its own, and reads and writes are arbitrated by the SAN file system software, functioning as a traffic cop for storage blocks. The benefits are different, mainly in terms of performance and scalability of individual applications. Most SAN file systems serve a clustered application like Oracle 9i RAC, allowing all cluster members to access the same data at the same time over high-performance Fibre Channel (FC).
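The "traffic cop" role can be made concrete with a toy sketch. This is purely illustrative: a real SAN file system uses a distributed lock manager spanning hosts on the fabric, not in-process locks, and the class and names below are invented for the example.

```python
import threading
from collections import defaultdict

class BlockArbiter:
    """Toy arbiter: serializes access to each storage block across hosts.

    Illustration only -- a real SAN file system coordinates hosts with a
    distributed lock manager, not a single in-process object.
    """
    def __init__(self):
        self._locks = defaultdict(threading.Lock)  # one lock per block number
        self._data = {}

    def write(self, host, block, payload):
        with self._locks[block]:       # only one writer per block at a time
            self._data[block] = (host, payload)

    def read(self, block):
        with self._locks[block]:       # readers wait for in-flight writes
            return self._data.get(block)

arbiter = BlockArbiter()
arbiter.write("node1", 42, b"hello")
print(arbiter.read(42))  # ('node1', b'hello')
```

The point of the sketch is that every host sees the same blocks, and correctness depends entirely on the arbitration layer serializing conflicting access.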
Three key points differentiate NAS aggregation from SAN file systems.
- The number of hosts: Most common SAN file systems are intended to allow just a few large servers--usually 16 or fewer--to share access to data. NAS aggregation devices, by contrast, support hundreds of hosts.
- Connectivity: NAS aggregators leverage file sharing protocols such as CIFS and NFS over Ethernet and TCP/IP, while SAN file systems require block-level access over FC.
- Implementation: NAS aggregation is usually performed by special network devices, while SAN file systems are usually implemented in software on the clients.
The proliferation of NAS systems leads to the proliferation of mount points. In other words, if each file server has four shares, each NAS filer has four more, and there are four of each type, then there are 32 independent mountable entities on the network. The problem grows each time you add another NAS server, because every client must be updated to mount the new share in addition to the old ones. Many Windows environments have simply run out of drive letters: Windows offers only 26, and with a few reserved for local and removable drives, roughly two dozen remain for mapped shares. Although Windows 2000 and above allows shares to be mounted without drive letters, users are so accustomed to referring to "the N drive" that they start getting confused by other mounting methods.
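The arithmetic behind that sprawl is simple enough to write down, and it makes clear how quickly each added server multiplies the client-side work:

```python
# Mount-point sprawl from the example above: four file servers and four
# NAS filers, each exporting four shares.
file_servers = 4
nas_filers = 4
shares_per_host = 4

mount_points = (file_servers + nas_filers) * shares_per_host
print(mount_points)  # 32 independent mountable entities

# Adding one more filer with four shares means four more mounts on
# *every* client that needs them -- the cost scales with the client count.
after_one_more_filer = (file_servers + nas_filers + 1) * shares_per_host
print(after_one_more_filer)  # 36
```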
Unix environments have turned to the automount daemon, a network service that allows an administrator to push out a map of network shares and their mount points. This lets Unix users adapt to an ever-changing landscape of network shares, but puts even more burden on administrators to keep everything in order.
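For readers unfamiliar with automount maps, a minimal example of the format follows. The server names, export paths and mount options here are hypothetical; only the two-file structure (a master map pointing at an indirect map) reflects standard autofs practice.

```
# /etc/auto.master -- master map: shares appear under /shared on demand
/shared  /etc/auto.shared  --timeout=60

# /etc/auto.shared -- indirect map; servers and exports are examples only
projects  -rw,soft,intr  nas1:/export/projects
archive   -ro            nas2:/export/archive
```

Pushing an updated copy of the indirect map to every client (or serving it from NIS or LDAP) is exactly the administrative burden the paragraph above describes.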
Another problem related to NAS is that systems are so easy to set up that users think they are also easy to manage. Many businesses have NAS boxes that are owned and managed by user departments, rather than central IT. These boxes tend to be huge, unmanaged repositories of random files, seldom backed up and constantly out of space.
NAS aggregation takes two key forms. First, some aggregation products offer a global namespace, consolidating all files into a single huge tree. Second, some offer clustering.
Scalability comes from breaking the hard link between a file's network name and its actual location. For instance, NetApp's SpinServer allows files and whole directory trees to be moved between nodes in the background, even while clients are actively accessing them. This allows a SpinServer cluster to be scaled seamlessly to add new storage and redistribute current storage with no availability outages. By contrast, OnStor Inc.'s SAN Filer creates a number of virtual NAS servers containing all of the shares on the network.
If files and directories are no longer linked to their actual "homes," a single large virtual tree of directories can be created and maintained. Global namespaces combine independent file servers into a single virtual one. Even if users don't need a single namespace (see "Roll Your Own NAS cluster"), a virtual namespace can still allow scalability and availability. Most importantly, it allows the NAS directory tree to look how you want it to look, whether that means a single corporate tree or a number of departmental ones.
Global namespace is a type of meta-directory of NAS namespaces that allows storage administrators to automatically move and manage data across heterogeneous NAS environments as if they were a single filer.
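The meta-directory idea can be sketched in a few lines: the path a client sees is looked up in a map and translated to the filer that actually holds the data. The filer names, volumes and paths below are invented for illustration; real products implement this inside a network device, a filer cluster or a directory service rather than on the client.

```python
# Hypothetical global-namespace map: client-visible prefix -> (filer, physical path)
NAMESPACE = {
    "/corp/engineering": ("filer-a", "/vol/eng"),
    "/corp/marketing":   ("filer-b", "/vol/mktg"),
}

def resolve(virtual_path):
    """Translate a client path into the filer and path that serve it."""
    for prefix, (filer, phys) in NAMESPACE.items():
        if virtual_path == prefix or virtual_path.startswith(prefix + "/"):
            return filer, phys + virtual_path[len(prefix):]
    raise FileNotFoundError(virtual_path)

print(resolve("/corp/engineering/specs/nas.doc"))
# ('filer-a', '/vol/eng/specs/nas.doc')
```

Because clients only ever see the virtual prefixes, an administrator can move `/corp/engineering` to a different filer by changing one map entry instead of reconfiguring every desktop.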
An interesting global namespace product is NuView Inc.'s StorageX. It manipulates the Windows Active Directory to make any CIFS-based NAS servers--including Windows servers, NetApp Filers and anything else--appear to be a single tree. NAS clients just see a single huge NAS server, and can mount any part of the combined directory tree.
StorageX is software that leverages Windows DFS technology, which is analogous to DNS for IP addressing. When a Windows host requests access to a file, StorageX transparently redirects the request to the NAS system that hosts the directory containing the file. For this reason, StorageX currently works only for Windows clients, but NuView promises an NFS implementation around September of this year. However, the NFS version might require a remount when the network address of the NAS system changes. And even some Windows applications, including Microsoft Exchange (the ever-present monster of the data center), can't handle the retries needed when the storage topology changes, so they require a remount as well.
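The retry requirement is worth spelling out. In the sketch below (all names hypothetical, and greatly simplified from real DFS referral handling), a client that can refresh a stale referral and retry survives a share migration transparently; an application without that retry budget fails and needs a remount.

```python
referrals = {"/corp/docs": "filer-a"}   # authoritative referral table
cache = {"/corp/docs": "filer-a"}       # the client's remembered referral

def open_share(prefix, retries=1):
    """Open via the cached referral; on a stale entry, re-resolve and retry."""
    for _ in range(retries + 1):
        target = cache[prefix]
        if target == referrals[prefix]:
            return "opened on " + target
        cache[prefix] = referrals[prefix]   # referral was stale: refresh it
    raise IOError("stale referral and application cannot retry")

referrals["/corp/docs"] = "filer-b"     # the share migrates to another filer
result = open_share("/corp/docs")       # a retry-capable client recovers
print(result)  # opened on filer-b
```

An application hard-wired to `retries=0` hits the IOError instead, which is the behavior that forces a remount in practice.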
Another product in the global namespace party is Z-force's ZX-1000 File Switch. Rather than leveraging network services or building its own NAS filer, Z-force offers a file switch that sits between NAS clients and servers. The box combines the NAS resources behind it into a single namespace, enabling high availability and flexibility. Z-force File Switches can be installed in groups, allowing their performance to scale with user demands. These systems, too, are Windows only.
Not every global namespace exists outside the NAS filer, though. NetApp's SpinServer implements the namespace within a cluster of NAS servers so it can support any protocol, and client retries are not required. But a complete "forklift upgrade" of the NAS environment isn't desirable, so NetApp also re-sells NuView's StorageX, calling it Virtual File Manager (VFM).
But do you really want a global namespace? "Some customers want to see everything as a single tree, and others don't," says NetApp product manager Ravi Parthasarathy. So these products also allow the global namespace to be split into a few smaller ones while preserving the benefits of virtualization.
"Once you have a virtual namespace, you can implement policy-based intelligence," says NuView founder Rahul Mehta. "For example, in the event of a disaster, your remote offices can be redirected to NAS server A in the data center, while your corporate users would be redirected to NAS server B." But make sure your solution supports locking of open files; without locking, two clients at different sites could try to write to the same file at the same time, leading to data corruption.
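The locking concern can be demonstrated with the ordinary Unix advisory-lock primitive. The sketch below (Unix-only, and standing in for the CIFS/NFS lock semantics a real NAS enforces across the network) shows an exclusive lock held by one writer blocking a second:

```python
import fcntl
import tempfile

tmp = tempfile.NamedTemporaryFile()        # stand-in for a shared file
a = open(tmp.name, "r+b")                  # handle at "site A"
b = open(tmp.name, "r+b")                  # handle at "site B"

fcntl.flock(a, fcntl.LOCK_EX)              # site A takes an exclusive lock
try:
    fcntl.flock(b, fcntl.LOCK_EX | fcntl.LOCK_NB)   # site B tries, non-blocking
    second_writer_blocked = False
except BlockingIOError:
    second_writer_blocked = True           # site B is correctly refused

fcntl.flock(a, fcntl.LOCK_UN)              # A releases; B could now lock
print(second_writer_blocked)  # True
```

Without that refusal, both "sites" would write simultaneously, which is exactly the corruption scenario Mehta warns about.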
|Comparing NAS products|
|Clustered file system options|
The key difference between the two clustered file system options (NAS aggregation and SAN file systems) is the location of the cluster and the file system. NAS aggregation clusters NAS servers, while SAN file systems cluster host servers.
Small NAS filers have another glaring fault--single points of failure. The availability problem has long been met by clustered NAS systems like EMC's Celerra. This system breaks the link between an IP address and the "data movers" that serve client requests. Celerra acts like a traditional NAS system in most respects, but its ability to fail over one data mover to another in the event of a failure brought it credibility in the enterprise. Today, most NAS makers, including BlueArc Corp., NetApp, OnStor and others, offer similar clustering abilities for high availability.
BlueArc's Titan NAS server is the standard-bearer for monolithic NAS. Rather than clustering small NAS arrays with a single namespace, BlueArc seeks to scale a single box to handle all of an enterprise's NAS needs, though it does include an internal cluster for high availability. "It will always be easier to manage one array than many small ones," contends Geoff Barrall, BlueArc's CTO.
By contrast, OnStor's SAN Filer is an integrated hardware and software platform that consolidates file services on open SAN storage.
Jon Toor, director of marketing for OnStor, notes that "workloads tend to be concentrated on a few cluster members." So OnStor's Filer uses virtual NAS servers that can be shifted to other physical filers on the fly to balance the load. All of the state information for each virtual filer is stored on the SAN disk, so the physical filers can be added and removed at will. Although this approach is likely to lead to a proliferation of virtual NAS servers and shares, it scales extremely well in terms of performance and availability. For a company that needs extremely high availability and wants to leverage SAN storage, but doesn't need a single client-side view of NAS shares, OnStor's SAN Filer is compelling.
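Because each virtual filer's state lives on the SAN, "rebalancing" reduces to updating a placement map. The greedy heuristic and all names below are invented for illustration, not OnStor's actual algorithm:

```python
# Hypothetical load (in arbitrary units) per virtual NAS server
virtual_filers = {"vf-sales": 70, "vf-eng": 40, "vf-web": 10}

def assign(vfilers, nodes):
    """Greedy placement: heaviest virtual filer goes to the least-loaded node."""
    load = {n: 0 for n in nodes}
    placement = {}
    for vf, cost in sorted(vfilers.items(), key=lambda kv: -kv[1]):
        node = min(load, key=load.get)   # current least-loaded physical filer
        placement[vf] = node
        load[node] += cost
    return placement

placement = assign(virtual_filers, ["filer1", "filer2"])
print(placement)
# {'vf-sales': 'filer1', 'vf-eng': 'filer2', 'vf-web': 'filer2'}
```

Adding a third physical filer is just a longer `nodes` list and a re-run of the placement, which is the "add and remove at will" property Toor describes.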
Nearly all recent NetApp filers support clustered failover; however, NetApp's SpinServer provides more than basic clustering. Although SpinServer is currently targeted at high-performance Linux clusters, it can also support general business needs. SpinServer shares files with traditional protocols like NFS and CIFS, and the entire cluster appears as a single system to both clients and managers.
|What to choose?|
Most NAS filers today offer high availability with failover pairs. But the ability to scale an environment requires much more than this. Considerations include scalability demands, dispersed storage and the difficulty of managing a multitude of files, shares and servers.
Almost all solutions can scale, both in terms of performance and capacity, with the addition of more hardware. But not all can seamlessly integrate this new equipment into the existing environment. This is the key challenge for products that do not offer a virtual namespace--new hardware appears as new network shares and users must change their behavior to use it.
To avoid this problem, consider a global namespace product such as NuView's StorageX or Z-force's File Switch; but remember, these are Windows only. The only way to implement a cross-platform global namespace is to replace your NAS infrastructure with, for example, NetApp's SpinServer or Panasas' ActiveScale. If Oracle 10 on a Linux cluster is in your future, then the NetApp and Panasas solutions should be on your short list.
Many of these solutions may simplify your users' view of storage, but they won't necessarily make your life easier. The router-style solutions from NuView and Z-force still require management of the underlying storage infrastructure. And the integrated solutions from BlueArc, NetApp, OnStor and Panasas are entirely new storage architectures with their own learning curve. Once you have mastered these technologies, though, management should be simple since they are all controlled through a single application.
Finally, if you don't want to bother with new clustering architectures, it's perfectly okay to stick to a more traditional NAS system with high levels of scalability. EMC's Celerra is offered in a number of configurations from the single-node NS700G NAS head and NS700 two-node cluster, to the 14-node CNS. NetApp has a range of clustered systems from the small FAS200 to the much larger FAS900. IBM Corp.'s NAS Gateways are another popular scalable NAS solution.
If you have a proliferation of NAS systems and mount points, or if you are concerned that your NAS solution isn't enterprise-class, this new breed of NAS products can help shield your users from the complexity of aggregating NAS. And these systems will also help ensure that you can offer NAS users the same levels of availability and scalability found in your SAN.