Used to be, if you wanted to give users a central place to store files, you had two options: put them on a generic file server, or on a network-attached storage (NAS) device. But as companies build out more storage area networks (SANs), many storage administrators are clamoring for ways to store user files on that big, fast, centralized and highly reliable SAN storage.
Certainly, the simplest way to tap your SAN for file storage is to put a NAS "head" or gateway in front of it. Under that scenario, the NAS head is assigned its own LUN on the SAN device as its dedicated disk supply. In the enterprise space, examples of NAS heads include EMC's Celerra, NetApp's gateway into HDS storage, and Auspex's NSc3000. IBM, Dell, and Snap Appliance all offer departmental and workgroup-class NAS gateways.
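Conceptually, what a NAS head does with its dedicated LUN can be pictured as the following provisioning sequence on an ordinary Linux host. This is an illustrative sketch only; the device name, mount point, and client subnet are hypothetical, and real NAS gateways wrap these steps in their own management software.

```shell
# Sketch: turn a LUN presented by the SAN into NFS-served file storage --
# roughly what a NAS head does in a single appliance.
# /dev/sdb1, /export/users, and 192.168.1.0/24 are placeholder values.
mkfs.ext3 /dev/sdb1             # format the SAN LUN assigned to the gateway
mkdir -p /export/users
mount /dev/sdb1 /export/users   # mount it on the gateway host
echo "/export/users 192.168.1.0/24(rw,sync)" >> /etc/exports
exportfs -ra                    # publish the share to NFS clients
```

The key point the sketch illustrates is that the SAN sees only a block device, while file-level semantics live entirely in the gateway.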
This January, NAS start-up Spinnaker Networks also started selling a NAS gateway, the SpinServer 3300G, in response to customer demand for a version of its flagship SpinServer 3300 that could use pre-existing SAN disk resources. In its first release, the 3300G supports SAN storage from LSI Logic, with support for EMC and Hewlett-Packard arrays forthcoming.
Spinnaker's SpinServer 3300G differs from other NAS gateways in terms of scalability, says Jeff Hornung, Spinnaker's vice president of marketing and business development. Whereas competing gateways are limited to file systems under 10TB in size, the SpinServer 3300G can present a single 11PB (that's 11,000TB) file system spanning as many as 512 gateways. In contrast, a single NetApp 960 can support a maximum of 18TB, configured as a minimum of three file systems, Hornung says.
Not that anyone would ever want a file system that big, says Jon Toor, director of marketing at ClariStor, another start-up attempting to solve the file services for SANs problem. "After a couple of terabytes, a file system starts to break down in terms of backup and restore," he says.
But like Spinnaker's 3300G, ClariStor's product, the so-called SAN filer, will support petabyte-scale file systems when it comes out later this year. The reason? "Any-to-any connectivity," Toor says. In other words, ClariStor's SAN filers, which are diskless, can be joined into a seamless cluster, with any filer able to access any data in the storage pool. Traditional NAS heads, in contrast, don't share disk resources with one another, creating, in effect, "islands of NAS," even when they sit on a single physical pool of SAN disk.
Meanwhile, if what your clients really need is performance, a NAS head probably isn't your best bet, says Paul Rutherford, CTO of software at ADIC, which sells its StorNext File System to organizations that require high-speed access to large files stored on SAN storage.
For example, whereas a client sitting on a LAN may receive data at 2MB/s to 3MB/s, clients running the StorNext file system receive data starting at 80MB/s "to as much as we can give them," Rutherford says. "A typical HDS or EMC device can pump data out to the tune of 400MB/s to 500MB/s," Rutherford says. "Our clients would like to be able to take advantage of that."
The StorNext file system, like Sun's QFS, SGI's CXFS, and Tivoli's Sanergy, is an example of what's known as a SAN, or distributed, file system. In wide use in industries such as seismic exploration and video production, SAN file systems consist of an agent that runs on the clients and a metadata server that manages functions such as data placement and file locking. Clients make out-of-band requests to the metadata server, which directs them to the file's exact location on the SAN device; the data itself then moves directly between client and storage.
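The split between the metadata path and the data path described above can be sketched in a toy model. Everything here is illustrative, not any vendor's actual protocol: the class names, the `(lun, offset)` placement scheme, and the lock behavior are assumptions made for the sake of the example.

```python
# Toy model of a SAN (distributed) file system's architecture: a client
# agent asks a metadata server where a file lives (a small, out-of-band
# round-trip), then reads the data directly from SAN storage.

class MetadataServer:
    """Manages data placement and file locking, but never moves file data."""
    def __init__(self):
        self.placement = {}   # path -> (lun, offset); hypothetical scheme
        self.locks = set()

    def locate(self, path):
        # Out-of-band metadata request: where on the SAN is this file?
        return self.placement[path]

    def lock(self, path):
        if path in self.locks:
            raise RuntimeError(f"{path} is locked by another client")
        self.locks.add(path)

class ClientAgent:
    """Stands in for the agent software running on each client host."""
    def __init__(self, mds, san):
        self.mds = mds
        self.san = san        # SAN modeled as a dict keyed by (lun, offset)

    def read(self, path):
        lun, offset = self.mds.locate(path)   # small metadata round-trip
        return self.san[(lun, offset)]        # bulk data flows over the SAN

# Demo: one file placed on LUN 0
san = {(0, 4096): b"seismic trace data"}
mds = MetadataServer()
mds.placement["/vol/survey1.sgy"] = (0, 4096)
client = ClientAgent(mds, san)
print(client.read("/vol/survey1.sgy"))
```

The model also shows why the metadata server becomes a bottleneck at scale: every client's open and lock traffic funnels through that one component, even though file data bypasses it entirely.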
SAN file systems aren't for everyone, though. Because of the bottleneck introduced by the metadata server, they can serve only a limited number of clients--typically fewer than 100. Then, there's the small issue of the client-side agent. "The SAN file system's big downfall is client-side code," says Spinnaker's Hornung. "A lot of IT organizations are going to shy away from that."