Clustering low-cost Linux servers with a network file system cuts costs significantly compared with purpose-built NAS filers; that much is a no-brainer. Now, advocates of NAS clustering argue that these offerings match or even outperform traditional NAS appliances.
PolyServe Inc. and Rackable Systems Inc. this week announced the first network-attached storage (NAS) benchmark to break the 1 GBps barrier in I/O throughput using a general-purpose cluster file system, a standard operating system and Intel servers.
The benchmark results, audited by the Taneja Group, represent a 470% improvement over the performance of a midrange Network Appliance filer (250 MBps) and a 48% improvement over a NAS filer from startup Isilon Systems Inc. (818 MBps), which used a cluster of 18 servers -- twice the size of the PolyServe-Rackable Systems cluster, the companies claim.
One user SearchStorage.com contacted said the results are important, particularly when running applications like an Oracle database. Dynamic Graphics Inc., a small stock photography distributor based in Alameda, Calif., gets 80% of its revenue from e-commerce and relies heavily on its Oracle databases to process these transactions.
"We won't hit that throughput, but it's nice to know it's there," said Todd Moore, director of IT at Dynamic Graphics. Right now, Dynamic Graphics runs one instance of Oracle 9i on a production server and another instance
Each server in the PolyServe cluster is an active file server and contributes compute, storage and network bandwidth to the overall system, which according to PolyServe allows for a scalable, high-performance NAS cluster. A spokesman for PolyServe said the complete hardware and software configuration used in the NAS cluster benchmark lists at $404,698. He said a "typical" NAS benchmark configuration would cost over $1 million.
The biggest drawback to NAS clustering is the lack of software features and management functionality that users expect from purpose-built NAS filers, such as volume-level controls, snapshots, replication and integration with backup software.
The benchmark setup
The complete system configuration used for the benchmark tests consisted of 10 NFS client machines and 10 NAS server machines from Rackable Systems.
The Rackable Systems servers in the NAS cluster included two public Gigabit Ethernet network interface cards, a single private 10/100 Ethernet network interface card, PolyServe Matrix Server shared data clustering software, SUSE Linux Enterprise Server 8 SP3 software, and an Emulex LP10000DC Fibre Channel host bus adapter.
The setup also included two Brocade Communications Systems Inc. Silkworm 3800 16-port SAN switches and one DataDirect Networks Inc. S2A8500 storage controller, configured as four file systems with 74 drives (36 GB each, 15,000 revolutions per minute) -- 72 active drives and two hot spares -- for 2.3 TB of usable storage.