Hewlett-Packard Co. (HP) will supply a consortium of Canadian universities with its new Scalable File Share (SFS) 2.0 clustered file system, winning a significant chunk of a $20 million IT upgrade and beating out tough competition in the process.
Canada's Shared Hierarchical Academic Research Computing Network (SharcNet) is a group of 11 universities and colleges that in 2001 decided to pool their IT resources to get more computing bang for their buck.
"It's working," according to Hugh Couchman, scientific director of SharcNet. To put that in perspective: a researcher using SharcNet can produce, in a single day, results that would normally have taken a year or more on a personal computer.
Approximately $20 million of the new grant is being spent on four new parallel compute clusters from HP. Housed at separate locations, the clusters will comprise 1,900 servers and are collectively expected to deliver more than 25 teraflops of performance. One of them, affectionately dubbed the "capability cluster," will include 3,000 processors. Each cluster will be backed by HP's Scalable File Share (SFS) clustered file system and industry-standard SATA drives; the largest cluster has 200 terabytes (TB) behind it today.
HP SFS is a self-contained file server that aggregates bandwidth by distributing files in parallel across clusters of industry-standard server and storage components. It is the first commercial product to leverage Lustre, a Linux clustering technology developed by Cluster File Systems Inc., HP and the U.S. Department of Energy.
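The idea behind that parallel distribution is file striping: a file's bytes are dealt out round-robin across several storage servers, so a large read or write engages all of them at once instead of bottlenecking on one. The sketch below is purely illustrative -- it is not Lustre or SFS code, and the tiny 4-byte stripe size stands in for the megabyte-scale units real systems use:

```python
# Conceptual sketch of file striping, the technique a parallel file
# system like Lustre uses to spread one file's data across many
# storage servers. Illustrative only -- not actual Lustre/SFS code.

STRIPE_SIZE = 4  # bytes per stripe unit; real systems use e.g. 1 MB


def stripe(data: bytes, num_servers: int) -> list[bytes]:
    """Deal STRIPE_SIZE chunks of `data` round-robin across servers."""
    servers = [bytearray() for _ in range(num_servers)]
    for i in range(0, len(data), STRIPE_SIZE):
        servers[(i // STRIPE_SIZE) % num_servers].extend(
            data[i:i + STRIPE_SIZE]
        )
    return [bytes(s) for s in servers]


def unstripe(servers: list[bytes]) -> bytes:
    """Reassemble the original byte stream from the striped pieces."""
    out = bytearray()
    offsets = [0] * len(servers)
    turn = 0
    while any(off < len(srv) for off, srv in zip(offsets, servers)):
        i = turn % len(servers)
        out.extend(servers[i][offsets[i]:offsets[i] + STRIPE_SIZE])
        offsets[i] += STRIPE_SIZE
        turn += 1
    return bytes(out)
```

Because each server holds only every Nth chunk, N servers can stream their pieces of the file simultaneously, which is where the bandwidth scaling comes from.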
HP launched SFS a year ago and has about half a dozen customers so far -- the majority in academic research. Interestingly, while HP's enterprise storage business has lost market share, its high-performance computing group appears to be doing just fine. According to IDC figures for the first quarter of 2005, HP leads the sector with 34% market share, roughly five percentage points ahead of IBM.
This week HP announced SFS 2.0, which doubles the file system's capacity to 512 TB and triples its bandwidth to 35 GBps.
SharcNet took bids from IBM, Sun Microsystems Inc. and Silicon Graphics Inc. (SGI) before finally choosing HP's SFS. Prior to implementing SFS, the group had experience using Compaq servers and a home-built scalable file system. "Sometimes it would hang for 20 minutes just writing out a file, so we were acutely aware of the need to pay attention to the file system," Couchman said.
The group liked the fact that SFS is based on the Lustre file system. Based on SharcNet's investigations, "Lustre provides the best performance for large clusters," according to Couchman. Price was another factor. "In academic research we're cheap; we don't want to spend a lot on software." Because SFS uses open source Linux code, SharcNet can modify the software as needed.
Other clustered file system products include: Advanced Digital Information Corp.'s StorNext File System, Ibrix Inc.'s Fusion, PolyServe Inc.'s Matrix Cluster, Red Hat Inc.'s Global File System (formerly Sistina GFS), Network Appliance Inc.'s SpinServer, SGI's InfiniteStorage Shared Filesystem CXFS, Sun's QFS, IBM's SAN file system and Veritas Software Corp.'s Cluster File System. Notable by its absence from this list is EMC Corp., which doesn't make its own scalable file system but partners with Ibrix.
"SFS is new and we are still getting our feet wet with it … so far it's pretty good … one question we have is how to integrate and use the file system across our network," Couchman said. Currently, each cluster runs a separate instance of SFS.
"Today, to mount a system at Western [University] on the cluster at McMaster [University], we would have to use good old NFS … but we want to get closer to the performance file system rather than the limitations of NFS." He said HP is aware of SharcNet's concern about this and is working with them on it.
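The workaround Couchman describes would look roughly like the following /etc/fstab entry on a McMaster node; the hostname and paths here are hypothetical illustrations, not SharcNet's actual configuration:

```
# Hypothetical cross-site NFS mount: a file system exported by a
# server at Western, mounted on a compute node at McMaster.
# Hostname and paths are illustrative only.
sfs-head.uwo.example.ca:/export/scratch  /mnt/western-scratch  nfs  ro,hard  0  0
```

A single NFS server funnels all that traffic through one machine, which is exactly the bottleneck a parallel file system like SFS is designed to avoid -- hence SharcNet's interest in extending SFS itself across sites.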
The new HP clusters consist of HP Cluster Platform 4000 systems based on HP ProLiant servers with AMD Opteron processors running HP's Linux-based XC system software. The interconnect between the clusters is varied and includes Gigabit Ethernet, Quadrics Ltd. ELAN4, Myrinet and Voltaire's InfiniBand.
The cluster expansion is expected to assist SharcNet in studying human genomics, containing infectious human and animal diseases, improving weather prediction, simulating the collapse and formation of planets, and developing nano-scale electronic devices.

SharcNet's members are the universities of Western Ontario (the lead institution), Guelph, McMaster, Wilfrid Laurier, Windsor, Waterloo, Brock and York, the Ontario Institute of Technology, and Fanshawe and Sheridan colleges.