Alternative storage clustering methods
Not every vendor is choosing to cluster storage using clustered file systems. Here are some other ways vendors are clustering storage on the back end.
Hitachi Data Systems (HDS) TagmaStore Universal Storage Platform. Hitachi's TagmaStore provides a common platform into which cards may be inserted to access a pool of shared storage.
Card options include Fibre Channel, FICON and ESCON port cards, and iSCSI and NAS blades. This approach allows all storage to be managed at the block level through the same interface and with the same volume manager. HDS also gives users the flexibility to virtualize the storage pool. However, the NAS blades don't yet include any native method to share data at the file level, so third-party products that provide global namespaces are required.
Network Appliance (NetApp) Inc. Data Ontap GX. NetApp's Data Ontap GX allows users to create one logical group of NetApp filers. With Data Ontap GX, any filer can receive a client request for a file and redirect that request to the filer that actually holds the file. This lets users add new filers to an existing NetApp installation and share their resources without a forklift upgrade or a NAS gateway. NetApp's optional FlexVol feature allows users to stripe data across all of the nodes to improve performance and availability.
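The redirection behavior described above can be sketched in a few lines. This is a hypothetical illustration, not NetApp code: the `FilerNode` and `Cluster` names and the linear-scan lookup are assumptions made for clarity.

```python
class FilerNode:
    """One filer in the cluster; holds some subset of the files."""
    def __init__(self, name):
        self.name = name
        self.local_files = {}  # path -> file contents held on this node

    def store(self, path, data):
        self.local_files[path] = data


class Cluster:
    """One logical group of filers; any node can field any client request."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def owner_of(self, path):
        # Locate the node that actually holds the file.
        for node in self.nodes.values():
            if path in node.local_files:
                return node
        raise FileNotFoundError(path)

    def read(self, entry_node_name, path):
        # The client may contact any node; the cluster transparently
        # redirects the request to the owning node.
        assert entry_node_name in self.nodes
        owner = self.owner_of(path)
        return owner.local_files[path]
```

For example, a client reading through `filer-a` still gets a file that physically lives on `filer-b`, which is the property that lets new filers join without the client noticing.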
Pillar Data Systems Inc. Axiom Storage System. Pillar's Axiom architecture clusters its storage controllers, called Axiom Slammers, which serve as a gateway to its back-end disk and may be configured for SAN or NAS. Each Axiom system supports only four Axiom Slammers, each running active-active, but Pillar gives users the option to scale out the NAS Slammers by using its scalable file-system and global namespace options.
Host-based clustered file systems
Clustered file systems that operate at the host level provide some distinct advantages over clustered storage systems and NAS gateway configurations:
- There's no need to purchase proprietary storage systems.
- They work in most mixed-vendor environments.
- There's no need to use a mix of file- and block-based protocols.
- The performance overhead associated with processing NFS and CIFS is minimized.
To control access to files and maintain their integrity, SGI uses a metadata server for each CXFS file system. This requires each server in the cluster to communicate with the metadata server over a TCP/IP link. Even though the amount of metadata traffic sent over this link is minimal, users may want to put the metadata server on a separate physical network to minimize network collisions and provide higher uptime. Users in highly available environments may want to consider building another physical network and clustering two metadata servers--an expensive and complicated configuration--so the failure of a single metadata server doesn't bring down the entire cluster (see "Alternative storage clustering methods," this page).
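The division of labor above--small lock and location messages over the LAN, bulk data moving directly over the SAN--can be sketched as follows. This is an illustrative model of a CXFS-style metadata server, not SGI's implementation; the class and method names are assumptions.

```python
class MetadataServer:
    """Arbitrates file access for the cluster. Only small metadata
    messages (locks and block locations) cross the TCP/IP link; the
    actual block I/O happens directly between each host and the
    shared SAN storage."""

    def __init__(self):
        self.extents = {}  # path -> list of block numbers on shared disk
        self.locks = {}    # path -> host currently holding the file

    def open_file(self, host, path):
        # Grant the lock and return block locations; the host then
        # reads those blocks over the SAN, bypassing the LAN entirely.
        holder = self.locks.get(path)
        if holder is not None and holder != host:
            raise RuntimeError(f"{path} is locked by {holder}")
        self.locks[path] = host
        return self.extents.get(path, [])

    def close_file(self, host, path):
        # Release the lock so other hosts in the cluster may open the file.
        if self.locks.get(path) == host:
            del self.locks[path]
```

The sketch also shows why the metadata server is a single point of failure: if it goes away, no host can acquire locks or learn block locations, which is the motivation for the clustered metadata-server configuration mentioned above.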
This was first published in September 2006