Saving dollars with distributed file systems

A distributed file system delivers the benefits of network-attached storage (NAS)-style data sharing with the scalability of a storage area network (SAN). What's more, a distributed file system can eliminate the cost of spare systems. Servers participating in the distributed file system share access to a common data store, so if a server running an application fails, any other participating server can start the application and resume the work. Examples of distributed file systems include IBM's SAN File System, Silicon Graphics' CXFS, Sistina's (now Red Hat's) GFS and Veritas' Cluster File System. (See "A simple design for a distributed file system.")
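To make the failover idea concrete, here's a minimal sketch of how a peer server might notice a failed primary through the shared data store and resume its work. It's purely illustrative: the heartbeat-file convention and names such as HEARTBEAT_FILE and start_application are assumptions for the example, not features of any of the products named above.

    # Hypothetical failover sketch: every server in the farm sees the same
    # files, so a peer can detect a stale heartbeat and take over the workload.
    import os
    import time

    HEARTBEAT_FILE = "/shared/app/heartbeat"  # lives on the distributed file system
    TIMEOUT_SECONDS = 30

    def primary_is_alive():
        """The primary is alive if it touched its heartbeat file recently."""
        try:
            age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
            return age < TIMEOUT_SECONDS
        except OSError:  # heartbeat file missing: primary never started
            return False

    def start_application():
        """Placeholder for restarting the application against the shared data."""
        print("resuming application from shared state")

    while True:
        if not primary_is_alive():
            start_application()  # any surviving node can pick up the work
            break
        time.sleep(5)

The point isn't the mechanics -- real distributed file systems use cluster membership and locking rather than heartbeat files -- but that shared access to one data store is what lets any node stand in for another.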

It's possible with a distributed file system to build server farms of inexpensive PC servers where each server has a single SAN connection and redundancy is provided by the farm as a whole. If a server fails or loses its SAN connection, it can be removed from the farm and the file system; the client application simply reconnects to another server. In general, it doesn't matter whether an individual server is available, as the farm should have adequate CPU power to handle peak workloads and the occasional loss of a server.
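The client side of this "any server will do" model can be equally simple. The sketch below -- the host names, port and retry policy are hypothetical -- walks the list of farm members and connects to the first one that answers:

    # Hypothetical client reconnect sketch: if one farm member is down or has
    # lost its SAN connection, try its neighbor instead.
    import socket

    FARM = ["web01.example.com", "web02.example.com", "web03.example.com"]
    PORT = 8080

    def connect_to_farm():
        for host in FARM:
            try:
                return socket.create_connection((host, PORT), timeout=5)
            except OSError:  # server down or unreachable; move on
                continue
        raise RuntimeError("no farm member is reachable")

In practice a load balancer usually does this job, but the logic is the same: the farm, not any individual server, is the unit of availability.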

Of course, it's essential to plan for the unexpected loss of a switch. Using small, eight-port switches minimizes the exposure to any single switch failure, and if a production switch fails, a spare switch provides a modular fix that brings servers back online in relatively short order. On the performance side, the immediate performance exposure from losing a switch in a server farm can be calculated as the percentage of servers accessing storage through that switch.

For example, if 48 servers in a Web server farm access storage through eight switches (six servers per switch), the loss of a single switch costs 12.5% of the farm's computing capacity. Here are the numbers: 48 servers save $1,500 each by using a single connection, for a total savings of $72,000; add a spare switch costing $6,000, but no spare servers. The overall SAN savings is $66,000, not counting the cost of implementing the server farm and distributed file system.
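That arithmetic is easy to sanity-check in a few lines of Python; the figures are the article's own, and the variable names are mine:

    # Sanity-check the switch-loss exposure and the savings figures.
    servers = 48
    switches = 8
    savings_per_server = 1_500   # saved by dropping the second SAN connection
    spare_switch_cost = 6_000

    servers_per_switch = servers // switches       # 6
    exposure = servers_per_switch / servers        # 0.125
    net_savings = servers * savings_per_server - spare_switch_cost

    print(f"{exposure:.1%} of capacity lost per failed switch")  # 12.5%
    print(f"${net_savings:,} net SAN savings")                   # $66,000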

It's certainly not news that SANs are expensive to purchase, install and operate. As a result, they're found primarily on high-end, critical Unix systems where their cost can be justified. Meanwhile, many smaller, less-critical Windows and Linux servers in corporate data centers--and in most small and medium-sized enterprises--still use direct-attached storage (DAS) and experience all the administrative headaches that DAS brings.

It's time to reevaluate the high-availability and high-cost assumptions about SANs and look at some alternative architectures and topologies that would allow Windows and Linux PC systems to be incorporated into SANs at a much lower cost.

There's little doubt that many of the benefits of SANs--high-availability, mission-critical storage services with centralized control and management--would be useful for DAS-connected Windows systems. But with a Windows or Linux server costing less than $5,000, it's hard to imagine budgeting in the range of $75,000 or greater for an entry-level SAN for a group of four or five servers. Storage is certainly important, but its cost can't be multiples of the server cost.

[Figure: A simple design for a distributed file system]
Rethinking SAN prices
SANs really shine when matched with high-end systems and their requirements for full redundancy and superior performance for high-throughput transaction processing. On the other hand, the reliability of a single Windows or Linux server is usually not critical because servers can be deployed in farms. The newest directions in server computing, blade servers and grid computing, revolve around the "any-node-will-do" philosophy of using whatever CPU resources are available. The obvious question is: If servers don't require ultimate reliability and redundancy, why should their storage?

Servers are purchased according to the computing power an application requires; companies don't buy a large system if a smaller one will do the job. So an equally obvious question is: Why use first-tier storage for low-end servers hosting applications that don't need expensive performance capabilities?

A common response to these cost-comparison questions is the refrain that a SAN needs to handle a complete range of performance requirements. That thinking raises the question of why every component in a SAN must meet the requirements of the highest-performing servers and applications.

SANs provide key capabilities for mission-critical applications, but because of their cost, they tend to be underutilized for applications that aren't mission-critical; nearly 75% of all server-based data is stored on DAS attached to PC server platforms. So the challenge is to find ways to expand the use of SANs through less-expensive technologies and through operational and management efficiencies and practices.

The $64,000 question is: How do you cut costs out of a SAN without severely diminishing its benefits? One obvious way is to spend less on SAN products. That approach works, but it depends entirely on the availability of less-costly products. There's a lot of potential to reduce cost by using inexpensive technologies where appropriate, such as ATA, SATA and iSCSI. However, we won't address replacement technologies here; instead, we'll look at SAN architectures and topologies in an attempt to find more cost-effective designs. These designs reject the basic SAN assumptions about the need for high availability and connections for redundancy. In sum, there are ways to make significant spending cuts that don't seriously impact application reliability and performance.

This was first published in March 2004
