There are several ways to classify storage arrays: how data is stored (in blocks or files), the type of connection to the server and the type of drives used.
Disk drives are probably the first thing you should consider. Enterprise-class SATA drives are available in the same capacities as regular SATA drives at a slightly higher cost, generally $50 to $100 more per drive. All other enterprise-class drives, whether Fibre Channel (FC), SCSI or SAS (Serial Attached SCSI), are based on the same higher-speed, more durable electronics; they usually come in smaller capacities, typically 73 GB, 146 GB and 300 GB, and are much more expensive. They are, however, also a good deal faster, especially in server applications, and should last longer.
There are two ways to connect storage to a server: direct-attached storage (DAS), which is connected directly to a single server; and networked storage, which, as the name implies, is shared storage connected through a network, whether Ethernet, FC, Infiniband or something else.
DAS includes a wide variety of connections, such as SCSI, external SATA (eSATA), Firewire (IEEE 1394) and USB. SCSI is the most common on servers, not because it is the fastest, but because it better handles the kinds of uses a server sees, especially multiple simultaneous operations on different files. While nearly every motherboard already has one or more USB ports, USB isn't intended for server storage, and eSATA is engineered more for expanding PCs than servers. Firewire is fast enough for servers, but other than Macs, there aren't many servers with Firewire interfaces.
Networked storage has a number of advantages -- it can be connected to multiple servers, allowing better utilization of drive space and more responsiveness to changing demands, since you can expand the space for one server and decrease the space for another as requirements change. It can also be faster than DAS, bumping speeds from the 320 MBps of SCSI to almost 1,000 MBps with the highest speeds of FC, iSCSI or Infiniband.
Storage area network (SAN) storage will generally require adding a host bus adapter (HBA) to the server. iSCSI storage can use an existing Ethernet connection, but it should be separate from the connection your server uses to reach other systems or clients, since iSCSI will use up all of the available bandwidth of one connection. Fibre Channel or Infiniband will require HBAs, and separate switches as well, but repay the greater investment in equipment with higher speeds -- up to 8 Gbps for FC, and 10 Gbps for Infiniband.
A number of vendors offer products that can work as either iSCSI or FC storage, at prices not much more than iSCSI-only arrays. These arrays are also expandable by adding shelves of drives to an existing appliance. This means an organization could buy a basic storage appliance, start with inexpensive iSCSI connections to servers, and then add faster drives and TCP/IP offload adapters or FC adapters if the need for better performance arises.
RAID is used to increase the performance and reliability of storage systems. There were originally five RAID levels -- 0, 1, 3, 4 and 5 -- though RAID 3 and RAID 4 are generally not used any more. RAID 0 is performance-oriented: it stripes data across two or more drives, increasing performance but decreasing reliability, since if any one drive fails, all data is lost. RAID 1 mirrors data to two disks, so that everything written to one is written to the other. This increases reliability, but halves the amount of usable storage you get from each pair of disks.
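The striping and mirroring described above can be sketched in a few lines of Python. This is purely illustrative -- real RAID operates on disk blocks in hardware or in the OS, not on byte strings -- and the function names and chunk size here are invented for the example:

```python
# Illustrative sketch only: real RAID works at the block-device level,
# not on Python byte strings. Names and chunk size are invented.
def raid0_stripe(data, drives, chunk=4):
    """RAID 0: deal fixed-size chunks round-robin across the drives."""
    stripes = [bytearray() for _ in range(drives)]
    for i in range(0, len(data), chunk):
        stripes[(i // chunk) % drives] += data[i:i + chunk]
    return stripes

def raid1_mirror(data, drives=2):
    """RAID 1: every drive holds a full copy, so usable space is halved."""
    return [bytes(data) for _ in range(drives)]

d0, d1 = raid0_stripe(b"ABCDEFGHIJKLMNOP", 2)
# d0 holds chunks 0 and 2, d1 holds chunks 1 and 3; losing either drive
# loses half of every file, which is why RAID 0 alone reduces reliability.
```

Note how RAID 0 gives you the full capacity of both drives but no redundancy, while RAID 1 gives full redundancy at half the capacity -- the trade-off the article describes.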
RAID 5 stripes data across three or more disks, but uses one drive's worth of space (out of a set of three to eight, generally) for parity data, so that if any one disk fails, no data will be lost. This "wastes" less disk space while still providing fault tolerance, which is why RAID 5 is the most common type of RAID in use. Newer variations of RAID include a combination of RAID 0 and RAID 1, which may be called either RAID 0+1 or RAID 10 depending on the implementation, and typically uses two sets of two or more disks. Data is striped across the disks in one set to increase performance, and that data is mirrored to the other set for reliability. RAID 50, RAID 60 and other hybrid variations use different schemes intended to retain the efficiency of RAID 5 while increasing performance or ensuring that data remains available even if more than one drive in the set is lost.
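RAID 5's fault tolerance comes from XOR parity: XOR all the data blocks in a stripe together, store the result in one drive's worth of space, and any single missing block can be recomputed from the survivors. A minimal Python sketch of the idea (illustrative only -- real controllers rotate the parity block across the drives rather than dedicating one disk to it):

```python
from functools import reduce

def parity(blocks):
    """XOR equal-length blocks together -- the heart of RAID 5 parity."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data_blocks = [b"AAAA", b"BBBB", b"CCCC"]  # one stripe across three data drives
p = parity(data_blocks)                    # the stripe's parity block

# Simulate losing the second drive: rebuild its block from the
# surviving data blocks plus the parity block.
rebuilt = parity([data_blocks[0], data_blocks[2], p])
assert rebuilt == b"BBBB"
```

Because XOR is its own inverse, the same `parity` function both generates the parity block and rebuilds a lost one -- which is also why losing a second drive before the rebuild finishes loses data, the scenario RAID 6 and RAID 60 guard against.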
Many servers have built-in SCSI interfaces, which may also support RAID, though not all do -- some just connect to SCSI disks. If you connect a bunch of disks without hardware RAID, you get a JBOD (just a bunch of disks), which is less expensive than RAID. You can then use server software (it's built into Windows) to create RAID 0 or RAID 1. The advantage is that it's cheaper than buying a RAID controller, but it's slower and doesn't support RAID 5.
SCSI comes in a wide variety of flavors dating back to 1986; the most recent is Ultra-320 (U320), which runs at 320 MBps, a big jump from the 10 MBps of 1986. It's also faster than the 60 MBps you get with USB 2.0 or the 100 MBps of Firewire, though not as fast as the 375 MBps of eSATA.
In addition to getting cables that match (there are a dozen or more different types of cables, so you'll need to be sure of the type you need), you'll also need to understand termination -- SCSI supports up to 15 devices and the last device in the chain needs to be terminated so that the system knows it's the last one.
Storage array vendors