Data storage components are at the core of any enterprise storage system. At the lowest level, hard disks are the medium that hold vital corporate data. The choice of hard disks can have a profound impact on the capacity, performance and long-term reliability of any storage infrastructure.
Because it's unwise to trust valuable data to any single point of failure, hard disks are combined into groups that can boost performance and offer redundancy in the event of disk faults. At an even higher level, those arrays must be integrated into the storage infrastructure -- combining storage with network technologies to make data available to users over a LAN or WAN.
The lowest level: Hard disks
Hard disks are random-access storage mechanisms that record data on spinning platters (aka disks) coated with extremely sensitive magnetic media. Magnetic read/write heads step across the radius of each platter in set increments, forming concentric circles of data, dubbed tracks.
Hard disk capacity is determined by the quality of the magnetic media (bits per inch) and the number of tracks. Thus, a late-model drive with superior media and finer head control can achieve far more storage capacity than models 6-12 months old. Some of today's hard drives can deliver up to 1,000 GB, or 1 TB, of capacity. Capacity is also influenced by drive technologies such as perpendicular recording, which fits more magnetic points into the same physical disk area (areal density).
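The relationship between areal density and capacity can be sketched with a back-of-the-envelope calculation: treat a platter surface as an annulus and multiply its area by the areal density. The figures below (density, platter radii) are illustrative assumptions, not values from any specific drive datasheet.

```python
import math

def platter_capacity_gb(areal_density_gbit_per_in2, outer_in, inner_in, surfaces=2):
    """Estimate platter capacity: recordable area (an annulus) times areal density."""
    area_in2 = math.pi * (outer_in**2 - inner_in**2)   # recordable ring of the platter
    bits = areal_density_gbit_per_in2 * 1e9 * area_in2 * surfaces
    return bits / 8 / 1e9                              # bits -> bytes -> GB (decimal)

# Assumed figures: ~130 Gbit/in^2 (roughly typical of early perpendicular
# recording), 1.8-inch outer radius, 0.6-inch inner radius, both surfaces used.
cap = platter_capacity_gb(130, 1.8, 0.6)   # a few hundred GB per platter
```

Multiplying by the number of platters in the drive gives total capacity, which is why raising areal density lifts capacity without changing the drive's physical size.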
Hard disk performance is influenced by the rotational speed (rpm) of the platters and the interface that connects the drive to its host computer. Speeds from 5,400 to 7,200 rpm are most common in personal computers and secondary storage systems; 10,000- and 15,000-rpm disks are allotted to servers and primary storage systems.
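One concrete way rotational speed shapes performance is average rotational latency: on average the desired sector is half a revolution away from the head, so latency is half the time of one full rotation. A quick sketch:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency in milliseconds: half of one revolution."""
    ms_per_revolution = 60_000 / rpm   # 60,000 ms per minute / revolutions per minute
    return ms_per_revolution / 2

for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>6} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms")
# A 15,000-rpm disk averages 2 ms of rotational latency vs. about
# 5.6 ms for a 5,400-rpm disk -- one reason fast-spindle drives go
# into servers and primary storage.
```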
The interface manages data transfer to and from the drive. Both ATA and SCSI interfaces are traditional parallel architectures that transfer commands and data across multiple data lines simultaneously. ATA offered lower data rates and was used mostly in PCs, while SCSI provided faster data rates and appeared in workstations and servers. SATA and SAS are more current interfaces that pass ATA/SCSI commands serially along a single data wire. The move to serial cabling allows for data transfers up to 3 Gbps and simpler (less expensive) connections -- the interface has no direct impact on the capacity of a hard disk.
Fibre Channel (FC) is another popular serial hard disk interface. FC is known for its speed -- 2 Gbps and (more recently) 4 Gbps -- and its data integrity features. FC is also a switched interface, so it is possible to create a "fabric" of storage devices and hosts where every host can see every storage device, which vastly improves the availability of data. This is a fundamental technology behind the SAN, which is introduced in Chapter 3.
Grouping the disks: RAID
Hard disks are electromechanical devices whose working life is finite. Media faults, mechanical wear and electronic failures can cause problems that make the data on the drive inaccessible. To guard against this, organizations turn to data protection tactics such as arranging groups of disks into arrays. This is known as RAID.
RAID implementations offer data redundancy and enhanced performance. Redundancy is achieved by copying data to two or more disks, so when a fault occurs on one hard disk, duplicate data on another can be used. In many cases, file contents are also spanned (or striped) across multiple hard disks. This improves performance because the various parts of a file can be accessed on multiple disks simultaneously, rather than waiting for a complete file to be accessed from a single disk. RAID can be implemented in various schemes, each with its own designation:
- RAID-0. Disk striping is used to improve storage performance, but there is no redundancy.
- RAID-1. Disk mirroring offers disk-to-disk redundancy, but capacity is reduced and performance is only marginally enhanced. Each disk is duplicated to another, so total capacity is cut in half.
- RAID-5. Parity information is spread throughout the striped disk group, improving read performance and allowing data for a failed drive to be reconstructed once the failed drive is replaced.
- RAID-6. Dual parity data is spread throughout the striped disk group, allowing data for up to two simultaneously failed drives to be reconstructed once the failed drive(s) are replaced. Proprietary versions of RAID-6 are usually called RAID DP (for dual parity).
There are other RAID levels, but these four are the most widely used. You can also obtain greater benefits by mixing RAID levels. Combinations are typically denoted with two digits. For example, RAID-50 (or RAID 5+0) is a combination of RAID-5 and RAID-0, while RAID-10 (or RAID 1+0) is RAID-1 and RAID-0 implemented together.
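The parity idea behind RAID-5 can be illustrated with XOR: the parity block in each stripe is the XOR of that stripe's data blocks, so any single missing block can be recomputed from the surviving blocks plus parity. A minimal sketch (hypothetical 4-byte stripes, not a real RAID implementation):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Hypothetical stripe: one 4-byte block on each of three data disks
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
parity = xor_blocks(data)            # parity block stored on a fourth disk

# Simulate losing disk 1 and rebuilding it from the survivors plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]            # the lost block is fully recovered
```

The same property (XOR-ing everything that survives yields what was lost) is why a RAID-5 group tolerates exactly one failed drive; RAID-6 adds a second, independently computed parity block to survive two.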
RAID and storage arrays
There are many ways to group hard disks, and enterprise storage can involve dozens to thousands of disks arranged in storage arrays. The largest arrays can store petabytes of data. The most basic expression of disk grouping is JBOD (just a bunch of disks). This is simply the accumulation of pure capacity, and offers no redundancy or performance benefits. Putting five 200 GB drives in a JBOD arrangement simply yields 1 TB of unprotected storage.
RAID arrays group relatively small sets of disks to work cooperatively for redundancy or added performance (often both). However, redundancy costs drive space. Suppose you have ten 200 GB drives. That's 2 TB of raw storage, but mirroring halves that total to 1 TB of mirrored storage. Advanced RAID configurations like RAID-5 and RAID-6 reduce the need for redundant disk space by storing parity information instead of full copies. The parity data is then used to rebuild the data on a failed drive.
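The capacity arithmetic can be captured in a small helper. This is a simplified sketch -- it ignores formatting overhead, hot spares and vendor-specific reserved space -- covering the levels discussed above:

```python
def usable_tb(n_disks, disk_gb, level):
    """Usable capacity (decimal TB) for a few common RAID levels.
    Simplified: ignores formatting overhead and hot spares."""
    raw_gb = n_disks * disk_gb
    if level in ("jbod", "raid0"):
        usable_gb = raw_gb                      # no redundancy at all
    elif level == "raid1":
        usable_gb = raw_gb / 2                  # half the disks hold mirror copies
    elif level == "raid5":
        usable_gb = (n_disks - 1) * disk_gb     # one disk's worth of parity
    elif level == "raid6":
        usable_gb = (n_disks - 2) * disk_gb     # two disks' worth of parity
    else:
        raise ValueError(f"unknown RAID level: {level}")
    return usable_gb / 1000

# Ten 200 GB drives: 2 TB raw, 1 TB mirrored, 1.8 TB under RAID-5,
# 1.6 TB under RAID-6.
```

The example makes the trade-off concrete: parity-based levels give back most of the capacity that mirroring sacrifices, at the cost of parity computation on every write.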
Storage arrays can be classified as modular or monolithic. A modular storage array like EMC's CLARiiON AX100 is small and self-contained with fewer than 24 drives, designed for the lighter traffic patterns of a small and midsized business (SMB). Many companies keep pace with their growing storage needs by adding modular arrays.
In contrast, monolithic storage arrays, such as EMC's DMX-4, Hitachi Data Systems' Lightning or IBM's DS8000, may host hundreds of drives with the communication capability to handle heavy utilization. The expense and management overhead of monolithic arrays usually result in just a few key deployments. The line between modular and monolithic arrays is blurring, however, as features once found only in high-end arrays appear in smaller systems.
Clustering is a relatively new concept in storage. Storage clusters are groups of storage arrays sharing redundant connections to work cooperatively as a single storage system. Spreading requests across multiple arrays typically yields superior throughput and performance while still supporting large numbers of users. There is also inherent redundancy: when one element of the cluster fails, the other elements take over without interruption to ensure that data is continuously available. Storage clusters are generally deployed where performance and storage system uptime are crucial.
Getting storage on the network
Network-attached storage (NAS) boxes are file-serving storage devices behind an Ethernet interface, connecting disks to the network through a single IP address. NAS deployments are straightforward and management is light, so new NAS devices can easily be added as more storage is needed. The downside to traditional NAS is performance -- storage traffic must compete with ordinary network traffic for access across the Ethernet cable. Even so, NAS access is often superior to disk access at a local server. Multiple communication ports are normally used to aggregate traffic and provide redundancy through failover.
SANs overcome common server and NAS performance limitations by creating a subnetwork of storage devices interconnected through a switched fabric such as FC or iSCSI. Both approaches make any storage device visible to any host, greatly improving the availability of corporate data. FC is costlier than iSCSI but offers better performance, so FC is typically found in the enterprise while iSCSI commonly appears in SMBs. However, SAN deployments are more expensive to implement (in terms of switches, cabling and host bus adapters) and demand far more management effort. Today the traditional performance gap between FC and iSCSI is shrinking.