Serial Attached SCSI (SAS) is making a smooth transition from the engineering whiteboard to the marketplace, largely because it incorporates older technologies such as SCSI and the trusted XAUI physical layer.
It doesn't hurt that SAS is built on well-established, field-proven technologies. Standards tend to run into interoperability hurdles when every aspect of the technology is new, from the physical layer all the way up the stack. SAS avoids this pitfall by only attempting to solve a single problem (shortcomings in SCSI) and using tested technologies as a foundation.
SAS has a physical layer based on 10 Gigabit Ethernet's Attachment Unit Interface (XAUI) specification, defined by the Institute of Electrical and Electronics Engineers (IEEE). XAUI uses four serial channels running in parallel at 3.125 Gbps each to create a 10 Gb connection. SAS and Serial ATA (SATA) have taken this technology and used only one channel, running at 3 Gbps or 1.5 Gbps respectively, to connect a disk to a host bus adapter (HBA). So the physical layer of SAS is not brand new: similar flavors have already been developed and refined for XAUI, and are already on the market in the form of SAS's complementary technology, SATA.
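The rate math behind these figures is straightforward: each XAUI lane signals at 3.125 Gbps, but the 8b10b line code (used by SAS as well, and discussed below) spends 10 bits on the wire for every 8 bits of data, so the usable payload is 8/10 of the raw rate. A quick illustrative sketch in plain Python (the arithmetic only, nothing SAS-specific):

```python
# Illustrative arithmetic: payload throughput after 8b10b coding overhead.

LANES = 4                   # XAUI runs four serial lanes in parallel
LINE_RATE_GBPS = 3.125      # raw signaling rate per lane
CODING_EFFICIENCY = 8 / 10  # 8b10b: 10 bits on the wire per 8-bit byte

# Four XAUI lanes: 4 * 3.125 * 0.8 -- roughly 10 Gbps of payload,
# the "10 Gb" in 10 Gb Ethernet.
xaui_payload_gbps = LANES * LINE_RATE_GBPS * CODING_EFFICIENCY
print(xaui_payload_gbps)

# A single 3 Gbps SAS lane, by the same math, carries about 2.4 Gbps of payload.
sas_payload_gbps = 1 * 3.0 * CODING_EFFICIENCY
print(sas_payload_gbps)
```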
SAS and Serial ATA share a similar out-of-band (OOB) protocol, used to let end nodes identify each other as SAS or SATA devices and perform initialization. SAS uses OOB both for initialization and for interoperability with SATA devices. At the encoding layer, SAS uses 8b10b encoding to create transmission characters and primitives from raw bits; this is the same encoding method used by Fibre Channel and Gigabit Ethernet. By using such a tried-and-true encoding method, SAS ensures that there won't be any surprises at this layer when the technology is deployed.
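One reason 8b10b has proven so dependable is its DC balance: every 10-bit code group has a disparity (ones minus zeros) of 0 or +/-2, and the encoder alternates the +/-2 variants so the line never drifts far from an equal number of ones and zeros. A minimal sketch of that property, using two widely known code groups from the standard 8b10b tables (the code-group bit strings are from the tables; the helper function is purely illustrative):

```python
# Illustrative check of 8b10b DC balance.
# disparity = (count of 1s) - (count of 0s) in a 10-bit code group.

def disparity(code_group: str) -> int:
    """Return ones minus zeros for a 10-bit code group given as a bit string."""
    return code_group.count("1") - code_group.count("0")

D21_5 = "1010101010"      # data character D21.5: perfectly balanced
K28_5_NEG = "0011111010"  # K28.5 comma variant sent when running disparity is negative
K28_5_POS = "1100000101"  # complementary K28.5 variant sent when disparity is positive

print(disparity(D21_5))      # 0
print(disparity(K28_5_NEG))  # 2
print(disparity(K28_5_POS))  # -2: the two variants cancel out over time
```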
But aside from simply replacing parallel SCSI, why would anyone want to deploy SAS in the data center? For one thing, SAS can support many more drives than traditional SCSI, which was hampered by a 16-device limit on each bus. SAS defines a device called an expander that allows one SAS host bus adapter to connect to thousands of disk drives. This alone will lower the cost of deploying SAS relative to SCSI, since fewer HBAs will be needed.
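The "thousands of drives" figure follows from the first-generation SAS addressing limits: an edge expander device set can address up to 128 devices, and a fanout expander can connect up to 128 such sets. A rough sketch of the topology arithmetic (the 128 limits are from the SAS-1 specification; the accounting is deliberately simplified and ignores phys consumed by the HBA and inter-expander links):

```python
# Simplified SAS-1 topology arithmetic -- the point is the order of magnitude,
# not an exact device count.

EDGE_SET_ADDRESSES = 128  # max addresses in one edge expander device set (SAS-1)
FANOUT_PORTS = 128        # max edge expander sets below one fanout expander (SAS-1)

scsi_bus_limit = 16       # parallel SCSI: at most 16 devices per bus
sas_domain_limit = EDGE_SET_ADDRESSES * FANOUT_PORTS

print(scsi_bus_limit)     # 16
print(sas_domain_limit)   # 16384 addressable devices in one SAS domain
```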
Also, SAS is designed to be extremely flexible, which makes it ideal for nearline storage. Online storage typically implies storage that needs to be available all the time, and is usually implemented with Fibre Channel or SCSI disks. Offline storage implies data that must be kept but will rarely be accessed, and is best kept on slower but reliable tape. Nearline storage is gaining traction in the market as a means of describing storage that requires fast but infrequent access. Typically, nearline storage must be faster than tape but also inexpensive. Many nearline arrays are populated with high-capacity, low-cost, low-reliability parallel ATA disk drives.
SAS enclosures are now being designed so that they can be populated with either SAS or SATA disk drives, because they have similar physical connectors and signaling conventions. Thus enclosures can be bought and used for either online or nearline storage, and all that needs to change is the disks populating the enclosure, since both SAS expanders and HBAs will be able to talk to SATA disk drives.
SAS is built on a solid foundation, offers flexibility and has strong support from the component manufacturers. In addition, SAS is expected to be priced similarly to parallel SCSI. Considering these advantages, SAS is worth a serious look as a means to contain infrastructure costs in the data center.
David Woolf is the principal SAS Consortium engineer at the University of New Hampshire InterOperability Laboratory (UNH-IOL), which tests SAS and SATA products for interoperability and conformance through open-industry plugfests and ongoing consortium membership testing.
This was first published in November 2004