Direct-attached storage FAQ

Greg Schulz, founder and senior analyst with StorageI/O, answers frequently asked questions about direct attached storage (DAS).

In this FAQ, Schulz discusses direct-attached storage today, from what it is and how it works to how it's evolving. He also talks about vendors offering DAS products for small to midsized businesses (SMBs), best practices for selecting DAS products and DAS performance.

You can read his answers to these frequently asked questions below or download a recording of the Q&A.

DAS FAQ podcast

Table of contents:

>>Defining DAS
>>DAS evolution
>>DAS performance
>>SMB DAS products
>>Best practices for selecting DAS

Can you briefly define DAS?

The fundamental aspect of DAS is that it is data storage directly attached to a server in some shape or form. Think of it this way: It's not networked. In other words, it's not using a storage area network (SAN), network-attached storage (NAS), Ethernet or Fibre Channel switches. The storage is directly connected to a server.

There are different shades, flavors and facets of DAS. There is an individual disk drive in a server, there is an individual disk drive in a computer and there is a group of drives that can be internal to a server. There can also be a group of drives that are external to a server but are directly attached; most commonly via parallel SCSI, Serial ATA (SATA) or ATA/IDE in the past, and via serial-attached SCSI (SAS) on a go-forward basis.

DAS can be either internal to a server or it can be external. In the case of the external, it's dedicated and not SAN or NAS attached.
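To make the distinction concrete, here is a minimal Linux sketch (assuming util-linux's `lsblk` is installed) that lists each disk with its transport; transports like "sata" or "sas" indicate direct-attached drives, while "iscsi" or "fc" would indicate SAN-attached storage:

```shell
#!/bin/sh
# List top-level block devices with their transport (TRAN column).
# "sata", "sas" or "ata" = direct-attached; "iscsi" or "fc" = SAN-attached.
# Assumes a Linux host with util-linux installed; falls back gracefully
# on systems where lsblk is not available.
lsblk --nodeps -o NAME,TRAN,SIZE,TYPE 2>/dev/null \
  || echo "lsblk not available on this system"
```

This is only a quick inventory check; how a drive is cabled and zoned still determines whether it is truly dedicated to one server.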

How is DAS evolving today? Can you talk about some of the new ways DAS is being used? 

DAS is evolving in quite a few ways. As I mentioned, DAS can be internal to a server or it can be external. DAS can be a dedicated, standalone disk drive (in other words, no RAID protection), or it can be a small RAID array with one or two host ports.

The way DAS is being used today is changing. There is the notion that all storage is SAN-attached or NAS-attached. But the reality is there is still a large portion of storage that just doesn't lend itself to being network-attached, whether it's for cost reasons, the size of the deployments or whatever it happens to be.

One of the ways DAS is changing is from a packaging standpoint, particularly with blade centers for blade servers. With the blade centers you have a new packaging form where you have multiple server blades in a chassis. Those same server blades may have a drive attached to them, similar to what you'd have with a standalone server.

But you're also now seeing blades with a group of mounted disk drives that plug right into the blade centers. What those drives are being used for is local disks, boot disks, paging swap file-type disks and other things that are kept more local and don't lend themselves to being served out over a SAN.

Another place that builds on this model, whether it's a blade center with a blade of disks or an external RAID array that's dedicated and direct-attached, perhaps using SAS or SATA, is the file server.

This would be either a Windows Storage Server (which does both iSCSI and NFS/CIFS) or one of the storage server offerings from vendors like LeftHand Networks Inc., Seanodes Inc. and Ibrix Inc. that turn a server and its dedicated internal or external storage into some sort of storage server (either a NAS-based, iSCSI-based, NFS/CIFS or even a VTL-type device).

DAS is still very prevalent. It fills the gap where an iSCSI SAN or a Fibre Channel SAN can't quite scale down, and it also serves as a building block for deploying iSCSI- and NAS-based storage servers.

Are the efficiency issues associated with DAS still a problem for most businesses? 

It depends on what you mean by efficiency. Is it efficiency in terms of performance or is it efficiency in terms of utilization (making use of the storage), or is it efficiency in terms of protecting and using the data in an effective manner?

From a performance standpoint, for certain applications with certain types of DAS storage, performance can be as good as, if not better than, a SAN-based device. This is true with a high-performance DAS array, such as an entry-level IBM DS3000, HP MSA 2000 or EMC Clariion AX4 configured in a direct-attached manner.

The performance can be very good because you don't have the associated network in between. But is it cost-effective? You have a single system using that storage, and if the application is able to effectively utilize it and needs dedicated DAS, then it can be cost-effective.
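To sanity-check local throughput on a DAS volume, a rough Linux sketch using `dd` (the scratch path below is just an example; a real test would target a filesystem mounted on the DAS volume, and a tool like fio would give more rigorous numbers):

```shell
#!/bin/sh
# Rough sequential-write throughput check using dd.
# Writes 64 MB to a scratch file and reports the transfer rate.
# The path is a placeholder; point it at a filesystem on the DAS volume.
# oflag=direct bypasses the page cache where supported, for a more
# honest disk number; fall back to a buffered write if unsupported.
SCRATCH=/tmp/das_throughput_test
dd if=/dev/zero of="$SCRATCH" bs=1M count=64 oflag=direct 2>&1 \
  || dd if=/dev/zero of="$SCRATCH" bs=1M count=64 2>&1
rm -f "$SCRATCH"
```

A single `dd` stream only exercises sequential bandwidth; random-I/O workloads will behave very differently on the same array.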

When you look at effectiveness, though, how efficiently you can back up, protect and manage the data has always been the traditional value proposition of moving to a shared SAN environment: creating one large pool for allocation and sharing, and boosting resource utilization and management.

Which DAS vendors are positioning their offerings to fit the SMB market? 

Most vendors have done some sort of adjustment with their product line, either creating entirely new product lines or retrofitting their products for the SMB space. Those vendors include Dell Inc., IBM Corp., EMC Corp., Hewlett-Packard (HP) Co., Hitachi Data Systems (HDS) Ltd., LSI Corp., Sun Microsystems Inc., LeftHand and Microsoft Corp. I would say almost every vendor out there is, in some shape or form, catering to SMBs in the DAS space.

Are there any best practices for selecting a DAS system in an SMB environment?

The key one is to keep in mind what you're going to use that storage for. Take a step back and see if it makes sense to have all of your storage in some form of a network (NAS, NFS, CIFS, iSCSI or Fibre Channel) and, if so, why? Is it for boosting utilization and sharing, or is it because someone told you that's the best way to do it?

You can go the other way and look at the business value of having some DAS on a server. Is it for a localized performance boost, a localized scratch workspace or something else? You need to keep that in perspective.

Probably the most important practice to mention here involves virtualized environments. If you're going to use a virtualization technology, for example VMware ESX, you need some sort of shared storage. If you're going to have two physical servers to support things like VMotion, you need storage that's accessible and shared by two or more servers. Historically, that has been perceived as being either iSCSI, Fibre Channel, NFS or CIFS.

With some caveats, you could have an entry-level DAS array that has redundant controllers to support connectivity to two or more hosts. For example, an IBM SAS or Sun SAS storage array that can directly attach to two different servers to support things like VMotion for VMware or live migration from Virtual Iron Software Inc.
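When a dual-controller DAS array is cabled to two servers like this, one way to verify that both hosts actually see the same LUN is to compare disk World Wide Names on each server. A minimal Linux sketch, assuming util-linux's `lsblk` is available:

```shell
#!/bin/sh
# Print each disk's World Wide Name (WWN). Run this on both servers
# attached to the dual-ported DAS array: a LUN that shows the same
# WWN on both hosts is the same physical LUN, which is what shared
# features like VMotion require. Assumes Linux with util-linux.
lsblk --nodeps -o NAME,WWN,SIZE 2>/dev/null \
  || echo "lsblk not available on this system"
```

The device names (sdb, sdc and so on) can differ between the two hosts, which is exactly why the WWN, not the name, is the thing to compare.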

Greg Schulz is founder and senior analyst with the IT infrastructure analyst and consulting firm StorageIO.
