What you will learn: Since locally attached storage can't be shared, direct-attached storage (DAS) architectures have faced issues such as imbalances between storage capacity and demand, a more complex backup and replication process, and the need for additional infrastructure behind each server to support those processes. This led to the emergence of storage area networks (SANs) and network-attached storage (NAS) as replacements for...
But DAS solutions are still hanging on. EMC Corp. has rolled out PCI Express (PCIe)-based solid-state storage products designed to enable its network storage systems to store some data locally on the server, and major companies like Facebook and Google have adopted the architecture for their cloud computing operations -- leading to somewhat of a comeback for DAS. In this expert technical article, the factors leading to the resurgence of DAS offerings are discussed, including the limitations of SAN and NAS architectures, the advantage of using DAS with solid-state storage and the technology's role in cloud computing environments.
SAN and NAS underdeliver
One reason DAS solutions continue to live on is that SAN and NAS have largely underdelivered on their promises. SANs were supposed to make it easy to create a global pool of storage that could be dynamically divvied up among servers so that only the capacity actually needed at the time was assigned to a server. For the first eight years or so of the technology's existence, this capability was largely unavailable, and SAN storage had to be hard-partitioned to individual servers. When a server needed more capacity, a new partition had to be allocated to that server and then concatenated into the existing storage pool on the server or, worse, managed separately. The process of adding storage to a server on a SAN was very similar to the prior DAS methodology.
Data protection was also supposed to get a lot easier. The goal was to back up the SAN directly and not have to back up the individual servers. While a few software applications were able to accomplish that feat, all suffered from blindly backing up data and not understanding what that data was. Users quickly realized they needed "application awareness" to back up active applications and then perform intelligent restores. As a result, some form of backup software was required on the servers.
Finally, the price of SAN or NAS technology is still significantly higher than that of DAS. Many users have decided it's less expensive to attach storage directly, even inefficiently, than to share it efficiently.
To be fair, modern SAN and NAS implementations have addressed the early storage allocation shortcomings with technologies such as thin provisioning. However, the time it took to deliver on the allocation promise allowed DAS to build on its foothold in the data center. But the other challenges remain, for the most part.
The primary driver for SAN/NAS adoption has been the advent of server and desktop virtualization, since the ability to move virtual server images between physical hosts requires shared storage. Virtualization also makes application-aware, off-host backup viable due to the entire server being a file that can be backed up without interacting with the original physical host. But despite this new and important use case for shared storage, DAS continues to live on in the data center. And its value is increasing.
One of the key reasons for the continued popularity of DAS solutions in the data center is the need for a local boot drive. While most SANs support some form of boot-from-SAN capability, it requires specialized host bus adapters (HBAs) and specific support on the SAN storage system. As a result, most physical servers still boot from DAS.
Thanks to solid-state drives (SSDs), booting from the local server offers some specific advantages over booting from the SAN. First, servers can now be booted or rebooted in seconds from a local SSD. The SSD can also serve as a virtual memory paging area, which is incredibly important in virtual environments. As hosts in these environments get loaded up with virtual machines (VMs), they can quickly run out of RAM and begin to use local storage as a memory paging area. If that local storage is hard disk, performance can degrade substantially; when it's memory-based, like flash SSD, the drop in performance is negligible. Using an SSD as a combined boot and paging device allows a host to support more VMs without the purchase of expensive additional RAM.
Extending the SAN with DAS
Solid-state storage also plays another role in the resurgence of DAS adoption: as an extension to the SAN. Leveraging even higher-performing PCIe-based solid-state storage, architectures are now emerging that tier or cache data directly on the server that needs it. PCIe SSDs communicate directly with the CPU and don't get bogged down by the SAS or SATA protocols that typical SSDs must traverse. This again makes for an ideal virtual memory paging area for RAM-constrained systems, but it's the tiering or caching use case that's becoming increasingly interesting.
With this architecture, storage systems can intelligently pre-stage the most active data within the PCIe SSD. Then, when a request for data is made by an application or user, it will be available for high-speed delivery on the PCIe SSD. This means the application or user doesn't have to wait for the request to travel across the storage network, be accepted and processed by the storage controllers, wait for hard drives to rotate into position, and then send the requested data or write acknowledgment all the way back up that infrastructure.
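The read path described above can be sketched as a simple read-through cache: the most active blocks are pre-staged onto the local PCIe SSD tier and served from there, while misses take the slow round trip to the SAN. This is a minimal illustrative model, not a real storage stack; every class, method and variable name here is invented for the sketch.

```python
from collections import OrderedDict

class PCIeCacheSketch:
    """Toy model of server-side tiering: hot blocks are served from a
    local PCIe SSD cache; misses fall back to the (slow) SAN tier.
    All names are illustrative, not a real storage API."""

    def __init__(self, capacity_blocks, san):
        self.capacity = capacity_blocks
        self.san = san                      # backing store: block id -> data
        self.cache = OrderedDict()          # LRU order: oldest first

    def prestage(self, hot_blocks):
        """Pre-copy the most active blocks into the local SSD tier."""
        for blk in hot_blocks:
            self._install(blk, self.san[blk])

    def read(self, blk):
        if blk in self.cache:               # hit: served at PCIe latency
            self.cache.move_to_end(blk)
            return self.cache[blk], "ssd"
        data = self.san[blk]                # miss: full round trip to the SAN
        self._install(blk, data)
        return data, "san"

    def _install(self, blk, data):
        self.cache[blk] = data
        self.cache.move_to_end(blk)
        if len(self.cache) > self.capacity: # evict least recently used
            self.cache.popitem(last=False)
```

In a real implementation the pre-staging decision would be driven by the storage system's own heat maps rather than an explicit list, but the effect is the same: the request never crosses the storage network.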
If successful, this model of storage architecture design would turn the SAN world upside down. Storage on the SAN would become the central repository of information that's growing cold, and the local PCIe SSD DAS would be used for the most active data. The SAN would be used for long-term retention or backup, and the server would be used for active processing. This would lead to SAN storage system designs where capacity is the focus and performance is less important. But the one downside to native PCIe SSDs is that you can't boot from them, so a local SAS hard drive, or even an SSD in a drive form factor, would still be required.
Cloud compute infrastructure
Other key drivers for the revival of DAS solutions are the designs of massive storage environments like those of Facebook, Google and others. These environments combine compute and storage on single servers that are highly networked so each can communicate with the others; each server has locally attached storage plus the ability to access data on other servers, and can even leverage a combination of PCIe SSD and hard disk drive for booting. These online providers and Internet technology companies chose this design because it yields incredibly cost-efficient architectures that scale easily as new servers are added.
This model of DAS converged with compute was thought to be a limited use case -- one that only companies with large online apps would deploy. Now, however, thanks again to server virtualization, there's often a need to build scalable compute and storage infrastructure simultaneously. Vendors like Nutanix offer products that are clusters of servers with internal storage to provide a turnkey cloud compute-type of infrastructure suitable for more traditional data centers.
Server virtualization still needs shared storage to move virtual machine images and provide high availability. These converged architectures automatically copy data to the other nodes in the cluster so that virtual machine images are available to any node in the cluster. This "shared DAS" model provides the simplicity and cost-effectiveness of local storage while providing many of the benefits of a SAN.
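A minimal sketch of this "shared DAS" idea, with invented names and synchronous replication for simplicity: each node commits writes to its own local storage and copies them to peers, so any node in the cluster can serve a VM image even when it didn't originate the write.

```python
class Node:
    """One server in a 'shared DAS' cluster: local disks only,
    but writes are replicated to peers (illustrative sketch)."""

    def __init__(self, name):
        self.name = name
        self.local = {}                 # path -> bytes on this node's DAS

    def write(self, path, data, peers, copies=2):
        """Commit locally, then copy to enough peers that the data
        survives this node failing."""
        self.local[path] = data
        for peer in peers[:copies]:
            peer.local[path] = data

    def read(self, path, peers):
        """Serve locally if present; otherwise fetch from a peer --
        this is what lets a VM image start on any node."""
        if path in self.local:
            return self.local[path]
        for peer in peers:
            if path in peer.local:
                return peer.local[path]
        raise FileNotFoundError(path)
```

Products in this space replicate asynchronously and at block rather than file granularity, but the availability property is the one shown: losing a node doesn't lose access to the data.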
If DAS lives, is SAN dead?
DAS solutions are thriving. Many storage experts believe the data center is moving toward a "DAS mostly" environment, where the SAN becomes the long-term repository while truly active data is stored locally on the server that needs it. The software to manage this movement of data is maturing quickly; it keeps active data local, acknowledges the writing of new data locally and then syncs that data to the capacity SAN in the background.
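That local-acknowledge-then-background-sync write path can be sketched as a simple write-back tier. This is a hypothetical illustration, not any vendor's implementation; all names are invented, and a real product would also handle crash consistency of the unsynced queue.

```python
import queue
import threading

class WriteBackTier:
    """Sketch of the 'DAS mostly' write path: a write is acknowledged
    as soon as it lands on the local SSD tier, and a background thread
    drains it to the capacity SAN."""

    def __init__(self, san_store):
        self.local = {}                 # fast local SSD tier
        self.san = san_store            # capacity SAN tier (a dict here)
        self._pending = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def write(self, key, data):
        self.local[key] = data          # fast local commit...
        self._pending.put((key, data))  # ...synced to the SAN later
        return "ack"                    # caller unblocks immediately

    def flush(self):
        """Block until everything queued so far is on the SAN."""
        self._pending.join()

    def _drain(self):
        while True:
            key, data = self._pending.get()
            self.san[key] = data        # background copy to capacity tier
            self._pending.task_done()
```

The application sees SSD latency on every write, while the SAN still ends up with a complete copy for retention and protection.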
The drivers for a potential shift to this "DAS mostly" model are the performance demands of the virtual environment and the performance capabilities of solid-state storage. The virtual environment needs its data close at hand, and local solid-state storage can deliver that data while avoiding the latency imposed by the storage network.
Still lots of storage options
As always, there are a lot of potential options for a storage administrator when dealing with storage challenges. The first step is to invest in a performance analysis tool that can help fine-tune the current environment. This maximizes the current investment and allows users to make an informed decision when selecting what step to take next.
If the network or storage infrastructure can't be upgraded due to budget or time constraints, a valid approach is to mix SSD-based DAS with SAN storage. Placing SSDs directly in the server removes the storage network bottleneck, so the SSDs can deliver their full performance benefit.
If a refresh is in the budget, an investment could be made in storage network infrastructure and a shared storage system, such as an all-flash array, to eliminate storage performance concerns for the foreseeable future. Even with this approach, using SSD DAS as a boot and paging device can complete the storage performance picture.
BIO: George Crump is president of Storage Switzerland, an IT analyst firm focused on storage and virtualization.
This article originally appeared in Storage magazine.