
Unified storage plays an important role in data storage environments

Find out how unified storage works, the benefits of the technology, possible challenges with a unified storage architecture and the future of unified networking.

What you'll learn in this tip: Unified storage has been around for many years. Instead of several single-purpose disk arrays, unified or multiprotocol storage consolidates resources into a centralized disk array. Learn how unified storage works, the advantages of using a multiprotocol storage array in your environment, possible challenges the technology presents and the future of unified storage systems.

Unified storage was introduced nearly a decade ago and can present data storage to host systems using different protocols -- hence the term multiprotocol storage. One of the earliest popular forms of unified storage combined IP-based network-attached storage (NAS) with Fibre Channel (FC) storage-area network (SAN) storage.

How does unified storage work?

A multiprotocol storage array is essentially a centralized disk array made available to host systems via an IP-based network for file-level access and a SAN for block-level access using the Fibre Channel protocol. iSCSI is also a very common IP block-level storage protocol. The disk storage resources are pooled and offered via one of the protocols. Disk arrays are also equipped with multiport storage controllers and a management interface that allows storage administrators to create disk pools or volumes and assign them to the appropriate port for access. Common protocol combinations include NAS and FC, or iSCSI and FC. It's possible to combine all three protocols, but because iSCSI and FC are both block-level protocols, many data storage administrators typically choose one or the other combined with file-level access (NAS).
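The pooling-and-assignment workflow described above can be sketched in a few lines of Python. This is a conceptual illustration only -- the class, method names and capacity figures are hypothetical, not any vendor's management API:

```python
# Illustrative sketch: a unified array pools raw disk capacity and exposes
# volumes over whichever protocol port each volume is assigned to.
class UnifiedArray:
    def __init__(self, raw_capacity_gb):
        self.free_gb = raw_capacity_gb
        self.volumes = {}  # volume name -> (size_gb, protocol)

    def create_volume(self, name, size_gb, protocol):
        # The same free pool backs every protocol -- the key idea.
        if protocol not in ("NAS", "FC", "iSCSI"):
            raise ValueError("unsupported protocol")
        if size_gb > self.free_gb:
            raise ValueError("not enough free capacity in the pool")
        self.free_gb -= size_gb
        self.volumes[name] = (size_gb, protocol)

array = UnifiedArray(raw_capacity_gb=1000)
array.create_volume("home_shares", 400, protocol="NAS")  # file-level access
array.create_volume("db_lun", 300, protocol="FC")        # block-level access
print(array.free_gb)  # 300 -- remaining capacity usable by either mode
```

The point of the sketch is that `free_gb` is a single number: capacity not yet assigned to a file-level volume remains available for block-level use, and vice versa.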

Advantages of unified storage

Centralized storage arrays initially offered the ability to pool storage resources that were previously directly attached to host systems (DAS) and quite often inefficiently used. But single-protocol storage arrays themselves also proved to be inefficient at times; in environments where both NAS and SAN arrays coexisted, future growth capacity had to be allocated to each array and couldn't be pooled because each data storage array offered a single access mode. Multiprotocol arrays enable the pooling of all disk resources in one array, including capacity for future growth, which can be allocated where needed.
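A small arithmetic example makes the stranded-capacity argument concrete. The figures below are hypothetical, chosen only to show the mechanism:

```python
# Two single-protocol arrays must each carry their own growth headroom,
# sized for that array's worst case:
nas_used, nas_headroom = 600, 200
san_used, san_headroom = 900, 200
separate_total = nas_used + nas_headroom + san_used + san_headroom  # 1900 GB

# A unified array lets both workloads draw on one shared headroom,
# sized for their combined growth rather than two worst cases:
shared_headroom = 250
unified_total = nas_used + san_used + shared_headroom  # 1750 GB

print(separate_total - unified_total)  # 150 -- GB of capacity freed up
```

The exact saving depends on how correlated the growth of the two workloads is; the sketch simply assumes their peaks don't coincide.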

Another advantage of unified storage is the ability to implement block-level replication at the array level to create copies of all data on a single array regardless of access protocol. In other words, data accessed at the file level and via iSCSI or FC can be replicated locally (snapshots) or to another storage system (mirror) using a single mechanism.
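Because replication happens below the protocol layer, a single copy-on-write snapshot mechanism covers every volume. A minimal sketch of that mechanism, using a Python dict to stand in for disk blocks:

```python
# Minimal copy-on-write snapshot sketch. Real arrays operate on disk
# blocks, not Python dicts, but the mechanism is comparable: before a
# block is overwritten, its old contents are preserved for any open
# snapshot, regardless of which protocol the write arrived on.
class Volume:
    def __init__(self):
        self.blocks = {}     # block number -> data
        self.snapshots = []

    def write(self, block, data):
        for snap in self.snapshots:
            if block not in snap and block in self.blocks:
                snap[block] = self.blocks[block]  # preserve pre-change data
        self.blocks[block] = data

    def snapshot(self):
        snap = {}  # holds only blocks changed after this point in time
        self.snapshots.append(snap)
        return snap

    def read_snapshot(self, snap, block):
        # Unchanged blocks are read through to the live volume.
        return snap.get(block, self.blocks.get(block))

vol = Volume()
vol.write(0, "v1")
snap = vol.snapshot()
vol.write(0, "v2")
print(vol.read_snapshot(snap, 0))  # v1 -- the point-in-time view
```

Note the snapshot consumes space only for blocks that change after it was taken, which is why array-level snapshots are far cheaper than full copies.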

Possible challenges of a unified storage architecture

There are some possible challenges that shouldn't be overlooked when considering a unified storage architecture; these challenges have to do with performance. IP-based storage leverages the TCP/IP protocol, which can create a significant amount of processing overhead and affect overall host system performance. While today's processors offer a lot more "horsepower," server virtualization can further increase the demand for CPU cycles when multiple virtual machines on a single physical host are configured for NAS access.

iSCSI can offset this overhead as long as it's implemented using a host bus adapter (HBA) that has an onboard chip for processing, thus relieving the CPU from the added load. Although it's possible to implement software-based iSCSI using a regular network interface card (NIC), the processing overhead is again handled by the server CPU, which can affect system performance as mentioned earlier.

The data backup strategy must also be carefully planned to avoid performance issues. When using traditional host-based backup software over the network concurrently with iSCSI or NAS storage, network traffic can end up doubled to accommodate array-to-host and host-to-backup server traffic, which can have a negative impact on CPU performance due to TCP/IP overhead. This approach usually works best when backup and I/O traffic are kept on separate networks. It's also advisable to consider snapshot-based backup technology to limit network traffic and added host-based processing.
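The traffic doubling described above is easy to quantify. With hypothetical figures for a single backup window:

```python
# Host-based backup over the same network as the storage I/O moves the
# data twice: once from the array to the host, once from the host to the
# backup server. Figures are illustrative only.
backup_set_gb = 500
array_to_host = backup_set_gb    # host reads the data from NAS/iSCSI array
host_to_backup = backup_set_gb   # host forwards the same data to backup
print(array_to_host + host_to_backup)  # 1000 -- GB crossing the network
```

Keeping backup traffic on a separate network halves the load on the storage network, and snapshot-based backup can avoid moving the full data set through the host at all.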

Unified storage management

Another benefit is the ability to execute storage administration tasks such as disk resource pooling, volume and RAID group creation and allocation, and data replication, all from a single management interface. In single-vendor environments, storage administrators can manage more than one multiprotocol array from the same management interface, and a number of vendors now offer the ability to manage dissimilar or heterogeneous storage arrays from a single interface. This has been a significant improvement over the not-so-distant days when each storage platform required a separate management software package.

The next level of unified storage is achieved using virtualization. The technology provides the ability to pool FC SAN storage from multivendor arrays and make it available to host systems via protocols such as CIFS or NFS (NAS) for file-level access and FC or iSCSI for block-level access. LUNs are created on the different FC storage arrays and allocated to a multiprotocol storage controller via the SAN fabric as if it were a regular host system. The storage controller acts as the virtualization engine or abstraction layer between the storage arrays and the servers, which can access storage based on the protocol for which they're configured. In other words, the "virtualization" controller takes care of the protocol conversion. This provides the ability to better utilize storage resources and to create storage tiers across which data can be migrated seamlessly. There's also the added benefit of using local or remote replication between "back-end" arrays to provide data protection using a single software package implemented at the controller level.
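The mapping the virtualization controller maintains can be sketched as a simple table: back-end LUNs pooled from different vendors' FC arrays on one side, host-visible exports with their front-end protocol on the other. The names and structure below are hypothetical, intended only to show the abstraction:

```python
# Back-end: FC LUNs allocated to the virtualization controller over the
# SAN fabric, as if the controller were a regular host system.
backend_luns = {                 # LUN id -> (source array, size_gb)
    "lun0": ("vendor_a_fc", 500),
    "lun1": ("vendor_b_fc", 500),
}

# Front-end: what hosts actually see. The controller performs the
# protocol conversion between these exports and the back-end LUNs.
exports = {}                     # export name -> (backing LUN, protocol)

def export(name, lun, protocol):
    assert lun in backend_luns
    assert protocol in ("NFS", "CIFS", "FC", "iSCSI")
    exports[name] = (lun, protocol)

export("shared_files", "lun0", "NFS")    # file-level front end
export("vm_datastore", "lun1", "iSCSI")  # block-level front end
print(exports["shared_files"])  # ('lun0', 'NFS')
```

Because hosts only ever see the exports, data can be migrated between back-end arrays (for example, across storage tiers) without the hosts noticing, which is the seamless migration the paragraph above describes.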

Latest unified storage development: Unified networking

The latest development in unified storage is taking place at the network level. Fibre Channel over Ethernet (FCoE) is the latest technology in that arena, and it's often referred to as unified networking. In contrast with an array that can present storage to hosts using different protocols (and different adapters), FCoE enables host systems to share a single 10 Gb Ethernet (10 GbE) link to send both IP traffic and block-level I/O.

This is made possible via converged network adapters (CNAs) that contain both Fibre Channel HBA and Ethernet NIC functionality on the same adapter card, combined with the encapsulation of the FC protocol in Ethernet frames. To some degree, this can be compared to iSCSI, which is also a block-level storage protocol traveling over an IP network, but FCoE enables the integration of high-bandwidth FC storage and IP traffic over a single link. IP traffic and storage I/O are later separated at a switch and redirected respectively to a LAN and a SAN. One of the benefits of FCoE is the reduction of cabling in the data center while taking advantage of 10 Gb Ethernet and, soon, 40 GbE.
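Conceptually, the converged switch separates the two traffic types by looking at the Ethernet frame's EtherType field: FCoE frames carry EtherType 0x8906, while IPv4 traffic uses 0x0800. The sketch below models only that dispatch decision; the frame layout is simplified and not a wire-accurate FCoE implementation:

```python
# Simplified model of converged-link traffic separation by EtherType.
FCOE_ETHERTYPE = 0x8906  # the EtherType assigned to FCoE
IPV4_ETHERTYPE = 0x0800  # the EtherType for IPv4 traffic

def ethernet_frame(ethertype, payload):
    return {"ethertype": ethertype, "payload": payload}

def switch_port(frame):
    # The converged switch forwards FCoE frames toward the SAN and
    # ordinary IP frames toward the LAN.
    return "SAN" if frame["ethertype"] == FCOE_ETHERTYPE else "LAN"

print(switch_port(ethernet_frame(FCOE_ETHERTYPE, b"fc frame")))   # SAN
print(switch_port(ethernet_frame(IPV4_ETHERTYPE, b"ip packet")))  # LAN
```

On the wire, the encapsulated payload is a complete FC frame, which is why the SAN side can treat the de-encapsulated traffic as native Fibre Channel.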

There are many debates over whether there's a need for two block-level storage protocols or if one will eventually prevail, but that's another discussion. One thing we can say for sure is that unified storage, along with unified networking, has an important role to play in emerging computing styles such as the much talked about "cloud computing."

BIO: Pierre Dorion is the data center practice director and a senior consultant with Long View Systems Inc. in Phoenix, Ariz., specializing in the areas of business continuity and DR planning services and corporate data protection.
