How your SAN will evolve



Storage technologists and industry analysts predict how the SAN will evolve and what you need to do to prepare for the changes.

In five years, the enterprise SAN might be a service running in the cloud. Or a huge collection of DAS, like a giant mainframe DASD farm of old. It might be cableless, contained in a pre-wired cabinet or wireless. Object-based storage could make blocks and files irrelevant. The SAN might even be diskless if solid-state disk (SSD) economies of scale and adoption accelerate along a hockey stick curve. Whatever form it takes, the SAN of the future will be more consolidated, virtualized, automated and greener.

Or, as many predict, the changes will be evolutionary and not revolutionary; in five years, the SAN will be a lot like the enterprise SAN of today: just faster, packed with more disk capacity, cheaper on a cost/GB basis, a little easier to manage and less energy hungry.

Storage recently asked storage vendors, industry analysts and technologists serving on storage industry associations about where they see the SAN heading. There may not be sweeping architectural changes in five years, but there will be changes in the basic building blocks of the SAN infrastructure: networks and protocols; switches; storage arrays, disks and controllers; and SAN management.


Networks and protocols
Today, only about half of the storage deployed is networked, says Jackie Ross, VP, business development at Cisco Systems Inc. In five years, the amount of networked storage will increase to 70%, she suggests.

Among networked storage, Fibre Channel (FC) is the dominant storage networking protocol in the enterprise data center with more than an 80% market share, according to Skip Jones, chairman of the Fibre Channel Industry Association. Roger Cox, a research VP at Stamford, CT-based Gartner Inc., projects a 66% share for FC by 2012.

By then, 8Gb/sec FC will be heading toward 16Gb/sec, while 10Gb/sec Ethernet will be aiming for 40Gb/sec or even 100Gb/sec, in keeping with Ethernet's tradition of full order-of-magnitude increases. At that point, FC will risk being left behind in terms of sheer network performance.

But before then, the game will shift. "We see the industry moving to a unified fabric," says Ross. That means combining FC and iSCSI on Ethernet. "The construct for FC storage won't change. You manage the SAN, provision LUNs and do masking the same way," she explains. What will change is the number of components the organization needs. There will be only one type of switch and one type of adapter. "Cabling, which represents 25% to 30% of the data center cost, is reduced, too," says Ross.
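To make the component math concrete, here is a minimal sketch of how adapter and cable counts shrink when FC and iSCSI share one Ethernet fabric. The server count and per-server port counts are illustrative assumptions, not figures from Cisco or the article.

```python
# Illustrative only: assumed server count and per-server adapter counts.
servers = 200

# Separate fabrics today: dedicated FC HBAs plus Ethernet NICs per server.
fc_hbas_per_server = 2        # redundant FC ports
eth_nics_per_server = 2       # redundant Ethernet ports
adapters_separate = servers * (fc_hbas_per_server + eth_nics_per_server)
cables_separate = adapters_separate   # one cable per port

# Unified fabric: two converged network adapters (CNAs) carry both protocols.
cnas_per_server = 2
adapters_unified = servers * cnas_per_server
cables_unified = adapters_unified

print(f"Separate fabrics: {adapters_separate} adapters, {cables_separate} cables")
print(f"Unified fabric:   {adapters_unified} adapters, {cables_unified} cables")
print(f"Reduction: {1 - adapters_unified / adapters_separate:.0%}")
```

Under these assumptions, the unified fabric halves the adapter and cable count, which is where the cabling savings Ross describes would come from.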

"In five years, the network infrastructure will have to be a unified platform that speaks multiple protocols," says Jason Schaffer, director of storage product management at Sun Microsystems Inc. "It will spit out whatever protocol the server or storage dictates." Rather than one protocol, there may be four, five or more.

But Cox warns that "there are a number of issues that will keep convergence from happening." The biggest ones are organizational. "You have issues between networking and storage people that aren't easy to resolve," he says.

On the technical side, FC over Ethernet (FCoE), for example, isn't a slam dunk. For 10Gb Ethernet to provide the basis of FCoE, "you need a special form of 10Gb Ethernet," says Cox. Called enhanced Ethernet, it will address such things as flow control, which is necessary to deliver the lossless networking that makes FC storage so popular. "The standards aren't yet in place," says Cox, who doubts they'll be ready for widespread deployment in five years, noting that "FCoE will achieve about 2% market penetration by 2012."

As for InfiniBand, don't count it out entirely. "Maybe we'll see InfiniBand as an alternative for the converged network," says Greg Schulz, senior analyst at StorageIO Group, Stillwater, MN, and author of The Green and Virtual Data Center (Auerbach).


Switches
Switches will be more flexible and intelligent. "By then, plumbing will be less important than intelligence," says Jon Toigo, CEO at Toigo Partners International, Dunedin, FL.

Cisco expects switches to be capable of providing networking services, such as firewalls, load balancing and other QoS functionality. The switch will also play a central role in network management automation. "To get [end-to-end] automation, you'll need intelligence at multiple places: in the converged network, adapters, HBAs, array controllers," says Ross (see "Where to put storage intelligence," PDF below).


No protocols will go away anytime soon. Instead, switches will handle multiple protocols, including FC, FCoE, Ethernet, enhanced or converged Ethernet, iSCSI and possibly InfiniBand. By 2013, multiprotocol SAN switches should be commonplace, although the particular combination of protocols may vary. Switches will also be bigger, encompassing hundreds of ports and enabling thousands of ports on the network. Intelligence will reside in the core switches, and edge switches will connect to the core.

Storage arrays, disks and controllers
Storage arrays will continue riding Moore's Law. 10Gb/sec Ethernet and 8Gb/sec FC will be standard interfaces for enterprise arrays. "You'll see the expected increases in performance and capacity from all of the major vendors," says Kyle Fitze, director of marketing in the SAN Division of Hewlett-Packard (HP) Co.'s StorageWorks group.

Storage arrays will continue to consist primarily of hard disk drives (HDDs) in 2013, although the size and form factor may vary. "In five years, most of the storage will be ultra-high-density arrays packing large numbers of drives into small footprints," says Schulz. These arrays will become the norm, not just something for firms facing energy or space constraints.

One technology that's not likely to replace HDDs in the array is the SSD, or flash drive. Vendors currently incorporate SSDs in arrays and will continue to do so, but SSDs will be reserved for critical applications requiring very high IOPS. HP distinguished technologist Jieming Zhu says two main issues deter rapid adoption of SSD: price and the drives' inherent wear-out factor. Zhu adds that work needs to be done on software that prolongs the life of SSDs and better integrates them with RAID and database applications. "It's a work in progress," he says. (See the related Trends story "Much of solid state still on the drawing board.")
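One common life-extending technique is wear leveling. The sketch below is a simplified, hypothetical policy, not any vendor's controller design: new writes always go to the flash block with the fewest erase cycles so wear spreads evenly across the device.

```python
# Toy wear-leveling sketch: purely illustrative, not a real SSD controller.
# Each flash block tracks how many times it has been erased; writes go to
# the least-worn block so no single block wears out early.

class FlashBlock:
    def __init__(self, block_id):
        self.block_id = block_id
        self.erase_count = 0

    def erase_and_write(self, data):
        self.erase_count += 1
        self.data = data

def pick_block(blocks):
    """Choose the block with the lowest erase count (simplest wear leveling)."""
    return min(blocks, key=lambda b: b.erase_count)

blocks = [FlashBlock(i) for i in range(8)]
for write_no in range(100):
    pick_block(blocks).erase_and_write(f"payload-{write_no}")

print("Erase counts per block:", [b.erase_count for b in blocks])  # roughly equal
```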

HDD capacity will keep growing, with price/performance improving approximately 40% a year. Today's low-cost 1.5TB SATA drives will be surpassed by even larger disk drives of 4TB or more. For organizations needing more performance than 15K rpm drives deliver, "there's no reason why there can't be 20K or even 22K drives," says Ed Grochowski, conference committee chairman of the International Disk Drive Equipment and Materials Association (IDEMA).
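As a back-of-the-envelope check on that trajectory, the sketch below compounds the roughly 40% annual improvement cited above, applied to cost per gigabyte; the $0.10/GB starting point is an assumed round number, not a quoted price.

```python
# Illustrative compounding of a ~40% annual price/performance gain,
# modeled as cost per GB falling to 1/1.4 of the prior year's value.
# The $0.10/GB starting point is an assumed figure, not a quoted price.
cost_per_gb = 0.10
annual_improvement = 0.40

for year in range(1, 6):
    cost_per_gb /= (1 + annual_improvement)
    print(f"Year {year}: ~${cost_per_gb:.3f}/GB")

# After five years, cost per GB is roughly 1/(1.4**5), or about 19% of today's,
# which is why 4TB-class drives can land in the price band 1.5TB drives occupy now.
```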

What you're more likely to see are drives supporting 4K (4,096-byte) sectors for more efficient error correction. The 4K sector is a completed IDEMA standard, and compatibility testing will begin in 2009. By 2013, the 4K sector will be in all new SATA drives (SCSI drives aren't affected by the change) and possibly adopted by the SSD industry.
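The appeal of the 4K sector is largely error-correction overhead: one large sector needs proportionally fewer ECC bytes than eight small ones. The sketch below uses approximate per-sector ECC figures often cited in Advanced Format discussions (roughly 40 ECC bytes per 512-byte sector versus roughly 100 per 4K sector); treat them as illustrative assumptions rather than exact values from the standard, and note that gap and sync overhead is ignored.

```python
# Rough format-efficiency comparison for 512-byte vs. 4K sectors.
# ECC byte counts are approximate, illustrative figures, not spec values.
def format_efficiency(sector_bytes, ecc_bytes, sectors):
    user_data = sector_bytes * sectors
    overhead = ecc_bytes * sectors
    return user_data / (user_data + overhead)

# Storing 4,096 bytes of user data as eight legacy sectors or one 4K sector:
legacy = format_efficiency(512, 40, 8)      # eight 512-byte sectors
advanced = format_efficiency(4096, 100, 1)  # one 4K sector

print(f"Legacy 512-byte sectors: {legacy:.1%} of data+ECC bytes are user data")
print(f"Single 4K sector:        {advanced:.1%} of data+ECC bytes are user data")
```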

You should also expect to see more file-oriented, NAS-like storage in the data center. "This will simplify provisioning; it's not nearly as complex to manage as block-based storage," says StorageIO Group's Schulz. He expects file-oriented storage to be widely accepted even for database applications.

At about the same time, the data center will begin to see the early implementations of object-based storage, notes Schulz. Object-based storage contains richer meta data than block storage. "It becomes a question of which is the better level of abstraction: the richness of the object-based system or the efficiency of block storage," says Rick Gillett, VP of data systems architecture at F5 Networks Inc. (see "The benefits of object storage," below). By relying on in-depth meta data, object-based systems will know more about the data and enable intelligence in the storage system to better manage the data.


The benefits of object storage
SAN storage in five years will be increasingly object based. Object-based storage resembles file-based storage except it makes greater use of meta data. But object-based storage isn't a total win-win proposition. It trades the efficiency and performance of block-based storage for easier management and more automation.

Object meta data will let you manage the storage more effectively and apply policies based on the data content, regulatory requirements, ownership of the data and so on. The meta data can also be used to dynamically store data at the most appropriate service levels.
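To make the idea concrete, here is a small, hypothetical sketch of an object whose meta data drives a placement policy. The field names and tiering rules are invented for illustration and aren't tied to any particular object store.

```python
# Hypothetical example: meta data attached to a storage object drives
# a simple policy decision. Field names and rules are illustrative only.
from datetime import date

storage_object = {
    "object_id": "invoice-2008-11-0042",
    "content_type": "application/pdf",
    "owner": "accounts-payable",
    "regulation": "SOX",            # regulatory requirement tag
    "created": date(2008, 11, 14),
    "last_accessed": date(2008, 12, 1),
}

def placement_policy(obj, today=date(2008, 12, 15)):
    """Return a service level based on the object's meta data."""
    age_days = (today - obj["last_accessed"]).days
    if obj.get("regulation") == "SOX":
        return "retain-7-years, replicated tier"
    if age_days > 90:
        return "archive tier"
    return "primary tier"

print(placement_policy(storage_object))   # -> retain-7-years, replicated tier
```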

As data volumes surge, storage systems will need more intelligence. Where that intelligence should reside is an open question. "The SAN is taking over much of the intelligence that used to be in the server," says ReiJane Huai, chairman and CEO at FalconStor Software. SAN-based intelligence already provides services like snapshots and replication.

By 2013, storage controllers will have sufficient processing power to run, for example, database apps. "Just think about running Oracle on a controller right next to the storage array. Just imagine what that could do for database performance," notes Huai.

Storage management
Storage management will get harder before it starts to get easier. Storage virtualization embedded into the SAN can simplify some aspects of storage management while server virtualization complicates it.

Server virtualization will continue to complicate storage management. "This is a new dimension for storage management," says Joseph Zhou, senior analyst, storage research at Ideas International Inc., Rye Brook, NY. Virtualization requires dynamic reprovisioning to accommodate changes to virtual servers. In five years, dynamic reprovisioning should be supported for leading hypervisors (see "Future directions: Server virtualization," below).

Future directions: Server virtualization

Today, the basic challenges for storage posed by server virtualization are being resolved. VMware Consolidated Backup (VCB) acts as a backup proxy. And VMware is providing APIs to improve the integration of third-party backup tools with VMware. "In the future, administrators will be able to just click a box in the backup tool for the kind of backup and restore they want," says Jon Bock, VMware's senior product marketing manager. Administrators will have a choice of VM-level or file-level restores from a single backup pass.

It's difficult to provision storage for moveable virtual machines (VMs) today. VMware is refining the VMware Virtual Machine File System (VMFS) to abstract details of the underlying physical storage and limit the number of times storage administrators have to reprovision storage for VMs. For the future, "a planned set of APIs will allow storage and management vendors to better see how the VM uses storage allocated to it," says Bock. This will enable storage administrators, for example, to see and resolve LUN bottlenecks resulting from unexpectedly heavy VM activity.

VMware also recently announced the Virtual Datacenter Operating System (VDCOS) initiative, which has implications for storage. "It will provide interfaces to storage technology that will allow a range of storage activities," says Bock. These include thin-storage provisioning and deduplication for VMs (identifying commonalities in VMs).
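A minimal sketch of the deduplication idea Bock mentions, finding common blocks across VM images, might hash fixed-size chunks and store each unique chunk only once. The chunk size and hashing scheme below are assumptions for illustration, not VMware's design.

```python
# Illustrative block-level deduplication across VM disk images.
# Chunk size and use of SHA-256 are assumptions, not any product's design.
import hashlib

CHUNK_SIZE = 4096

def dedupe(images):
    """Store each unique chunk once; return (chunk store, per-image references)."""
    store = {}          # digest -> chunk bytes
    references = []     # one list of digests per image
    for image in images:
        refs = []
        for offset in range(0, len(image), CHUNK_SIZE):
            chunk = image[offset:offset + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)
            refs.append(digest)
        references.append(refs)
    return store, references

# Two "VM images" that share most of their content (e.g., the same guest OS).
base_os = b"A" * 16384
vm1 = base_os + b"app-one-data" * 100
vm2 = base_os + b"app-two-data" * 100

store, refs = dedupe([vm1, vm2])
raw = len(vm1) + len(vm2)
stored = sum(len(c) for c in store.values())
print(f"Raw: {raw} bytes, after dedupe: {stored} bytes")
```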

Convergence of protocols over a unified fabric promises simplified management. "You will be able to manage across FC and iSCSI," says Mike Karp, senior analyst at Enterprise Management Associates, Boulder, CO. Unresolved is who will manage the FCoE network: network admins or storage admins.

"Intelligent storage is the management solution," insists Steve Luning, VP, office of the CTO at Dell. Storage intelligence could reside in the app, server, data, array, off in the cloud or some middle layer. "Maybe the hypervisor handles the management," suggests Luning.

But some storage management tasks aren't practical to automate. "You can automate the most common tasks, like backup, but these aren't what cause problems," says Schulz. Increased complexity, and products that comply with standards at a high level but break them deeper down, will continue to make storage difficult to manage.

"Where vendors provide management tools, they're all stovepiped. Cisco or EMC can add management capabilities, but most often they only work in their environments. As soon as you go beyond the vendor, you lose the management benefits," notes StorageIO Group's Schulz, adding that "this is unlikely to change."

What's needed is a common storage management platform that's transparent from top to bottom. The Storage Management Initiative Specification (SMI-S) doesn't do the trick, according to Toigo at Toigo Partners International. Instead, he envisions the SAN as a set of managed Web services.

Storage skills
"Storage managers will have to get comfortable with server virtualization and moveable workloads," says Dell's Luning. "They'll also need to know about the data, data classification, and better understand each app's storage and performance requirements."

The skills storage admins have today--setting up RAID, provisioning LUNs, zoning and masking--will be relegated to a few specialists or automation.

"The low-level skills will get folded into automation," says Sun's Schaffer. "The storage administrator's expertise will lie in knowing what the data needs and what the requirements are."

For example, a storage admin setting up storage for Microsoft Exchange "will need to know not only the number of mailboxes and their size, but the performance needs and protection requirements, the RPO and RTO," says HP's Fitze. Ideally, the admin can specify this at a high level and automation will set it up correctly.
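In practice, that high-level specification might look something like the hypothetical request below. Every field name, value and the provision_storage stub are invented purely to illustrate declaring requirements and letting automation translate them into capacity, tiers and replication settings.

```python
# Hypothetical, declarative storage request for an Exchange deployment.
# All field names and the provisioning stub are illustrative inventions.
exchange_request = {
    "application": "Microsoft Exchange",
    "mailboxes": 5000,
    "mailbox_quota_gb": 2,
    "peak_iops": 10000,
    "protection": {
        "rpo_minutes": 15,   # max acceptable data loss
        "rto_minutes": 60,   # max acceptable downtime
    },
}

def provision_storage(request):
    """Sketch of automation turning requirements into a storage layout."""
    capacity_gb = request["mailboxes"] * request["mailbox_quota_gb"]
    return {
        "usable_capacity_gb": capacity_gb,
        "tier": "15K rpm FC" if request["peak_iops"] > 5000 else "SATA",
        "replication": "synchronous"
                       if request["protection"]["rpo_minutes"] < 30
                       else "asynchronous",
    }

print(provision_storage(exchange_request))
```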

In addition, storage admins may have to rethink their approach to RAID for extremely large (1TB-plus) disk drives due to impossibly slow rebuild times.

"Extremely large drives raise questions about RAID. Administrators may have to do RAID across files or objects so they would have to rebuild only a small part of a disk," says Enterprise Management Associates' Karp.


No cloud in the forecast
What the enterprise SAN won't look like in five years is a SAN in the cloud, although some storage operations may use the cloud. Similarly, the SAN is unlikely to exist as a set of Web services despite the widespread acceptance of Web services. A wireless SAN could eliminate cabling hassles and expenses, but the volume of data and security concerns make this unlikely. Large DAS farms are a possibility for special situations, but they're unlikely to replace the enterprise SAN despite the simplicity of DAS.

The SAN in five years may look surprisingly similar to the enterprise SAN of today. Protocol convergence, unified fabrics and server virtualization will simplify the SAN in some respects and complicate it in others. Storage administrators will need new skills--a better understanding of virtualization, data and apps--while keeping their traditional storage skills sharp. It's not that SAN technology isn't advancing fast. Rather, organizations deploying enterprise SANs adopt change at a more measured pace.


SAN trends, 2013

Disk drives (likelihood: 90%): Hard disk drives will remain the dominant storage in 2.5-inch and 3.5-inch form factors; 4K sectors will emerge for enhanced error correction; expect capacities to reach 4TB, but 15K rpm will remain the top choice for performance.

Management (likelihood: 60%): With the widespread adoption of VMware and other hypervisors, APIs from management tool vendors will simplify the backup of virtual servers and enable dynamic provisioning of mobile virtual machines. Intelligence embedded in the SAN and switch will enable more automated, policy-based data management. Object-based storage with rich meta data will allow more intelligence-driven data automation.

Storage arrays (likelihood: 85%): Ultra-high-density storage arrays will pack more storage into a smaller, greener footprint; arrays will have multiple interfaces (IP, enhanced IP, Fibre Channel over Ethernet, Fibre Channel) to connect with converged fabrics; some solid-state disk will be incorporated for high-IOPS data.

Switches (likelihood: 70%): Multiprotocol switches will be common, and switches will have greater intelligence, which will be used for management.

This was first published in December 2008
