An iSCSI storage area network (SAN) can be a flexible and low-cost data storage solution, which is why more and more small- to midsized businesses (SMBs) are incorporating iSCSI SANs into their data storage environments. In this Q&A, Greg Schulz, founder and senior analyst at StorageIO, answers questions about the advantages and disadvantages of iSCSI SANs, multiprotocol systems and virtualization. His answers are also available as an MP3 below.
Table of contents:
>> What is the current state of iSCSI in the SMB market today?
>> What are the most important developments in iSCSI SANs this year?
>> What are the advantages of using iSCSI vs. NAS for VMware or other virtualization?
>> Can you explain the function of a multiprotocol system?
>> What are the drawbacks of a multiprotocol system?
>> What is Converged Enhanced Ethernet (CEE), and when do you think users are going to start seeing CEE products on the market?
>> What are some other options for smaller SMBs that want to run virtualization but can't afford a SAN?
>> What other tips can you offer about iSCSI SAN?
What is the current state of iSCSI in the SMB market today?

iSCSI and SMBs go hand in hand. They're a perfect fit for each other because of the pricing, the size of the opportunities and the types of environments. In general, iSCSI can be an enabler of what is often enterprise-type functionality, but at a price suited to the budget and capabilities of the SMB market.
What are the most important developments in iSCSI SANs this year?

I think, all things aside, iSCSI has delivered what was previously expected of it. Unfortunately for iSCSI, the bar was set very high many years ago; what's happening now is that those expectations are being delivered on and we're seeing much broader adoption. In general, I'm seeing a couple of key things: iSCSI solutions scaling up, and iSCSI solutions scaling down. Normally when you think of iSCSI you think of a large EqualLogic or LeftHand-based iSCSI cluster, but the reality is you're also seeing products from IBM Corp., Hewlett-Packard (HP) Co. and Dell Inc. that are much smaller and modular, but also low cost, so even smaller environments can get into iSCSI without a lot of cost or complexity.
What are the advantages of using iSCSI vs. NAS for VMware or other virtualization?

That's a contentious topic. There are those who feel that storage for virtualization -- whether it's VMware, Microsoft Hyper-V or Citrix XenServer -- has to be block-based, and some are diehard that it has to be NAS. Some are diehard that it's got to be Fibre Channel (FC), some say iSCSI and some say SAS. What it really comes down to is your particular environment. Not too long ago you had to make a decision: Are you going with iSCSI block or are you going with NAS? Now, feature and function parity has made it more a matter of preference. What do you like from the particular vendor you are dealing with?
Having said that, certain vendors only offer certain technologies, so often you're only going to have one choice: either iSCSI or NAS. For example, if you go with HP/LeftHand or Dell/EqualLogic, your only option is iSCSI. If you go with a product whose only capability is NFS, you're going to have to go with NAS.
Can you explain the function of a multiprotocol system?

Multiprotocol systems are all about choice and flexibility. Think of it this way: if you go out and buy a technology -- for example, a refrigerator, another appliance for your house, or a technology within your business -- typically you don't start by choosing the plug, the adaptors and the connectors. You look at what its function is. In other words, what are its capabilities? What are the features? How does it scale? How does it perform? How energy efficient is it? What are its capabilities for virtualization? How easy is it to manage? Only once you've concluded which vendor you're going to buy from do you ask how it interfaces. Unfortunately, some vendors in the past allowed you only one choice, but with the increased availability of multiprotocol systems, you now have a choice of interface -- NAS, iSCSI, FC and perhaps SAS -- all out of one particular solution, so that you can align the right technology to the task at hand.
What are the drawbacks of a multiprotocol system?

There are a few drawbacks to a multiprotocol system. For example, say you're working with an application that only supports, or strongly prefers, block-based access. This could be something like Microsoft Exchange, which has historically had a strong preference, if not an outright requirement, for block-based access. So you're going to have to go with block, whether it's Fibre Channel block, SAS block or iSCSI block. Likewise, some databases have been tied to block. At the same time, some databases, such as Oracle, can run on NAS in certain configurations with certain vendors' products. So it comes back to the particular application. You need to ask yourself: Does the application require or prefer a specific technology?
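As a toy illustration of that question, the mapping from an application's access requirement to candidate protocols might be sketched like this. The table entries are simplified assumptions for illustration, not an authoritative compatibility matrix; real support varies by application version and vendor configuration:

```python
# Toy sketch: choose candidate storage protocols from an application's
# (assumed) access requirement. The table is a simplified illustration,
# not an authoritative compatibility matrix.

APP_ACCESS = {
    "Microsoft Exchange": "block",       # historically block-based
    "Oracle Database": "block-or-file",  # can run on NAS in some configurations
    "Home directories": "file",
}

def candidate_protocols(app: str) -> list[str]:
    """Return protocols that satisfy the app's assumed access requirement."""
    access = APP_ACCESS[app]
    block = ["FC", "iSCSI", "SAS"]
    file_based = ["NAS (NFS/CIFS)"]
    if access == "block":
        return block
    if access == "file":
        return file_based
    return block + file_based  # either access model works

print(candidate_protocols("Microsoft Exchange"))  # ['FC', 'iSCSI', 'SAS']
```

The point of the sketch is simply that the application's requirement, not the array's connectivity options, drives the choice.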
What is Converged Enhanced Ethernet (CEE), and when do you think users are going to start seeing CEE products on the market?

Converged Enhanced Ethernet, also known as Data Center Ethernet, is effectively a premium Ethernet that enables Fibre Channel over Ethernet (FCoE): Fibre Channel protocols, topologies and interfaces grafted natively onto Ethernet. The value proposition is being able to get down to a single unified cable and adaptor, or NIC, while traditional IP-type traffic coexists alongside traditional Fibre Channel block-type traffic. Unlike iSCSI, where you're mapping the SCSI protocol on top of TCP/IP on top of Ethernet, FCoE takes the TCP/IP layer out of the block-storage path, while IP traffic and IP applications can still coexist on the same link and leverage the world of IP.
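The layering difference can be sketched schematically. This is a minimal illustration of the encapsulation order just described, with simplified labels rather than real protocol headers:

```python
# Schematic sketch of the encapsulation layers described above.
# Innermost payload first; labels are simplified, not real protocol headers.

ISCSI_STACK = ["SCSI command", "iSCSI PDU", "TCP", "IP", "Ethernet"]
FCOE_STACK = ["SCSI command", "Fibre Channel frame", "Ethernet"]

def show(name, stack):
    return name + ": " + " -> ".join(stack)

print(show("iSCSI", ISCSI_STACK))
print(show("FCoE", FCOE_STACK))

# FCoE removes the TCP/IP layers from the block-storage path; ordinary
# IP traffic still shares the same Ethernet link.
```

Comparing the two lists makes the "taking out a layer" point concrete: the FCoE path carries no TCP or IP layer at all.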
As for the Fibre Channel world? It can map its protocol onto that Ethernet and coexist. So it's all about simplifying and removing complexity. However, the key word here is premium, which ties into 10 Gigabit Ethernet (10 GbE) adaptors. 10 GbE adaptors are a premium today; prices have come down from where they initially were, but the adaptors remain rather expensive. Consequently, most iSCSI deployments are done using software-based initiators because of the low cost. In other words, most iSCSI environments are not about high performance, but about relatively low cost. Because of that, you don't see a lot of 10 GbE iSCSI deployments today. But as those adaptors come down in price, we will start to see more of these deployments.
You might also ask: Will we see a lot of FCoE in the SMB market? Maybe for higher-end SMBs, or for environments that traditionally use Fibre Channel or that are very Cisco-centric. Most others, however, will probably use a mix of legacy Fibre Channel, or legacy Ethernet with iSCSI or NAS, as well as more and more shared SAS.
What about lower-end SMBs? A lot of people are adopting SANs today in order to reap the benefits of virtualization, but what are some other options for smaller SMBs that want to run virtualization but can't afford a SAN?
A key point to remember is that virtualization does not require a SAN. Virtualization only requires shared storage. Shared storage is commonly thought of as a Fibre Channel or iSCSI SAN, or NAS, but shared storage can also be shared SAS. In other words, a SAS array (e.g., a Dell MD3000, HP MSA2000 or IBM DS3000) can be shared by multiple hosts, and that's what a virtual environment needs -- external shared storage.
What other tips can you offer about iSCSI SAN?

Keep the technology you want in mind and look at its features and functionality. Rather than zeroing in on connectivity -- looking just at the cabling -- look at the functionality you're trying to accomplish. What exactly are you looking to do? Are you looking to support server virtualization? Are you trying to leverage virtualization in a storage system? What are your requirements? What are your needs? Once you've answered these questions, you can align the right technology to the task at hand.