One way or another, your enterprise storage is going to be networked. The question is how: Fibre Channel (FC) or
Internet Protocol (IP)? Most likely, it will be both. Companies are starting to employ iSCSI at the edge for less critical or performance-intensive storage, and then using it or other IP protocols to connect to central FC storage area networks (SANs) across the wide area.
"The mixed environment--where servers and storage leverage iSCSI to get to Fibre Channel central storage--is becoming more popular," says Rick Villars, IDC vice president of storage systems.
Cisco Systems, which provides products for both IP and FC, concurs. "iSCSI fits in two places, as access technology to let departmental servers connect to corporate data center storage and to host storage right on an IP network using storage arrays running iSCSI," says Bill Erdman, director of marketing at Cisco's storage technology group.
IP storage standards
Until recently, a SAN meant an FC network. In February, after approximately three years of deliberation, the Internet Engineering Task Force (IETF) approved the iSCSI specification, a protocol for block-level storage over IP. iSCSI will go a long way toward enabling IP-based SANs that can rival FC SANs, and it gives enterprises new SAN networking options. Already, midsized organizations such as Southern Insurance Underwriters, Mapics Inc. and Buckeye Color Labs are turning to IP to boost their storage networking.
Although "IP and iSCSI won't replace Fibre Channel," says Villars, they will provide greater flexibility in how storage is deployed.
Initially, he sees it being used as a way to extend or to connect SANs. IP will be particularly useful in remote backup scenarios in which storage must be replicated to remote sites. While this can be done with IP today, the availability of iSCSI will make it easier to use IP for block-level storage.
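To make "block-level storage over IP" concrete: iSCSI wraps an ordinary SCSI command in a fixed 48-byte header and ships it over a TCP connection, as defined in RFC 3720. The sketch below packs that Basic Header Segment for a SCSI Command PDU. The field layout follows the RFC, but the values (LUN, task tag, a READ(10) CDB) are illustrative only; this is nowhere near a working initiator.

```python
import struct

def scsi_command_pdu(lun, itt, cmd_sn, exp_stat_sn, cdb, read=True, xfer_len=0):
    """Pack the 48-byte Basic Header Segment of an iSCSI SCSI Command
    PDU (RFC 3720, opcode 0x01). iSCSI simply wraps an ordinary SCSI
    CDB in this header and ships it over TCP."""
    opcode = 0x01                             # SCSI Command PDU
    flags = 0x80 | (0x40 if read else 0x20)   # Final bit + Read or Write bit
    return struct.pack(
        ">BB2xB3s8sIIII16s",
        opcode, flags,
        0,                        # TotalAHSLength: no additional header segments
        b"\x00\x00\x00",          # DataSegmentLength: no immediate data
        lun,                      # 8-byte logical unit number
        itt,                      # initiator task tag
        xfer_len,                 # expected data transfer length
        cmd_sn, exp_stat_sn,      # command and expected status sequence numbers
        cdb.ljust(16, b"\x00"),   # SCSI CDB, padded to 16 bytes
    )

# Illustrative READ(10) CDB: opcode 0x28, LBA 0, 8 blocks
cdb = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 8, 0)
pdu = scsi_command_pdu(lun=b"\x00" * 8, itt=1, cmd_sn=1, exp_stat_sn=1,
                       cdb=cdb, read=True, xfer_len=4096)
```

Everything here rides on plain TCP/IP, which is the whole appeal: the same commodity Ethernet gear and skills used for the data network can carry block storage traffic.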
"With iSCSI, the pieces of the puzzle are all here. The iSCSI engine should get IP storage really moving," says Arun Taneja, consulting analyst for the Taneja Group, in Hopkinton, MA.
Initially, iSCSI will appeal to large companies with remote sites. "They will use iSCSI to consolidate backup from remote sites and smaller servers," Villars says. Midsized and smaller firms eventually will use IP and iSCSI to network and consolidate their storage. Many of these companies have steered clear of networked storage because of its cost and complexity. Adds Taneja: "iSCSI is a natural for small and midsize companies, workgroups and departments."
The impact of the ratification of the iSCSI specification was felt almost immediately. "It has pushed out more products. When we set up our iSCSI SAN, the only choice was Intel cards. Now there are others," says Robert Filipovich, IT manager at Southern Insurance Underwriters Inc., in Alpharetta, GA.
IBM now offers an iSCSI storage array--its TotalStorage 200i comes with up to 3.5TB of storage. EMC plans to offer iSCSI in both its Symmetrix and CLARiiON storage products. Other vendors with iSCSI offerings include Adaptec, Alacritech, FalconStor, Hewlett-Packard and Network Appliance. The release of Windows Server 2003 with built-in iSCSI support should trigger a flood of new iSCSI products.
Southern Insurance Underwriters turned to products from San Diego-based StoneFly Networks and Nexsan, Woodland Hills, CA, to create an IP SAN with 2.2TB of capacity for a cost of about $30,000, plus the nominal expense of adding a few more gigabit copper ports to its switch chassis. The storage network was needed to handle the massive volume of insurance forms managed through the company's electronic document imaging system. In addition, the company found itself faced with storing rapidly growing amounts of e-mail.
Until it implemented the iSCSI SAN in January 2003, the company had been adding SCSI arrays to its database and Exchange 2000 servers. But the proliferating storage arrays were growing out of hand, driving the company to seek a centralized storage solution. A conventional FC SAN appeared to be the likely option, but "we wanted to avoid the costs and complexity associated with FC SAN solutions," Filipovich says. The company looked at network-attached storage (NAS), which provides file-level storage, but concluded "the NAS model gave us absolutely zero benefit in the application space." The company's applications required block-level storage.
Midrange storage vendors have been among the first to capitalize on iSCSI as a mechanism for networking their low-cost storage arrays. Boulder, CO-based LeftHand Networks, for example, added the IP protocol and a suite of storage management software tools to its serial ATA arrays to create an IP SAN in a box. "We provide a fully managed SAN at the cost of just the Fibre Channel storage capacity, but without the extra cost of software or Fibre Channel," says Tom Major, vice president at LeftHand Networks. The LeftHand product doesn't include a switch, which keeps the price lower than similar alternatives.
Intransa Inc., headquartered in San Jose, CA, also provides an IP SAN storage array with built-in storage management capabilities. Unlike LeftHand Networks, Intransa includes a Layer-2 Gigabit Ethernet switch. The list price for the base 3.2TB configuration is $62,500.
The midrange approach certainly appealed to Mapics Inc., Alpharetta, GA, a developer of manufacturing software. When the company found itself managing an increasingly unwieldy set of servers and their attached storage for its core business applications, it looked for a storage consolidation option. After checking out traditional FC SAN solutions, the company turned to LeftHand Networks.
Although Mapics only needed a total of 340GB of storage, "LeftHand could give us a SAN with a terabyte of storage and RAID striping for one-third of the cost of the closest Fibre Channel SAN," says Jim Overdorff, Mapics director of enterprise technical services. The company easily added the LeftHand SAN to its existing Gigabit Ethernet backbone and directed its nine servers to the LeftHand storage. The increased traffic proved to be no problem: "We run backups at off-peak hours and have had no network throughput problems," he says.
FC has fully established itself as the SAN standard in the enterprise, but it remains difficult and costly to acquire, deploy and manage. Vendor compatibility problems persist, and people with FC skills are difficult to find and cost more than IP network administrators. It costs $80,000 to $100,000 or more to hire skilled FC network people, compared with $50,000 to $70,000 for IP network administrators, says Mark Goodstein, president of Techpros, an IT recruitment firm in Needham, MA.
In addition, enterprise storage components--disk arrays, HBAs, switches, tape libraries--enabled for use with FC are more costly than their IP counterparts. An Internet search turned up 1Gb FC HBAs, for example, at $1,400 (single retail purchase), compared to 1Gb Ethernet adapters at $500 or less. At the switch level, FC fabric switches run from $700 to $950 per port, and director-class switches run from $1,800 to $2,400 per port, according to a spokesman at a major switch vendor. A Gigabit Ethernet switch for IP costs at least one-third less. A slower 10/100 Ethernet switch can be acquired for a few hundred dollars or less.
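A quick back-of-the-envelope calculation shows what those figures mean per attached server (adapter plus one switch port). The Gigabit Ethernet per-port price below is an assumption derived from the "at least one-third less" claim; the other numbers come straight from the prices quoted above.

```python
# Attach cost per server (adapter + one switch port), using the 2003
# figures quoted in the article. The Gigabit Ethernet per-port price is
# an assumption: one-third less than the FC fabric-switch midpoint.
fc_hba = 1400                       # 1Gb FC HBA, single-unit retail
gige_nic = 500                      # 1Gb Ethernet adapter, "or less"

fc_fabric_port = (700 + 950) / 2    # midpoint of $700-$950 per port
gige_port = fc_fabric_port * 2 / 3  # "at least one-third less"

fc_attach = fc_hba + fc_fabric_port
ip_attach = gige_nic + gige_port

print(f"FC attach cost per server: ~${fc_attach:,.0f}")
print(f"IP attach cost per server: ~${ip_attach:,.0f}")
print(f"IP saves roughly {1 - ip_attach / fc_attach:.0%} per attached server")
```

Even under these rough assumptions, IP cuts the per-server attach cost by more than half--before counting the salary gap between FC and IP administrators.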
Interoperability problems still exist in FC SANs. Although FC component compatibility and interoperability problems are diminishing as a result of endless interoperability plug-fests and continuous OEM compatibility testing, they haven't gone away completely. IP component interoperability, on the other hand, hasn't been an issue for years.
Cost was a major driver when the Carlson Companies in Minneapolis decided to deploy a combined FC/IP SAN built around Nishan FCIP switches. Carlson operates cruises, restaurants and hotels all over the world. As a result, the company had remote servers with attached storage, which was expensive to manage. In addition, the company grew concerned about the availability of the remote data. "I was not sure they were doing backups correctly or could recover," says Gary Johnson, Carlson's IT architectural consultant. The solution was to consolidate storage and backup at the central data center.
The company already had an FC storage array as its centralized storage and more FC out at its servers. Johnson used Cisco switches and IP to cross distances and Nishan switches to translate between IP and FC. "Using IP, I get scalability and reliability. I have a Fibre Channel SAN at the core and Fibre Channel SANs as islands of storage. Then, I route the traffic using IP," he explains. Carlson manages storage traffic with its existing HP OpenView management product as well as other standard IP traffic-shaping tools. A private IP network--built around two Cisco 6509 Gigabit Ethernet switches and a set of Nishan FCIP storage switches--delivers the data with minimal latency and no dropped packets, Johnson says. In the future, the company intends to use iSCSI to ease the addition of new server hosts, letting them find the shortest path to the nearest Cisco switch through which they can reach data on the FC disk arrays.
Ideally, Johnson would have liked to design Carlson's storage environment completely around IP, but the state of the technology when the company started in mid-2002--and even today--wouldn't allow a completely IP solution. "IP is a workable way to do storage, but you can't avoid Fibre Channel. You need to leverage it," he says.
Increased demand for storage
Buckeye Color Labs Inc., in North Canton, OH, turned to IP and network-attached storage (NAS) when it faced a sudden demand for storage. The company processes photos. Two years ago, approximately 5% of its customers wanted electronic digital images of their photos. Each image required approximately 30MB of file storage. When the company was handling a few hundred images a week, the storage was manageable. But suddenly, demand for digital photos exploded, and within 18 months, the company found itself processing 6,000 images each week.
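The arithmetic behind that explosion is easy to sketch. A rough estimate from the figures above (using decimal megabytes and gigabytes, and ignoring backups and reprints):

```python
# Rough storage-growth estimate from the figures in the story
# (decimal units; backups and duplicate copies not counted).
mb_per_image = 30
images_per_week = 6_000

gb_per_week = mb_per_image * images_per_week / 1_000   # new image data per week
tb_per_year = gb_per_week * 52 / 1_000                 # annualized

print(f"~{gb_per_week:.0f} GB of new images per week")
print(f"~{tb_per_year:.1f} TB per year, before backups")
```

At roughly 180GB a week--over 9TB a year--it's clear why server-attached storage stopped scaling for Buckeye.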
Buckeye had been buying IBM servers with attached storage. "We ended up with 14 or 15 servers, which started to get very expensive," recalls Buckeye COO Bob Hendrickson. Adding to the cost was the expense of managing storage on 15 different servers and backing them up. After looking at a large EMC FC SAN solution, Buckeye called in a storage consultant, Ohio-based Chi Corp., to come up with a less costly solution.
Chi proposed an IP storage network connecting the Buckeye servers with 2.5TB of NAS storage. The company pulled disk drives from the attached storage to help fill the Nexsan NAS disk arrays and purchased copper Gigabit Ethernet NICs, rather than more costly host bus adapters (HBAs), to connect the servers. An IP switch from Extreme Networks gave the solution extra ports to accommodate growth. Chi also added a FalconStor storage appliance to allocate the storage, handle LUN masking and zoning, and provide file-level and block-level storage services, says John Thome Sr., Chi chairman.
"We could have done it with EMC, but the cost would be $300,000 to $400,000," Hendrickson says. "What we did cost less than $100,000." The lower cost results in part from Nexsan's use of ATA drives, which aren't comparable in performance to the more costly EMC drives.
TCP/IP offload engine
Although IP is gaining momentum as a way to consolidate storage and to back up storage over distance, it still has some hurdles to overcome, beginning with performance. TCP/IP is a processing-intensive protocol; under an enterprise storage workload, protocol processing alone can consume most of a server's CPU cycles. The solution, suggests David Hill, VP of storage research at Aberdeen Group, Boston, is to offload the processing to a separate processor--in this case, a TCP/IP offload engine (TOE). "Offload engines are just starting to arrive," he says. He expects they will quickly become faster and cheaper.
However, you don't need a TOE or even an iSCSI card to create an IP-based SAN. "All you need is an Ethernet NIC and a server running the iSCSI protocol," IDC's Villars says. Windows Server 2003, for instance, will have the iSCSI protocol built in.
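In that minimalist spirit, the hypothetical helper below uses nothing but a plain TCP socket to check whether a host answers on port 3260, the IANA-registered iSCSI target port. A real software initiator would then run the iSCSI login and discovery phases over the same connection; this sketch only verifies reachability.

```python
import socket

ISCSI_PORT = 3260  # IANA-registered port for iSCSI targets

def target_reachable(host, port=ISCSI_PORT, timeout=2.0):
    """Return True if something accepts TCP connections on the iSCSI port.
    This proves only that the network path works, not that a target is
    configured; login and discovery are the initiator's job."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

No HBA, no TOE, no special cabling: any machine that can open a TCP connection can, in principle, talk to an iSCSI target.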
Other hurdles involve the immaturity of IP when used for storage. The iSCSI spec was only recently ratified, and products are just beginning to appear. Similarly, 10Gb Ethernet--the desired speed for enterprise storage--is new and still pricey, but products are already shipping. The FC counterpart, 10Gb FC, remains on the drawing board, and when it arrives, it won't be compatible with today's slower FC. In the interim, 4Gb FC--slower, but backward-compatible--is on the way.
These hurdles should be surmounted within 18 to 24 months. Vendors are rushing IP and iSCSI products to market. "In another 18 months, you'll have the basic management tools," says Taneja, who expects TOE capabilities to be built into NICs.
But despite the growing interest in IP for storage, industry observers expect FC to remain dominant in the large enterprise. FC "isn't going away, because it is always going to be on the back end"--the final link that connects the storage devices themselves--says Don Mead, a member of the SNIA IP Storage Forum governing board and a manager at FalconStor.
But IP storage will continue to grow, says Mead, particularly among small and midsized organizations and for connecting SANs and storage over distance. So, get ready for a dual IP-FC storage world in which FC reigns in the data center, while small and midsized organizations and anyone who wants to run storage over long distances turns to IP.