Until recently, technological and other barriers have kept the file and block storage worlds separate out on the network, with each in its own management domain and each with its own strengths and weaknesses. Many storage managers view block storage as the gold standard, with all of the bells and whistles, and view file storage as a poor stepchild. Given the prevalence of business-critical databases housed on storage area networks (SANs), that's understandable.
|How hybrid systems work|
In a converged environment, NAS gateways sit alongside block-savvy application servers to enable access to back-end SAN storage. But companies that opt for a multivendor solution often struggle to find a mix of products that will work well together.
Now, a slew of vendors are angling to improve large-scale file storage by drawing these two worlds together. With SANs increasingly common, the benefits of deploying network-attached storage (NAS) primarily as a file interface to the SAN are tantalizing. With a hybrid SAN-NAS solution, companies can consolidate block- and file-based data on common arrays. What's more, shifting NAS storage to an enterprise-class disk array can bring a host of benefits to file-based data operations, from sophisticated backup and snapshot capabilities to vastly improved scalability.
Yet issues remain. For example, some implementations raise concerns about performance and latency on SANs. And although the field is growing weekly, product choices are still limited. Picking the appropriate converged SAN-NAS solution is complex, says Phil Goodwin, senior program director for infrastructure strategies at the Stamford, CT-based Meta Group Inc.
The motive for convergence--constant storage growth--wasn't complex for Mike Forman at San Jose, CA-based Cadence Design Systems. As IT director of North American operations for the computer-aided design software provider, Forman oversees a global storage operation that includes half a dozen SANs across three continents. He also manages a fleet of nearly 50 NAS devices at remote offices worldwide. All told, Cadence manages approximately 300TB of data.
"We had hundreds of different file servers, and it was a nightmare keeping track of parts and dispatching [managers to service them]," says Forman, echoing a common complaint of storage managers faced with such environments. "And we were forecasting doubling storage every year, so it was only going to get worse."
Forman says the decision to converge file and block operations on back-end IBM Enterprise Storage Server (ESS) disk arrays arose from a strategic business need. As a developer of software, Cadence must provide enterprise-class file sharing for the hundreds of developers working on program code. With the company already supporting block-based I/O for its corporate Oracle database and Siebel CRM software, Cadence opted in 2001 to pull its proliferating file operations into the SAN. To do so, it deployed Sun Microsystems E420, E450 and E480 servers running Veritas ServPoint software at each of six corporate sites. The Veritas software allows the Sun servers to act as NAS gateways, also called NAS heads, onto the SAN.
"The SAN fabric is completely separate from the LAN. We do our backups on the SAN and we attach all our application servers to the SAN," says Forman. A cluster of NAS heads on each floor is the gateway for users to access the ESS arrays, typically across campus.
The move allowed Forman to consolidate direct-attached file storage and rein in spiraling management costs, but the solution is hardly perfect. Forman's team must use separate tools for configuring and managing block and file access. And Forman says he would like to find a SAN management tool that's more sophisticated than the IBM Specialist and IBM Expert software that his group uses today. But Forman is unequivocally satisfied: "The economics of a centralized solution are wonderful."
|A different tack|
|The many faces of convergence|
Companies that are looking to combine SAN and NAS operations face a host of choices, including standalone NAS gateways, SAN solutions with integrated NAS functionality, NAS devices offering block I/O and even filer capability running within a switch.
Of these approaches, the most established is the standalone NAS device fronting a SAN disk array. The EMC Celerra NS600, IBM TotalStorage 300G and the NetApp F800 and FAS900 lines are all examples of NAS products that incorporate Fibre Channel (FC) ports on the backside to connect to a SAN switch. File calls over the IP network are routed through the NAS device and over the FC interface to the back-end file-based storage on the SAN array. In this arrangement, the NAS device acts strictly as a ramp onto the back-end array for file calls. Host-based block I/O bypasses the NAS devices and goes directly to the FC switch.
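The two data paths described above can be sketched in a few lines of illustrative Python. This is a conceptual model, not vendor code: file calls traverse the gateway, while block I/O never touches it.

```python
# Conceptual sketch of the gateway-fronted SAN described above.
# File protocols (NFS, CIFS) are routed through the NAS gateway,
# which translates them into block operations against the back-end
# array; native block protocols (FC, iSCSI) bypass the gateway.

def route_request(protocol):
    """Return the hops a request takes in a gateway-fronted SAN."""
    if protocol in ("NFS", "CIFS"):
        # File call: client -> IP LAN -> NAS gateway -> FC switch -> array
        return ["IP LAN", "NAS gateway", "FC switch", "SAN array"]
    if protocol in ("FC", "iSCSI"):
        # Block I/O: host HBA -> FC switch -> array (gateway never sees it)
        return ["host HBA", "FC switch", "SAN array"]
    raise ValueError(f"unknown protocol: {protocol}")

if __name__ == "__main__":
    print(route_request("NFS"))
    print(route_request("FC"))
```

The separation of paths is also why, as noted below, the two sides end up managed with distinct tools: the gateway owns the file volumes, while the array and switch own block provisioning.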
The setup means that the NAS and SAN are managed distinctly and that volumes must be configured for block or file operation using independent tools. In addition, companies are typically limited to using a NAS gateway from the same vendor as the SAN.
An exception is the Hitachi Data Systems (HDS)/NetApp Enterprise Gateway, which mates NetApp FAS900 series filers with Hitachi Freedom Storage Lightning 9900 and Thunder 9500 storage systems. The solution is sold and supported by HDS, providing a single vendor point of contact for companies seeking to deploy the solution.
NetApp has gone further, however, by building arrays that provide both file and block I/O in the same array. The NetApp FAS900 family is an example of a device that has both a NAS head and a block interface and integrates block and file storage at a granular level (4KB chunks), rather than the drive or volume level that gateways generally require.
The FAS900 series made its debut nearly a year ago as a high-end filer with high-end storage networking in the form of FC ports. Since then, NetApp has introduced iSCSI to the FAS line, and broadened it with entry-level products in the form of the FAS250 series, which integrates the filer head directly into the disk shelf, lowering cost and saving space. Initially, the 250 was an iSCSI-only hybrid box, but not for long.
"We have clear plans to enable the FAS200 series and the whole FAS line for Fibre Channel," says Rich Clifton, vice president for NetApp's SAN/iSAN unit.
NetApp's goal is to provide "unified storage"--block or file--on a common box, accessed via the Common Internet File System (CIFS) or NFS over IP, FC, iSCSI or other protocols. In fact, the same data can be accessed via either FC or iSCSI. As it has rolled products out, NetApp is progressively enabling much of its ancillary software--such as its snapshot products--to work in file and block environments as well.
One implication of NetApp's design is that it lets users take a different approach toward optimizing their use of storage. For example, storage managers have traditionally put high-performance applications on a high number of small-volume spindles and low-performance applications on a small number of high-volume spindles. Many users are contemplating SANs that have two or three grades of storage arrays: a typical high-end box, a midrange array and a box full of serial ATA drives, for example, with access controlled through zoning or perhaps some form of virtualization.
A potentially simpler approach, according to Clifton, is to have dozens of high-volume drives. You can spread your high-performance data out across all of those spindles, gaining a performance boost, and still use the free space on each drive for low-performance data. That becomes more attractive the more the system automatically handles performance management, something NetApp prides itself on.
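The spindle argument above comes down to simple arithmetic: random-I/O throughput scales roughly with the number of spindles a workload is striped across, not with drive capacity. The figures below are illustrative assumptions, not benchmarks.

```python
# Back-of-the-envelope sketch of the spindle tradeoff. Assumption:
# each drive sustains roughly 100 random IOPS regardless of capacity.

PER_DRIVE_IOPS = 100  # assumed per-spindle random-I/O rate

def aggregate_iops(num_drives):
    # A workload striped across N spindles scales roughly with N.
    return num_drives * PER_DRIVE_IOPS

# Traditional layout: hot data confined to 8 small, fast drives.
traditional = aggregate_iops(8)    # 800 IOPS for the hot application

# Alternative layout: hot data striped thin across 48 large drives,
# with the leftover capacity on each drive holding cold data.
spread_out = aggregate_iops(48)    # 4800 IOPS for the same hot data

print(traditional, spread_out)
```

Under these assumed numbers, spreading the hot data across all the spindles yields several times the throughput while the unused capacity still absorbs the cold data, which is the tradeoff Clifton describes.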
Hybrid capabilities in a single box may not work for companies already invested in a SAN infrastructure, but it's worth investigating for those moving from direct-attached storage (DAS). The capability also provides a ready upgrade path for those who now deploy NAS-only FAS900 devices.
|The interoperability mess|
Outside of rare partnerships like that between NetApp and HDS, the interoperability of SAN and NAS is weak, says Marc Farley, president of Building Storage, a consulting firm. "The existing products won't mix--it's oil and water," says Farley. "Forget about getting them to interoperate. If you want an integrated environment, at some point it is going to have to be a new environment."
In fact, interoperability is so poor that many companies are putting off integration altogether. Ron Lovell, practice director for storage at consulting firm Greenwich Technology Partners, New York, NY, says concerns about vendor lock-in are top of mind among IT managers.
Even when products are supposed to interoperate, the results can be trying. Cadence's Forman says the finger-pointing on his converged SAN-NAS deployment got so bad that he had to throw out one vendor.
Forman says, "The finger-pointing gets insane. I think it went beyond finger-pointing and went to boulder throwing. We had some brutal meetings."
Forman's team struggled with what he termed "major compatibility problems with the SAN infrastructure," including host bus adapter (HBA) cards that failed to interoperate with FC switches. While the situation has improved significantly since 2001, crossing the gap between SAN and NAS still presents tough choices, says W. Curtis Preston, president of The Storage Group, a consulting firm in San Diego, CA.
|An eye on SCSI|
"If you are a filer shop and you start having your filer do SANs, it's not like you can buy your switches and your host bus adapters from Network Appliance," says Preston. "A filer vendor in the SAN space won't have the traction to ensure interoperability."
In fact, Preston urges companies to consider avoiding SAN deployments altogether if all they want is a highly scalable file-based storage infrastructure. As an example, Preston tells of one large Fortune 500 company that was looking at deploying SAN solutions from EMC and HDS, and at the last minute, decided to bring NetApp into the picture as a third option.
"[NetApp] ended up doing very, very well in comparison to EMC and Hitachi, and their price was much less," says Preston. He adds: "If you are doing file, you should look at all the file options."
That's essentially what Buckeye Color Labs, a North Canton, OH-based photo service bureau, did, but with an interesting twist. It deployed FalconStor Software Inc.'s IPStor after a surge in storage demand strained its DAS. With the number of servers rapidly growing, the company turned to Cleveland-based consulting firm Chi Corporation to help deploy IPStor on a single Intel-based server running Linux.
IPStor's virtualization features allowed the company to reuse its existing disks, running them in a JBOD array from Chaparral Network Storage, Longmont, CO, alongside two new ATA disk arrays from Woodland Hills, CA-based Nexsan Technologies Inc. Most important, the solution allowed the company to access the disks via a direct SCSI attachment, sidestepping the significant costs of a FC array. Jeff Manuszak, senior engineer at Chi Corp., says the infrastructure can scale to 12TB before the number of required SCSI connections forces a switch to FC.
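The 12TB ceiling Manuszak cites follows from how parallel SCSI limits what a single server can attach. The arithmetic below is purely illustrative; the per-bus limit is the standard SCSI addressing constraint, but the bus count and per-device capacity are assumed figures, not details from Buckeye's deployment.

```python
# Illustrative arithmetic (assumed figures) for a direct-SCSI capacity
# ceiling of the kind Manuszak describes: parallel SCSI caps devices
# per bus, and a server has only so many HBA ports, so total capacity
# tops out before Fibre Channel becomes necessary.

DEVICES_PER_BUS = 15   # SCSI addressing limit per bus (16 IDs minus the initiator)
BUSES = 4              # assumed: SCSI HBA ports available in the server
TB_PER_DEVICE = 0.2    # assumed: ~200GB usable per attached LUN

max_capacity_tb = DEVICES_PER_BUS * BUSES * TB_PER_DEVICE
print(max_capacity_tb)  # 12.0 TB under these assumptions
```

Past that point, adding capacity means adding buses the server doesn't have, which is when a switched FC fabric starts to pay for itself.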
Virtualization can also be a way to get around vendor lock-in. Both DataCore's SANsymphony and IPStor make it possible to unify block and file data access, while drawing disks together into a single, centrally managed pool of storage.
A new take on this approach comes from Maxxan Corp., San Jose, CA, which is just releasing its first-generation MXV320 intelligent switch for general availability as we go to press. Users can deploy blades on the switch (or in a standalone box) that run either FalconStor's IPStor virtualization server or Microsoft Windows Storage Server 2003 to provide file services on back-end SAN storage.
NAS vendor Z-Force, Santa Clara, CA, offers yet another spin on virtualization, selling file switches that link heterogeneous NAS devices to create a single, logical volume. The ZX-1000 switch sits between the NAS filers and the hosts and clients that access them to scale both performance and capacity. Company officials say the switch could be made to work as a gateway, but the company hasn't done so yet.
Even as vendors tout the benefits of convergence, some storage consultants question the wisdom of such an approach. The Storage Group's Preston argues that the converged SAN-NAS product segment is more a product of vendor positioning than actual user need.
Says Preston: "I know of no one who said 'Gee, I wish my NetApp could do block.'" But he adds, "I do know NetApp and EMC salespeople who got tired of losing sales because of the other side's capabilities."
Jim Damoulakis, chief technology officer for GlassHouse Technologies Inc., Framingham, MA, says he has found interest in converged SAN-NAS solutions to be slim. "We have not seen a large number of examples of this," he says.
Take BMO Financial Group, which has left its direct-attached file-based storage in place while focusing on extending SAN operations. Brian Black, vice president of application management services at BMO Financial Group, says the bank operates a fast-growing, 50TB FC SAN that is built on HDS and IBM Shark arrays. Black says the bank has no immediate plans to transition its direct-attached file-based storage to the SAN. "The predominant strategy now is on new projects and getting them on to the SAN," Black says.
Northwestern University in Chicago places similar emphasis on its SAN operation. The university maintains a 10TB to 12TB IBM Shark-based SAN, which houses an Oracle database and various administrative applications. Another 5TB of file-based storage is contained on Dell PowerVault NAS boxes, which provide shared file access on the Ethernet network. Dana Nielsen, director of the data center and vice president of information technology at Northwestern University, says there are no plans to unify the NAS and SAN infrastructures.
Arun Taneja, president of consulting firm The Taneja Group, Hopkinton, MA, says the best move for many companies may be simply to wait. "The real convergence is going to be when your tools are consistent. NAS tools are totally different from SAN tools today," Taneja says. "We're still in the infancy stages on the NAS management side, and we're in the first generation of SAN management products."