Consolidated networked storage isn't just for Global 2000 corporations anymore. Today, more companies with small IT staffs and relatively small budgets are building SANs for their most demanding applications, including data protection.
Until recently, SANs were deployed mostly by large companies that had the storage skills to handle a host of challenges, including hardware incompatibilities and complex SAN settings. However, the SAN landscape has changed significantly within the last few years with the introduction of simpler storage arrays based on the iSCSI protocol, as well as initiatives like Microsoft's Simple SAN that have lowered many of the traditional barriers to deploying and maintaining a SAN.
Traditionally, the primary motivation for building a SAN was to meet a pressing need for performance, scalability or both. But today's new SAN buyers are looking for more than performance and scalability; they're interested in using snapshots of SAN volumes to protect data, sometimes to replace traditional backups. The integration of snapshot technology with many backup applications, along with the Windows operating system, is also bringing capabilities such as server-free backups and remote application failover--once reserved for high-end critical systems--to mainstream IT environments.
The DAS dilemma
Although disk drives keep getting bigger, the level of performance per gigabyte hasn't kept pace. Storage managers are being asked to protect an ever-expanding amount of data: multimedia files, complex office documents, database-driven applications and a flood of email. The old method of adding bigger and bigger internal disk drives just won't handle these new demands. Even if sufficient space can be added inside a server, and 1TB disks promise just that, performance may not be up to user expectations or the requirements of the application.
"We originally went to a SAN architecture for capacity and performance," says Travis McCulloch, systems architect at Hilton Grand Vacations Co. in Orlando, FL. "Then we realized that the SAN could help us consolidate and improve our backup processes. We have a multitude of different platforms, and the SAN allows us to easily manage data protection for all of our hosts."
Like most enterprises, Hilton Grand Vacations started with a small Fibre Channel (FC) SAN primarily to share the resources of a single large storage array. As data storage capacity needs grew, so did the SAN. When the company considered streamlining its backup environment, the advantages of networked storage targets became readily apparent. "We have standardized on a single backup environment with CommVault for our Apple, multiple 'flavors' of Unix and Windows servers using disk-to-disk-to-tape methodology," explains McCulloch, "and the results are more reliable backups with much less effort."
Stability was the driving force behind Joe Eastman's SAN purchase. "We had two drives fail in a single direct-attached RAID group on our Exchange server," says Eastman, IT department manager at Griffin, Smalley & Wilkerson Inc., an insurance firm in Farmington Hills, MI. "We were doing daily full tape backups as an accepted best practice. What we didn't realize was that it would take almost a whole business day to restore from tape. We also lost our customers' messages during the outage. After that, we knew we needed to get a better storage environment--we needed to get a SAN."
SAN architectures afford far more flexibility in data protection options than can be achieved with plain direct-attached drives. One of the most prevalent new trends is to use a disk array as a backup target. There are several ways to approach this, including using a tape-emulating disk array, snap copying options or local volume replication. This enables a much shorter backup window, while still allowing tape copies of the data to be made for offsite storage. "I went from nervously trying to complete all the backups in a 12-hour window to simply not worrying about backup any more," says Eastman.
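The snapshot-based approaches described above follow a common pattern: freeze a point-in-time copy of the volume, back up from the copy, then discard it. As a hedged illustration only--the volume group and logical volume names below are hypothetical, and the commands assume a Linux host using LVM2 with free extents available, which is just one of several ways to take snapshots on a SAN volume--the sequence might look like this:

```shell
# Create a read-only point-in-time snapshot of the application volume.
# (vg_data/db_vol are placeholder names; requires root and LVM2.)
lvcreate --snapshot --size 2G --name db_snap /dev/vg_data/db_vol

# Mount the snapshot read-only and back it up while the live volume
# stays in service; the application sees no backup window at all.
mount -o ro /dev/vg_data/db_snap /mnt/snap
tar czf /backup/db-$(date +%F).tar.gz -C /mnt/snap .

# Tear down the snapshot once the tape or disk copy is complete.
umount /mnt/snap
lvremove -f /dev/vg_data/db_snap
```

Array-based snapshots from the SAN vendor work the same way conceptually, but offload the copy-on-write work to the array itself.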
The critical design parameter mentioned most often by SAN architects is the time required to restore an application after a disruption. With a "smoking rack" hardware failure, for example, systems administrators are under pressure to get the application back online as soon as possible. "This is where SANs excel," says Eastman. "Before, I had to wheel in a new server, load the operating system, install the backup agent, find the right set of tapes and the restore process took forever. Now we use the boot-from-SAN option, so I wheel in the new server, boot from the SAN and the application is back up in minutes, not hours. An added benefit is that we save money on the servers by not having to buy them with local disks."
Microsoft's Simple SAN
Last year, Microsoft introduced a new initiative that promises to make the lives of its support staff simpler, but it's likely to make your life easier, too. Responding to a large number of support calls associated with storage network and array configuration, Microsoft developed an initiative called Simple SAN to push storage vendors to make their products easier to use. Compliant devices have now been on the market for more than a year and the results have been encouraging.
Questions to consider
As with any endeavor, the most important thing about a SAN implementation is getting a clear picture of the end result. Basic questions include: What do you want to accomplish? What applications will be included? What resources will be required?
Some storage vendors oversell the benefits of their products, as new SAN users quickly learn. Pricing can be deceptive; for example, required options, such as host bus adapters, and software for snapshots and data replication can quickly double initial price estimates.
Canaras Capital Management LLC, an asset management firm in New York City, set clear objectives for its new SAN. "Step No. 1 is educating yourself about SAN technology," advises Raffi Jamgotchian, CIO. He says his company was looking for a flexible and scalable infrastructure, but wanted to avoid the higher acquisition costs of an FC SAN by evaluating iSCSI alternatives. "Ease of management was the most important thing for us. If we had to hire another person just to run the SAN, it wouldn't have been cost-effective," says Jamgotchian, adding that it's vital to seek product references from other users.
This sentiment is echoed by Griffin, Smalley & Wilkerson's Eastman. "Take the vendors' products for a spin," he says. "Evaluate their technology in your own shop, if possible." Most hardware vendors, big or small, will welcome the opportunity to bring their equipment onsite for evaluation once they recognize an opportunity. Many will also bring along expert technicians to assist with the testing and implementation. But don't be tempted by options and features that weren't on your original wish list. Although some companies now bundle many advanced features at a single price, others use à la carte pricing.
If you're in a small IT shop, most likely you and your colleagues will be learning a new technology and designing new business operational processes around the SAN. "One vendor wanted me to take a week of training with the SAN solution, which is something we couldn't afford even if it was free of charge," says Eastman. Environments like these should look for the simplest and most integrated solutions to reduce the amount of training required.
Building the SAN
Good planning and vendor selection lead to a smooth migration to a SAN environment. However, the most often overlooked--and most misunderstood--aspect of SAN deployment is data migration. How do you get the data out from "behind the servers" and into the SAN? There are many approaches to this problem, and most enterprises choose to employ more than one. Some options include the following:
- Host-based copy tools, such as Microsoft's Robust File Copy (Robocopy) on Windows, or cpio and tar on Unix systems, remain popular.
- Vendor-supplied data migration tools, which may leverage features of the storage array, can be tempting, but the additional professional services and licensing costs should be considered.
- Phased rehosting of applications on new servers offers a seamless path to a SAN if servers are being migrated anyway.
- Backup and restore of application data is another popular method, but this can interfere with daily schedules and reveal deficiencies in the backup system.
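For the first option on a Unix host, a tar pipe is a time-honored way to stage a directory tree onto a newly mounted SAN volume while preserving permissions and timestamps. The paths below are hypothetical placeholders, not prescribed mount points:

```shell
# Copy a directory tree from direct-attached storage to a SAN volume.
# SRC and DST are placeholders for the old volume and the new SAN mount.
SRC=/app/data
DST=/mnt/san_vol
mkdir -p "$DST"

# Create the archive on stdout from inside SRC (so paths stay relative)
# and extract it inside DST; -p preserves file permissions on extract.
(cd "$SRC" && tar cf - .) | (cd "$DST" && tar xpf -)
```

Robocopy fills the same role on Windows (for example, with its mirroring and restart-on-failure options), and either approach can be run repeatedly to narrow the final cutover window.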
SAN technology has come a long way in just the last few years; storage vendors are starting to make their products more user-friendly and flexible. With the introduction of IP-based technologies and vendor attention to integration standards, SANs are now more affordable for the midsized enterprise than ever before.
With easy implementation and low acquisition cost, the total cost of ownership for SAN technology is at a level that makes sense for nearly every IT shop. With demands on application availability, capacity scalability and performance constantly increasing, the question is no longer whether your company can afford a SAN, but whether you can afford to be without one.