For a company with 12 servers (no failover server), I want to create a mirror/snapshot of the system drive (OS + database + application server settings). The reason is that after a server and its OS go down, reinstalling the OS, application server and database server takes twice as long (four hours) as restoring the database from backup files (two hours). Will a SAN help in this case? We have 14 internal disks (73 GB, 10K rpm) per server, for a total of 168 disks. Is $150K a reasonable target for moving all the disks into a SAN with tape library support? We currently have seven DAS tape drives -- roughly one tape drive per two servers -- for a total of 5 TB of data in the weekend full backup and 0.3 TB in the incremental backup every weeknight.
A 5 TB SAN solution using modular storage should not cost an arm and a leg. Make sure the vendor you choose quotes 5 TB of usable storage, not raw storage. The number of drives you'll need for that usable capacity depends on how you configure the RAID sets in the storage array. (RAID-1 or RAID-1+0 will cost the most, since you need twice as many physical disks.)

You say you have 12 servers, each with 14 x 73 GB 10K rpm disks. If you buy a modular SAN array, you can configure RAID-5 as 13+1, which nets you a bit over 900 GB per server using 73 GB disks, or around 1.8 TB if you use 146 GB disks. This gives you a total of either:

73 GB drives: (13 x 73 = 949 GB) x 12 servers = 11.388 TB usable storage
146 GB drives: (13 x 146 = 1,898 GB) x 12 servers = 22.776 TB usable storage

Take a close look at those numbers. Since your full backup is only 5 TB, you're underutilizing your current raw capacity:

(73 GB x 14 disks = 1,022 GB) x 12 servers = 12.264 TB raw
12,264 GB - 5,120 GB = 7,144 GB unused
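As a sanity check, the capacity arithmetic above can be sketched in a few lines of Python. The figures come straight from the question; nothing here is vendor-specific, and the RAID-5 13+1 layout is the one proposed above:

```python
# Capacity check: 12 servers, 14 disks each, RAID-5 13+1 in the array.
SERVERS = 12
DISKS_PER_SERVER = 14
FULL_BACKUP_GB = 5120  # ~5 TB weekend full backup

def usable_gb(disk_gb, data_disks=13):
    """Usable capacity of one RAID-5 13+1 set (one disk's worth of parity)."""
    return data_disks * disk_gb

raw_73 = 73 * DISKS_PER_SERVER * SERVERS   # 12,264 GB raw today
usable_73 = usable_gb(73) * SERVERS        # 11,388 GB usable on 73 GB disks
usable_146 = usable_gb(146) * SERVERS      # 22,776 GB usable on 146 GB disks
unused = raw_73 - FULL_BACKUP_GB           # 7,144 GB raw capacity going unused
```

The `unused` figure is the gap the answer refers to: more than 7 TB of raw capacity that cannot be pooled across servers with direct-attached storage.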
This shows me you're wasting over 7 TB worth of storage, since you cannot share it between servers. This is why a SAN makes so much sense for many companies.

You'll also need at least two more disks per server in the SAN to act as a mirrored boot disk -- 24 more drives in all. Create 12 RAID-1 mirror sets in the array and dedicate one to each server. Do not share these disks with other servers, or install any other applications on them. Use LUN security in the array to make sure each server's host bus adapters (HBAs) see their dedicated mirror set as the boot LUN. Use LUN number 0 (zero) for each boot LUN, if your array supports it.

You should use at least two HBAs per server, but three would be better. I would dedicate one HBA per server as the backup path through the SAN fabric, which is either zoned out to a Fibre Channel-based tape library connected to the fabric, or to a third SAN switch dedicated to backup. You'll need at least two 16-port switches, with each switch as its own fabric and each server connected to both switches. It should end up looking something like the diagram from my book that shows how to hook up server-free backup.
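The LUN-security scheme above can be modeled as a toy mapping from HBA identifiers to the LUNs they are allowed to see. Every server name, WWN and mirror-set name below is made up for illustration; real arrays do this masking in the controller firmware:

```python
# Toy model of LUN security: each server's HBAs see only their own
# dedicated RAID-1 boot mirror, presented as LUN 0. All names and
# WWN strings are hypothetical.
boot_lun_map = {}

def assign_boot_lun(hba_wwns, mirror_set):
    """Mask one RAID-1 mirror set as LUN 0 for one server's HBAs."""
    for wwn in hba_wwns:
        boot_lun_map[wwn] = {0: mirror_set}  # LUN number 0 = boot LUN

# 12 servers, 2 HBAs each, 12 dedicated mirror sets -- 24 boot disks total.
for i in range(1, 13):
    assign_boot_lun([f"wwn-{i:02d}-a", f"wwn-{i:02d}-b"],
                    f"raid1-mirror-{i:02d}")

# An HBA can reach only the mirror set masked to it:
assert boot_lun_map["wwn-03-a"][0] == "raid1-mirror-03"
```

The point of the model is the isolation: no WWN appears in more than one server's map, which is exactly what keeps one host from stepping on another host's boot disk.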
If you do use a third HBA in each server for backup, then the dedicated backup HBAs and the tape library would connect to another SAN switch. You'll need path-management software on your servers so your data path can fail over if an HBA, switch or storage controller fails. If this is a Unix solution, something like Veritas DMP or Solaris MPxIO can be used. If it's Windows, you can use the MPIO driver, or you can buy software from your storage vendor that supports its array (Hitachi HDLM, EMC PowerPath, HP Secure Path, etc.).
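What the path-management software does can be sketched in miniature: keep several paths to the same LUN and switch to a surviving path when the active one fails. This is purely illustrative and not how any of the named products is implemented (real drivers work at the SCSI layer), and the path strings are invented for the sketch:

```python
# Minimal sketch of multipath failover, assuming one path per HBA/fabric.
class MultiPathLun:
    def __init__(self, paths):
        self.paths = list(paths)  # ordered list of paths to the same LUN
        self.failed = set()

    def active_path(self):
        """Return the first path that has not failed."""
        for p in self.paths:
            if p not in self.failed:
                return p
        raise IOError("all paths to LUN failed")

    def fail(self, path):
        """Mark a path dead (HBA, switch or controller failure)."""
        self.failed.add(path)

lun = MultiPathLun(["hba0->switchA->ctrl0", "hba1->switchB->ctrl1"])
lun.fail("hba0->switchA->ctrl0")  # e.g. fabric A switch goes down
# I/O continues over the second fabric:
surviving = lun.active_path()
```

This is why the answer insists on two independent fabrics: a single switch failure takes out only one entry in the path list.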
Click here for part two.