One of our clients has an IBM SAN connected to an xSeries 345 with its own RAID 5 array. Their existing design performs a disk-to-disk copy from the SAN to the xSeries server's RAID array. They are copying database dump files that average 4 GB each. The files are then backed up by an LTO drive directly attached to the server. Of course, they are complaining about backup response issues. I have recommended that they connect the tape drive directly to the SAN and bypass the server entirely, but they are hell-bent on keeping this setup. My question is: what can we do to optimize copying from a SAN to a regular RAID array? I have already recommended that they:
- Change the RAID setup from RAID 5 to RAID 10 for write performance
- Change the default cluster size being used on the RAID volume from the Windows default
- Increase the cache memory on the RAID controller
Any other recommendations would be appreciated. Thank you.
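On the cluster-size point above: when the workload is large sequential files (4 GB dumps), formatting the target NTFS volume with a larger allocation unit than the Windows default can reduce metadata overhead. A minimal sketch of the command, where the drive letter `E:` and the 64 KB unit size are placeholder values to adapt (and note that formatting destroys existing data on the volume):

```shell
rem Reformat the RAID volume with a 64 KB NTFS allocation unit
rem (placeholder drive letter and size -- adjust for your environment)
format E: /FS:NTFS /A:64K /Q
```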
Other than getting a new customer, you might try putting in a second RAID array with a new HBA so that data can be written to disk through multiple I/O paths. It wasn't clear from your description whether the write to disk or the write to tape is the bottleneck. If the tape seems to be the bottleneck, add a second tape drive and HBA. The hardware costs are inconsequential compared with the lost time and the administrative effort this setup requires. The other changes you recommend are reasonable, but don't expect big improvements from them.
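Before buying hardware, it is worth actually measuring where the time goes. A minimal sketch for timing the SAN-to-RAID copy leg (the function name `copy_throughput` is a hypothetical helper, not part of any product here); comparing its result against the tape drive's rated speed tells you which leg is the bottleneck:

```python
import os
import shutil
import time


def copy_throughput(src, dst):
    """Copy src to dst and return the observed throughput in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size / elapsed / 1e6
```

Run this against one of the 4 GB dump files, copying from the SAN volume to the RAID array. If the measured MB/s is well above the LTO drive's native transfer rate, the tape is the bottleneck; if it is below, the disk path is.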