"Instability" is a polite term for the stuff that drives storage administrators crazy.
Probably the most common cause of backup server instability is tuning: changes to system parameters made to improve performance or for other reasons. There are many things you can do to a server to enhance performance, but at some point those tweaks and modifications are likely to make the system unstable in some way. Since the instabilities don't always show up in the application or server being tuned, it's important to keep a careful record of every change, and to make that record easily accessible to all the administrators.
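One low-friction way to keep such a record is a shared, append-only change log. The sketch below assumes a plain text file; the file name, fields and format are illustrative, not any particular product's convention.

```python
from datetime import datetime, timezone

def record_change(log_path, admin, system, description):
    """Append a timestamped entry to a shared change log.

    The format (timestamp | admin | system | description) is
    illustrative; any consistent, searchable format will do.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ")
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(f"{stamp} | {admin} | {system} | {description}\n")

# Example: note a tuning change before making it (names are hypothetical)
record_change("change-log.txt", "rcook", "backup01",
              "raised TCP window size to improve throughput")
```

When the backup server starts misbehaving, the first question -- "what changed?" -- can then be answered by reading one file instead of polling every administrator.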
Another common cause of server instability is a change made to the system, such as an upgrade or an addition. Microsoft's Service Pack 2 (SP2) for Windows XP caused many remote backup servers to fail because it turned on the XP firewall by default and the backup software couldn't get through it. Problems of this kind are usually easy to identify and can generally be alleviated by rolling the system back to an earlier, stable state while a more permanent fix is found. For upgrades and for new hardware or software, your primary resource is the vendor or manufacturer.
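When an update like SP2 is suspected of blocking backup traffic, a quick reachability test against the backup agent's port can confirm a firewall problem before you start rolling anything back. This is a generic sketch; the host name and port below are placeholders, not a real product's defaults.

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds.

    A port that was reachable before an update and unreachable after
    it points at a firewall or service change, not the backup software.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: host and port are hypothetical placeholders
if not port_reachable("backup-client.example.com", 10000):
    print("backup port unreachable -- check firewall rules on the client")
```

Run the same check from the backup server's point of view against each client that started failing after the update; a clean split between reachable and unreachable clients usually maps directly onto which machines received the change.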
The third main cause of backup server instability is that something changed by itself. This can be the result of a natural process, such as accumulating hardware failures or a shift in the system's usage pattern. Microsoft Small Business Server, for example, is prone to backup failures that develop from exactly this kind of gradual change.
In dealing with any backup server failure, your log files are your friends. Make sure you're logging the relevant backup events and give those logs at least a cursory once-over every day. When your backup server starts having problems, go over the logs with a fine-tooth comb, and consider logging additional events to help pin down what's happening.
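The daily once-over can be partly automated with a small script that pulls out the lines worth a closer look. The sketch below assumes a plain text log at a hypothetical path; the keywords are generic and should be matched to your backup product's actual messages.

```python
import os
import re

# Keywords are illustrative; tune them to your product's log messages.
PATTERN = re.compile(r"error|warn|fail|timeout|retry", re.IGNORECASE)

def suspicious_lines(log_path):
    """Yield (line_number, line) for log entries worth a closer look."""
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for num, line in enumerate(f, 1):
            if PATTERN.search(line):
                yield num, line.rstrip()

# Example daily once-over ("backup.log" is a placeholder path)
if os.path.exists("backup.log"):
    for num, line in suspicious_lines("backup.log"):
        print(f"{num}: {line}")
```

A filter like this doesn't replace reading the log when things go wrong; it just keeps the daily skim from becoming so tedious that it gets skipped.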
Pay particular attention to error messages generated by the backup, even if the backup was completed successfully. Modern backup systems are amazingly resilient and often will continue to work -- for a while -- even if something in the backup sequence is reporting an error.
In checking your logs, pay special attention to unexplained timeouts. An unexplained timeout almost always means something is running inefficiently -- at the very least -- and can mean you've got much bigger problems elsewhere in the system. This is especially true in architectures where backup processes run in the background with lower priorities than regular jobs. Since the backup gets a smaller slice of system resources, it is likely to be the first to be starved into a timeout when something starts to go wrong.
(Of course, just to complicate matters, there are some errors that truly don't matter. Check with your vendor to see what they can tell you about these error messages.)
A fruitful question to ask yourself when you have a flaky backup server: What else is using those same resources when the server acts up? Obviously this is a pretty wide-ranging question because a backup server touches so much of the rest of the IT infrastructure, interacting with hardware, software, storage devices and just about everything else.
Often the first priority is to get the backup system stable again as quickly as possible while the real problem is tracked down. Frequently, the server can be stabilized by throttling back I/O to reduce the load on the backup system, or by devoting more resources to the backup.
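Commercial backup products expose their own throttling knobs, but the idea behind throttling I/O can be shown with a simple rate-limited copy loop. This is a sketch of the technique, not anyone's shipping implementation.

```python
import time

def throttled_copy(src, dst, bytes_per_sec, chunk=64 * 1024):
    """Copy src to dst, sleeping as needed to stay under bytes_per_sec."""
    start = time.monotonic()
    copied = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            block = fin.read(chunk)
            if not block:
                break
            fout.write(block)
            copied += len(block)
            # If we are ahead of the allowed rate, pause to fall back in line.
            expected = copied / bytes_per_sec
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)
    return copied
```

Capping the transfer rate trades a longer backup window for less contention with the production workload -- acceptable as a stopgap, as long as the job still fits in its window while the underlying problem is chased down.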
About the author: Rick Cook has been writing about mass storage since the days when the term meant an 80 K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last 20 years, he has been a freelance writer specializing in storage and other computer issues.
This was first published in October 2006