In today's cost-conscious economy, consolidating storage to save on operating expenses makes sense. Yet being able to afford large pools of network storage is a relatively new development. As recently as two or three years ago, only companies with a mission-critical requirement could justify the cost of high-speed, high-capacity storage devices. Today, you can buy a terabyte of IDE storage for little more than $1,000. While storage area network (SAN) and network-attached storage (NAS) prices are dropping more slowly, the cost of the storage device itself is no longer a major constraint. What determines your ability to consolidate storage and cut operating costs today is the speed and robustness of your network.
Without access to shared storage, most users cannot work. This means that your network needs to be highly reliable and segmented in ways that limit the scope of any interruption. Fault tolerance is like insurance: you can buy as much as you want. The good news is that most of us need only a little.
Segmentation, fault isolation and fault tolerance are a matter of design. It pays to have your network architecture done by the best engineer you can find. A good network doesn't have to be expensive; it has to be well-built. The best engineers understand cost as well as technology.
Finally, there is the matter of getting existing data onto the new infrastructure. Surprisingly few of us actually know what is out there, and most people do not know what tools exist to help them. (For example, how many Windows administrators know about XCOPY /O, or Robocopy from the Resource Kit, both of which copy data along with its security permissions?)
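As a sketch of what those two tools look like in practice, the commands below copy a file share to a new location while preserving NTFS ownership and ACL information. The source and destination paths are purely illustrative; substitute your own.

```shell
:: XCOPY: /E copies subdirectories (including empty ones), /H copies
:: hidden and system files, /K keeps file attributes, /O copies file
:: ownership and ACLs, /X also copies audit settings.
xcopy D:\Shares\Projects \\NewFiler\Projects /E /H /K /O /X

:: Robocopy (from the Windows Resource Kit) does the same job and can
:: be re-run to pick up changes. /E copies subdirectories; /COPYALL
:: copies data, attributes, timestamps, security (ACLs), owner and
:: auditing information.
robocopy D:\Shares\Projects \\NewFiler\Projects /E /COPYALL
```

Robocopy's ability to re-run incrementally makes it the better choice when the migration has to happen over several nights while users keep working.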
As we accumulate larger and financially more significant pools of storage, it becomes increasingly important to manage them well. If we fail to maintain a 20 GB volume and its performance drops, only a few people care. However, if we fail to maintain a 20 TB SAN and its performance drops, everyone cares.
What is a meaningful measure of SAN performance: the hardware stats? The host operating system stats? Or should we look at the bits as they arrive at the user's desktop? Why should a user care that the SAN utility says performance is OK if response at the desktop is slow?
When you consolidate your storage -- as you should -- be prepared to learn new things about both networking and performance management.
About the author: Bruce Backa is chief technical officer (CTO) for NTP Software, a long-standing IT leader and one of SearchStorage.com's storage management experts.