Cross-functional teams and software can make all the difference in your data center.
What's ultimately required is a more dynamic IT infrastructure that can react to rapidly changing conditions, new requirements and massive growth while still maintaining availability. In short, for IT to remain (or even become) a truly relevant strategic resource, it has to stop operating within systems that answer "No" when asked to provide services to the business. To accomplish this, IT and the infrastructure it controls must become "fluid" and dynamic so that the response to any request can always be "Yes." IT needs to be able to handle whatever comes its way by manipulating the infrastructure to support requirements in near real-time, dynamically and transparently to the business. This may sound far-fetched, but I'd argue that the enabling technologies exist--it's the mindset within IT that has to change.
Backing up data or, more importantly, restoring lost data is typically a sore spot in most IT shops. Another common issue is provisioning storage in a timely manner. But what if the storage or backup teams didn't have to fulfill those requests? What if, through the use of intelligent software, the backup staff could create a few simple policies that would let users reclaim lost files without intervention from a backup specialist? Or what if application administrators could provision their own storage or test/development environments without direct intervention from the storage team? Sound a little crazy? The reality is that organizations are recognizing that data center transformation goes beyond infrastructure to include organizational structures as well. The separate, legacy-dependent technology silos that currently exist don't provide the level of interaction and information sharing required to create a more fluid environment capable of meeting future business needs.
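To make the idea concrete, here is a minimal sketch of what such a self-service restore policy could look like. This is purely illustrative: the policy fields (`max_file_size_mb`, `retention_window`, `allowed_paths`) and the restore check are hypothetical, not drawn from any particular backup product.

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical self-service restore policy: the backup team defines it once,
# and the backup software enforces it with no specialist in the loop.
@dataclass
class RestorePolicy:
    max_file_size_mb: int        # larger files still route to an admin
    retention_window: timedelta  # how far back users may restore on their own
    allowed_paths: tuple         # path prefixes users own (e.g., home dirs)

def user_can_self_restore(policy, path, size_mb, age):
    """Return True if the user may restore this file without intervention."""
    return (
        size_mb <= policy.max_file_size_mb
        and age <= policy.retention_window
        and any(path.startswith(prefix) for prefix in policy.allowed_paths)
    )

policy = RestorePolicy(
    max_file_size_mb=500,
    retention_window=timedelta(days=30),
    allowed_paths=("/home/", "/shares/projects/"),
)

# A user's own recent file is eligible; a large production database file is not.
print(user_can_self_restore(policy, "/home/alice/report.doc", 12, timedelta(days=3)))
print(user_can_self_restore(policy, "/db/prod/data.ibd", 2048, timedelta(days=3)))
```

The point of a gate like this isn't the code; it's that the backup team's expertise moves into a reusable policy, so routine restores no longer consume a specialist's time.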
Data center architectures and operations are hamstrung by legacy infrastructure (e.g., tape backup and the one-to-one relationships between specific applications and the infrastructure they reside on). This type of environment is generally very inefficient and only serves to propagate the problems inherent in individual silos and fiefdoms. The result is that business units are held captive for increasingly long periods of time, waiting for information to be restored or the necessary infrastructure to be provisioned so they can access applications. The recovery and provisioning of storage always seem to be the long pole in the tent.
In most cases, it's too difficult or costly to deploy a universal automated provisioning process. Even if it could be done, the process still requires a storage administrator or tech. And finding the time to do this task can be difficult, as the rapid growth of storage places continual pressure on staff just to meet increasingly demanding business needs. Typically, these needs can only be met through individual acts of heroism. Adding more resources doesn't make sense as it's costly, not very scalable and propagates individual domains.
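Even short of a universal automated process, a simple policy gate can separate routine requests from the exceptions that genuinely need a storage administrator. The tiers, quota numbers and routing labels below are illustrative assumptions, not taken from any real provisioning tool:

```python
# Hypothetical routing rule for storage requests: small, standard requests
# are auto-approved; anything else is queued for the storage team, so
# admins spend their scarce time only on the exceptions.
TIER_QUOTAS_GB = {"test": 100, "dev": 250, "prod": 0}  # 0 = never auto-approve

def route_request(tier, size_gb):
    """Return 'auto-provision' or 'queue-for-admin' for a storage request."""
    if size_gb <= TIER_QUOTAS_GB.get(tier, 0):
        return "auto-provision"
    return "queue-for-admin"

print(route_request("test", 50))   # small test volume: no admin needed
print(route_request("prod", 50))   # production storage: still goes to the team
```

A rule this simple won't eliminate the storage team's workload, but it illustrates how policy can absorb the repetitive requests that otherwise get done through individual acts of heroism.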
Retrieving information from tape can be a time-consuming, manual process, especially when the required tapes are stored offsite. The tapes must be shipped back and loaded before the information can be extracted--provided there are no problems. Often, this process can take up to three days.
This was first published in May 2008