| Cross-functional teams and software can make all the difference in your data center.
What's ultimately required is a more dynamic IT infrastructure that can react to rapidly changing conditions, new requirements and massive growth, all while maintaining availability. In short, for IT to remain (or even become) a truly relevant strategic resource, it has to stop operating within systems whose default answer is "No" when the business asks for services. To accomplish this, IT and the infrastructure it controls must become "fluid" and dynamic so that the response to any question can always be "Yes." IT needs to be able to handle whatever comes its way by manipulating the infrastructure to support requirements in near real-time, dynamically and transparently to the business. This may sound far-fetched, but I'd argue that the enabling technologies exist; it's the mindset within IT that has to change.
Backing up data or, more importantly, restoring lost data is typically a sore spot in most IT shops. Another common issue is provisioning storage in a timely manner. But what if the storage or backup teams didn't have to fulfill those requests? What if, through the use of intelligent software, backup staff could create a few simple policies that would enable a user to reclaim lost files without intervention from a storage backup specialist? Or what about applications that let administrators provision their own storage or test/development environments without direct intervention from the storage team? Sound a little crazy? The reality is that organizations are recognizing that data center transformation goes beyond infrastructure to also include organizational structures. The legacy-dependent and separate technology silos that currently exist don't provide the level of interaction and information sharing required to create a more fluid environment capable of meeting future business needs.
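The "few simple policies" idea can be made concrete. The sketch below, in Python, shows one way a backup team might express a self-service restore policy: users may reclaim their own recent, reasonably sized files on their own, and anything outside the policy escalates to the backup team. All class, field and path names here are illustrative assumptions, not the API of any particular backup product.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class RestorePolicy:
    """Hypothetical policy governing self-service file restores."""
    max_file_size_mb: int           # cap on what users may restore alone
    max_age: timedelta              # only restore from recent backups
    allowed_prefixes: tuple         # users may only touch their own paths

    def permits(self, path: str, size_mb: int, age: timedelta) -> bool:
        """True if the user can restore this file without the backup team."""
        return (
            size_mb <= self.max_file_size_mb
            and age <= self.max_age
            and any(path.startswith(p) for p in self.allowed_prefixes)
        )

# The backup team writes the policy once...
policy = RestorePolicy(
    max_file_size_mb=500,
    max_age=timedelta(days=30),
    allowed_prefixes=("/home/", "/shared/projects/"),
)

# ...and users reclaim lost files themselves when the policy permits.
print(policy.permits("/home/alice/report.doc", 12, timedelta(days=3)))  # True
print(policy.permits("/etc/passwd", 1, timedelta(days=1)))              # False
```

The point of the example is the division of labor: the specialist's judgment is captured up front in the policy, so the routine restores no longer consume the specialist's time.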
Data center architectures and operations are hamstrung by legacy infrastructure (e.g., tape backup, and the one-to-one relationships between specific applications and the infrastructure they reside on). This type of environment is generally very inefficient and only serves to propagate the problems inherent in individual silos and fiefdoms. The result is that business units are held captive for increasingly long periods of time, waiting for information to be restored or the necessary infrastructure to be provisioned so they can access applications. The recovery and provisioning of storage always seems to be the long pole in the tent.
In most cases, it's too difficult or costly to deploy a universal automated provisioning process. Even if it could be done, the process still requires a storage administrator or tech. And finding the time to do this task can be difficult, as the rapid growth of storage places continual pressure on staff just to meet increasingly demanding business needs. Typically, these needs can only be met through individual acts of heroism. Adding more resources doesn't make sense as it's costly, not very scalable and propagates individual domains.
Retrieving information from tape can be a very time-consuming and manual process, especially when the required tapes are stored offsite. The tapes need to be shipped back and loaded so the information can be extracted, provided there are no problems. Oftentimes, this process can take up to three days.
Something needs to change. Disk-based backup technologies are an improvement, but they require capital expenses for disk and potentially new software. Online backup offerings are an operational expense, but they require an enterprise to be comfortable with outsourcing storage of corporate information. To date, their use has usually been restricted to desktop and remote-office environments, and typically there's still a central administrator handling requests.
While there's software available to help automate storage provisioning, its control and use are still restricted to the storage domain. This reduces the amount of time required to provision the storage, but still requires an extensive change-control process. But what if storage administrators could delegate that authority and responsibility? Picture a scenario in which an application administrator, needing a snapshot to test a new patch, could simply provision the storage themselves. Even better, what if provisioning was integrated into their applications to further simplify the process? As data centers continue to become more complex, the industry needs to find ways to help IT staff scale more effectively and work more closely as an integrated team. The days of strictly divided infrastructure teams are nearing their end. Cross-functional teams and software will be necessary for IT to meet the increased service levels demanded by the business moving forward.
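Delegation doesn't have to mean giving up control. A minimal sketch of the idea, assuming a hypothetical provisioning layer (none of these class or method names come from a real storage product): the storage team sets a per-team quota once, and application admins then carve out snapshot volumes within that limit without filing a change-control ticket. Requests that would exceed the quota escalate back to the storage team.

```python
class ProvisioningDelegate:
    """Hypothetical quota-bounded provisioning handle the storage
    team hands to an application team."""

    def __init__(self, quota_gb: int):
        self.quota_gb = quota_gb   # hard limit set by the storage team
        self.used_gb = 0           # running total of self-provisioned space

    def provision_snapshot(self, name: str, size_gb: int) -> bool:
        """Carve out a snapshot volume if it fits within the quota.

        Returns False (escalate to the storage team) when the
        request would exceed the delegated quota."""
        if self.used_gb + size_gb > self.quota_gb:
            return False
        self.used_gb += size_gb
        return True

# The storage team delegates 200 GB to the application team once...
app_team = ProvisioningDelegate(quota_gb=200)

# ...and the application admin self-provisions a snapshot to test a patch.
print(app_team.provision_snapshot("patch-test", 50))   # True
print(app_team.provision_snapshot("too-big", 500))     # False: over quota
```

The quota is what makes the trust question tractable: the storage team isn't trusting other departments with unlimited access, only with a bounded, pre-approved slice of the infrastructure.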
For example, online backup services take cross-functionality beyond the data center. Many of these new services have shifted the burden of file retrieval from the data center staff to the actual user. Not only does the software automatically back up files to a geographically distant location and greatly accelerate the recovery process but, in most cases, it eliminates any intervention from the storage or backup teams. What about delegating responsibility for provisioning additional storage? Are storage teams ready to give other departments limited permission and access to storage? Think of the benefits that application-empowered data management could bring. Application admins, QA or test/development personnel could provision and create their own virtual infrastructure and data images instead of having to wait for storage administrators. This would eliminate huge time sinks for all involved, resulting in less operational burden on others and faster time to value across the board. It should also improve service levels, reduce errors and accelerate application deployment.
The big issue here is trust: Do storage teams trust the software and other parts of their organization not to disrupt their area of responsibility? A prudent first step for cutting-edge technology like this would be to fully vet the technology in a test environment and then roll it out to production in a well-controlled pilot program. Once a level of trust has been established, early adopters of this technology will be rewarded with faster time to market and greater efficiencies.
You don't have to make huge, wholesale changes to your world. You can start small and become comfortable before implementing across the entire environment. But make no mistake: this is the way of the future, and you can either participate or continue to be marginalized. Progressive IT shops are creating cross-functional teams on a per-project basis and are continuously looking for opportunities to drive greater efficiencies.