This article can also be found in the Premium Editorial Download "Storage magazine: Distance: the new mantra for disaster recovery."
The data center of the future will certainly have more powerful computers, networks and storage than today's data centers. However, if that's all the future holds, it will just be a larger, faster, cheaper version of the present, and that won't be much of a step forward.
A change to one pillar can bring the whole architecture down, because the pillars are not independent of one another.
This model (above) is actually layered, as the diagram of the storage infrastructure (below) demonstrates. The goal at each level is to use agnostic components.
A better approach is to conceive of the architecture as the sum of agnostic components. Changes to each will change the overall shape, but not destroy the functionality.
We're at a point where we need to move past a feeds-and-speeds approach or even just a checklist of standards compliance and develop, evaluate and deploy products that enable a balanced architectural approach. This will allow us to respond to changes in the business and the continuing evolution of technology much more efficiently.
Failure to do so will condemn us to an endless series of conflicts between vendors' proprietary technologies and the conflicting requirements of different applications. The great solution we bought to fix the performance problem could still turn out to be a compatibility nightmare when an unanticipated application requirement pops up.
Not all our problems originate with vendors. We create problems ourselves by fostering narrow specialties within IT that don't always speak each other's language. A new data center concept should include a new idea about how to balance the needs of each of these disciplines with the needs of the whole IT group and the whole business.
We're at the point where the components of computing can't be disentangled from one another, but should be. The building blocks of all business information architectures can be described as infrastructure, data and applications: the devices that process and transport information, the data itself, and the instructions (applications) that describe the logic of that processing. There's nothing new in that description. However, up until now, the characteristics of each of these three components have heavily influenced and competed against the other two.
Each of the disciplines associated with these components imposes its requirements on its own particular part of the process, sometimes without considering how it affects the others. How many times have you been involved in a new project and asked why something has to be done a certain way, only to get the answer, "It's a religious issue"? Imagine if each of these components was independent of the others, so we didn't have to change our application facilities just because we were dealing with new data.
The data center of the future should be based on architectures that let us adapt to change and deliver what our businesses need without locking us into one technology or another. So how do we get there?
The figure "Needed: A new way to look at information technology" helps explain. In this model, the ideal business information architecture has infrastructure, data and applications all aligned with a business process, each independent of the others and each capable of interacting with the other components as they change and adapt to business requirements. The model can also be used to drill down to greater detail. How do the components of your infrastructure relate to each other? Does your volume management software, for example, impose religious restrictions on the type of storage you can use? If it were agnostic, would you be able to build a more optimal storage environment?
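The agnostic volume manager described above can be sketched in code. This is a minimal illustration, not anything from the article: the names (StorageBackend, VolumeManager, InMemoryBackend) are hypothetical, and a real volume manager would handle far more than block reads and writes. The point is only that the manager depends on a small interface, so any vendor's device can sit behind it without changing the layer above.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Interface any storage type -- SAN, NAS, local disk -- would implement."""
    @abstractmethod
    def read_block(self, block_id: int) -> bytes: ...
    @abstractmethod
    def write_block(self, block_id: int, data: bytes) -> None: ...

class InMemoryBackend(StorageBackend):
    """Stand-in for a real device; any conforming backend could replace it."""
    def __init__(self) -> None:
        self._blocks: dict[int, bytes] = {}
    def read_block(self, block_id: int) -> bytes:
        return self._blocks.get(block_id, b"")
    def write_block(self, block_id: int, data: bytes) -> None:
        self._blocks[block_id] = data

class VolumeManager:
    """Knows nothing about the device behind the interface -- it is agnostic."""
    def __init__(self, backend: StorageBackend) -> None:
        self._backend = backend
    def store(self, block_id: int, data: bytes) -> None:
        self._backend.write_block(block_id, data)
    def fetch(self, block_id: int) -> bytes:
        return self._backend.read_block(block_id)

# Swapping storage vendors means swapping the backend, not the manager.
vm = VolumeManager(InMemoryBackend())
vm.store(0, b"payroll data")
print(vm.fetch(0))  # b'payroll data'
```

Because the dependency points at the interface rather than a concrete device, a change in the storage layer changes the overall shape of the system without destroying the functionality above it, which is the balance the model argues for.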
This was first published in May 2003