The shape of the new data center

The key to the next wave of data center technologies is adaptability. Realizing that goal may rest in your hands.


The data center of the future will certainly have more powerful computers, networks and storage than today's data centers. However, if that's all the future holds, it will just be a larger, faster, cheaper version of the present, and that won't be much of a step forward.


[Figure: A change to one pillar can bring the whole architecture down, because the pillars are not independent of each other.]

[Figure: This model is layered, as the diagram of the storage infrastructure demonstrates. The goal at each level is to use agnostic components.]

A better approach is to conceive of the architecture as the sum of agnostic components. Changes to each will alter the overall shape, but not destroy the functionality.

We're at a point where we need to move past a feeds-and-speeds approach or even just a checklist of standards compliance and develop, evaluate and deploy products that enable a balanced architectural approach. This will allow us to respond to changes in the business and the continuing evolution of technology much more efficiently.

Failure to do so will condemn us to an endless series of conflicts between vendors' proprietary technologies and the competing requirements of different applications. The great solution we bought to fix a performance problem could still turn out to be a compatibility nightmare when an unanticipated application requirement pops up.

Not all our problems originate with vendors. We create problems ourselves by fostering narrow specialties within IT that don't always speak each other's language. A new data center concept should include a new idea about how to balance the needs of each of these disciplines with the needs of the whole IT group and the whole business.

We're at the point where the components of computing can't be disentangled from one another, but they should be. The building blocks of all business information architectures can be described as infrastructure, data and applications: the devices that process and transport data, the data they manipulate, and the instructions that describe the logic of that processing. There's nothing new in that description. However, up until now, the characteristics of each of these three components have heavily influenced and competed against the other two.

Each of the disciplines associated with these components imposes its requirements on its own particular part of the process, sometimes without regard for how it affects the others. How many times have you been involved in a new project and asked why something has to be done a certain way, only to get the answer, "It's a religious issue"? Imagine if each of these components were independent of the others, so we didn't have to change our application facilities just because we were dealing with new data.

The data center of the future should be based around architectures that let us adapt to change and deliver what our businesses need without locking us into one or another technology. So, how can we get to that?

The figure "Needed: A new way to look at information technology" helps explain. In this model, the ideal business informational architecture has infrastructure, data and applications all aligned with a business process, independent of each other and each capable of interacting with the other components as they change and adapt to the business requirements. The model can be used to drill down to greater detail. How do the components of your infrastructure relate to each other? Does your volume management software, for example, impose religious restrictions on the type of storage you can use? If it were agnostic, would you be able to build a more optimal storage environment?

Put it to the test
My ideal world may never come about. But as people who make major decisions and purchases, and sit on committees with our colleagues who do the same in their domains, we can get closer to nirvana.

Let's try to develop a set of criteria that lets us evaluate whether or not any particular new technology can help us move in that direction. Let's look at today's SAN technology and its role in the infrastructure. The big news in recent years has been our ability to use cheaper modular arrays to replace monolithic arrays that didn't easily allow for incremental growth. That's great, but in many ways, modular arrays are just monolithic arrays that you build in pieces. Both are islands that don't scale well beyond their own limits--once the box is full, it's full. Even beyond the capacity issue, arrays also impose limits on how well we can leverage the infrastructure. Today you can buy an array with many terabytes, but it may only support a limited number of logical host connections. Suppose I'm trying to support a thin-client environment running blade servers--my needs could quickly exceed that logical limit.
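Some rough arithmetic shows how fast a blade-server farm can outgrow an array's logical limit. Every figure below is an assumption chosen for illustration, not a real product specification.

```python
# Back-of-the-envelope illustration only; all numbers are assumptions.
chassis = 6                # blade chassis in the thin-client farm
blades_per_chassis = 14    # blade servers per chassis
ports_per_blade = 2        # redundant fabric connections per blade

connections_needed = chassis * blades_per_chassis * ports_per_blade
array_logical_limit = 128  # hypothetical per-array host-connection cap

print(f"logical connections needed: {connections_needed}")  # 168
print(f"array logical limit:        {array_logical_limit}")
print(f"shortfall:                  {connections_needed - array_logical_limit}")  # 40
```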

Another problem with today's arrays is that they're not generally useful. We have arrays for block data such as databases, arrays for file data and, now, arrays for fixed-content data. If our applications or data change, our infrastructure must change. It would be better if our arrays had the intelligence to understand the type of data they were storing and to treat each type according to its requirements.
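As a sketch of that idea, assuming invented data classes and policy values, the array's controller would derive handling requirements from the class of data rather than forcing a separate box per type:

```python
# Hypothetical policy table: the data classes and policy settings are
# invented examples, not any vendor's actual feature set.
POLICIES = {
    "block":         {"cache": "write-back", "protection": "RAID-10"},
    "file":          {"cache": "read-ahead", "protection": "RAID-5"},
    "fixed-content": {"cache": "none",       "protection": "replicate+checksum"},
}


def handling_for(data_class: str) -> dict:
    """Pick requirements from the data's class, not from which box it's in."""
    try:
        return POLICIES[data_class]
    except KeyError:
        raise ValueError(f"unknown data class: {data_class}")


for kind in ("block", "file", "fixed-content"):
    print(kind, "->", handling_for(kind))
```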

Following is a checklist for a future generation of arrays that would be more useful and give us a much higher return on investment. Beyond basic modularity, these arrays could:

  • Be subdivided into the smallest sensible logical and physical units
  • Be aggregated into larger units with other arrays, even from other vendors
  • Participate in a unified fabric in any of the configurations along that spectrum with a common logical view
  • Store data regardless of the characteristics of that data, most importantly the operating system that created it
This way, we could use and reuse storage resources in any configuration that our applications and data require; the sketch below illustrates the idea.
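Here's a minimal sketch of that checklist in Python, assuming hypothetical classes and capacities: a pool that presents a common logical view over arrays from different vendors, carving small logical units from whichever member has room and growing simply by adding members.

```python
# Hypothetical sketch of pooled, vendor-mixed arrays; not a real API.
from dataclasses import dataclass, field


@dataclass
class Array:
    vendor: str
    capacity_gb: int
    free_gb: int = field(init=False)

    def __post_init__(self):
        self.free_gb = self.capacity_gb


@dataclass
class StoragePool:
    """A common logical view over heterogeneous arrays."""
    arrays: list

    def allocate(self, gigabytes: int) -> str:
        # Subdivide: carve the request from whichever member has room,
        # regardless of which vendor built it.
        for array in self.arrays:
            if array.free_gb >= gigabytes:
                array.free_gb -= gigabytes
                return f"{array.vendor}:{gigabytes}GB"
        raise RuntimeError("pool exhausted; aggregate another array to grow")


pool = StoragePool([Array("vendorA", 500), Array("vendorB", 1000)])
print(pool.allocate(400))  # served by vendorA
print(pool.allocate(400))  # spills past the first box to vendorB
```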

Even if vendors provide us with more flexible products, we have to learn how to exploit them. Up until now, we've tended to use products in a monolithic way. When we deploy a critical application on a server, we dedicate an array to that data to keep it as stable as we can. Even if we don't use the maximum capacity for that application, we can't leverage that piece of infrastructure for other applications.

Until now, we've had devices that couldn't get us to that point, so we've compensated with heroics. Think of Scotty on Star Trek: Captain Kirk tells him he has 15 minutes before the star goes supernova, and he frantically tries to repair the warp engines. Invariably, he repairs them in 14 minutes and 59 seconds, perpetuating the problem.

That might make for exciting TV, but no one wants to actually experience the "Scotty syndrome." Common IT practice has been to isolate components and keep things simple. Many shops don't put any open-systems data on mainframe arrays, even though those arrays are perfectly capable of handling it. We've siloed the two, but we shouldn't make a religion out of it.

What we want to do is make intelligent, non-biased choices that don't lock us in even more. Since nothing is perfectly agnostic, how do we know when our choices are as adaptable as possible?

We can start to understand what we're really buying into when we install a major product. We need to question vendors about what we're gaining from their newest "whiz bang" and what we're giving up. What does that technology or product do that can't be done now?

Vendors will give you an answer to any of your questions, but what you need to do is evaluate their answers from this perspective: Is their answer so limited that it won't answer the next problem--since there will always be a next problem? Go back to our model: can you adapt this technology to new application demands, new types of data or other changes in the infrastructure? The more positively you can answer that question, the more confident you can be that doing something new will actually be a step forward.

Let's put two current hot technologies to the test:

  • InfiniBand: This technology addresses the problem of complexity in our interconnect methods. It enables an adaptive environment in which we can run a variety of specific protocols on top of it, even protocols that haven't been invented yet (a sketch of this idea follows the list below). It could let us, for example, mirror and cluster both processors and storage across greater distances, driven by policies, with much more intelligence and flexibility, in a more organic way than we do now. Whatever its failings, InfiniBand moves us toward more adaptive computing. This is conceptually what we're after.
  • Blade servers: Here's a great concept that's currently being poorly executed. While blade servers are modular in theory, you're purchasing the vendor's religion with the product. Each vendor supplies and supports only the components it chooses, and those components may not integrate into the existing infrastructure or, worse, may impose risks we aren't willing to accept.
But what do we do when that storage and those processors don't fit our application, data or other infrastructure needs? You might want to wait on these technologies until the implementations improve, or at least go in knowing that fundamental limitations exist.
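Returning to the InfiniBand item above, here's a conceptual sketch (the protocol names are placeholders, not real wire formats) of what "protocols on top, even ones not invented yet" means: the fabric stays fixed while new upper-layer protocols plug in.

```python
# Conceptual sketch only: an interconnect that carries any upper-layer
# protocol without changes to the transport. Names are placeholders.
class Fabric:
    def __init__(self):
        self.handlers = {}

    def register(self, protocol: str, handler) -> None:
        # A protocol defined years after the fabric shipped can still
        # plug in here, because the transport makes no assumptions.
        self.handlers[protocol] = handler

    def send(self, protocol: str, payload: str) -> None:
        self.handlers[protocol](payload)


fabric = Fabric()
fabric.register("storage-mirror", lambda p: print("mirroring:", p))
fabric.register("cluster-heartbeat", lambda p: print("heartbeat:", p))
fabric.send("storage-mirror", "volume-7 -> remote-site")
fabric.send("cluster-heartbeat", "node-3 alive")
```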

Wrapping this all together
The only way to build a balanced architecture for the business is to allow our infrastructure technologies, data and applications to grow and evolve independently of one another. Vendors must design infrastructure, applications and data management tools that can do that. Then we must fully exploit the adaptability of those agnostic technologies to build a balanced, flexible computing architecture on top of which our companies can grow.

This was first published in May 2003
