Feature

The shape of the new data center


Put it to the test
My ideal world may never come about. But as the people who make the major decisions and purchases, and who sit on committees with colleagues doing the same in their domains, we can get closer to nirvana.

Let's try to develop a set of criteria that lets us evaluate whether any particular new technology can help us move in that direction. Let's look at today's SAN technology and its role in the infrastructure. The big news in recent years has been our ability to replace monolithic arrays, which didn't easily allow for incremental growth, with cheaper modular arrays. That's great, but in many ways, modular arrays are just monolithic arrays that you build in pieces. Both are islands that don't scale well beyond their own limits--once the box is full, it's full. Even beyond the capacity issue, the arrays impose limits on how well we can leverage the infrastructure. Today you can buy an array with many terabytes, but it may support only a limited number of logical host connections. Suppose I'm trying to support a thin-client environment running on blade servers--my needs could quickly exceed that logical limit.
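
To make the connection-limit math concrete, here's a back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder--the array limit, blade counts and path counts aren't specs from any real product--but the arithmetic shows how a dense blade farm can exhaust an array's logical host connections long before it exhausts the array's capacity:

# All figures here are hypothetical placeholders -- substitute your own specs.
ARRAY_LOGICAL_HOST_LIMIT = 256   # logical host connections one array supports (assumed)
BLADES_PER_CHASSIS = 14          # blades per chassis (assumed)
CHASSIS_COUNT = 30               # chassis in the thin-client farm (assumed)
PATHS_PER_BLADE = 2              # redundant fabric paths per blade (assumed)

connections_needed = BLADES_PER_CHASSIS * CHASSIS_COUNT * PATHS_PER_BLADE
arrays_needed = -(-connections_needed // ARRAY_LOGICAL_HOST_LIMIT)   # ceiling division

print("Logical host connections required:", connections_needed)            # 840
print("Arrays needed to meet the connection limit alone:", arrays_needed)  # 4

In a case like this, it's the connection limit--not capacity--that forces the purchase of additional arrays, leaving terabytes stranded.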

Another problem with today's arrays is that they're not generally useful. We have arrays for block data such as databases, arrays for file data and, now, arrays for fixed-content data. If our applications or data change, our infrastructure must change. A better idea would be for our arrays to have the intelligence to understand the type of data they're storing and to treat it according to its requirements.

Following is a checklist for a future generation of arrays that would be more useful and give us a much higher return on investment. Beyond basic modularity, these arrays could:

  • Be subdivided into the smallest sensible logical and physical units
  • Be aggregated into larger units with other arrays, even from other vendors
  • Participate in a unified fabric in any of the configurations along that spectrum with a common logical view
  • Store data regardless of the characteristics of that data, most importantly the operating system that created it
This way, we could use and reuse storage resources in any configuration that our applications and data require.
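
As a thought experiment, here's a minimal Python sketch of the logical model this checklist implies. Every name in it is invented for illustration--there's no real product or API behind StoragePool, carve_logical_unit or the vendor names--but it shows arrays from different vendors aggregated into one pool with a common logical view, then carved into units without regard to vendor or data type:

from dataclasses import dataclass, field

@dataclass
class Array:
    """A physical array from any vendor; the pool cares only about capacity."""
    vendor: str
    capacity_gb: int
    free_gb: int = 0

    def __post_init__(self):
        self.free_gb = self.capacity_gb

@dataclass
class StoragePool:
    """Aggregates arrays from multiple vendors into one common logical view."""
    arrays: list = field(default_factory=list)

    def add_array(self, array: Array) -> None:
        self.arrays.append(array)

    def total_free_gb(self) -> int:
        return sum(a.free_gb for a in self.arrays)

    def carve_logical_unit(self, size_gb: int, data_type: str) -> dict:
        """Subdivide the pool into a logical unit, regardless of the type of
        data (block, file, fixed content) or the OS that created it."""
        if size_gb > self.total_free_gb():
            raise ValueError("pool exhausted -- add another array, from any vendor")
        remaining, placement = size_gb, []
        for a in self.arrays:
            take = min(a.free_gb, remaining)
            if take:
                a.free_gb -= take
                placement.append((a.vendor, take))
                remaining -= take
            if remaining == 0:
                break
        return {"size_gb": size_gb, "data_type": data_type, "placement": placement}

# Usage: mix vendors in one pool, then carve units for different data types.
pool = StoragePool()
pool.add_array(Array("VendorA", capacity_gb=2000))
pool.add_array(Array("VendorB", capacity_gb=1500))
db_lun = pool.carve_logical_unit(1800, data_type="block")
archive = pool.carve_logical_unit(1000, data_type="fixed-content")
print(db_lun["placement"])    # [('VendorA', 1800)]
print(archive["placement"])   # [('VendorA', 200), ('VendorB', 800)]

The point isn't the code; it's that the pool, not any individual box, becomes the unit we provision against, so growing it means adding an array from whichever vendor makes sense at the time.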

Even if vendors provide us with more flexible products, we have to learn how to exploit them. Up until now, we've tended to use products in a monolithic way. When we deploy a critical application on a server, we dedicate an array to that data to keep it as stable as we can. Even if we don't use the maximum capacity for that application, we can't leverage that piece of infrastructure for other applications.

We've had to work with devices that couldn't get us to that point, which puts us in the position of Scotty on Star Trek: Captain Kirk tells him he has 15 minutes before the star goes supernova, and he frantically tries to repair the warp engines. Invariably, he repairs them in 14 minutes and 59 seconds, thus perpetuating the problem.

That might make for exciting TV, but no one wants to actually experience the "Scotty syndrome." Common IT practice has been to isolate components and keep things simple. Many shops don't put any open-systems data on mainframe arrays, even though those arrays are perfectly capable of handling it. We've siloed the two, but we shouldn't make a religion out of it.

What we want to do is make intelligent, unbiased choices that don't lock us in any further. Since nothing is perfectly agnostic, how do we know when our choices are as adaptable as possible?

We can start to understand what we're really buying into when we install a major product. We need to question vendors about what we're gaining from their newest "whiz bang" and what we're giving up. What does that technology or product do that can't be done now?

Vendors will give you an answer to any of your questions, but you need to evaluate their answers from this perspective: Is the answer so limited that it won't address the next problem--because there will always be a next problem? Go back to our model: Can you adapt this technology to new application demands, new types of data or other changes in the infrastructure? The more positively you can answer those questions, the more likely it is that doing something new will actually be a step forward.

Let's put two current hot technologies to the test:

  • InfiniBand: This technology addresses the complexity of our interconnect methods. It enables an adaptive environment in which we can layer a variety of specific protocols on top of it, even protocols that haven't been invented yet. It could let us, for example, mirror and cluster both processors and storage across greater distances, driven by policies, with far more intelligence and flexibility, and in a more organic way than we manage today. Whatever its failings, InfiniBand moves us toward more adaptive computing. This is conceptually what we're after.
  • Blade servers: Here's a great concept that's currently poorly executed. While blade servers are modular in theory, you're buying the vendor's religion along with the product. Each vendor supplies and supports only the components it chooses, and those components may not integrate into the existing infrastructure or, worse, may impose risks we aren't willing to accept.
But what do we do when that storage and those processors don't fit our application, data or other infrastructure needs? You might want to wait on these technologies until the implementations improve, or at least go in knowing that fundamental limitations exist.

Wrapping this all together
The only way to build a business model with a balanced architecture is to allow our infrastructure technologies, data and applications to grow and evolve independently of each other. Vendors must design infrastructure, applications and data management tools that can do that. Then we must fully exploit the adaptability of those agnostic technologies to build a balanced, flexible computing architecture on top of which our companies can grow.

This was first published in May 2003
