Instrumenting your storage infrastructure


This article can also be found in the Premium Editorial Download "Storage magazine: Solid-state storage guide."


Instrumentation is critical to the next generation of data center technology. Virtualization and cloud initiatives have put IT infrastructure center stage and raised expectations for dependable, reusable and ubiquitous services to a higher level. An infrastructure with significantly more complexity needs to operate with greater precision and reliability.

The type of instrumentation that's needed is distinctly new, but it remains an under-recognized technology that's often lumped in with tools that really aren't in the same class -- most commonly capacity planning or general trending tools.

A few key characteristics can help you sort through the available alternatives and distinguish a true instrumentation system from a mere point product.

Infrastructure encompassing. Instrumentation should encompass a big slice of the infrastructure, be able to relate information across many domains and provide accurate insight into a complex data center. Detailed information provides the foundation for analysis rich enough to inform automated interactions with the infrastructure. Information in the infrastructure can be thought of as a spectrum, from the signal levels on physical wires up to the most complex application and storage interactions. No matter how trivial a single domain of information may seem, missing bands of that spectrum -- a physical layer, Fibre Channel frames, port transactions and so on -- can compromise an instrumentation system.

Efficient and scalable. Given the demand for more insight across more data and more systems than ever before, any instrumentation must be highly efficient. It should be able to operate at cloud scale while having minimal or zero impact on the infrastructure.
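To make the efficiency point concrete: one common low-overhead pattern is to poll cumulative counters at a fixed interval rather than log every frame or I/O, so collection cost stays constant no matter how much traffic flows. The sketch below illustrates that pattern in Python; the PortCounterSampler name and the read_counter callable are invented for illustration and don't correspond to any vendor's API.

```python
import time
from collections import deque

class PortCounterSampler:
    """Poll a cumulative byte counter at a fixed interval instead of
    logging every event -- per-interval cost is constant and memory is
    bounded, regardless of traffic volume."""

    def __init__(self, read_counter, interval_s=1.0, history=3600):
        self.read_counter = read_counter      # callable returning cumulative bytes
        self.interval_s = interval_s
        self.samples = deque(maxlen=history)  # bounded memory at any scale
        self._last = read_counter()

    def sample_once(self, now=None):
        """Record (timestamp, bytes/sec) for the interval just ended."""
        current = self.read_counter()
        delta = current - self._last          # bytes moved this interval
        self._last = current
        self.samples.append((now if now is not None else time.time(),
                             delta / self.interval_s))
        return self.samples[-1]
```

A collector would call sample_once() on a timer; because the deque is bounded, an hour of one-second samples occupies the same memory whether the port moved kilobytes or terabytes.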

Real-time capable. Given the dynamic nature of a cloud infrastructure, instrumentation systems must be real-time. Anything less won’t deliver the trending analysis, deeply informed real-time alerting and ongoing observation that will provide a continuous view of the infrastructure.
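One simple way "deeply informed" real-time alerting differs from static thresholds is by comparing each incoming sample against a continuously updated baseline. A minimal sketch, assuming latency samples in milliseconds (the class name, the EWMA baseline approach and the parameters are illustrative assumptions, not a description of any shipping product):

```python
class LatencyAlerter:
    """Flag samples that exceed a multiple of a running EWMA baseline."""

    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # EWMA smoothing factor
        self.threshold = threshold  # alert when sample > threshold * baseline
        self.baseline = None

    def observe(self, latency_ms):
        """Return True if this sample should raise an alert."""
        if self.baseline is None:
            self.baseline = latency_ms   # seed the baseline
            return False
        alert = latency_ms > self.threshold * self.baseline
        # Update the baseline only from normal samples so a burst of bad
        # readings doesn't drag the baseline upward and mask the problem.
        if not alert:
            self.baseline += self.alpha * (latency_ms - self.baseline)
        return alert
```

Because the baseline adapts to each workload's normal behavior, the same alerter can watch thousands of ports without per-port hand tuning.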

Cross-domain correlated. Detailed, efficient capture comes with hazards; it can easily create a deluge of information that will thwart efforts to streamline the infrastructure. Consequently, the instrumented private cloud must transform information into something meaningful through deep analytics and cross-domain correlation.
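Cross-domain correlation can be as simple as grouping events that share a common identifier and fall inside a short time window. The sketch below assumes events from different domains (fabric, array, host) each carry a timestamp and a volume ID; the field names and the correlate function are hypothetical, chosen only to show the grouping logic:

```python
from collections import defaultdict

def correlate(events, window_s=2.0):
    """Group events from different domains that share a volume ID and
    occur within window_s seconds of the previous event in the group."""
    by_key = defaultdict(list)
    for ev in sorted(events, key=lambda e: e["ts"]):
        by_key[ev["volume"]].append(ev)

    groups = []
    for volume, evs in by_key.items():
        bucket = [evs[0]]
        for ev in evs[1:]:
            if ev["ts"] - bucket[-1]["ts"] <= window_s:
                bucket.append(ev)       # same incident: close in time
            else:
                groups.append((volume, bucket))
                bucket = [ev]           # gap too large: start a new group
        groups.append((volume, bucket))
    return groups
```

Run against a fabric error, an array latency spike and a host timeout on the same volume, this collapses three separate per-domain alerts into one correlated incident -- the kind of reduction that keeps detailed capture from becoming a deluge.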

Physically instrumented. Finally, the best way to harvest total information efficiently is with physical layer-attached systems. This may seem counterintuitive, as it’s expected that with a cloud environment the movement is away from the restrictions of the physical infrastructure. But clouds are still built on physical systems, and those systems ultimately provide the largest share of operational data access. Moreover, instrumenting the private cloud with physical layer solutions is possible today.

It’s increasingly clear that data center instrumentation is one of the most important factors in the success or failure of advanced data center infrastructures. While there’s a lot of wishful thinking that the next-generation infrastructure will do miraculous things right out of the box, the hard truth is that IT may never provide an “out-of-the-box” solution. IT must meet a diverse range of highly specific business needs and, as a consequence, this will always entail engineering and integration.

The market is starting to respond to these needs. BlueStripe, NetApp (with its BalancePoint product, formerly from Akorri), Quest Software, Veeam and Virtual Instruments all offer products that are on the instrumentation track, and there are others out there, too. Check them out and see how they stack up against the criteria we’ve identified. Whatever you decide, it’s important you take that first step. The alternative could be a catastrophic failure.

BIO: Jeff Boles is a senior analyst at Taneja Group.

This was first published in March 2012
