Performance caching: Your new Tier-0

Brad O'Neill

I've noticed an interesting trend popping up in conversations with a few storage vendors and several forward-thinking end users: performance caching of data for transactional and high-performance NAS workloads. The idea itself isn't new, but several new technologies can turn performance caching into a truly killer IT concept.



In a nutshell, we're talking about dynamically placing certain NAS workloads on a specified high-performance platform in order to leverage big memory allocations. This greatly increases overall performance for your most critical applications. Let's call this IT process "performance caching," and as an architectural approach, we can refer to it as a "Tier-0" deployment.

Okay, we all know that for years, leading storage providers like EMC and NetApp have let us cache storage in front of their arrays to boost performance. So what's new here? What's changing is the ability to leverage network-resident file virtualization technologies and/or centralized memory schemes in a shared fashion across the entire infrastructure. In short, we're moving beyond caching as it has historically been executed on a parochial, per-device basis.

Imagine a true shared high-performance tier with fat memory, accessible across heterogeneous applications. Technology is now available that lets us deploy a high-performance, memory-rich, scale-out tier and park I/O for specific workload subsets… and do it in real time, using industry-standard hardware (just visualize stacks of your favorite 64-bit dual-core chips running against dozens of gigabytes of memory).
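To make the idea concrete, here is a minimal sketch of what such a memory-resident tier does conceptually: a small LRU read cache sitting in front of a slower backing store. Every name here is illustrative, not any vendor's API, and a real Tier-0 would of course be a shared, networked service rather than an in-process dictionary.

```python
from collections import OrderedDict

class Tier0Cache:
    """Toy model of a memory-resident Tier-0 read cache in front of a
    slower backing store (think: a NAS volume). Purely illustrative."""

    def __init__(self, backing_store, capacity=4):
        self.backing_store = backing_store  # dict-like "slow disk" tier
        self.capacity = capacity            # max entries held in memory
        self.cache = OrderedDict()          # LRU order: oldest first

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # fast path: mark recently used
            return self.cache[key]
        value = self.backing_store[key]     # slow path: fetch from disk tier
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value
```

The point of the sketch is the asymmetry: once a hot working set lands in the memory tier, subsequent reads never touch the disk tier at all, which is where the throughput win comes from.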

Several approaches are being explored, and the details will soon begin to trickle into the marketplace from several vendors. The common thread that is emerging is that in-band, network-resident control of file data lets us decide where workloads should reside based on their I/O profiles.
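A placement decision of that kind might look something like the following hedged sketch. The profile fields, thresholds and tier names are invented for illustration; an actual product would draw on far richer telemetry.

```python
def choose_tier(io_profile, iops_threshold=10_000, read_ratio_threshold=0.8):
    """Toy placement policy: route hot, read-heavy workloads to the
    memory-resident Tier-0 and everything else to the ordinary disk
    tiers. Field names and thresholds are illustrative assumptions."""
    hot = io_profile["iops"] >= iops_threshold
    read_heavy = io_profile["read_ratio"] >= read_ratio_threshold
    return "tier-0" if hot and read_heavy else "tier-1"

# A hot, read-heavy transactional workload lands in Tier-0:
choose_tier({"iops": 50_000, "read_ratio": 0.95})   # → "tier-0"

# A quiet workload stays on ordinary disk, however read-heavy:
choose_tier({"iops": 500, "read_ratio": 0.99})      # → "tier-1"
```

The design choice worth noticing is that the policy is driven entirely by the observed I/O profile, not by which device or application the data happens to belong to, which is exactly the shift away from per-device caching described above.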

The implications of this are going to be huge. Think about it: the cost of running high-performance computing out of memory is a fraction of the cost of big, expensive disk. The potential to eliminate bottlenecks is beyond question. The ability to increase application performance is obvious. Storage performance problems aren't always solved with expensive disk (default Tier-1 thinking). Sometimes, the answer is simply to get access to abundant throughput. That's what performance caching into a Tier-0 is all about.

Performance caching and Tier-0 are going to happen. The trick for vendors is to make it easy for the IT team to deploy that tier, integrated with an existing infrastructure, and tune it to specific application requirements across the entire enterprise. Watch this trend.

About the author: Brad O'Neill brings a wide range of expertise to the consulting and market research practices of Taneja Group. As a business and product strategy expert in the data storage industry, Brad has helped many client firms and their institutional investors develop, launch or refine offerings in the software, systems and services sectors.

