I've noticed an interesting trend popping up in conversations with a few storage vendors and several forward-thinking end users: performance caching of data for transactional and high-performance NAS workloads. Performance caching is not a new idea, but several new technologies can turn it into a truly killer IT concept.
In a nutshell, we're talking about dynamically placing certain NAS workloads on a specified high-performance platform in order to leverage big memory allocations. This greatly increases overall performance for your most critical applications. Let's call this IT process "performance caching," and as an architectural approach, we can refer to it as a "Tier-0" deployment.
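To make the placement idea concrete, here's a minimal sketch of a memory-resident "Tier-0" sitting in front of a slower storage tier. This is an illustrative toy, not any vendor's product: the `Tier0Cache` class, the dict standing in for the disk tier, and the LRU eviction policy are all assumptions chosen for brevity.

```python
from collections import OrderedDict

class Tier0Cache:
    """Toy 'Tier-0' sketch: a capacity-bounded, memory-resident cache
    in front of a slower backing store. Real deployments work at the
    file/NAS layer; this only illustrates the placement concept."""

    def __init__(self, backing_store, capacity=1024):
        self.backing = backing_store     # stands in for the disk tier
        self.capacity = capacity         # blocks the memory tier can hold
        self.cache = OrderedDict()       # ordering tracks recency of use
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)  # mark as most recently used
            return self.cache[key]
        self.misses += 1
        value = self.backing[key]        # slow path: fetch from disk tier
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value
```

Once a workload's hot data lands in the cache, repeat reads are served entirely from memory; only first touches and evicted blocks pay the disk-tier penalty.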
Okay, we all know that for years, leading storage providers like EMC and NetApp have allowed us to cache storage in front of their arrays to increase performance. So what's new here? The change taking place is the ability to leverage network-resident file virtualization technologies and/or centralized memory schemes in a shared fashion across the entire infrastructure. In short, we're moving beyond caching as it has historically been executed on a parochial, per-device basis.
Imagine a true shared high-performance tier with fat memory, accessible across heterogeneous applications. Technology is now available that allows us to deploy a high-performance, memory-rich, scale-out tier and park IO for specific workload subsets, and to do it in real time, using industry-standard hardware (just visualize stacks of your favorite 64-bit dual-core chips running against dozens of gigabytes of memory).
Several approaches are being explored, and the details will soon begin to trickle into the marketplace from several vendors. The common thread emerging is that in-band, network-resident control of file data lets us decide where workloads should reside, based on their IO profiles.
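A profile-driven placement decision could look something like the sketch below. The thresholds and the shape of the IO profile (random-IO percentage, sustained IOPS) are illustrative assumptions, not figures from any vendor's implementation:

```python
def place_workload(profile):
    """Toy placement policy: map a simple IO profile to a storage tier.
    Thresholds are illustrative assumptions, not vendor guidance."""
    if profile["random_pct"] > 70 and profile["iops"] > 10_000:
        return "tier-0"   # hot, latency-sensitive: park it in the memory tier
    if profile["iops"] > 1_000:
        return "tier-1"   # busy but tolerant: fast disk
    return "tier-2"       # cold data: capacity disk
```

Because the control point sits in-band on the network, a policy like this can be applied dynamically as a workload's profile shifts, rather than being fixed at provisioning time.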
The implications of this are going to be huge. Think about it: Running high-performance computing out of memory costs a fraction of what big, expensive disk does. The ability to eliminate bottlenecks is unquestionable. The ability to increase application performance is obvious. Storage performance issues aren't always solved with expensive disk (Tier-1 default thinking). Sometimes, the answer is simply to get access to abundant throughput. That's what performance caching into a Tier-0 is all about.
Performance caching and Tier-0 are going to happen. The trick for vendors is to make it easy for the IT team to deploy that tier, integrate it with existing infrastructure, and tune it to specific application requirements across the entire enterprise. Watch this trend.
About the author: Brad O'Neill brings a wide range of expertise to the consulting and market research practices of Taneja Group. As a business and product strategy expert in the data storage industry, Brad has helped many client firms and their institutional investors develop, launch or refine offerings in the software, systems and services sectors.