The latest forecast for data storage technologies has LTFS heating up, cloud storage still rather cool and perhaps a change in the wind for solid-state storage.
I had the extraordinary pleasure of dining with real IT people in several cities over the past few weeks as TechTarget's "The New Rules of Backup and Data Protection" seminar winds its way around the country. It seems that summer's extreme weather always increases interest in disaster recovery and data protection.
At these casual meetings, there are always the expected inquiries about the latest "shiny new thing" in data storage technologies -- flash storage -- but I'm sensing less enthusiasm about adopting the technology on anything more than a one- or two-PCI Express card basis than I might have expected. Some analysts might say flash in servers has peaked, but that isn't what I'm seeing. People seem desperate to do anything to speed up the doggedly slow performance of their server hypervisor and virtual machine complex, even when I/O isn't the problem.
I also had the opportunity to interview Erik Eyberg, who came to IBM with the Texas Memory Systems acquisition and serves as technology savant and evangelist at Big Blue. I recorded the interview (and stuck a two-part video on my blog for anyone who's interested) because I was amazed at the man's candor. No, he admitted, flash memory probably would do nothing to speed up VMware or any other application that was processor or network bound (conditions you can readily check with performance monitors available in most OSes). Flash only works its magic when I/O binding is the issue, signaled by overly long queues in storage performance monitors.
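Eyberg's rule of thumb -- flash only helps when the workload is I/O bound -- can be sketched as a simple triage over the counters any OS performance monitor exposes. This is a minimal illustration, assuming we've already sampled CPU utilization, network utilization and average storage queue depth; the function name and thresholds are my own, not from the interview:

```python
def likely_bottleneck(cpu_util, net_util, avg_io_queue, spindles=1):
    """Rough triage of a slow server: flash only helps I/O-bound work.

    cpu_util, net_util: utilization samples in the range 0.0-1.0
    avg_io_queue: average number of outstanding I/O requests
    spindles: devices backing the volume (deeper queues are normal
              when spread across more devices)
    Thresholds are illustrative, not vendor guidance.
    """
    if cpu_util > 0.85:
        return "cpu-bound: faster storage won't help"
    if net_util > 0.85:
        return "network-bound: faster storage won't help"
    if avg_io_queue / spindles > 2:  # persistently long storage queues
        return "io-bound: flash may help"
    return "no obvious saturation: profile further"

# A hot hypervisor CPU with a quiet disk queue:
print(likely_bottleneck(0.95, 0.30, 1.0))
# An idle CPU stuck behind a deep disk queue:
print(likely_bottleneck(0.40, 0.20, 12.0))
```

The point of the sketch is the ordering: only after ruling out processor and network saturation does a long storage queue make flash a plausible fix.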
Another highlight of the Eyberg interview was his desire to switch the narrative about flash from one of speeds and feeds (you know, how the technology even makes grits cook faster) to one of latency. Latency is a bit nuanced in his usage, referring to how long it takes for a transaction to complete. From a business standpoint, this is all that matters, not the 18 million IOPS achievable with an all-flash array under test conditions. I quite agree.
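Why a big IOPS number doesn't settle the latency question can be made concrete with Little's Law, which ties throughput, concurrency and latency together: mean latency equals outstanding requests divided by throughput. A quick sketch (the function and figures below are my illustration, not numbers from the interview):

```python
def avg_latency_ms(queue_depth, iops):
    """Little's Law: mean latency = concurrency / throughput.

    queue_depth: average number of outstanding I/O requests
    iops: completed I/O operations per second
    Returns the mean time per I/O in milliseconds.
    """
    if iops <= 0:
        raise ValueError("iops must be positive")
    return queue_depth / iops * 1000.0

# Benchmark conditions: a very deep queue drives the IOPS number up,
# but each individual request still waits its turn in that queue.
print(avg_latency_ms(256, 1_000_000))

# A business transaction issuing one I/O at a time sees only the
# per-request latency, however large the array's headline IOPS.
print(avg_latency_ms(1, 20_000))
```

The takeaway matches Eyberg's framing: headline IOPS are measured at queue depths no real transaction runs at, so the time for one transaction to complete is the figure that matters.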
I've also found in my conversations with the plain folk of IT that the blush just doesn't quite seem to be reaching the rose of cloud storage. I receive considerable plaudits for my skepticism on this subject with only some occasional criticism. In the latter category, only one fellow, a vendor, has actually gotten "up in my grill" about my questioning the solvency of public clouds. He said that "real analysts like IDC and Gartner," not to mention "real technology leaders" like Joe Tucci of EMC and John Chambers of Cisco Systems, would "laugh me out of the room" for my doubts about the future of clouds.
Less caustic was a cloud evangelist who listened closely to my questions, nodded approvingly at my concerns about security, service-level agreements and so on, and then responded with the observation that "for clouds to succeed, the culture of IT and business must change." While I appreciated her tone (no shouting or growling), her view struck me as kind of out there: If we could change the culture so that everyone felt comfortable with clouds, then we could work out all the technical hurdles that still remain. That struck me as sort of like saying that if we could get everyone to drive a car, we could work out the problems with all those dials, gauges, brakes and even the safety gear in due course.
One topic that I did see gaining a lot of interest was the Linear Tape File System (LTFS) and how it can be used to build huge-capacity storage for infrequently accessed files. Surprisingly, my seminar presentation was the first that most attendees had even heard of the data storage technology. I was able to bring them up to date quickly, not only with the basic concept, but with the Enterprise Edition software that debuted last month at IBM's Edge conference in Las Vegas. The key talking point around the latest version is its support for IBM's General Parallel File System, which allows files stored to LTFS tape to be included in a hugely scalable, clustered, cross-storage file namespace. In short, you can use LTFS as part of a file-system migration and archive strategy without leaving behind clumsy pointers or stubs. Moreover, LTFS tape may well become a low-cost way to store historical data you may wish to access later for big data analysis.
To most of the folks I talked to, however, the greatest perceived value of LTFS was its potential use to store the ton of files that are never accessed, that clutter up the storage junk drawer and that everyone is afraid to throw away. No one I spoke to trusts clouds enough to put their old files there, owing mainly to cost, accessibility and privacy concerns. LTFS tape seems like a great way to leverage existing investments in LTO tape libraries to slow the rate of disk growth and its associated cost. Despite the green shoots of economic improvement, no one seems to want to rush out and buy an all-flash array or cloud service, whatever analysts and vendor/innovators are saying.
About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.