Last September, I attended an event hosted by magnetic tape media maker Fujifilm Recording Media U.S.A. I listened to several fine vendor presentations from industry smart guys who had updated their slide decks with the latest roadmaps for all your favorite storage media: flash, disk, tape and even optical. Several presenters also underscored tape's continued presence and growth, at least in terms of capacities shipped.
As I watched the presentations, two micro-trends caught my eye. The first, which I have addressed in this space before, was malware and ransomware, which vendors invoked as a means to sell tape -- or anything else, really. The second was what 451 Research called "cloud repatriation" in a 2017 report, in which 20% of companies surveyed said cost drove them to move one or more of their workloads from public clouds to private clouds.
David Balcar, a security strategist with Carbon Black, gave a frenetic and frightening overview of the current situation, casting clouds as force multipliers for security risk exposure. Newer server vulnerabilities, such as Meltdown and Spectre, are hard enough to control through patching, and remediation often requires downtime. They're far more dangerous in the context of massive server farms like those found in the cloud. Perhaps this first trend is one of the contributing reasons behind the second: cloud repatriation.
What's behind cloud repatriation?
Granted, I wouldn't expect the crowd at this event to wholeheartedly embrace the cloud. In many cases, archivists avoid cloud storage on cost and convenience grounds -- and in some cases because of legal or regulatory constraints. For example, a great slide in a deck from the Active Archive Alliance, presented by co-founder Molly Presley, compared the cost of storing 1 petabyte of archival data on different platforms over a three-year period: nearly $3.5 million on flash, roughly $2.6 million on NAS disk, around $1.5 million on Amazon S3, $300,000 on Amazon Glacier and a paltry $107,000 on tape.
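Those totals are easier to compare on a per-terabyte, per-month basis. A minimal sketch, using the slide's three-year, 1 PB totals (the per-unit math and decimal units are my own assumptions, not from the presentation):

```python
# Three-year totals for storing 1 PB, as reported on the Active Archive
# Alliance slide. The per-TB-per-month breakdown below is derived arithmetic.
totals = {
    "flash": 3_500_000,
    "NAS disk": 2_600_000,
    "Amazon S3": 1_500_000,
    "Amazon Glacier": 300_000,
    "tape": 107_000,
}

TB_PER_PB = 1_000   # decimal units: 1 PB = 1,000 TB
MONTHS = 36         # three-year period

per_tb_month = {k: v / TB_PER_PB / MONTHS for k, v in totals.items()}
for platform, cost in per_tb_month.items():
    print(f"{platform:>14}: ${cost:,.2f}/TB/month")
```

On this reckoning, tape works out to roughly $3/TB/month against nearly $100/TB/month for flash, which is the gap the slide was driving at.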
Cost isn't the only criterion for choosing to use or abandon public clouds, but it is an important one. One major Fortune 500 company recently withdrew from the public cloud, citing roughly $80 million in monthly savings. It is part of the cloud repatriation movement 451 Research identified. Similarly, IDC reported around the same time that 53% of enterprises were bringing -- or were considering bringing -- their workloads back on premises.
IBM seemed to catch wind of cloud repatriation earlier than other firms. It began promoting hybrid clouds that combine on-site and cloud-based resources and processes while competitors were hawking clouds and more clouds in a strategy called multi-cloud.
So much for cloud woo.
Case in point: Caringo
A briefing in September with Caringo Inc. reminded me of this trend away from all things cloud. The object storage company was about to release version 10 of its Swarm product, and CEO Tony Barbagallo described the speeds and feeds of the updated object platform to me as "scary fast." He positioned Swarm not as the cheap-and-deep alternative to other storage or as a stepping stone to the cloud, but as an archival data hosting approach that eliminates the need for cloud altogether. That's cloud repatriation in a nutshell.
I noted that Caringo was the first storage company I had encountered that was willing to decouple its marketing strategy from clouds. To my surprise, Barbagallo didn't play down the distinction at all. He wants Caringo to provide a common platform for delivering secondary storage on premises with sufficient performance at an affordable price, saving customers the cost, complexity and risk of archiving to the cloud.
To support this point, Barbagallo sent background on an implementation and test Caringo had just completed with the Science and Technology Facilities Council's Rutherford Appleton Laboratory as part of a project called Jasmin. Jasmin is a data center supercomputer used to model massive quantities of scientific data for the climate and earth science research communities. Jasmin's petabytes of storage serve thousands of virtual machines linked by high-bandwidth networks -- or, as Caringo described it, "a supercomputer's network and storage but without quite as much compute."
During testing of this distributed scenario, Caringo's S3 interface registered aggregate throughput numbers well in excess of goals: reads of 35 GBps and writes of 12.5 GBps. NFS single-instance throughput was even more astounding, with Caringo reads exceeding target goals by 132% and writes by 256%. Barbagallo said Jasmin proved that the days of characterizing object storage as the "slow, steady and dependable platform" for older data are over. It's time to introduce object storage into the mainstream as a fast, efficient platform for storing data locally -- one that competes with, and possibly blows away, cloud-based storage.
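To put those aggregate rates in perspective, a back-of-the-envelope calculation (my own, assuming decimal units and sustained throughput) shows how long moving a full petabyte would take at the reported speeds:

```python
# Time to move 1 PB at the aggregate rates reported for the Jasmin test.
# Assumes decimal units (1 PB = 1,000,000 GB) and sustained throughput;
# the rates are from the report, the conversion is mine.
PB_IN_GB = 1_000_000

def hours_to_move(petabytes: float, rate_gbps: float) -> float:
    """Hours needed to transfer `petabytes` at `rate_gbps` gigabytes/second."""
    return petabytes * PB_IN_GB / rate_gbps / 3600

print(f"read 1 PB at 35 GBps:    {hours_to_move(1, 35):.1f} hours")
print(f"write 1 PB at 12.5 GBps: {hours_to_move(1, 12.5):.1f} hours")
```

Reading a petabyte in under eight hours is the kind of number that used to belong to parallel file systems, not archival object stores, which is the point Barbagallo was making.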
You can search for the full report about Caringo and Jasmin on the web, or download it directly from Caringo's website. For now, the important news is we are starting to see a decoupling of the value case for object storage wares and public cloud services, greasing the wheels of cloud repatriation.
This may well be the beginning of the end of the latest wave of IT outsourcing, the kind that seems to accompany the recessionary economies that occur every couple of decades. The question that remains is whether the IT downsizing of the past decade or so of cloud-everything gutted some of the essential skills needed to repatriate IT on premises again.