Not long ago, I received an email message from Promise Technology regarding its "first consumer product," which the company unveiled at the Consumer Electronics Show in January. Called Apollo, the product was an "appliance" that stored "up to 4 TB of data assets," including "treasured family photos and videos that proliferate like wildfire on family member phones, tablets and laptops, and at increasingly high resolutions." The product sounded to me like a network-attached storage array, but it was, according to the vendor, "a personal cloud appliance" -- presumably because it's accessed via an app.
I don't want to pick on Promise, whose products I use and generally find to be of good quality, but this "cloud storage" product concept has become increasingly pervasive in the market these days, whether aimed at home or business users. In the process, the concept of a traditional NAS -- a network-attached file server connected to its own storage array -- is being usurped. In a way, the marketing around the Promise product provides a bit of a microcosm for a broader discussion of the architectural pluses and minuses of "cloud appliances" aimed at small, medium and large businesses.
The proprietary problem
In the case of Apollo, as with other cloud appliances, we have a problem of a proprietary nature. Promise generally makes good kit, and open kit at that. In this case, the notion of proprietary is introduced as a function of operational compatibility. Apollo will be sold "exclusively through Apple stores." That already hints at a proprietary product. I'm not a big fan of Apple anything, though we have some iMacs, MacBooks and even an iPhone or iPad participating on the sprawling home network in my house. While a product like Apollo might get some play, I suppose, I doubt that the non-Apple devices would find it very useful, especially if the app required for access is only iOS- or OS X-compatible. If it's like other Apple storage products, there may also be an issue of file system compatibility with the many Windows and Linux systems and devices deployed on the network.
NAS vendors have worked for many years to ensure that their products could be accessed via "open" network file system protocols so that data could be stored and retrieved by any device. No proprietary apps or protocols were needed to get my files down from an Isilon, NetApp or Promise NAS array as long as I knew the IP address of the box. Sacrificing this universality on the altar of simplicity or ease of setup is concerning.
The key point here, however, is that proprietary cloud appliances are by nature exclusionary and limited in their support of the broadest number of use cases. Buying something that is aimed at a specific OS, hypervisor or file system -- even if it "simplifies" deployment and use in the short term -- is creating a technology stovepipe in your environment. That's tantamount to shooting yourself in the foot. While the product may plug and play today, it may pose some serious and expensive challenges in the future.
The capacity conundrum
I also have a problem with the capacity of the basic Apollo product. These days, a home data storage repository with 4 TB of capacity isn't all that grand. (Indeed, individual disk drives that exceed that capacity are available!) Apollo's 4 TB size seems to reflect current analyses that peg internet data consumption rates within typical households at about 3 GB per week, or roughly 150 GB per year. Basically, product designers are making the rather modest assumption that something less than 150 GB of data per year will be added to local storage in the home, making 4 TB more than adequate for keeping all the pictures and videos safe.
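The arithmetic behind that design assumption is easy to sketch. The helper below is illustrative only; the ingest figures are the article's ballpark numbers, not vendor data:

```python
# Rough capacity planning: how long would a 4 TB appliance last at a
# given ingest rate? All figures are illustrative.

def years_to_fill(capacity_gb: float, ingest_gb_per_year: float) -> float:
    """Return the number of years until the repository is full."""
    if ingest_gb_per_year <= 0:
        raise ValueError("ingest rate must be positive")
    return capacity_gb / ingest_gb_per_year

# At the ~150 GB/year internet-consumption figure, 4 TB looks generous:
print(round(years_to_fill(4000, 150), 1))   # roughly 26.7 years

# But a household creating and ripping media locally might add 1 TB/year:
print(years_to_fill(4000, 1000))            # 4.0 years
```

The gap between those two answers is exactly the point: the appliance is sized to the wrong metric.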
The real story is much different. Internet or mobile data use is in no way a gauge of how much data is created and stored locally in a home -- or in a business. In my house, 8 TB hard disks fill up very quickly with games, movies, music, artwork, writing projects, and so on.
In a business, the capacity burn rate is even greater -- upwards of 40% per year according to IDC, or 10 to 20 times that rate if your servers are virtualized. This helps to explain why the NAS folks have gone to such pains to build new scalability techniques into their products -- whether capacity growth is achieved by adding more storage media or by linking to back-end Linear Tape File System tape or by attaching to a public cloud via a network gateway.
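To see why that growth rate forces scalability into the design, compound it out over a few years. A minimal sketch, assuming the ~40% annual figure cited above:

```python
# Compound capacity growth at an assumed annual burn rate.
def projected_capacity(start_tb: float, annual_growth: float, years: int) -> float:
    """Capacity needed after `years` of compound growth."""
    return start_tb * (1 + annual_growth) ** years

# At ~40%/year, a 100 TB estate roughly doubles every two years:
for y in range(5):
    print(y, round(projected_capacity(100, 0.40, y), 1))
```

A repository that cannot add media, tape or cloud capacity behind the same namespace is overwhelmed on that curve within a couple of budget cycles.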
There is also a need, in the business world at least, to think about capacity burn rates in terms of operational economics. It makes little sense to use expensive hard drives or flash storage in a NetApp rig to store files that are hardly ever accessed. Typically, smart administrators want to place data where its re-reference and revision characteristics require it to be placed. "Hot" data that changes a lot and is accessed frequently needs to go on SSDs or fast disk, while "colder" data needs to be stored on capacity media, preferably tape. And if you are just putting data to sleep, think public cloud.
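A tiering policy of the kind described above can be reduced to a simple decision rule. The thresholds below are illustrative assumptions, not taken from any vendor's product:

```python
# Toy placement policy: route data to a tier based on how recently it
# was referenced and how often it changes. Thresholds are assumptions.

def place(days_since_access: int, changes_per_month: int) -> str:
    if days_since_access <= 7 or changes_per_month >= 10:
        return "ssd"    # hot: frequently read or revised
    if days_since_access <= 90:
        return "disk"   # warm: occasional re-reference
    if days_since_access <= 365:
        return "tape"   # cold: rarely touched, keep it cheap
    return "cloud"      # asleep: park it off-site

print(place(2, 30))    # ssd
print(place(30, 0))    # disk
print(place(200, 0))   # tape
print(place(800, 0))   # cloud
```

Real hierarchical storage managers weigh more signals than this, but the principle is the same: placement follows re-reference and revision behavior, not convenience.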
That's the philosophy behind products like Crossroads Systems' StrongBox, or that product used in conjunction with the dTernity Media Service from Fujifilm, which adds a layer of massive, low-cost, tape-based cloud storage to the mix. According to Crossroads spokespersons, such a "bottomless" network-accessible storage repository begins to appeal at about 30 TB of data. That doesn't make it an alternative to Apollo for home use, but in a small, medium or larger office, the economics of NAS-to-tape-to-cloud make a lot of sense.
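A back-of-the-envelope comparison shows why the economics tip at scale. The per-gigabyte prices below are purely illustrative placeholders, not quoted figures from Crossroads, Fujifilm or anyone else:

```python
# Illustrative tier economics for a 30 TB repository.
# $/GB figures are placeholder assumptions, not quoted prices.

COST_PER_GB = {"flash": 0.50, "disk": 0.05, "tape": 0.01}

def tier_cost(gb: float, mix: dict) -> float:
    """Cost of storing `gb` split across tiers; mix maps tier -> fraction."""
    return sum(gb * frac * COST_PER_GB[tier] for tier, frac in mix.items())

gb = 30_000  # 30 TB
all_disk = tier_cost(gb, {"disk": 1.0})
tiered   = tier_cost(gb, {"disk": 0.2, "tape": 0.8})
print(all_disk, tiered)  # the tiered mix is markedly cheaper
```

However the actual prices move, the shape of the result holds: once most of the data is cold, pushing it to tape or cloud beats keeping it all on spinning disk.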
Some vendors offer workable alternatives
Another alternative is object storage, such as the approach advanced by Caringo and a few other vendors. Cluster a number of smaller, inexpensive "cloud appliances" (actually hyper-converged server/storage nodes) and present them to users via either a standard NAS access method or a "cloud" interface (RESTful commands via a browser or app). Objects can be segregated by policy, protected in a manner appropriate to their importance and use, and accessed via familiar and simple interfaces as though they were all on the same big disk drive. An object storage model is already seen as the future of network storage and is likely to become part of any "cloud appliance" pitch in the future -- even from Promise, which has done some significant work on object storage in recent years.
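The core idea -- a flat namespace of objects carrying their own protection policy, rather than files pinned to volumes -- can be illustrated with a toy in-memory sketch. The class and policy names here are invented for illustration and don't correspond to any vendor's API:

```python
# Toy illustration of the object-storage model: objects addressed by key
# in one flat namespace, each tagged with a protection policy. All names
# here are hypothetical, not any real product's interface.

class ObjectStore:
    def __init__(self):
        self._objects = {}  # key -> (data, policy)

    def put(self, key: str, data: bytes, policy: str = "replicate-2") -> None:
        """Store an object; the policy tag drives protection, not the user."""
        self._objects[key] = (data, policy)

    def get(self, key: str) -> bytes:
        data, _policy = self._objects[key]
        return data

    def policy(self, key: str) -> str:
        return self._objects[key][1]

store = ObjectStore()
store.put("photos/2016/beach.jpg", b"jpeg bytes", policy="erasure-code")
print(store.get("photos/2016/beach.jpg"))  # the stored bytes, from one flat namespace
print(store.policy("photos/2016/beach.jpg"))
```

The keys look like paths, but there is no directory tree or volume underneath -- which is what lets a cluster of nodes present itself as "one big disk drive."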
Bringing this to a speedy conclusion, my recommended "best practice" for home-based network-accessible storage is to buy and deploy whatever you want and can afford, remembering that losing your pictures and movies will not bring about the apocalypse in any case.
However, if your business depends on the data you're storing, be a bit more circumspect before you ditch NAS for simple "cloud appliances." First, think about compatibility and look for storage that will work with all of your data. Second, look at the scalability and economics of the product. For the money, capacity and resiliency, mixing HDD/SSD, tape and cloud makes a lot of sense -- even if it is more challenging to access with a smartphone. Finally, stay tuned for real cloud storage -- not just storage in an off-site data center on the Internet, not just storage accessed via a smartphone or tablet app, but an object-based repository featuring the clustering and virtualization of physical storage infrastructure and a robust data management and access component. That's a real cloud appliance.