SearchStorage.com contributor Mike Linett recently attended Storage Networking World. Mike stepped away from his duties as President of Zerowait Corporation to tell us which backup trends struck him at the show. Here is his account…
I just returned from the Storage Networking World (SNW) show in Orlando and was impressed.
The folks at SearchStorage.com asked me to pay particular attention to the backup and recovery side of the storage business while I was in Orlando. With this in mind, I spoke during the show with several former storage service providers (SSPs). I also talked with Ken Barth of Tek Tools, Larry Fox of BakBone and Don Campbell of Avamar. Each of these companies is testing the waters beyond the standard answer -- backup equals tape -- and looking for ways to address the critical issues of disaster recovery and backup without the administrative and time-tax overhead of a tape-only solution. Two major factors are pushing them in this direction: the remarkably low prices of bigger, faster, better ATA/IDE disks, and the growing realization that backups and archives are not necessarily the same thing.
With a few exceptions (for example, the HIPAA regulations, which effectively require hospitals to save every byte for 20 years, and the financial industry's equivalent, SEC Rule 17a-4), most people would probably agree that some data is more important than other data. The corollary follows: there's no need to put every bit of data in the enterprise offsite on tape.
Sometimes a near-line store will suffice. Some files are actually temporary and should be dumped immediately. Other files you'll need over and over for years, and still others you'll need once every five years. If we could somehow distinguish these varieties of files and store or trash them accordingly, we'd all be better off. Then there's the issue of Disaster Recovery (DR) -- after all, what constitutes a disaster? A single corrupted Word file shouldn't require restoration from the whole tape archive; a near-line store should be sufficient and is far preferable. Issues such as these have driven the renewed interest in Hierarchical Storage Management (HSM) in recent years, but putting HSM into action hasn't been an overwhelming success. It takes a lot of time and discipline to do HSM right, and both are in short supply.
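The triage described above is ultimately a policy decision, and even a crude policy can be automated. Here is a minimal Python sketch; the age thresholds and temp-file suffixes are purely illustrative assumptions, not recommendations:

```python
import os
import time

# Illustrative policy knobs -- every shop would tune these differently.
NEAR_LINE_AGE = 90 * 24 * 3600   # untouched for 90 days -> near-line disk
ARCHIVE_AGE = 365 * 24 * 3600    # untouched for a year  -> tape archive
TEMP_SUFFIXES = (".tmp", ".bak", "~")

def classify(path, now=None):
    """Return 'trash', 'archive', 'near-line' or 'active' for a file."""
    if path.endswith(TEMP_SUFFIXES):
        return "trash"                     # temporary files: dump immediately
    if now is None:
        now = time.time()
    age = now - os.stat(path).st_atime     # seconds since last access
    if age > ARCHIVE_AGE:
        return "archive"                   # rarely needed: offsite tape
    if age > NEAR_LINE_AGE:
        return "near-line"                 # occasionally needed: cheap disk
    return "active"                        # in daily use: primary storage
```

The hard part HSM never solved isn't this loop; it's agreeing on the thresholds and enforcing them consistently.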
Storage media isn't in short supply, however, and that's what companies have traditionally employed in an effort to cage this beast. They throw another array into the storage network, which adds to the amount of data to be backed up. So then they add another tape library. Safe again! But at what cost? Besides the capital outlay, more equipment means more administrative overhead. More complexity means more chance for error. Safe again?
It's time to get off the treadmill and rethink what we're trying to do here. Basically, it boils down to this: Companies want to be able to get to their data easily, they don't want to have to slog through a bunch of extraneous or redundant files to find what they need, and they want to know that if something horrible happens to the active data store, they have a method by which to restore it quickly and keep on working. Plus, they want it to be fairly simple to implement and understand.
Along these lines, a concept has arisen that places a secondary, near-line disk store between the active disk store and the archival tape storage. This is coming to be known as Disk to Disk to Tape (DDT). One of the promising features of DDT is that it might let companies redeploy otherwise obsolete storage arrays (older SCSI RAID sets, for instance) and less costly IDE arrays for the second D in the acronym. These arrays are perfect for that in-between spot, where capacity matters more than performance. To be honest, though, the newer ATA/IDE and SCSI drives are getting so much faster -- through caching and other means -- that claims of slower performance relative to Fibre Channel drives will soon be insupportable.
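The DDT flow boils down to two copy stages plus a restore path that checks the cheap disk tier before bothering the tape library. A minimal Python sketch, where the directory layout and function names are my own illustration rather than any vendor's API:

```python
import pathlib
import shutil

def backup(src, nearline):
    """Stage 1: copy an active file to the near-line disk tier (the second D)."""
    dest = pathlib.Path(nearline) / pathlib.Path(src).name
    shutil.copy2(src, dest)
    return dest

def migrate_to_tape(nearline_copy, tape_spool):
    """Stage 2: later, drain aged near-line copies to a spool the tape library empties."""
    nearline_copy = pathlib.Path(nearline_copy)
    dest = pathlib.Path(tape_spool) / nearline_copy.name
    shutil.move(str(nearline_copy), str(dest))
    return dest

def restore(name, nearline, tape_spool):
    """Single-file restores hit near-line disk first; only misses go to tape."""
    candidate = pathlib.Path(nearline) / name
    if candidate.exists():
        return candidate                    # quick path: no tape robot involved
    return pathlib.Path(tape_spool) / name  # slow path: recall from the tape tier
```

The point of the middle tier is visible in `restore`: the everyday case (one corrupted Word file) never touches tape at all.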
But hardware alone won't solve the problem. The experiences of the "throw more disks at it" crowd have made that clear over the past few years. Intelligence is required, and that means software management intelligence.
For example, as Don Campbell of Avamar explained, their Axion appliance helps you make the most of your secondary storage by finding and eliminating redundant sequences of data, slimming down file systems before dispatching them to tape. Further, their intelligent client agents, installed on systems throughout the enterprise, identify replicated data sequences within files and across systems before sending data over the network. They claim that in some cases they can reduce the amount of data to be backed up by a factor of 100. While Tek Tools does not sell a backup and recovery solution per se, Ken Barth says their software suite offers a superb toolset for identifying islands of storage and network bottlenecks, giving you a clear picture of your network and its data patterns. Then, for managing the backup process itself, you'll need something like BakBone's NetVault, which Larry Fox says uses an advanced modular software architecture. This lets NetVault support a very wide range of platforms, applications and storage devices, letting you manage and monitor the backup and archival process from start to finish.
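Avamar's actual algorithm isn't described here, but the general idea behind redundancy elimination -- store each unique chunk of data once and keep only references for the duplicates -- can be sketched with content hashing. A toy Python version using fixed-size chunks (real products typically use smarter, variable-size chunking):

```python
import hashlib

CHUNK = 4096  # fixed-size chunks keep the sketch simple

def dedup_store(data, store):
    """Split data into chunks and store each unique chunk once, keyed by its hash.
    Returns the 'recipe' (ordered list of chunk hashes) needed to rebuild the data."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha1(chunk).hexdigest()
        store.setdefault(key, chunk)   # a duplicate chunk costs only a hash entry
        recipe.append(key)
    return recipe

def reassemble(recipe, store):
    """Rebuild the original byte stream from a recipe of chunk hashes."""
    return b"".join(store[key] for key in recipe)
```

On highly redundant data -- say, a backup containing 100 identical chunks -- the store holds one chunk plus 100 small hash references, which is where reduction claims on the order of 100x come from.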
Coming from a different viewpoint were the Storage Service Providers. Instead of helping companies manage their own storage more simply, they continue to recommend outsourcing. I was surprised and baffled by the number of former SSPs that have now redefined themselves as software companies. After listening to their explanations and pricing models, I was left wondering why an enterprise would want to outsource to them and invest in software whose legacy is a failed business model. The SSP representatives spoke about reinventing themselves around their core competencies and explained to me how their model would let companies buy storage management on demand, priced by hosts and terabytes used.
I couldn't help but wonder how Microsoft would have fared if it had charged me by the number of hard drives I put in my PC. I can't see myself paying Microsoft extra for the ability to write 1,000 letters rather than just 10, so why would I pay a storage software vendor incremental amounts for managing more data? One would think the software could manage 10GB of data at the same price as 10TB. The only way this model made sense was if they were not software vendors at all, but hardware rental agencies. And we all saw how that worked out.
Most of my clients dislike proprietary vendors and would like storage to work more like light bulbs: when a GE bulb goes bad, I can replace it with a Sylvania or Philips bulb and the lights still work. Wouldn't it be great if storage worked like that? I spoke at length with Chris Croteau of Intel about this issue, and it seems Intel is working toward a true standards-based solution built on Serial ATA (SATA) interface technology, which is expected to be faster (Ultra SATA/1500 at 1.5Gbps) and simpler to cable and route than the currently available parallel ATA interfaces. I believe Intel is on the right track, and low-priced IDE and SCSI storage will soon marginalize Fibre Channel drives and arrays. Along these same lines, the products from Nexsan offer excellent alternatives to Fibre Channel right now.