When we think about archiving strategies, we often talk about the past and what we know about our data. When it comes to the future, we cannot possibly know all of the events, trends and changes that will affect our archive requirements: the data we need to preserve, the platform we use to preserve it, and the tools required to make it accessible to future users and systems. This realm of "unknown unknowns" unfortunately has a way of making an archiving strategy seem, to many corporate bean counters, more like a thought experiment than part of a real strategic initiative that will contain costs, reduce risk or improve productivity.
When archive programs are approved and funded, they are often run as standalone projects. In many cases, archive projects have their own staff and their own processing, network and storage infrastructure, quite apart from the production data center. Even cloud service providers like Amazon Web Services or Google offer discrete archival storage services that are separate and distinct from their managed hosting and storage services.
However, with the emergence of interest in big data analytics and the appearance of server-side and software-defined storage infrastructures, the model of standalone archive strategies is being called into question. Big data analytics, which applies a set of technologies to examining ongoing trends of multiple and otherwise unrelated data sets, sees no data as archival. Instead, all data is active and has value in day-to-day business decision making or problem resolution. Archives have no real meaning in a framework like this.
Moreover, the movement of storage architecture away from a centralized pool or repository and toward a series of discrete, server-side, direct-attached configurations connected to individual server nodes in a cluster -- whether to support federated processing frameworks like Hadoop and MapReduce or workload virtualization strategies like VMware Virtual SAN or Microsoft Clustered Storage Spaces -- is challenging traditional notions of storage tiering, in which the tertiary tier holds archival data. Companies embracing these so-called agile philosophies of infrastructure design have no room for a standalone archive practice.

Clearly, the way we conceive of an archive needs to change. We will need to stop thinking of an archive as a set of operations and infrastructure separate and distinct from production operations and infrastructure -- a "bolt-on" set of technologies and services -- and instead look for an archive-in-place strategy. Archive in place, fundamentally, means leaving archival data where it is physically located, but marking the data and perhaps applying special services befitting its archival class.