Managing and protecting all enterprise data
Keep it simple, stupid: Part deux: Storage Bin 2.0

Are you one of those companies that leave inactive data on their most expensive architecture and apply stringent (expensive) processes to that same data? If so, wise up!

Make sure you aren't using your priciest options to accommodate changing data.

In my last column, I described the four lifecycle stages related to data. Here's a quick review:

  1. Dynamic active online data

  2. Persistent active online data

  3. Persistent inactive online/nearline data

  4. Persistent inactive offline data
So where does staging fit? Staging is all about how data, once born, lives in an ever-changing state until it's permanently archived or deleted. Whether we're talking about a PowerPoint document or an ATM transaction, the birthing of this data has an impact on your larger environment and needs to be treated with consistent, high-level care. As data changes, you should take a good look at any assumptions that dictated IT processes in the previous stages and decide if you need to modify them. The final stage (persistent inactive offline data) is all about checking the box and hoping you never need to think too much about actually invoking a recovery from there.
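The four stages boil down to two questions: has the data changed recently, and has anyone read it recently? As a rough sketch of that idea, here's a stage classifier. Every threshold and name below is an assumption for illustration, not something the column prescribes; tune the windows to your own environment.

```python
from datetime import datetime, timedelta

# Hypothetical cutoffs -- assumptions, not vendor or author guidance.
ACTIVE_WINDOW = timedelta(days=30)   # modified this recently => "dynamic"
ONLINE_WINDOW = timedelta(days=180)  # read this recently => keep online/nearline

def lifecycle_stage(last_modified: datetime, last_read: datetime,
                    now: datetime) -> int:
    """Map an object's access history onto the four lifecycle stages."""
    if now - last_modified <= ACTIVE_WINDOW:
        return 1  # dynamic active online data
    if now - last_read <= ACTIVE_WINDOW:
        return 2  # persistent active online data
    if now - last_read <= ONLINE_WINDOW:
        return 3  # persistent inactive online/nearline data
    return 4      # persistent inactive offline data

now = datetime(2010, 1, 1)
print(lifecycle_stage(now - timedelta(days=5),   now - timedelta(days=1),   now))  # 1
print(lifecycle_stage(now - timedelta(days=90),  now - timedelta(days=10),  now))  # 2
print(lifecycle_stage(now - timedelta(days=400), now - timedelta(days=90),  now))  # 3
print(lifecycle_stage(now - timedelta(days=400), now - timedelta(days=300), now))  # 4
```

The point isn't the specific numbers; it's that the stage is something you can compute and act on, rather than leave to inertia.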

For example, company ABC is exactly like your company. It sells things and runs business systems. Employees do payroll, send invoices, collect money and do online transactions. There's order processing and ERP, and packages and customers are tracked. Documents are written, code is produced and advertisements are crafted. The firm has a Web site, a forum, a service portal and a knowledge system. It runs collaboration tools and telephones. Information of every description is kept online and e-discovery requests are received. It also has lawyers, accountants and auditors, just like your firm.

The organization's business systems run on top of Oracle, just like yours. It runs mainframes, Sun boxes, Hewlett-Packard, IBM or whatever, just like your company. That data is stuffed into EMC or Hitachi Data Systems arrays, just like yours. There are distributed servers, as well as NAS systems from NetApp, and there are small SANs with Windows Server, SharePoint and Exchange, just like at your company. Employees deal with workstations and WANs, remote offices, moron bosses and idiot users just like at your firm.

And for each of its apps, company ABC has an infrastructure that tends to be the same stuff it used when the data was created, as well as a "never-going-to-change-and-let's-hope-we-never-access-it" stage. Just like you have. But what people forget to do is pay attention when data moves from stage one to stage two, and then again to stage three. People leave data that's no longer "dynamic" on their most expensive infrastructure, and apply their most stringent (expensive) processes to that data. They do endless backups, disaster recovery (DR) replications, snapshots, copies and cloning to make sure they can do business the way they did when the data was dynamic, even though it isn't dynamic anymore. This means they run out of space, bandwidth, archive space, tape and so on. Then they have to go back to the boss and ask for more money.

But not you. You realize that when data passes between stages you should adjust your processes accordingly. By doing so, you don't need to get more expensive gear, bandwidth or people. You simply migrate the data based on your own subjective, intelligent rules. You alter your backup/DR/test planning so it creates fewer copies of the same data and breathe much easier in general. You add helpful technologies, such as data dedupe, to further reduce the need to go back to the well. You're smart and popular. You, my friend, have mastered the art of common sense. Certificates are forthcoming.
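One way to make "adjust your processes when data changes stages" concrete is a simple policy table: each stage gets a target tier and a protection regime, and anything sitting on the wrong tier for its stage becomes a migration candidate. The tiers, policies and object names below are hypothetical placeholders, a sketch of the approach rather than a recommendation.

```python
# Hypothetical stage -> tier/protection mapping; values are illustrative only.
POLICY = {
    1: {"tier": "primary SAN",          "protect": "hourly snapshots + DR replication"},
    2: {"tier": "primary SAN",          "protect": "nightly backup + DR replication"},
    3: {"tier": "nearline NAS/SATA",    "protect": "weekly backup, deduplicated"},
    4: {"tier": "tape/offline archive", "protect": "archive copy only"},
}

def migration_plan(objects):
    """Yield (name, target_tier, protection) for every object whose
    current tier no longer matches its lifecycle stage."""
    for name, stage, current_tier in objects:
        target = POLICY[stage]
        if current_tier != target["tier"]:
            yield name, target["tier"], target["protect"]

# Example: last quarter's closed invoices are still parked on primary storage.
stale = [("2009-Q3-invoices", 3, "primary SAN")]
for name, tier, protection in migration_plan(stale):
    print(f"move {name} -> {tier}; protect with: {protection}")
```

The payoff the column describes falls straight out of the table: stage-3 and stage-4 data stops getting stage-1 treatment, so backups, replicas and clones shrink without anyone buying more gear.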


