Dealing with big data: The storage implications

These boundaries can be encountered on multiple fronts:

The transaction volume can be so high that traditional data storage systems hit bottlenecks and can’t complete operations in a timely manner. They simply don’t have enough processing horsepower to handle the volume of I/O requests, or enough spindles in the environment to service them all. This often leads users to put less data on each disk drive and “short stroke” them, meaning each drive is only partially filled so that there are more spindles per GB of data and more disk drives available to handle I/O. It can also lead users to deploy lots of storage systems side by side without ever approaching their full capacity, because the performance bottleneck hits first. Or they do both. Either way it’s an expensive proposition, because it means buying lots of disk drives that will sit mostly empty.
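To see why the economics go bad, here is a minimal back-of-envelope sketch of the spindle math, written in Python. The figures in it (200,000 IOPS of demand, roughly 200 random IOPS and 2 TB of capacity per 10K RPM drive, a 100 TB data set) are illustrative assumptions, not measurements from any particular system.

# Hypothetical sizing exercise: drives needed for IOPS vs. drives needed for capacity.
# All figures are illustrative assumptions, not vendor specifications.

workload_iops = 200_000      # assumed peak I/O requests per second
iops_per_drive = 200         # assumed random IOPS one 10K RPM drive can sustain
drive_capacity_tb = 2        # assumed usable capacity per drive, in TB
dataset_tb = 100             # assumed size of the data set, in TB

drives_for_capacity = dataset_tb / drive_capacity_tb   # 50 drives would hold the data
drives_for_iops = workload_iops / iops_per_drive       # 1,000 drives needed to serve the I/O

utilization = dataset_tb / (drives_for_iops * drive_capacity_tb)

print(f"Drives needed for capacity alone: {drives_for_capacity:.0f}")
print(f"Drives needed to meet IOPS:       {drives_for_iops:.0f}")
print(f"Capacity utilization when sized for IOPS: {utilization:.0%}")
# With these assumptions the spindle count is set by I/O, not capacity,
# and each drive ends up about 5% full -- the "mostly empty" drives above.

The exact numbers don't matter; the point is that whenever the spindle count is dictated by I/O rather than capacity, utilization collapses.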

The size of the data (individual records, files or objects) can mean traditional systems don’t have sufficient throughput to deliver it in a timely manner. They simply don’t have enough bandwidth to handle the transfers. We see organizations use short stroking to add spindles and increase system bandwidth in this case as well, which, again, leads to poor utilization and increased expense.
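A similar rough sketch covers the bandwidth case. Again, the numbers (a 20 GB/s aggregate throughput target, 100 MB/s of sustained streaming per drive, 4 TB drives, a 200 TB data set) are assumptions chosen only to illustrate the arithmetic.

# Hypothetical throughput sizing: drives needed for bandwidth vs. capacity.
# All figures are illustrative assumptions.

required_mb_per_s = 20_000    # assumed aggregate throughput target: 20 GB/s
mb_per_s_per_drive = 100      # assumed sustained streaming rate per drive
drive_capacity_tb = 4         # assumed usable capacity per drive, in TB
dataset_tb = 200              # assumed size of the large-file data set, in TB

drives_for_bandwidth = required_mb_per_s / mb_per_s_per_drive  # 200 drives for throughput
drives_for_capacity = dataset_tb / drive_capacity_tb           # 50 drives would hold the data

utilization = dataset_tb / (drives_for_bandwidth * drive_capacity_tb)

print(f"Drives needed for capacity alone: {drives_for_capacity:.0f}")
print(f"Drives needed for throughput:     {drives_for_bandwidth:.0f}")
print(f"Capacity utilization when sized for bandwidth: {utilization:.0%}")
# Here the spindle count is set by bandwidth, so most of the purchased
# capacity sits unused.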

The overall volume of content can be so high that it exceeds the capacity threshold of traditional storage systems. They simply don’t have enough capacity to deal with the volume of data. This leads to storage sprawl -- tens or hundreds of storage silos, with tens or hundreds of points of management, typically with poor utilization and consuming an excessive amount of floor space, power and cooling.
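The sprawl itself is simple arithmetic. The sketch below assumes 10 PB of content, 150 TB of usable capacity per traditional array and a realistic 60% fill level, purely for illustration.

# Hypothetical sprawl estimate: how many traditional arrays a large content
# store fans out into. Figures are illustrative assumptions only.

total_content_pb = 10          # assumed total content under management, in PB
usable_tb_per_array = 150      # assumed usable capacity of one traditional array, in TB
effective_utilization = 0.6    # assumed realistic fill level per array

arrays_needed = (total_content_pb * 1000) / (usable_tb_per_array * effective_utilization)
print(f"Separate storage silos to manage: {arrays_needed:.0f}")
# With these assumptions, well over 100 arrays -- each a separate point of
# management drawing its own floor space, power and cooling.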

It gets very intimidating when these problems pile on top of each other -- there’s nothing to say users won’t face a huge number of I/O requests against an enormous volume of data made up of extremely large files.

This was first published in May 2012
