
Cut out the fat

It's not too tough to make short-term cuts to reduce costs, but most companies find it difficult to sustain those cost savings over a longer stretch of time. Here's how to make cost cutting a long-range, ongoing effort.

Cutting storage costs is easy; making the cuts sustainable is the difficult part.

MANY CLIENTS ASK US to "cut storage costs." Whether it's reducing the unit cost of storage, the overall total cost of ownership or the potential costly risks associated with poor storage management, it's easy for someone with the experience, sponsorship and time to identify cost-saving opportunities.

But what happens after the project is completed? A typical cost-cutting initiative is a short-term success, driven by the need to produce immediate, tangible results. Surprisingly, however, these efforts are often long-term failures. The cutting tends to address only the symptoms, while overlooking--or even exacerbating--systemic issues that are less readily apparent.

Companies that aren't willing or ready to invest in sustainable solutions will find themselves going through the same cost-cutting exercise a year later. Here's how to move from short-term cost cutting to sustainable cost consciousness in your organization.

For most IT managers, the pressure to reduce costs is constant. They're often put on the spot, parrying questions about storage. If the cost-per-gigabyte of disk storage is dropping, why do costs keep going up? Why is data volume growing at an unchecked pace, driving purchase after purchase of more disk? What are all those storage people doing--does your team really need to be that big?

Over the last 15 to 20 years, some application design has come to rely more on what's perceived as cheap and ubiquitous storage. Rather than using disciplined, sophisticated engineering to produce applications that perform better, many application engineers specify high-performance storage arrays and ample disk volume to meet performance requirements. Worse, they often neglect to design applications that purge data in a manner that complies with business and regulatory requirements. As a result, many companies end up keeping everything forever.

Compounding the problem
This problem can be exacerbated when firms overuse data protection and availability solutions. Replication and point-in-time copies multiply storage demand by a factor of two or three, driving more storage volume into the data center.
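The multiplier effect described above can be sketched with some back-of-the-envelope arithmetic. The function and the figures below are illustrative assumptions, not numbers from any particular environment:

```python
# Hypothetical illustration: how replication and point-in-time copies
# multiply the raw capacity a single application consumes.

def total_footprint_gb(primary_gb, replicas=1, pit_copies=1):
    """Primary data plus full replicas plus point-in-time copies.

    Assumes each replica and each point-in-time copy consumes a full
    copy of the primary data (no deduplication or thin snapshots).
    """
    return primary_gb * (1 + replicas + pit_copies)

# A 500 GB database with one remote replica and one full clone
# occupies 1,500 GB of raw storage -- a 3x multiplier.
print(total_footprint_gb(500, replicas=1, pit_copies=1))  # 1500
```

Thin snapshots and deduplication shrink the multiplier, but the point stands: every protection copy you add is storage demand somewhere.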

Defining cost savings
As you define cost savings, don't fall into the trap of focusing only on hard dollars. Here are some potential ways of looking at cost reductions:
  • Increases in storage utilization through reduced consumption or consolidated requirements can yield capacity at no cost. Depending on your capacity requirements, this is usually a cost deferral and won't be treated as hard-dollar savings.
  • By improving service levels to business units, a company can better respond to market conditions, reducing the cost of doing business.
  • A proven disaster recovery (DR) plan can mitigate the potential for significant damage--and costs--to day-to-day business. While the cost savings that result from a DR plan are hard to quantify, a DR plan goes a long way toward aligning the needs of the business to the infrastructure that supports them.
  • Tiering storage lets data be moved to lower cost-per-unit tiers of service.
  • Increasing storage staff productivity through automation and process efficiencies can help control payroll costs.
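The tiering item above lends itself to a quick estimate. This is a minimal sketch with made-up per-gigabyte prices; substitute your own tier costs:

```python
# Hypothetical sketch: estimated annual savings from moving data to a
# cheaper tier. Prices per GB per month are illustrative assumptions.

TIER_COST_PER_GB_MONTH = {
    "tier1": 0.50,   # high-performance array (assumed price)
    "tier2": 0.20,   # midrange disk (assumed price)
    "tier3": 0.05,   # archive/tape-backed (assumed price)
}

def annual_savings(gb, from_tier, to_tier, costs=TIER_COST_PER_GB_MONTH):
    """Yearly savings from relocating `gb` of data between tiers."""
    delta = costs[from_tier] - costs[to_tier]
    return gb * delta * 12

# Moving 10 TB of stale data from tier1 to tier3:
print(annual_savings(10_000, "tier1", "tier3"))  # 54000.0
```

Even rough numbers like these help when you're negotiating with finance over which savings count as hard dollars and which are deferrals.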

The cost issue may be amplified by the way in which storage was incorporated into the infrastructure. Companies typically allocated storage on a project-by-project basis. The result was a mishmash of underutilized infrastructure supporting a bloated application base.

One company's immediate response to this problem was to reduce the amount of data going to tape media. It's a simple proposition: Reduce the volume of data being backed up and save money on tape. The company was able to reduce the data volume of its weekly backup cycle by approximately 30%, which effectively trimmed media usage and extended the life of the backup environment by reducing its load and making capacity available for future requirements. But was this the right solution?

Although apparently successful, this effort didn't address the policy and behavior modifications that have lasting benefits. The waste of protecting application system and temp files, MP3s and the like was eliminated. But sometime in the future, a new application will be installed, new clients will come online and a new crop of non-critical file types will appear. There's a good chance that these files won't be among the file types identified in the original effort, so wasteful backups will resume. It's also possible that the exclusions originally identified will lose their validity as the application, infrastructure or business environment changes.

Savings that last
The best way to realize ongoing cost reductions is to shift the focus from technology to process. By thinking in terms of people and process, you'll have a better chance of transforming one-time events into permanent processes that save money. In any organization, there will likely be widely varying views on what cost savings are, how they're measured and to whom the saved costs will be allocated. Spend some time discussing your cost-cutting framework with finance, IT management and business users to gain a consensus. In some cases, sponsorship from the highest levels of the organization will be needed to align the various groups around what's best for your company. This alignment phase will not only help validate your results, but will serve to gain participation from data owners.

You should next look at ways to install processes that make cost-saving opportunities repeatable. For example, have you created a policy for not backing up specific file types? Incorporate that policy into the overall backup process. As noted earlier, shifting business requirements may invalidate the policy, so include a schedule of steps for the storage team to maintain an ongoing dialog with data owners. You should also consider automating some of the newly minted processes to ease the storage team's workload. Perhaps most importantly, use your documented processes to enforce accountability.
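A file-type exclusion policy like the one described above is a good candidate for that kind of automation. Here's a minimal sketch of codifying the policy so it lives in the backup process rather than in one engineer's head; the extension list and helper name are hypothetical:

```python
import os

# Hypothetical exclusion policy: file types the business has agreed
# not to protect. Review this list with data owners on a schedule.
EXCLUDED_EXTENSIONS = {".mp3", ".tmp", ".iso", ".dmp"}

def should_back_up(path, excluded=EXCLUDED_EXTENSIONS):
    """Return True unless the file's extension is on the exclusion list."""
    _, ext = os.path.splitext(path.lower())
    return ext not in excluded

files = ["/data/report.docx", "/home/user/song.mp3", "/var/app/core.dmp"]
to_back_up = [f for f in files if should_back_up(f)]
print(to_back_up)  # ['/data/report.docx']
```

Keeping the exclusion list in one reviewable place is the point: when requirements shift, the storage team and data owners update a policy artifact, not tribal knowledge.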

What about execution? To avoid technical mishaps, whether from error or omission, document all storage processes. By creating repeatable procedures and paying particular attention to change and configuration management, you can reduce reliance on any one person--and eliminate the most common single point of failure in storage: the individual. Reduce errors and lessen the burden on your star performers by capturing knowledge and inserting the right checks and balances to keep operations running smoothly.

Instead of a one-time fix, cost cutting should be viewed as a valuable precursor to long-term cost-reduction initiatives. By investing in the repeatability of processes, connectivity with the business and alignment across different parts of the organization, you can create a cost consciousness that takes your bright ideas and makes them part of the operation. That's a win-win situation: You get more time to focus on storage innovation, and the business gets more for less.
