
Storage experts pan report on tape archiving TCO

The disk vs. tape debate that has been going on for years is heating up again, given technologies like data deduplication that are bringing disk costs into line with tape.

Or, at least, so some people believe.

The Clipper Group today released a report, sponsored by the LTO Program, comparing five-year total cost of ownership (TCO) for data in tiered disk-to-disk-to-tape versus disk-to-disk-to-disk configurations. The conclusion?

“After factoring in acquisition costs of equipment and media, as well as electricity and data center floor space, Clipper found that the total cost of SATA disk archiving solutions were up to 23 times more expensive than tape solutions for archiving. When calculating energy costs for the competing approaches, the costs for disk were up to 290 times that of tape.”

Let’s see … sponsored by the LTO trade group … conclusion is that tape is superior to disk. In Boston, we would say, “SHOCKA.”
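
To see why a single headline multiplier invites scrutiny, it helps to spell out what goes into a comparison like this. The Python sketch below assembles a five-year TCO from the same cost categories the report cites (acquisition of hardware and media, electricity, and data center floor space); every price, wattage, and footprint in it is a hypothetical placeholder, so it illustrates the method rather than either technology's real cost.

    # Illustrative five-year TCO built from the report's cost categories.
    # Every figure below is a hypothetical placeholder, not data from the report.

    YEARS = 5
    PRICE_PER_KWH = 0.10          # assumed electricity price, $/kWh
    FLOOR_SPACE_PER_SQFT = 300    # assumed floor space cost, $/sq ft per year

    def five_year_tco(hardware, media, watts, sq_ft):
        """Sum acquisition, energy, and floor-space costs over five years."""
        energy = watts / 1000 * 24 * 365 * YEARS * PRICE_PER_KWH
        floor_space = sq_ft * FLOOR_SPACE_PER_SQFT * YEARS
        return hardware + media + energy + floor_space

    # Hypothetical archive tiers holding the long-term copies:
    disk_tco = five_year_tco(hardware=400_000, media=0,      watts=6_000, sq_ft=40)
    tape_tco = five_year_tco(hardware=150_000, media=60_000, watts=400,   sq_ft=20)

    print(f"Disk archive, five-year TCO: ${disk_tco:,.0f}")
    print(f"Tape archive, five-year TCO: ${tape_tco:,.0f}")
    print(f"Disk-to-tape cost ratio: {disk_tco / tape_tco:.1f}x")

Under these invented inputs the ratio comes out around 2x; change the assumptions and it moves dramatically, which is the crux of the criticism that follows.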

This didn’t get by “Mr. Backup,” Curtis Preston, either, who gave the white paper a thorough fisking on his blog today. His point-by-point criticism should be read in its entirety, but he seems primarily outraged by the omission of data deduplication and compression from the disk side of the equation.

How can you release a white paper today that talks about the relative TCO of disk and tape, and not talk about deduplication?  Here’s the really hilarious part: one of the assumptions that the paper makes is both disk and tape solutions will have the first 13 weeks on disk, and the TCO analysis only looks at the additional disk and/or tape needed for long term backup storage.  If you do that AND you include deduplication, dedupe has a major advantage, as the additional storage needed to store the quarterly fulls will be barely incremental.  The only additional storage each quarterly full backup will require is the amount needed to store the unique new blocks in that backup.  So, instead of needing enough disk for 20 full backups, we’ll probably need about 2-20% of that, depending on how much new data is in each full.

TCO also can’t be done so generally, as pricing is all over the board.  I’d say there’s a 1000% difference from the least to the most expensive systems I look at.  That’s why you have to compare the cost of system A to system B to system C, not use numbers like “disk cost $10/GB.” 
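
To put rough numbers on the deduplication argument, here is a back-of-the-envelope sketch in Python; the inputs (a 100 TB full backup, 20 retained quarterly fulls, a 5% change rate per full) are hypothetical placeholders rather than figures from the report or from Preston's post.

    # Back-of-the-envelope illustration of the deduplication argument.
    # All inputs are hypothetical placeholders, not figures from the report.

    full_backup_tb = 100      # size of one full backup, in TB (assumed)
    retained_fulls = 20       # quarterly fulls kept over five years
    change_rate = 0.05        # assume 5% of each full is new/unique data

    # Without dedupe, every retained full occupies its entire size on disk.
    raw_disk_tb = full_backup_tb * retained_fulls

    # With dedupe, disk holds one full copy plus only the unique new blocks
    # contributed by each subsequent full.
    deduped_disk_tb = full_backup_tb + full_backup_tb * change_rate * (retained_fulls - 1)

    print(f"Raw disk needed:     {raw_disk_tb:.0f} TB")
    print(f"Deduped disk needed: {deduped_disk_tb:.0f} TB "
          f"({deduped_disk_tb / raw_disk_tb:.0%} of raw)")

With these made-up inputs the deduplicated footprint lands at roughly 10% of the raw requirement, inside the 2-20% range Preston describes; the exact figure depends entirely on how much new data each full contains.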

Jon Toigo isn’t exactly impressed, either:

Perhaps the LTO guys thought we needed some handy stats to reference.  I guess the tape industry will be all over this one, referencing the report to bolster their white papers and other leave-behinds, just as the replace-tape-with-disk crowd has been leveraging the counter white papers from Gartner and Forrester that give stats on tape failures and are bought and paid for by their sponsors.

Neither Preston nor Toigo disagrees with the conclusion that tape has a lower TCO than disk. But for Preston, it’s a matter of how much. “Tape is still winning — by a much smaller margin than it used to — but it’s not 23x or 250x cheaper,” he writes.

For Toigo, the study overlooks what he sees as a bigger issue when it comes to tape adoption:

The problem with tape is that it has become the whipping boy in many IT shops.  Mostly, that’s because it is used incorrectly – LTO should not be applied when 24×7 duty cycles are required, for example … Sanity is needed in this discussion …

Even when analysts agree in general, they argue.
