
Keep EMC XtremIO performance comparison in perspective, analyst says

During one of Monday's keynote sessions at EMC World 2014, company executives showed a slide comparing the performance of the EMC XtremIO all-flash array with that of an unnamed competitor.

That comparison drew the attention of George Crump, lead analyst at Storage Switzerland, who termed it "one of the more controversial things" that came up at the conference.

The graphic shows EMC XtremIO performance staying level while at capacity and with deduplication applied, while the competitor's performance nosedived.

"It's a very interesting graph and it's a real statistic, [but] I question the value of showing it," Crump said. "We don't know the details. We don't know who the customer was. We don't know the load. We don't know how full it was. We don't know any of that.

"But what it's showing is really important," he explained. "It shows you [that] at some point the deduplication engine can become part of the problem. So what's happening is, as you get closer to capacity, you're managing more and more data and doing more and more comparison, so the speed at which the data about the data -- the metadata -- can be traversed and compared so you can get an efficient deduplication rate becomes very important."

Crump offered takeaways for all-flash-array users.

"One of the things, if I was an end user, I would pull from this is [that] you can't treat deduplication like a checkbox," he said. "Just the fact that everybody has it or eventually will have it is nice, but these engines are all a little different and will perform different under load and at capacity. As you scale, this is a database -- essentially a metadata table -- and the larger that database comes, the more data it has to track [and] the more it will impact performance."

According to Crump, "corruption in a deduplication metadata table could be very, very problematic. The real takeaway isn't necessarily who the competitor was, but making sure you do the right testing so you understand what the impact of deduplication will be in your environment."
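The "right testing" Crump advocates means measuring write throughput as the array fills, not just when it is empty. A minimal sketch of that methodology, run here against an in-memory stand-in rather than a real array (the function name and parameters are illustrative):

```python
import hashlib
import os
import time

def timed_fill(total_blocks: int, checkpoints: int = 4, block_size: int = 4096):
    """Fill a toy dedupe metadata table with unique blocks and report the
    write rate at increasing fill levels -- the at-capacity style of test
    Crump recommends running with your own workload before buying."""
    metadata = {}
    step = total_blocks // checkpoints
    results = []
    for _ in range(checkpoints):
        start = time.perf_counter()
        for _ in range(step):
            block = os.urandom(block_size)  # unique data: worst case for dedupe
            metadata[hashlib.sha256(block).hexdigest()] = True
        elapsed = time.perf_counter() - start
        results.append((len(metadata), step / elapsed))  # (table size, writes/sec)
    return results

for size, rate in timed_fill(20_000):
    print(f"{size:>6} entries in table: {rate:,.0f} writes/sec")
```

A Python dictionary will not slow down the way a multi-terabyte on-array metadata structure can; the point is the shape of the test -- sample throughput at several fill levels and compare vendors on the same curve.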


Join the conversation

1 comment



George is correct. Each vendor's implementation of deduplication and compression is unique and performs differently under different workload scenarios. The question users should ask is how their workload will perform using the different vendors' products under those vendors' specific data reduction implementations. The best way to answer this question is with a performance validation system, like Load DynamiX, that can model dedupe and compression and generate workloads based on your I/O profiles to test the different vendors. This will give you an apples-to-apples test under your specific workloads.
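The workload modeling the commenter describes boils down to generating test data with a controlled reducibility. A hedged sketch of the idea (this is not Load DynamiX's actual API; the function and its parameters are hypothetical):

```python
import os
import random

def make_workload(n_blocks: int, dup_fraction: float,
                  block_size: int = 4096, seed: int = 0) -> list[bytes]:
    """Generate n_blocks of test data in which roughly `dup_fraction`
    of the blocks duplicate earlier ones, so the stream presents a
    predictable deduplication opportunity to the array under test."""
    rng = random.Random(seed)  # seeded so the same workload can be replayed
    blocks: list[bytes] = []
    for _ in range(n_blocks):
        if blocks and rng.random() < dup_fraction:
            blocks.append(rng.choice(blocks))      # repeat an earlier block
        else:
            blocks.append(os.urandom(block_size))  # fresh, unique block
    return blocks

wl = make_workload(10_000, dup_fraction=0.5)
unique = len(set(wl))
print(f"{len(wl)} blocks, {unique} unique -> {len(wl) / unique:.2f}:1 reduction")
```

Replaying the same seeded stream against each candidate array is what makes the comparison apples-to-apples.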