
EMC defends bandwidth specs

The technical specs EMC is touting regarding its new Symmetrix high-end array have raised an eyebrow or two among users. But at least one analyst says that, however you look at it, the system still outperforms its competition.

EMC Corp.'s new Symmetrix DMX system has received praise from many within the industry for its outstanding performance. But the technical specifications EMC has presented have caused some confusion among users about what the new Symmetrix's performance capabilities truly are.

Earlier this month, EMC rolled out the Symmetrix DMX, the sixth and latest version of the company's flagship storage system.

The Symmetrix DMX series is based on the company's new Direct Matrix architecture, an interconnect design that uses up to 128 point-to-point connections between cache memory and the front-end and back-end controllers. The DMX architecture was built to eliminate bottlenecks that choked earlier Symmetrix bus architectures.

One number touted by EMC is 64 gigabytes per second (GB/sec) -- the total data path bandwidth of its high-end Symmetrix DMX 2000 model, which assumes 128 direct connections at 500 MB/sec through cache. Add to that 6.4 GB/sec bandwidth for control data, and EMC claims a total aggregate cache bandwidth of 70.4 GB/sec -- more than four times the comparable bandwidth offered by Hitachi Data Systems (HDS) Corp.'s Lightning 9980 array (15.9 GB/sec).
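EMC's headline figure follows directly from the connection count and per-link speed. A quick back-of-the-envelope check (a sketch, taking 1 GB/sec as 1,000 MB/sec, as is conventional for bandwidth specs):

```python
# Back-of-the-envelope check of EMC's published Symmetrix DMX 2000 figures.
# All speeds in MB/sec; 1 GB/sec is taken as 1,000 MB/sec.

connections = 128            # point-to-point data paths, per EMC
link_speed = 500             # MB/sec per connection, per EMC

data_path = connections * link_speed        # 64,000 MB/sec = 64 GB/sec
control = 6_400                             # EMC's control-data bandwidth
aggregate = data_path + control             # 70,400 MB/sec = 70.4 GB/sec

hds_lightning = 15_900                      # HDS Lightning 9980 comparable figure

print(data_path / 1000)                     # 64.0
print(aggregate / 1000)                     # 70.4
print(round(aggregate / hds_lightning, 1))  # 4.4 -- "more than four times"
```

The 70.4 GB/sec total and the better-than-4x ratio against the Lightning 9980 both check out; the question, as the analysts note below, is whether the 64 GB/sec component is reachable in practice.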

While that 64 GB/sec figure is technically accurate, a more telling indicator of the Symmetrix DMX's performance capabilities is its cache throughput. According to EMC, the Symmetrix DMX can house up to eight cache directors, each supporting four simultaneous cache connections, for a total throughput of 16 GB/sec.

"The 64 GB/sec number is, effectively, bull," says Steve Duplessie, senior analyst at the Enterprise Storage Group, Milford, Mass., "since you can only get 16 GB/sec in and out of cache."

"But," he adds, "so what? That's still loads faster than anyone else."

By way of comparison, explains Duplessie, HDS' effective internal bandwidth maxes out at 6.4 GB/sec. "Apples for apples, it's almost three times more bandwidth."
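Duplessie's comparison can be checked with the same arithmetic. A sketch, assuming each of the 32 cache connections runs at the 500 MB/sec per-link speed EMC cites for its data paths (EMC states only the 16 GB/sec total, not the per-link rate):

```python
# Effective cache throughput vs. HDS, in MB/sec (1 GB/sec = 1,000 MB/sec).
# The 500 MB/sec per-link rate is an assumption, matched to EMC's
# data-path spec; EMC publishes only the 16 GB/sec total.

cache_directors = 8
connections_per_director = 4
link_speed = 500                     # MB/sec per cache connection (assumed)

dmx_cache = cache_directors * connections_per_director * link_speed
hds_internal = 6_400                 # HDS effective internal bandwidth

print(dmx_cache / 1000)              # 16.0 GB/sec -- the effective ceiling
print(dmx_cache / hds_internal)      # 2.5 -- Duplessie's rough "three times"
```

The exact ratio is 2.5x, which Duplessie rounds up to "almost three times"; either way, the cache-limited number, not the 64 GB/sec aggregate, is the one doing the work in the comparison.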

Furthermore, asks Duplessie, do any users actually care about this? "How many real users even generate anywhere near this amount of traffic?" he asks.

Why then, did EMC publicize its aggregate cache bandwidth numbers? "Others in the industry have focused on aggregate bandwidth," says an EMC spokesman. "We wanted to do both."

Still, the highlighting of this figure has raised the ire of EMC's competitors, HDS in particular. Far from conceding defeat in the performance race that HDS has long led with its Lightning 9900 product, HDS Senior Director of Product Marketing Phil Townsend instead warned users to "ask deep probing questions" of EMC about the DMX's real-world capabilities.

Beyond cache bandwidth, says HDS' Townsend, other issues to consider when evaluating a monolithic storage array include total capacity; number of ports; the kind of connectivity, such as Fibre Channel, ESCON and FICON; and high-availability features, such as whether the cache is mirrored or single-instance.

Alex Barrett is Storage Magazine's Trends Editor.

