
Pillar claims SPC-1 supremacy

Pillar Data Systems has published its first SPC-1 benchmark results for the Axiom 600 disk array via the Storage Performance Council (SPC). The tested system had 42 TB of total capacity, with 10 TB used and mirrored for a total of about 20 TB allocated. The result: 64,992 SPC-1 IOPS.

These numbers come in ahead of competitive systems from IBM, NetApp and EMC. The EMC CX3-40 was tested last January by archrival NetApp, which makes its SPC benchmarks controversial, but as listed, a 22 TB system with 8.5 TB used and mirrored produced 24,997 SPC-1 IOPS. A NetApp 3170 with 32 TB total and 19.6 TB used and mirrored produced 60,515 SPC-1 IOPS on June 10. A 37.5 TB IBM 5300 with 13.7 TB used and mirrored produced 58,158 SPC-1 IOPS on Sept. 25.

I found it interesting that, with only about a 2,000-IOPS difference between the IBM 5300 and NetApp 3170, the systems generally performed better the more recently they were tested. Note also how much of an outlier EMC is, both in total capacity and in capacity used. It was an outlier in free space as well, with just under 1 TB unused; IBM and NetApp each left approximately 5 TB of free space in their configurations, and Pillar had 16 TB of free space.
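For readers who want to line the published figures up side by side, here is a minimal sketch that tabulates the results quoted above. The IOPS-per-used-TB column is my own derived metric for comparison purposes, not a number the SPC reports.

```python
# SPC-1 results as quoted in this post: (total TB, used + mirrored TB, SPC-1 IOPS)
results = {
    "Pillar Axiom 600": (42.0, 10.0, 64992),
    "EMC CX3-40":       (22.0,  8.5, 24997),
    "NetApp 3170":      (32.0, 19.6, 60515),
    "IBM 5300":         (37.5, 13.7, 58158),
}

for system, (total_tb, used_tb, iops) in results.items():
    # Derived metric (not an official SPC-1 figure): IOPS per TB of used capacity
    ratio = iops / used_tb
    print(f"{system:18} {total_tb:5.1f} TB total  {used_tb:5.1f} TB used  "
          f"{iops:6d} IOPS  {ratio:7.0f} IOPS/used TB")
```

By this rough measure Pillar's configuration also leads on efficiency, though with so much unused capacity in its test system the comparison should be taken loosely.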

I do have to wonder how much weight users give these industry benchmarks when selecting a product. NetApp's submitting EMC systems to the SPC, a flap last summer over server virtualization benchmark testing, and continued inconsistency among vendors as to which systems get submitted for benchmarking all leave plenty of reason to take benchmarks with a grain of salt.
