
Latest Sun/NetApp clash: SPEC SFS

While one vendor’s blogger came to bury SPEC SFS, another came to defend it. The vendors’ clash remains unresolved.

The Standard Performance Evaluation Corporation (SPEC) SFS benchmark measures file server throughput and response time. The latest version, SPECsfs2008, was released last year.
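To make the two metrics concrete, here is a minimal illustrative sketch (not the actual SPEC SFS tool, which drives NFS/CIFS servers with a standardized workload mix): it runs a simple read/write mix against a local directory and reports throughput in operations per second alongside average response time. The file name, op count, and 30% write fraction are all arbitrary choices for illustration.

```python
# Illustrative only -- a toy stand-in for what a file-serving benchmark
# measures: throughput (ops/sec) and average response time per operation.
import os
import tempfile
import time

def run_mix(directory, n_ops=200, write_fraction=0.3, block=4096):
    """Run a fixed read/write mix; return (ops_per_sec, avg_latency_ms)."""
    path = os.path.join(directory, "testfile")
    with open(path, "wb") as f:  # seed the file so reads have data
        f.write(os.urandom(block))
    latencies = []
    start = time.perf_counter()
    for i in range(n_ops):
        t0 = time.perf_counter()
        if i % 10 < write_fraction * 10:  # e.g. 3 of every 10 ops are writes
            with open(path, "ab") as f:
                f.write(os.urandom(block))
        else:
            with open(path, "rb") as f:
                f.read(block)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return n_ops / elapsed, sum(latencies) / n_ops * 1000

with tempfile.TemporaryDirectory() as d:
    ops_per_sec, avg_ms = run_mix(d)
    print(f"{ops_per_sec:.0f} ops/sec, {avg_ms:.3f} ms avg response time")
```

The read/write mix is exactly what the Sun/NetApp argument below is about: whoever picks those percentages decides which system designs look good.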

But Sun FISHWorks blogger Bryan Cantrill wrote in a post called “Eulogy for a Benchmark” that the workload mix even in the most recent version remains outdated:

The 2008 reaffirmation of the decades-old workload is, according to SPEC, “based on recent data collected by SFS committee members from thousands of real NFS servers operating at customer sites.” SPEC leaves unspoken the uncanny coincidence that the “recent data” pointed to an identical read/write mix as that survey of…now-extinct Auspex dinosaurs a decade ago — plus ça change, apparently!

Moreover, Cantrill argued, the testing parameters for systems lead vendors to design NAS heads to perform well in the SFS test, which he said is at best irrelevant and at worst detrimental to a real-world environment. He also insists that SPEC benchmark results need to come with system pricing disclosures.

Enter NetApp blogger and senior technical director Michael Eisler, who called his response to Cantrill’s post “Chuckle for Today.”

the philosophy of SPEC SFS has always been to model reality as opposed to the idealist…dream where a storage device never has to process a request. P.S., in an earlier blog post, I made the argument that SPEC SFS 2008’s differences from SPEC SFS 3.0, show the caching on NFS clients has improved.

On the pricing disclosure issue:

Like many industries, few storage companies have fixed pricing. As much as heads of sales departments would prefer to charge the same highest price to every customer, it isn’t going to happen. Storage is a buyers’ market. And for storage devices that serve NFS and now CIFS, the easily accessible numbers are yet another tool for buyers. I just don’t understand why a storage vendor would advocate removing that tool.

In storage, the cost of the components to build the device falls continuously. Just as our customers have a buyers’ market, we storage vendors are buyers of components from our suppliers and also enjoy a buyers’ market. Re-submitting numbers after a hunk of sheet metal declines in price is silly.

This is where Cantrill appears to take exception to Eisler’s taking exception, responding in a followup post that Eisler’s defense of the pricing non-disclosure is an “Alice-in-Wonderland defense.”

Mike’s argument — and I’m still not sure that I’m parsing it correctly — appears to be that the infamously opaque pricing in the storage business somehow helps customers because they don’t have to pay a single “highest price”! That is, that the lack of transparent pricing somehow reflects the “buyers’ market” in storage. If that is indeed Mike’s argument, someone should let the buyers know how great they have it — those silly buyers don’t seem to realize that the endless haggling over software licensing and support contracts is for them!

This isn’t the only benchmark being debated in the storage industry: SPC benchmarks have also been a bone of contention between EMC and NetApp and between HP and EMC. Even in the comments on this blog I’ve heard everything from “Take the time to read the full disclosures, read the specifications…You might learn something” from a defender of SPC to a nonplussed “I really hope nobody uses SPC-1 results as any criteria for buying storage.”

So benchmarks are obviously a touchy subject among many in the industry. But meanwhile, is there anything Sun and NetApp aren’t fighting about?

1 comment

Why would someone want to slow down a fast machine with deduplication, which can be CPU intensive (server CPU), when dedupe outside of backup workloads offers minimal savings?