NetApp is rolling out a portfolio-wide refresh this week, with new midrange and high-end FAS storage arrays, a pre-configured stack for virtual environments, support for solid-state drives (SSDs) and inline compression for primary data.
The new systems are the FAS3200 midrange, FAS6200 enterprise, and a FlexPod bundle that includes Cisco server and switching products and VMware server virtualization applications with the FAS3200.
The FAS3200 and FAS6200 are mainly speeds-and-feeds upgrades of the FAS3100 and FAS6000 platforms, with new boards, processors and memory buses. The FlexPod is an answer to EMC's Vblock bundles, which also include Cisco servers and switches and VMware software.
NetApp serves up SSD option along with Flash Cache
The NetApp SSD support comes nearly two years after its main rival EMC Corp. first offered SSDs in its arrays, and gives NetApp customers a choice for their Flash implementations. NetApp first brought out its Performance Acceleration Module (PAM) with Flash to boost read cache last year.
Now NetApp is supporting SSDs in its DS4243 disk enclosure, which also holds 3.5-inch 2 TB SATA and SAS drives. The enclosure holds up to 96 100 GB, 6 Gbps SAS single-level cell (SLC) SSDs, for a maximum of 9.6 TB.
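The shelf's maximum SSD capacity is a simple product of drive count and drive size — a quick sanity check of the figures above (decimal, base-10 units assumed):

```python
# Maximum raw SSD capacity of a DS4243 shelf fully populated with SLC SSDs.
# Drive count and per-drive capacity are the figures quoted in the article.
DRIVES_PER_SHELF_CONFIG = 96   # maximum SSDs supported
DRIVE_CAPACITY_GB = 100        # 100 GB SLC SSD

total_tb = DRIVES_PER_SHELF_CONFIG * DRIVE_CAPACITY_GB / 1000
print(total_tb)  # 9.6 (TB)
```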
Patrick Rogers, NetApp's vice president of solutions and alliances, said he expects most customers to use Flash Cache, with SSDs a better fit only for applications that do a large amount of random reads. NetApp is taking a different approach than most data storage vendors, however, by not offering automated tiering software to facilitate placement of data across SSD and spinning disk tiers.
NetApp customers can use its Data Motion migration software to move volumes, but they must manually load volumes into the SSD storage. NetApp's take is that most data should go on cheaper SATA storage with Flash Cache used to boost response times of data that must be quickly accessed.
"We see two tiers of storage – Flash and SATA," Rogers said. "Everything on Flash is permanently stored on SAS or SATA, and then moves into Flash as the application needs that data. That, in our view, is [the] perfect automated tiering solution. It goes to Flash, automatically deduped, and then as you retired those blocks, they go to SATA."
Compellent Technologies Inc., EMC, Hewlett-Packard (HP) Co., Hitachi Data Systems and IBM have come out with automated tiering applications to make SSDs more efficient. Mark Peters, a senior analyst at Enterprise Strategy Group, said NetApp has a "philosophical disagreement" with most of its competitors about automated storage tiering.
"It's not that they disagree with tiers, but they don't agree with tiering as a dynamic thing," he said. "If you put it in the right place to start off with, that means it's an application-specific placement. The rest of the market says you move it around to the right place at the right time, making it less about the applications and more about the data."
Ray Lucchesi, president at Silverton Consulting, pointed out that EMC added FAST Cache – its answer to Flash Cache – as part of its automated tiering strategy, but NetApp takes a different approach.
"NetApp was always going against the grain with Flash Cache," he said. "Flash Cache is useful for hot data – forget migrating data off disk, just keep it in cache. NetApp says you can use Flash Cache for everything that's unpredictable and SSDs for everything that's predictable."
Ryan McDonald, IT director at web-based monitoring service provider eLynx Technologies, said he will likely add one of NetApp's Flash alternatives to his FAS3100 arrays.
"We can't justify SSDs for everything, but for specific point solutions, one of the two options would fit nicely," he said. "We're looking at SSDs specifically for SQL indexes -- we want to drive faster response times. We thought about PAM when we first purchased our [NetApp systems in June] because you can isolate which volumes you want PAM to work on. But SSDs would give us more flexibility with capacity. PAM is more limited in size."
As for automated tiering, McDonald said he doesn't think it's necessary now but may be helpful eventually.
"There's a lot of data we collect and keep for long periods," he said. "I could see in the future maybe trying to put data older than 10 years on SATA, but keep data from the last year or two on SSDs and everything else in the middle."
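The age-based placement McDonald describes could be expressed as a simple policy function — a hypothetical sketch, not a NetApp feature; the tier names and the two-year/ten-year thresholds are taken from his comments and are purely illustrative:

```python
from datetime import datetime, timedelta

def tier_for(last_modified, now=None):
    """Illustrative age-based tier selection: recent data on SSD,
    decade-old data on SATA, everything else on midrange disk."""
    now = now or datetime.now()
    age = now - last_modified
    if age <= timedelta(days=2 * 365):
        return "ssd"    # data from the last year or two
    if age >= timedelta(days=10 * 365):
        return "sata"   # data older than ten years
    return "sas"        # everything in the middle
```

A real policy engine would also weigh access frequency, not just age, which is the distinction the automated-tiering vendors draw.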
Compression adds another data reduction option
NetApp's Data ONTAP operating system has been able to dedupe primary storage since 2007, and now adds compression for primary block data. This comes two months after EMC added block compression for its Clariion SAN and Celerra unified storage systems.
NetApp's Rogers said customers can choose to compress by volume, just as they do for deduplication. He said he expects a lot of customers to use both primary data reduction technologies: deduping on the first pass and then compressing volumes.
"Most people will do dedupe to eliminate redundant blocks and then compress the remaining blocks," he said. "They're mutually supporting of one another."
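The two-pass reduction Rogers describes can be sketched in a few lines — an illustrative model, not NetApp's implementation; SHA-256 block hashing and zlib stand in for whatever Data ONTAP actually uses internally:

```python
import hashlib
import zlib

def reduce_blocks(blocks):
    """Dedupe-then-compress sketch: drop blocks already seen
    (by content hash), then compress the unique survivors."""
    seen = set()
    unique = []
    for block in blocks:
        digest = hashlib.sha256(block).digest()
        if digest not in seen:          # first pass: eliminate redundant blocks
            seen.add(digest)
            unique.append(block)
    return [zlib.compress(b) for b in unique]  # second pass: compress the rest

# Three 4 KB blocks, one an exact duplicate of another.
data = [b"A" * 4096, b"A" * 4096, b"B" * 4096]
reduced = reduce_blocks(data)
print(len(reduced))                     # 2 unique blocks remain
```

Running the passes in this order matters: deduping first means the (CPU-heavier) compression step only touches blocks that will actually be stored, which is why Rogers calls the two techniques "mutually supporting."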
Unlike dedupe, he said, compression introduces some performance impact. "You need to decide how much you're willing to tolerate," he said. "If there's enough horsepower in the controller, you might not see any impact, but in some cases you might. We have a tool that helps predict the performance hit."
FAS arrays expand capacity, memory
The FAS3200 has three models with maximum capacities from 480 TB to 1.9 PB, with 8 GB to 32 GB of memory, and 512 GB to 2 TB of Flash Cache available. The FAS3100 systems it replaces scaled to 840 TB with 32 GB of memory.
The FAS6200 also has three models, with maximum capacities of 2.4 PB to 2.9 PB, memory ranging from 48 GB to 196 GB, and from 3 TB to 8 TB of Flash Cache. It replaces the FAS6000 line that scaled to 1.2 PB with 64 GB of memory.
The FAS3200 and FAS6200 configurations use dual controllers and are also available as V-Series gateways that support SAN arrays from EMC, Fujitsu, Hitachi Data Systems, Hewlett-Packard (including 3PAR) and IBM.