The never-ending flood of unstructured data -- from documents and spreadsheets to photos and videos -- is driving many IT shops to pay closer attention to their file-based storage infrastructure.
Traditional NAS boxes have fixed capacity, while scale-out NAS systems can expand to store and manage multiple petabytes of data. However, that added scalability comes with tradeoffs.
Traditional NAS comprises one or two controllers, or NAS heads, and a fixed complement of CPU, memory and drive slots. Once the NAS device reaches its limits, the user needs to buy a new, separately managed system to boost capacity and performance. Traditional NAS is sometimes known as scale-up NAS because it's upgraded by adding performance and capacity (speeds and feeds) to an existing architecture.
In contrast, scale-out NAS grows by adding clustered nodes. These are often x86 servers with a special operating system and storage connected through an external network. Users administer the cluster as a single system and manage the data through a global namespace or distributed file system, so they don’t have to worry about the actual physical location of the data.
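The global namespace idea can be sketched in a few lines: clients address files by logical path alone, and the cluster deterministically maps each path to a node, so the physical location stays invisible. This is a hypothetical illustration only; real scale-out systems use far more sophisticated placement, replication and rebalancing, and the class and node names below are invented for the example.

```python
# Hypothetical sketch: a global namespace maps logical paths to cluster
# nodes, so clients never need to know where data physically lives.
import hashlib

class GlobalNamespace:
    """Maps logical file paths to cluster nodes with simple hash placement."""

    def __init__(self, nodes):
        self.nodes = sorted(nodes)

    def locate(self, path):
        # Deterministically pick a node for a path; the same path always
        # resolves to the same node, regardless of which client asks.
        digest = hashlib.md5(path.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

ns = GlobalNamespace(["node-a", "node-b", "node-c"])
print(ns.locate("/projects/q3/report.docx"))  # resolves to one of the three nodes
```

Adding a node to a real cluster also triggers data rebalancing, which this toy mapping omits.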
“Both traditional and scale-out NAS are growing, though use cases are evolving,” said Rick Villars, vice president of storage systems and executive strategies at IDC in Framingham, Mass., via email. “Traditional NAS is playing a greater role in virtualized server environments. Scale-out is the foundation for many cloud and large archive environments, and will come to dominate in terms of capacity shipped.”
IDC predicts that more than 83% of the shipping capacity for enterprise storage systems will accommodate file-based data within three years, and the growth rate for file storage will be 2.5 times greater than the rate for block-based storage capacity.
Enterprise Strategy Group (ESG) Inc. in Milford, Mass., forecasts that by 2015, scale-out storage will make up 80% of all net-new networked storage shipments from a revenue standpoint and 75% of all networked storage capacity. ESG doesn’t distinguish between NAS and SAN because it assumes that all scale-out systems will ultimately support file and block storage.
Traditional NAS systems increasingly have become multiprotocol. For instance, earlier this year EMC Corp. unveiled its VNX unified storage family, which converges its Celerra NAS and Clariion SAN systems. NetApp Inc.’s FAS and V-Series products also support unified connectivity for file and block workloads.
In this tutorial, we'll focus on file-storage capabilities. Here are some of the differentiators to consider when evaluating traditional NAS vs. scale-out NAS.
NAS use cases
Traditional NAS: File storage covers a broad range of workloads, from office productivity and collaboration applications to specialized systems in financial services, manufacturing and health care.
Vendors generally optimize scale-up NAS devices for the random access of small files, and these products work especially well with predictable performance and capacity requirements. Traditional NAS also can serve as an alternative to tape-based backup and handle limited data archiving.
More recently, traditional NAS has seen an uptick in usage with virtual servers, especially those based on technology from VMware Inc., and databases, including those from Oracle Corp.
Greg Schulz, founder and senior analyst at Stillwater, Minn.-based StorageIO Group, said the trend will likely accelerate now that VMware vSphere 5.0 has added greater feature/function parity between NAS and SAN with its vStorage APIs for Array Integration (VAAI) and vStorage APIs for Storage Awareness (VASA). Plus, NAS tends to be flexible and easy to use with virtualization technology, he added.
Jason Blosil, a product marketing manager at NetApp, attributed the increasing interest in NAS for databases to faster 10 Gigabit Ethernet (10 GbE) networks, the lower cost of Ethernet in comparison to Fibre Channel (FC), and the ease of configuring and scaling file protocols.
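That configuration simplicity is part of the appeal: an NFS export can be attached to a database host with a single mount command, with no HBAs or zoning involved. The server name, export path and mount options below are placeholders; appropriate options vary by vendor and workload, so consult your array's best-practice guide.

```shell
# Hypothetical example: mounting an NFS export for a database data directory
# over Ethernet. "nas01" and the paths are placeholders, not real systems.
sudo mkdir -p /mnt/db_data
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536 nas01:/vol/db_data /mnt/db_data
```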
Scale-out NAS: High-performance computing (HPC) was the original sweet spot for scale-out NAS, as select industries craved the high throughput the systems offered for exceptionally large files and data sets. Early scale-out systems were especially popular in scientific and academic research, biotechnology, oil and gas, engineering, design and media production.
HPC applications need multiple processors, memory modules and data paths. Parallel data services, which break up single files and deliver them in pieces in parallel, are an absolute must, said Terri McClure, a senior analyst at ESG. She likened the services to the checkout process at a grocery store. Systems based on parallel processing offer multiple checkout lines, rather than funneling requests through one or two cashiers.
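McClure's grocery-store analogy can be made concrete with a small sketch: a file is split into fixed-size stripes spread round-robin across nodes, then the per-node stripes are fetched concurrently and reassembled in order, i.e., multiple checkout lines instead of one. This is purely illustrative; a real parallel file system performs striping in the data path with dedicated protocols, and all names here are invented for the example.

```python
# Illustrative sketch of parallel data services: stripe a file across nodes,
# then read each node's stripes concurrently and reassemble the original.
from concurrent.futures import ThreadPoolExecutor

STRIPE_SIZE = 4  # bytes per stripe; kept tiny for demonstration

def stripe(data, nodes):
    """Distribute stripes of `data` round-robin across `nodes`."""
    placed = {n: [] for n in nodes}
    chunks = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    for i, chunk in enumerate(chunks):
        placed[nodes[i % len(nodes)]].append((i, chunk))  # keep original order index
    return placed

def read_parallel(placed):
    """Fetch every node's stripe list concurrently, then reassemble in order."""
    with ThreadPoolExecutor() as pool:
        per_node = pool.map(lambda item: item[1], placed.items())
    stripes = [pair for node_chunks in per_node for pair in node_chunks]
    return b"".join(chunk for _, chunk in sorted(stripes))

placed = stripe(b"unstructured data at petabyte scale", ["n1", "n2", "n3"])
assert read_parallel(placed) == b"unstructured data at petabyte scale"
```

Because each node serves only its own stripes, aggregate throughput grows roughly with the number of nodes, which is why HPC workloads with very large files benefit most.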
Patrick Osborne, manager of worldwide NAS business development at Hewlett-Packard (HP) Co., noted his company has seen a significant trend in archiving and data retention for compliance/regulatory purposes, as well as business analytics. In these instances, scale-out network-attached storage can be especially helpful with massive amounts of unstructured data.
To accommodate the expansion in use cases, scale-out NAS vendors have to adapt their systems to handle both large files in need of high throughput and small files requiring high IOPS.
On the low end, vendors such as Dell Inc. are trying to make a case that scale-out NAS can be economically feasible for small-scale and remote office/branch office (ROBO) customers.
“Traditionally, we’ve seen scale-out NAS mainly on high-end, vertically focused and high-performance workloads or where massive capacity and performance were needed,” wrote Mike Davis, Dell’s director of NAS marketing, in an email. “We’re focused on bringing scale-out NAS to the masses.”
This was first published in November 2011