NAS applications will change with greater SSD adoption

As NAS technology changes -- with new software features and faster, all-flash NAS hardware -- its use in organizations has expanded into roles for which it was once considered unsuited.

When we look at network-attached storage hardware and software, there are a wide variety of features and performance elements from which to choose.

At the low end of the market, we see compact, ready-to-use boxes with just a couple of terabytes of capacity, replacing the need for home-grown NAS servers.

Moving upmarket, the traditional 2U NAS server can now hold up to 12 drives of 10 TB each, though few shops would put that much data on a single NAS system. Beyond the rack server, we see enterprise-class systems from companies such as NetApp and EMC, with its Isilon line. These are typically highly featured and can expand into the petabyte range.

SSDs are changing the way organizations deploy NAS applications. The storage area network, however, has not responded well to the SSD challenge, with hybrid arrays incapable of handling more than four or so SSDs. This is shifting the balance of storage away from the traditional RAID array toward Ethernet-based approaches, including NAS and object storage.

I suspect that the all-flash array, which plugs into a SAN easily and provides serious acceleration, has saved the SAN market segment from a much more abrupt decline. The lack of such capability in the high-end NAS market has been a serious problem for NAS users, who've generally been left with some flash caching in controllers as their acceleration option.

High-end NAS is getting on top of the flash issue, and we are now seeing really fast NAS boxes using SSDs. This can be done even in a homegrown NAS server serving NFS or SMB from a Linux distro. At the same time, software is adding features typically seen in object storage to NAS applications and is setting the stage for true scale-out NAS products.

This affects use cases. Workloads that demand performance, such as virtual desktops, can now meet their goals on NAS models configured with a set of fast SSDs. This is true across the whole NAS market.

Using low-cost SATA SSDs, the small-business NAS box can really deliver for a price increase of perhaps a couple hundred dollars, while at the other end of the spectrum, NAS has become a real competitor to the all-flash array.

Windows Storage Server-based boxes are even bringing the all-flash array to the NAS market. Violin Memory, for example, offers an all-flash appliance with SMB and NFS capability and very high performance. Matching the performance of the best SAN offerings, with scale-out capabilities to boot, NAS is a viable high-end contender. Quite possibly, these fast NAS systems could accelerate the decline of difficult-to-scale SAN.

Hot video, cold data

A major use for NAS applications could be video streaming and editing apps, since users may find NAS much easier to handle when sharing files with many geographically distributed workers or clients. The ability of NAS to communicate with public clouds while masking REST and other interfaces is a plus for ease of use, fitting a file-access paradigm that is universally supported within apps.

NAS also provides a place to park cold data. Of course, this could be done with block-IO storage, but the traceability of files in NAS is much better, especially as the data store grows. In the near term, this could be a good use for those older SAN arrays, with a NAS server acting as a front end.

Merging auto tiering with high-end NAS capabilities should offer a good combination in container environments, making the data for many users easier to track, while simplifying operations via cloning and deduplication. The SSD top tier will provide speed, while the secondary tier of HDDs (which will move to SSD in a couple of years) provides cheaper bulk storage, likely with deduplication and compression.
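As a rough illustration of the tiering idea, here is a toy Python sketch of a two-tier store. The class and its policy are hypothetical, not any vendor's implementation: reads promote a file into a small, fast "SSD" tier, and when that tier fills, the least recently used file is demoted back to the bulk "HDD" tier.

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small, fast 'SSD' tier over a bulk 'HDD' tier.

    Hypothetical sketch of an LRU promote/demote policy, not a real product.
    """

    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()  # path -> data, kept in LRU order
        self.bulk = {}             # path -> data, cheap bulk tier
        self.fast_capacity = fast_capacity

    def write(self, path: str, data: bytes) -> None:
        # New data lands in the bulk tier; a later read promotes it.
        self.bulk[path] = data

    def read(self, path: str) -> bytes:
        if path in self.fast:
            # Hot hit: refresh this file's LRU position.
            self.fast.move_to_end(path)
            return self.fast[path]
        # Cold read: move the file up to the fast tier.
        data = self.bulk.pop(path)
        self.fast[path] = data
        if len(self.fast) > self.fast_capacity:
            # Fast tier full: demote the least recently used file.
            victim, vdata = self.fast.popitem(last=False)
            self.bulk[victim] = vdata
        return data
```

A real tiering engine would, of course, track access frequency over time and move data asynchronously, but the hot/cold split above is the core idea.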

Capacity increases in drives have already reduced the footprint of this secondary storage, and the expectation of 30 TB 2.5 inch SSDs in 2018 or 2019 makes a transition to SSD-based secondary storage inevitable, since hard drive development can't match these capacities. Couple this with deduplication and compression, both of which are easier on NAS systems, and secondary storage will be faster, greener and smaller in just a few years.
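The dedupe-plus-compression effect described above can be sketched in a few lines of Python. This is a hypothetical fixed-block example, not a production design: identical blocks are stored once, keyed by their hash, and each unique block is compressed before storage.

```python
import hashlib
import zlib

def store_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, dedupe by SHA-256, compress uniques.

    Returns (recipe, store): recipe is the ordered list of block hashes
    needed to rebuild the data; store maps hash -> compressed unique block.
    """
    store = {}
    recipe = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            # First sighting of this block: compress and keep it.
            store[digest] = zlib.compress(block)
        recipe.append(digest)
    return recipe, store

def restore(recipe, store) -> bytes:
    """Rebuild the original data from the recipe and the block store."""
    return b"".join(zlib.decompress(store[h]) for h in recipe)
```

Production systems typically use variable-length chunking and stronger metadata handling, but even this sketch shows why repetitive secondary data shrinks so dramatically: repeated blocks cost one compressed copy plus a hash per reference.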

The market profile is changing even further. Fast SSDs are bringing huge performance increases, and boxes with high drive counts often can't cope. We are transitioning to an era in which network and compute limitations mean smaller primary networked storage configurations, with just 12 or so solid-state drives, will become the norm. All-flash arrays will have to duke it out with these boxes on performance and price.

With both primary and secondary NAS applications moving toward smaller physical configurations based on commercial off-the-shelf (COTS) products, we should see a healthy alignment of NAS and the storage device interface world. Using what is essentially a standard COTS server, both primary and secondary storage will likely converge on the same box sizes, which will shrink to 1U given the small physical size of SSDs. Primary storage will use fast drives and have high-speed interfaces such as 40/100/200 Gigabit Ethernet (GbE), while bulk storage will use 30+ TB drives and have 25/50 GbE.
