Unstructured data storage showdown: Object storage vs scale-out NAS
Computer systems today are a complex interplay of compute, storage and networking. A significant change in any
of these three elements can upset the performance balance of the system as a whole, not just during the initial installation phase, but on an ongoing basis.
For example, adding network-attached storage boxes to a switch might slow some important primary NAS data flows, and "rationalizing" files over to the new scale-out NAS boxes may cause serious short-term congestion. Avoiding these problems requires statistical analysis and careful preplanning.
Looking at average and peak traffic on the network links to NAS will identify any hotspots; these are the switches to avoid when placing the new scale-out NAS gear. This may conflict with the desire to localize storage near compute and minimize backbone traffic. If there are hotspots within the rack, consider adding links to the top-of-rack switch or even moving to 40 Gigabit Ethernet (GbE) for the storage boxes.
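The average-and-peak analysis above can be sketched in a few lines. This is an illustrative example only: the link names and the 50% average / 70% peak utilization thresholds are assumptions for the sketch, not figures from any monitoring tool.

```python
# Hypothetical sketch: flag congested switch links from utilization samples.
# Thresholds (50% average, 70% peak) are illustrative assumptions.

def find_hotspots(samples, avg_limit=0.5, peak_limit=0.7):
    """samples maps link name -> list of utilization readings (0.0-1.0)."""
    hotspots = []
    for link, readings in samples.items():
        avg = sum(readings) / len(readings)
        peak = max(readings)
        if avg > avg_limit or peak > peak_limit:
            hotspots.append(link)
    return hotspots

traffic = {
    "tor-rack1-port3": [0.40, 0.55, 0.62, 0.81],  # busy primary NAS link
    "tor-rack2-port1": [0.10, 0.15, 0.12, 0.20],  # lightly loaded
}
print(find_hotspots(traffic))  # -> ['tor-rack1-port3']
```

In practice the readings would come from the switch's own counters rather than a hand-built dictionary; the point is simply that both the average and the peak matter when deciding where new gear can safely land.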
A good rule of thumb in storage is that access slows as an appliance fills up: file structures become more complex, and parsing them takes longer. NAS is particularly prone to this because it lends itself to data sprawl. NAS tenants often have little inclination to clean up unwanted data, and duplication can run rampant. Cleaning up data is worthwhile, but it can be a big task.
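Finding the rampant duplication mentioned above can be partly automated by hashing file contents. A minimal sketch, with the caveat that a real cleanup tool would compare file sizes first and hash only candidates, since reading every byte of a large share is slow:

```python
# Illustrative sketch: find duplicate files on a share by content hash.
import hashlib
from pathlib import Path

def find_duplicates(root):
    seen = {}          # sha256 digest -> first path seen with that content
    duplicates = []    # (duplicate_path, original_path) pairs
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            duplicates.append((path, seen[digest]))
        else:
            seen[digest] = path
    return duplicates
```

The output is a list of candidate pairs for an administrator to review; automatically deleting "duplicates" without human sign-off is rarely a good idea on tenant data.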
One way to get at the data issue is to identify warm and cold data and move the latter to cheaper, slower storage -- which might include the cloud. Doing this manually is very time-consuming, even in small installations, so some automated alternative is required. One solution is to archive every file older than a certain date -- applying exceptions for known active files or directories -- and then turn on auto-tiering that includes the archive.
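The age-based policy just described can be sketched as a selection pass. The 180-day cutoff and the exception directory names are illustrative assumptions; any real policy would come from the site's own retention rules.

```python
# Sketch: select files older than a cutoff for archiving, skipping
# known-active directories. Cutoff and exception names are assumptions.
import time
from pathlib import Path

def archive_candidates(root, max_age_days=180, exceptions=("active", "scratch")):
    cutoff = time.time() - max_age_days * 86400
    candidates = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if any(part in exceptions for part in path.parts):
            continue  # known-active directory: leave the file in place
        if path.stat().st_mtime < cutoff:
            candidates.append(path)
    return candidates
```

The returned list would then feed whatever archive mechanism is in use -- a cloud tier, tape, or a cheap object store -- with the auto-tiering layer handling recalls.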
Assuming the NAS pool is cleaned up, further growth may still be in order. The first priority is to weigh the economics of compression and deduplication against adding raw capacity. Compressed data loads much more quickly than raw data and saves network bandwidth, but the software usually isn't free.
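That economic comparison is back-of-envelope arithmetic. In this sketch, the $100/TB raw price, the 2.5:1 data reduction ratio and the $15/TB license cost are all made-up illustrative figures:

```python
# Back-of-envelope: cost per effective TB with and without data reduction.
# All prices and the 2.5:1 reduction ratio are illustrative assumptions.

def cost_per_effective_tb(raw_cost_per_tb, reduction_ratio=1.0, license_cost_per_tb=0.0):
    """Effective $/TB once data reduction and licensing are applied."""
    return raw_cost_per_tb / reduction_ratio + license_cost_per_tb

raw_only = cost_per_effective_tb(100.0)                   # no reduction: $100/TB
with_reduction = cost_per_effective_tb(100.0, 2.5, 15.0)  # 2.5:1 plus $15/TB license
print(raw_only, with_reduction)  # 100.0 55.0
```

Plugging in real quotes and a measured reduction ratio for the actual data set is what decides the question; ratios vary enormously by workload.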
Factors to consider when purchasing scale-out NAS offerings
The next issue is choosing the scale-out NAS system to use. Simply buying a clone of an existing NAS box isn't a good idea: its useful life effectively ends when the older boxes are forklifted out, and technology is moving at such a pace that the gear is probably already obsolescent. It's better to buy the state of the art. If your NAS software can't cope with heterogeneity, look very hard at alternatives, because vendor lock-in, combined with buying outdated designs, can be very expensive.
Product designs have improved rapidly over the last half-decade, driven by flash technology. Flash-based caches can make storage appear far more responsive, improving bandwidth and latency dramatically. Remember that this extra speed is network-limited, so it might be necessary to run 40 GbE links from the new scale-out NAS into the network backbone, which means at least a partial network rebuild.
Drive capacity is a second reason to pick newly designed kit. Drives in the 4 TB to 8 TB range are now mainstream, and this will soon extend to 10 TB. While this means fewer NAS boxes for any given effective capacity, performance can suffer with fewer, larger HDD spindles. The answer usually lies in the NAS software, which can add journal files to speed up write operations; data from these journals is spread over multiple drives and NAS boxes in the background. Placing the journals on solid-state drives is a good idea: writes complete in microseconds rather than tens of milliseconds, and if the software supports auto-tiering, the SSDs can also act as a fast cache.
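The microseconds-versus-milliseconds claim is worth quantifying. The latencies below are order-of-magnitude figures typical of the two media classes, not measurements of any particular product:

```python
# Rough numbers for why an SSD-backed journal speeds up writes:
# acknowledging a write from flash vs. waiting on a disk seek.
# Latencies are typical order-of-magnitude assumptions.

HDD_WRITE_LATENCY_MS = 10.0  # seek plus rotation: tens-of-milliseconds class
SSD_WRITE_LATENCY_MS = 0.1   # roughly 100 microseconds

def writes_per_second(latency_ms):
    """Serialized write rate implied by a given per-write latency."""
    return 1000.0 / latency_ms

print(writes_per_second(HDD_WRITE_LATENCY_MS))  # 100.0
print(writes_per_second(SSD_WRITE_LATENCY_MS))  # 10000.0
```

A roughly hundredfold gap in acknowledged-write rate is why journaling to SSD, then destaging to spinning disk in the background, pays off.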
Some applications benefit enormously from cloning capabilities in NAS. When many copies of the same data must be delivered to clients, serving them from flash makes things move much faster, and cloning lets a single copy deliver all of those images, saving precious cache space.
Within a couple of years, we'll see all NAS systems delivered on SSDs. This will boost speeds dramatically, even with SSD capacities in the 20 TB or higher range. It means fewer boxes to deliver both terabytes and IOPS, but network loads will increase substantially. Fortunately, Ethernet development is moving at a fast pace, with 25 GbE due this year and 50 GbE in 2018. Backbone versions with 4, 8 or even 12 lanes are in the pipeline, giving capacities out to 0.6 terabits per second in 2018 or so. Without this improvement, scale-out NAS plans would be hampered by a network that can't keep pace.
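The 0.6 Tbps backbone figure is just the lane arithmetic from the paragraph above:

```python
# Aggregate backbone capacity from multi-lane Ethernet (per the text:
# 12 lanes of 50 GbE gives 0.6 Tbps).

def aggregate_tbps(lane_gbps, lanes):
    """Aggregate capacity in terabits per second."""
    return lane_gbps * lanes / 1000.0

print(aggregate_tbps(50, 12))  # 12 x 50 GbE -> 0.6 Tbps
print(aggregate_tbps(50, 4))   # 4 lanes -> 0.2 Tbps
```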