This article can also be found in the Premium Editorial Download "Storage magazine: Salaries up for storage pros."
Network-attached storage (NAS) systems are at the heart of burgeoning file data stores. But scalable NAS systems will have to evolve to meet new capacity, accessibility and management needs.
The evolving features and capabilities of network-attached storage (NAS) systems
To start, next-generation NAS is no longer confined to corporate data centers but is increasingly used to power cloud services with their need for unbridled scalability. But even in enterprises, the rising interest in big data and the accelerating growth of unstructured data are pushing scalable NAS to the top of the next-gen NAS feature list. A 2011 study by Framingham, Mass.-based IDC found that the world's data is doubling every two years, and predicted enterprises will have to deal with 50 times more data and 75 times more files in the next decade.
The consumerization of IT and the fading of boundaries between personal computing and the workplace require NAS systems to be securely and easily accessible from a wide range of devices. In addition, the continuous need for more efficient IT and the unstoppable journey to a virtualized IT infrastructure are pushing for the incorporation of new features into next-generation NAS systems. Scalability, accessibility and manageability are the key areas in which next-gen NAS systems will be measured to determine if they're up to the task in a world of clouds and proliferating consumer devices.
Scale-out architecture. Until recently, storage systems in which two storage controllers share the workload and provide failover for each other dominated the enterprise space. Capacity scaled by adding disks and shelves, and performance by adding processors, memory and spindles, or by upgrading the storage controllers to the next model. Eventually, and often rather quickly, a scalability limit was reached, and the only options were to either add another storage system or do a forklift upgrade and replace the existing NAS. The results were sprawling NAS silos and bloated storage budgets. While dual-controller storage systems worked in the 20th century, they've failed to efficiently support the unchecked growth of unstructured data in the 21st century.
With established NAS vendors sticking with traditional architectures, in the early 2000s, startups like Isilon and Ibrix ventured into multinode NAS systems that scale proportionally as nodes are added. Their systems were first adopted in vertical markets like health care and the oil and gas industry where large unstructured files prevail; since then, the systems have increasingly found their way into the enterprise. It took established storage vendors almost a decade to yield to the pressure and success of scale-out architecture NAS; lacking scale-out experience, they sought to acquire scale-out pioneers. The acquired technologies have either been incorporated into existing systems, as NetApp Inc. did with Spinnaker, or simply repackaged, as EMC Corp. is doing with Isilon (EMC Isilon), Hewlett-Packard (HP) Co. with Ibrix (HP Ibrix X9000) and Dell Inc. with Exanet (PowerVault NX3500). With the majority of large storage vendors now on board, scale-out has become the architecture of choice for NAS.
The benefits of scale-out NAS are compelling:
- Scalable performance in I/O and throughput
- Scalable capacity
- Lower cost
- Improved high availability (HA)
- Simplified management by being able to manage a single large NAS rather than NAS silos
At this point, the ability to scale horizontally and to manage a multinode NAS system as a single storage system with a global namespace is a must-have and should top anyone's NAS wish list. By the same token, it's important to realize that not all scale-out NAS systems are equal; while each vendor claims to lead, there are significant differences in how they scale, how they support a global namespace, the number of file systems and files per file system they support, the makeup of their storage pools and how they manage metadata.
Tiering solid-state drives (SSDs), disk and cloud. Storage tiering and the ability to efficiently support solid-state storage are instrumental to cost-efficient scaling. Next-generation NAS systems need to support SSD, disk and cloud tiers. Almost all NAS systems have some level of SSD support, but there are substantial differences in how SSDs are leveraged and in the tiering methods used to ensure that active data remains on the fast flash tier and stale data on slower disk or cloud tiers. In its most basic and common implementation, SSDs are added to a NAS system to supplement mechanical drives, with files and applications allocated to the appropriate tiers manually. Despite a consensus that data movement between tiers needs to be automatic, support for automated data tiering varies significantly in contemporary NAS offerings.
"Isilon supports SAS, SSD and SATA as part of a storage pool, but today we don't move data between tiers automatically," said Sam Grocott, vice president of marketing at EMC Isilon.
Flash as cache is another way to supplement a NAS with solid-state storage. While more complex to implement and requiring a change in the underlying storage architecture, flash cache has several advantages over simply substituting disks with SSDs (at least as long as the cost of SSD remains an order of magnitude higher than that of mechanical disk drives):
- A cache will always keep the most active data in SSD without the need for tiering policies
- It benefits all files on the storage system
- A cache moves data at a sub-file level between tiers
NetApp and Oracle Corp. (in the Sun ZFS Storage 7000 series appliance) have been early advocates of using flash as cache. The combination of a flash cache and low-cost, high-capacity SATA drives can challenge the performance of high-end disk arrays but cost less. "Next-gen NAS should support both tiering with cache, as well as by policies," EMC's Grocott said.
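The cache behavior described above (it automatically keeps the most active data in flash, benefits all files and works at a sub-file, block level, with no tiering policies) can be illustrated with a toy read-through cache. Everything here is a simplified sketch, not any vendor's implementation; "flash" is simulated with an in-memory LRU map:

```python
from collections import OrderedDict

BLOCK_SIZE = 4096  # caching happens at sub-file (block) granularity

class FlashReadCache:
    """Toy read cache: recently used blocks stay in 'flash';
    cold blocks are evicted automatically, so no tiering
    policies are needed and every file can benefit."""

    def __init__(self, capacity_blocks, backing_read):
        self.capacity = capacity_blocks
        self.backing_read = backing_read  # fn(path, block_no) -> bytes
        self.cache = OrderedDict()        # (path, block_no) -> bytes

    def read_block(self, path, block_no):
        key = (path, block_no)
        if key in self.cache:
            self.cache.move_to_end(key)   # hit: mark most recently used
            return self.cache[key]
        data = self.backing_read(path, block_no)  # miss: go to disk tier
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict least recently used
        return data
```

The eviction order falls out of access patterns alone, which is why a cache needs no administrator-defined placement rules.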
While solid-state storage aids performance and may yield lower overall storage costs, storage clouds enable unprecedented levels of capacity scalability. Today, the integration between traditional NAS systems and the cloud is usually accomplished using cloud gateways, but next-gen NAS systems are expected to support native cloud integration to enable virtually infinite scalability. While cloud computing accounts for less than 2% of IT spending today, IDC estimates that by 2015 nearly 20% of information will be touched by cloud providers -- that is, somewhere in a byte's voyage from creation to disposal it will be stored or processed in the cloud. The cloud will simply be another tier, and the movement of data between the cloud and other tiers needs to be automatic.
"Next-gen NAS should have the ability to talk to a cloud natively; today, EMC can do it to Atmos leveraging Rainfinity," said Greg Schulz, founder and senior analyst at Stillwater, Minn.-based StorageIO.
Accessibility. NAS systems have been confined to data centers, with access limited to the NFS and CIFS file access protocols. With the rise of cloud computing and the proliferation of mobile devices, that limited connectivity to files on NAS arrays has become an obstacle. To have data concurrently available on smartphones, tablets and traditional computing devices, savvy users have taken advantage of services like Dropbox. But that often means they've taken files out of secure corporate NAS stores, put them on their laptops and desktops, and synchronized them to all their devices. To the consternation of corporate IT, which had been struggling to mitigate the security risk of USB memory sticks, cloud services like Dropbox suddenly posed a new and bigger threat to confidential corporate data.
Traditional storage systems will need time to meet the new requirement of simple, secure access for mobile clients. However, it's clear that with the explosive growth of mobile clients, new mobile connectivity options are a critical next-generation NAS feature. "Next-gen NAS needs to offer the ability to share storage with all kinds of users and end-point devices, including consumer devices," said Terri McClure, a senior analyst at Milford, Mass.-based Enterprise Strategy Group (ESG). New requirements are usually pioneered by startups and adopted by large storage vendors much later. Startups such as Maginatics Inc. have begun to offer products with rich end-point device support.
The other aspect of accessibility is the ability of NAS systems to integrate with applications and other systems. With the boundaries between NAS and cloud storage blurring, and NAS systems actually powering many existing storage clouds, next-gen NAS systems need to open themselves up and transform from closed systems that essentially existed by themselves into open systems that converse with other systems and applications using standard protocols while providing versatile application integration options. In other words, NAS systems need to become more like object stores. To start, next-gen NAS must support cloud protocols, most importantly a REST interface that enables HTTP-based integrations. We're now seeing NAS vendors starting to support REST interfaces. Furthermore, NAS systems need to support metadata beyond traditional file-system metadata to enable applications to tag files and objects with custom information, an ability that becomes increasingly relevant for cloud applications.
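To make the REST-plus-custom-metadata idea concrete, here is a minimal sketch that assembles an HTTP PUT request tagging an object with application-defined metadata. The endpoint layout and the "x-object-meta" header prefix loosely follow OpenStack Swift conventions and are assumptions for illustration; real vendor APIs differ:

```python
def build_put_request(container, name, data, custom_meta):
    """Return (method, path, headers, body) for an HTTP PUT that
    stores an object and tags it with custom, application-defined
    metadata beyond traditional file-system attributes."""
    headers = {"Content-Length": str(len(data))}
    for key, value in custom_meta.items():
        # Custom metadata rides along as prefixed headers, letting
        # applications attach their own tags to each object.
        headers["x-object-meta-" + key.lower()] = value
    return ("PUT", "/v1/%s/%s" % (container, name), headers, data)
```

A medical imaging application, for example, could tag each stored scan with a patient or study identifier and later search on those tags, something classic NFS/CIFS metadata cannot express.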
Manageability. With petabyte file stores becoming more common and NAS systems being used as cloud storage, manageability is a key aspect of next-gen NAS systems. To start, NAS management overhead needs to grow at a far slower rate than NAS capacity. Despite the earlier cited growth in the number of files (75x) and amount of information (50x) over the next decade, IDC predicts there will be only 1.5 times the number of IT professionals available to manage it. For this prediction to become reality, next-generation NAS systems require management features that let a modestly larger staff manage substantially larger NAS systems:
- A single scale-out system where all storage is managed through a single management pane will be indispensable
- Monitoring and actionable storage analytics that provide real-time status and metrics are a must
- Automation that acts on monitoring events and analytical data based on rules will become increasingly important
- High availability that sustains multiple concurrent failures and self-healing features will grow in relevance
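Rule-based automation of the kind listed above can be sketched simply: monitoring events are matched against rules, and matching rules trigger actions. The event fields, metric names and thresholds here are invented for illustration:

```python
def evaluate(event, rules):
    """Return the actions triggered by one monitoring event.
    An event is a dict like {"metric": ..., "value": ...};
    a rule fires when its metric matches and the value
    reaches its threshold."""
    actions = []
    for rule in rules:
        if rule["metric"] == event["metric"] and event["value"] >= rule["threshold"]:
            actions.append(rule["action"])
    return actions

# Hypothetical rules: warn at 80% pool utilization, act at 95%.
rules = [
    {"metric": "pool_used_pct", "threshold": 80, "action": "alert_admin"},
    {"metric": "pool_used_pct", "threshold": 95, "action": "provision_node"},
]
```

In a scale-out system, an action like "provision_node" is what turns analytics into capacity that appears without manual intervention.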
As next-gen NAS systems become more open and find their way into storage clouds, security will play a more significant role than it does today. Robust multi-tenancy that enables secure separation and isolation of different tenants' information on the same NAS is turning from a nice-to-have into a must-have feature. While data is typically stored unencrypted on NAS file stores today, encryption will inevitably become a requirement, especially as mobile clients get direct access to information on the NAS and as NAS systems are used as cloud storage. Tools that identify and categorize the information to be secured will be crucial for providing different levels of protection, depending on the criticality and confidentiality of the information residing on the NAS. Furthermore, existing threat protection and fraud management systems will have to access this information.
The evolution of NAS
The accelerating growth of unstructured data, continuing virtualization of IT, and extended file services to meet new requirements of mobile clients and cloud computing will drive next-generation NAS features. While the ability to cost-efficiently scale with minimal additional management overhead tops the next-gen NAS requirements list from an IT perspective, seamless integration with all their computing and communication devices and cloud services matters most to end users. To get there, NAS systems need to step up and excel in scalability, accessibility and manageability to address the needs of an increasingly virtualized, mobile and cloud-enabled computing landscape.
About the author:
Jacob N. Gsoedl is a freelance writer and a corporate director for business systems.
This was first published in October 2012