Scale-out NAS meets today's requirements for massively scalable and highly available systems, is cost effective and is generally more efficient than traditional scale-up architectures. But technology change introduces risk, and companies may not be ready for a switch.
The information we store today is very different from the information we stored a mere decade ago. Every endpoint device has become a content creation and capture device that has enabled faster and more efficient business processes while also driving massive unstructured data growth. Nowhere has the impact been felt more than in the data center storage domain. And it seems no industry is safe. Across the board, file formats are richer and file sizes are growing exponentially.
Using traditional scale-up architectures to address this growth is unrealistic. IT organizations need more efficient storage technology, and they're frustrated by the complexity of current offerings. An alternative approach, scale-out NAS, is poised for a breakout year. It not only meets today's requirements for massively scalable and highly available systems, but does so cost effectively. It's generally more efficient than traditional scale-up architectures and reduces complexity because it can scale to multipetabyte capacities within a single namespace. In other words, it enables more capacity with far fewer systems.
With independent scaling of storage capacity, processors and bandwidth, users can grow scale-out NAS systems as needed without buying racks and power supplies in advance of capacity requirements or buying extra spindles to stripe files across. In effect, scale-out NAS provides "just-in-time" scalability. And with most scale-out systems, many low-level storage management tasks are automated, such as expanding the file system when new physical capacity is added and load balancing performance across processors, significantly reducing management costs.
Until recently, scale-out NAS has been tucked away in a corner, used mostly in niche markets such as high-performance computing (HPC), scientific computing, and media and entertainment environments. Scale-out architectures were originally designed and tuned to support the bandwidth-intensive applications found in these verticals and, like many technologies that made their mark in the past, they're now finding their way into mainstream IT shops.
In addition, major storage vendors are now putting skin in the game. In 2009, Hewlett-Packard (HP) Co. bought Ibrix and introduced a new scale-out line; IBM ramped up the volume on its General Parallel File System (GPFS)-based scale-out file services and scale-out NAS appliance; NetApp Inc. introduced its Ontap 8 operating system that combines scale-out and scale-up modes; and Hitachi Data Systems expanded its BlueArc-based Hitachi NAS scale-out portfolio with the addition of the BlueArc Mercury product. Even smaller players -- like Bycast Inc., Isilon Systems Inc. and Panasas Inc. -- that focus on scale-out NAS in the niche markets where it's become mainstream are seeing more interest and traction in commercial IT.
There's also evidence that the increased use of collaboration technology in today's enterprises is favorably impacting scale-out NAS. In ESG's recent 2010 data center spending survey, 28% of organizations that cited new collaborative tools and business processes using Web 2.0 technologies (for example, blogs, wikis and social networking services) as the business initiative with the greatest impact on IT spending over the next 12 to 18 months said they will make significant investments in scale-out system technology for rapidly growing unstructured content. Among organizations that don't view collaboration as a key business initiative, only 14% will make similar investments.
Despite the rosy outlook for scale-out NAS in 2010, the shift to scale-out in commercial enterprises won't be immediate or wholesale; it will be a journey that will take some time. One reason: Change introduces risk -- mostly risk of the unknown -- so IT organizations will take a cautious approach. Plus, introducing new storage systems in the enterprise means training users on managing the system and laying out new data protection methodologies that work with the new technology. And tier-1 applications with demanding performance requirements will continue to need dedicated systems to support transactional performance, a good fit for the continued use of scale-up systems.
Managing data growth is an ongoing challenge for IT. It's also the "low-hanging fruit" with which CIOs can make an impact and reduce both costs and cycle times. Keeping up with data growth has become an ever-more-costly effort, historically constrained by inefficient, complex-to-manage scale-up architectures that strain as capacity needs increase. These limited-scale architectures have created an environment in which any change, even simply provisioning more capacity, can take six months or more thanks to a lengthy change management process. Deploying new applications in this type of environment is a long, drawn-out process that limits a business' ability to respond to changing market conditions. IT must be able to respond to business needs in real time, which in turn will drive IT to look at deploying newer scale-out technologies that can provide a platform for business agility, consolidation, ease of use and availability.
BIO: Terri McClure is a storage analyst at Enterprise Strategy Group, Milford, Mass.