The Internet Engineering Task Force's (IETF) Internet Engineering Steering Group (IESG) formally approved the Network File System Version 4 Minor Revision 1 (NFSv4.1) protocol in December 2008. In January 2010, it was published as the 617-page Request for Comments (RFC) 5661. However, it could take another year for developers to create a client that fully supports the pNFS spec, which could push shipping pNFS NAS systems into 2012.
"There's clearly going to be a huge increase in performance," said Larry Jones, vice president of marketing at Panasas Inc., one of the vendors that developed technology used for pNFS. Jones said pNFS will usher in a new way of accessing file data, with the client running the show. "The client contains the standard base that goes into Linux and all the other OSes, and it knows how to talk to the OS itself," he said.
"Then there are a set of drivers, three to be precise, that know how to talk to different types of storage systems," Jones added. The drivers are contained in RFC 5661 (file level), RFC 5663 (block level) and RFC 5664 (object level). According to Jones, NetApp Inc. developed the file-based technology, EMC Corp. developed the block-based technology (for Fibre Channel, iSCSI and Fibre Channel over Ethernet), and Panasas developed the object-based technology.
How does NFSv4 differ from NFSv4.1?
NetApp senior technical director Mike Eisler, an RFC 5661 editor, described the NFSv4.1 RFC as the largest ever published by the IETF. RFC 5661 and the pNFS specification allow compliant clients to create separate paths for requesting file metadata and file data from NFS servers.
Current file service requests, whether made with NFSv3 or NFSv4, begin with the client requesting data from an arbitrary NFS node. That single NFS node then finds the storage nodes that contain the requested data and sequentially gathers the data before presenting it to the client. Relying on a single node to gather data from multiple sources adds latency and reduces throughput.
In NFSv4.1, the client controls the file-data request. According to Panasas' Jones, when the client makes a data request, it first communicates with the metadata server over IP. Once the metadata server authorizes the client and presents a map of where the data is located, the client communicates with the appropriate nodes over the appropriate transport protocol and begins collecting the data directly from the storage nodes over multiple parallel data streams. The client doesn't depend on the NFS server to collect the data for it.
"Instead, what [the client] does is directly [access all the nodes holding data] at once and, boom, it comes straight back," Jones said. "You don't have to worry about the extra hop."
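The split-path flow Jones describes can be sketched as a toy simulation. Everything here is illustrative: the node names, stripe layout and the "layout" returned by the metadata server stand-in are invented for the example, not taken from the RFC 5661 wire protocol.

```python
# Toy sketch of a pNFS-style read: ask a metadata server where the stripes
# live, then fetch them directly from the storage nodes in parallel.
# Node names and stripe contents are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical storage nodes, each holding one stripe of a file.
STORAGE_NODES = {
    "node-a": b"stripe0-",
    "node-b": b"stripe1-",
    "node-c": b"stripe2",
}

def get_layout(filename):
    """Stand-in for the metadata server: returns where the stripes live."""
    return ["node-a", "node-b", "node-c"]

def read_stripe(node):
    """Stand-in for a direct client-to-storage-node read."""
    return STORAGE_NODES[node]

def pnfs_read(filename):
    # Step 1: the client asks the metadata server for the file's layout over IP.
    layout = get_layout(filename)
    # Step 2: the client reads every stripe directly, in parallel streams,
    # rather than waiting on one NFS node to gather them sequentially.
    with ThreadPoolExecutor(max_workers=len(layout)) as pool:
        stripes = pool.map(read_stripe, layout)
    return b"".join(stripes)

print(pnfs_read("bigfile"))  # -> b'stripe0-stripe1-stripe2'
```

The parallel map preserves stripe order, so the client reassembles the file correctly even though the reads overlap in time.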
Brad Bunce, EMC's director of unified storage, said IT administrators will use standard mount commands to mount the file system, and the host will already be connected to the back-end storage through standard implementations.
Parallel file services predicted to enhance NFS adoption
Terri McClure, a senior analyst at Enterprise Strategy Group, said NFSv4.1 is best thought of as a split-path architecture where the file metadata and protocol data are transmitted over the IP network, and the file data itself travels over the storage architecture as files, blocks or objects. Bunce said the split-path architecture of pNFS means faster file service and better scalability. "[pNFS] splits the metadata and data requests to achieve the parallelism, so that gives you the performance and scalability benefits over the standard 4.0 or previous versions of NFS."
McClure believes pNFS will accelerate NFS adoption in mainstream companies. "[NFS] 4.1 should certainly accelerate transition to NFS because of the parallel file services," she said. "That's pretty powerful. It allows companies to start to overcome some of the bandwidth limitations when they are trying to serve large files."
Organizations that frequently deal with large files will get the most benefit from pNFS, EMC's Bunce said. Applications that do large volumes of small transactions, such as instant messaging systems, won't see many benefits because the files are too small for parallel delivery to matter.
But files don't have to be massive for pNFS benefits to show up, Bunce said. He estimated that files larger than 64 KB should get a performance boost.
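Bunce's 64 KB figure can be made concrete with some striping arithmetic. The stripe unit and node count below are assumptions chosen for illustration; actual layouts depend on the server configuration.

```python
# Illustrative arithmetic only: the stripe unit and node count are assumed
# values, not mandated by pNFS.
STRIPE_UNIT = 64 * 1024   # assume 64 KB stripe units
NUM_NODES = 8             # assume 8 storage nodes

def parallel_streams(file_size):
    """How many storage nodes could serve a file of this size at once."""
    stripes = -(-file_size // STRIPE_UNIT)  # ceiling division
    return min(stripes, NUM_NODES)

# A 4 KB instant-message-sized file fits in one stripe: no parallelism.
print(parallel_streams(4 * 1024))      # -> 1
# A 1 MB file spans 16 stripes, enough to keep all 8 nodes busy.
print(parallel_streams(1024 * 1024))   # -> 8
```

Under these assumptions, a file has to span at least two stripe units before parallel delivery can help at all, which matches the intuition that small-transaction workloads see little gain.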
NetApp's Eisler said the pNFS protocol will also be a boon to server clustering in general. "You get all the management advantages of storage clustering without having to pay for any of the performance drawbacks that you would get in the absence of a more agile protocol like parallel NFS."