The first fundamental change to the Network File System (NFS) standard in decades -- the Parallel NFS (pNFS) extension -- is currently in the process of ratification.
Although the pNFS extension has the support of NAS hardware vendors, it's not yet clear how soon application and operating system vendors will support pNFS.
Parallel NFS, which is part of NFS 4.1, is pending approval from the IETF's area directors, who oversee standards for network infrastructure. If all goes well, NAS vendors expect NFS 4.1 to be ready early next year.
Parallel NFS provides a specification for placing a metadata server outside the data path of servers attached to a multinode storage system. Storage nodes can be held together with another clustered file system, while pNFS exposes the block mapping of files and objects to the client. The client then receives those blocks through multiple parallel network channels and reassembles them for presentation to the user.
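The flow described above can be sketched in a few lines of Python. This is a toy illustration only: the class and function names (`MetadataServer`, `DataServer`, `Stripe`, `pnfs_read`) are invented for the example and are not part of the NFS 4.1 wire protocol. The point is the separation of roles: the metadata server hands out a layout and then stays out of the data path, while the client reads stripes directly from the storage nodes and reassembles them.

```python
# Toy sketch of the pNFS data flow; all names are illustrative.
from dataclasses import dataclass


@dataclass
class Stripe:
    offset: int        # byte offset of this stripe within the file
    data_server: int   # index of the data server holding the stripe


class MetadataServer:
    """Hands out file layouts; never touches file data itself."""
    def __init__(self, layouts):
        self.layouts = layouts  # filename -> list of Stripe

    def get_layout(self, filename):
        return self.layouts[filename]


class DataServer:
    """One storage node serving raw stripe data."""
    def __init__(self, stripes):
        self.stripes = stripes  # offset -> bytes

    def read(self, offset):
        return self.stripes[offset]


def pnfs_read(filename, mds, data_servers):
    # 1. Ask the metadata server which node holds which stripe.
    layout = mds.get_layout(filename)
    # 2. Read each stripe directly from its data server (a real
    #    client would issue these reads over parallel channels).
    parts = {s.offset: data_servers[s.data_server].read(s.offset)
             for s in layout}
    # 3. Reassemble the stripes in offset order on the client.
    return b"".join(parts[off] for off in sorted(parts))
```

For instance, a file striped as `b"para"` on node 0 and `b"llel"` on node 1 would be read back as `b"parallel"` without the metadata server ever seeing a byte of file data.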
Gigabit Ethernet bottleneck
Storage pros are already considering specific benefits they might get from the new protocol. Some storage administrators said they hope VMware will offer a client for pNFS to help overcome storage I/O issues with the server virtualization software.
Tom Becchetti, storage engineer for a large medical manufacturing company, said pNFS might finally give him a way to run VMware virtual machines attached to NFS shares instead of using VMware's proprietary VMFS clustered file system. "I've been looking into running VMware on NFS, but have been leery of doing it with a standalone NFS server for scalability reasons," he said. "The hesitancy toward using NFS mainly comes from the fact that Gigabit Ethernet is a bottleneck for most shops."
Scott Lowe, national technical lead for virtualization for VAR ePlus Technologies, said his clients have run into Gigabit Ethernet bottlenecks when using VMware on NFS because the server virtualization software isn't built to take advantage of multiple Gigabit Ethernet links by default.
"It's not just a matter of adding additional NICs," he said. "The way VMware places traffic onto uplinks means that you need multiple VM kernel implementations on different IP subnets, as well as multiple NFS servers on different subnets, to use all the bandwidth in aggregated links."
VMware officials did not respond to requests for comment on whether the company will incorporate pNFS.
Applications must adapt for pNFS success
VMware might not be the most widespread application used with NFS, but it's a good example of the kinds of roadblocks that remain between the emerging standard and the commoditization of parallel I/O.
The biggest hurdle is that the pNFS architecture pushes most of the I/O processing down to the client. That's not a problem in terms of hardware resources; most client servers have more CPU to spare than storage systems do. But it is a problem on the software side: the client must run code that communicates with the pNFS metadata server and absorbs parallel I/O through multiple channels.
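To make the client-side burden concrete, here is a minimal Python sketch of what "absorbing parallel I/O through multiple channels" entails. The names (`fetch_stripe`, `parallel_read`) and the in-memory `store` are assumptions made for illustration; a real pNFS client would do this over network connections, but the fan-out, per-channel reads, and reassembly all land on the client's CPU in the same way.

```python
# Illustrative only: the client fans reads out over several worker
# threads (standing in for parallel network channels) and reassembles
# the results itself.
from concurrent.futures import ThreadPoolExecutor


def fetch_stripe(offset, store):
    # Stand-in for a stripe read over one network channel.
    return offset, store[offset]


def parallel_read(store, offsets, channels=4):
    with ThreadPoolExecutor(max_workers=channels) as pool:
        futures = [pool.submit(fetch_stripe, off, store) for off in offsets]
        parts = dict(f.result() for f in futures)
    # Reassembly in offset order -- work that a plain NFS client
    # leaves to the server, but a pNFS client must do itself.
    return b"".join(parts[off] for off in sorted(offsets))
```

Every stripe fetched, tracked, and stitched back together here is work the client software has to be written to do, which is why a mature pNFS client is the gating factor for adoption.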
Once the standard is ratified, vendors of other applications and operating systems above the storage layer will also have to decide whether to support it. Parallel NFS is already being incorporated into the Linux kernel, but has yet to make it into the Red Hat or SUSE commercial distributions of the open source OS.
"NetApp will definitely have a hand in that," said Michael Eisler, NetApp's senior technical director. "We have development engineers working on a Linux NFS client, which is the main challenge for pNFS."
EMC, Panasas and other members of the IETF NFS committee also said they'll do whatever they can to push the standard in the wider market, but analysts said it will be a slow process. "NFS v4.1 requires NFS v4, which has seen very little market traction," wrote ESG analyst Terri McClure in an email to SearchStorage.com. "New versions of NFS are notoriously slow to be adopted. New NFS clients need to be implemented and performance tends to be slow until the client software matures."