Definition

Remote Direct Memory Access (RDMA)

Contributor(s): James Allen Miller

Remote Direct Memory Access (RDMA) is a technology that enables computers in a network to exchange data in main memory without involving the processor, cache or operating system of either computer. Like locally based Direct Memory Access (DMA), RDMA improves throughput and performance by freeing up those resources. RDMA also enables higher data transfer rates and low-latency networking. It can be implemented for both networking and storage applications.

How RDMA works

RDMA enables more direct data movement in and out of a server by implementing a transport protocol in the network interface card (NIC) hardware. The technology supports a feature called zero-copy networking that makes it possible to read data directly from the main memory of one computer and write that data directly to the main memory of another computer.

If both the sending and receiving devices support RDMA, the conversation between the two completes much more quickly than on a comparable non-RDMA network.
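To make the flow concrete, here is a minimal, hypothetical client-side sketch in C using the librdmacm helper API on Linux (link with -lrdmacm -libverbs). The server address, port and, above all, the peer's buffer address and rkey are placeholders: a real application must obtain the latter two from the peer out of band, for example through an initial send/receive exchange.

/* Minimal client-side sketch of a one-sided RDMA write using the
 * librdmacm helper API (Linux; link with -lrdmacm -libverbs).
 * Hypothetical values: server "192.168.1.10", port "7471", and the
 * peer's buffer address/rkey, which a real application must learn
 * out of band (e.g., via an initial send/receive exchange). */
#include <stdio.h>
#include <stdint.h>
#include <rdma/rdma_cma.h>
#include <rdma/rdma_verbs.h>

int main(void)
{
    struct rdma_addrinfo hints = { .ai_port_space = RDMA_PS_TCP }, *res;
    struct ibv_qp_init_attr attr = { 0 };
    struct rdma_cm_id *id;
    struct ibv_mr *mr;
    struct ibv_wc wc;
    char buf[64] = "hello over RDMA";

    /* Resolve the peer and create a connected endpoint with a QP. */
    if (rdma_getaddrinfo("192.168.1.10", "7471", &hints, &res))
        return perror("rdma_getaddrinfo"), 1;
    attr.cap.max_send_wr = attr.cap.max_recv_wr = 1;
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    attr.sq_sig_all = 1;                 /* completion per work request */
    if (rdma_create_ep(&id, res, NULL, &attr))
        return perror("rdma_create_ep"), 1;

    /* Register the buffer so the NIC can DMA it without the CPU. */
    if (!(mr = rdma_reg_msgs(id, buf, sizeof(buf))))
        return perror("rdma_reg_msgs"), 1;

    if (rdma_connect(id, NULL))
        return perror("rdma_connect"), 1;

    /* Placeholders: in a real program these come from the peer. */
    uint64_t remote_addr = 0;            /* peer's buffer address */
    uint32_t rkey = 0;                   /* peer's memory key */

    /* One-sided write: the remote CPU and OS are not involved. */
    if (rdma_post_write(id, NULL, buf, sizeof(buf), mr,
                        IBV_SEND_SIGNALED, remote_addr, rkey))
        return perror("rdma_post_write"), 1;
    if (rdma_get_send_comp(id, &wc) <= 0)   /* block for completion */
        return perror("rdma_get_send_comp"), 1;
    /* A real application would also check wc.status == IBV_WC_SUCCESS. */

    rdma_dereg_mr(mr);
    rdma_disconnect(id);
    rdma_destroy_ep(id);
    rdma_freeaddrinfo(res);
    return 0;
}

The sketch shows where zero-copy happens: once rdma_reg_msgs pins and registers the buffer, rdma_post_write hands the transfer to the NIC, which moves the bytes straight into the peer's registered memory without waking the remote processor.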

Figure: RDMA vs. standard network connection. At left is a standard network connection; at right is an RDMA connection. The initiator and the target must use the same type of RDMA technology -- RDMA over Converged Ethernet or InfiniBand, for example.

RDMA has proven useful in applications that require fast, massively parallel high-performance computing (HPC) clusters and data center networks. It is particularly useful for big data analysis, in supercomputing environments and for machine learning applications that demand the lowest latencies and highest transfer rates. RDMA is also used in connections between nodes in compute clusters and for latency-sensitive database workloads.

Network protocols that support RDMA

RDMA over Converged Ethernet. RoCE is a network protocol that enables RDMA over an Ethernet network. RoCE v1 operates directly over the Ethernet link layer, while RoCE v2 encapsulates the RDMA transport in routable UDP/IP packets.

Internet Wide Area RDMA Protocol. iWARP layers RDMA over the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP). It was developed by the Internet Engineering Task Force to enable applications on a server to read from or write to applications executing on another server without operating system involvement on either side.

InfiniBand. RDMA is the standard protocol for high-speed InfiniBand network connections. This RDMA network protocol is often used for intersystem communication and was first popular in high-performance computing environments. Because of its ability to speedily connect large computer clusters, InfiniBand has found its way into additional use cases such as big data environments, databases, highly virtualized settings and resource-demanding web applications.
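Whichever of these protocols an adapter implements, Linux applications reach it through the same verbs API, so code can discover local RDMA devices and their transports at runtime. Below is a small sketch, assuming libibverbs (compile with -libverbs), that lists each device and reports whether its first port's link layer is Ethernet -- as RoCE and iWARP adapters report -- or InfiniBand.

/* Enumerate local RDMA devices via libibverbs and report each
 * device's first-port link layer. RoCE and iWARP adapters report
 * Ethernet; native InfiniBand HCAs report InfiniBand.
 * Compile with: cc list_rdma.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs)
        return perror("ibv_get_device_list"), 1;

    for (int i = 0; i < n; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        struct ibv_port_attr port;

        /* Port numbers are 1-based; query the first port only. */
        if (ctx && !ibv_query_port(ctx, 1, &port))
            printf("%-16s link layer: %s\n",
                   ibv_get_device_name(devs[i]),
                   port.link_layer == IBV_LINK_LAYER_ETHERNET
                       ? "Ethernet (RoCE or iWARP)"
                       : "InfiniBand");
        if (ctx)
            ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}

Because the verbs layer hides the transport, the same data-path code runs largely unchanged over RoCE, iWARP or InfiniBand; the protocols differ mainly in addressing, routability and network configuration.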

Products and vendors that support RDMA

  • Apache Hadoop and Apache Spark big data analysis
  • Baidu Paddle (PArallel Distributed Deep LEarning) platform
  • Broadcom and Emulex adapters
  • Caffe deep learning framework
  • Cavium FastLinQ 45000/41000 Series Ethernet NICs
  • Ceph object storage platform
  • ChainerMN Python-based deep learning open source framework
  • Chelsio Terminator 5 & 6 iWARP adapters
  • Dell EMC PowerEdge servers
  • FreeBSD operating system
  • GlusterFS internetwork filesystem
  • Intel Xeon Scalable processors and Platform Controller Hub
  • Mellanox ConnectX family of network adapters and InfiniBand switches
  • Microsoft Windows Server (2012 and later), which supports RDMA-capable network adapters through SMB Direct, the Hyper-V virtual switch and the Cognitive Toolkit
  • Nutanix's upcoming NX-9030 NVM Express flash appliance is said to support RDMA.
  • Nvidia DGX-1 deep learning appliance
  • Oracle Solaris 11 and higher for NFS over RDMA
  • Red Hat
  • SUSE Linux Enterprise Server
  • TensorFlow open source software library for machine intelligence
  • Torch scientific computing framework
  • VMware ESXi

RDMA with flash, SSD and NVDIMMs

Because all-flash storage systems perform much faster than disk or hybrid arrays, storage latency drops significantly. As a result, the traditional software stack becomes a bottleneck that adds to overall latency. RDMA is one of the technologies that can step in to lower that latency.

Non-volatile dual in-line memory module (NVDIMM), a type of memory that acts as storage, is quickly finding its way into data centers. NVDIMM can improve database performance by as much as 100 times and will prove especially beneficial in virtualized clusters and as a means to accelerate virtual SANs. But to get the most out of NVDIMM, in terms of both data integrity and performance when transmitting data between servers or throughout a virtual cluster, you must use the fastest network possible. RDMA over Converged Ethernet fits the bill, allowing data to move directly between NVDIMM modules with little system overhead and low latency.

RDMA over Fabrics and future directions

RDMA over Fabrics, a logical evolution of existing shared storage architectures, increases the performance of access to shared data held on solid-state and flash storage. Here, an RDMA network moves data between memory address spaces over an interface using a protocol, such as RoCE, iWARP or InfiniBand, that accelerates operations to increase the value of application, server and storage investments. Fibre Channel storage networks at Gen 6 -- 32 gigabits per second -- and PCI Express support the RDMA over Fabrics interface.


Video: The RoCE Initiative explains RDMA over Converged Ethernet.

RDMA storage technology may one day be enabled for scale-out file systems, scale-out distributed SANs or other applications.

RDMA and Fibre Channel are the fabric transports supported by the NVM Express over Fabrics (NVMe-oF) specification published on June 5, 2016. The specification extends the benefits of NVMe technology over distance. As NVMe-oF gains a foothold, so will RDMA as a means of transporting data in these environments.

With Ethernet performance surging in recent years, and with the age of software-defined networking and convergence upon us, RoCE may give Ethernet the edge over InfiniBand in the long term.

This was last updated in August 2017
