Abstract—Applications’ memory footprints are growing rapidly due to increases in their data sets and additional software layers. Modern RDMA-capable networks such as InfiniBand and Myrinet, with low latency and high bandwidth, offer a new way to utilize remote memory. Remote idle memory can be exploited to improve the performance of memory-intensive applications on individual nodes, since swapping over the network can be faster than traditional swapping to local disk. In this paper, we design a remote memory system for remote memory utilization in InfiniBand clusters. We present the architecture, communication method, and algorithm of the InfiniBand Block Device (IBD), which is implemented as a loadable kernel module for version 3.5.0-45 of the Linux kernel. In particular, we discuss design issues in transferring pages to remote memory. Our experiments show that IBD yields significant performance gains for applications whose working sets are larger than the local memory on a node but smaller than the idle memory available on the cluster.
Index Terms—Remote memory, distributed memory, swapping, cluster system, InfiniBand.
The authors are with the Cloud Computing Research Department, Electronics and Telecommunications Research Institute (ETRI), Daejeon 305-350, Republic of Korea (e-mail: {hyunwha, khk, sbae}@etri.re.kr).
Cite: Hyun-Hwa Choi, Kangho Kim, and Seung-Jo Bae, "A Remote Memory System for High Performance Data Processing," International Journal of Future Computer and Communication, vol. 4, no. 1, pp. 50-54, 2015.