Non-volatile memory for fast, reliable file systems

1992 ◽  
Vol 27 (9) ◽  
pp. 10-22 ◽  
Author(s):  
Mary Baker ◽  
Satoshi Asami ◽  
Etienne Deprit ◽  
John Ousterhout ◽  
Margo Seltzer

2021 ◽  
Vol 17 (3) ◽  
pp. 1-25 ◽  
Author(s):  
Bohong Zhu ◽  
Youmin Chen ◽  
Qing Wang ◽  
Youyou Lu ◽  
Jiwu Shu

Non-volatile memory and remote direct memory access (RDMA) provide extremely high performance in storage and network hardware. However, existing distributed file systems strictly isolate the file system and network layers, and these heavy layered software designs leave the high-speed hardware under-exploited. In this article, we propose an RDMA-enabled distributed persistent memory file system, Octopus+, which redesigns the file system's internal mechanisms by closely coupling non-volatile memory and RDMA features. For data operations, Octopus+ directly accesses a shared persistent memory pool to reduce memory-copying overhead, and actively fetches and pushes all data at the clients to rebalance the load between the server and the network. For metadata operations, Octopus+ introduces self-identified remote procedure calls for immediate notification between the file system and the network, and an efficient distributed transaction mechanism for consistency. Octopus+ also provides replication for better availability. Evaluations on Intel Optane DC Persistent Memory Modules show that Octopus+ achieves nearly the raw bandwidth for large I/Os and orders-of-magnitude better performance than existing distributed file systems.


2014 ◽  
Vol 22 (2) ◽  
pp. 125-139 ◽  
Author(s):  
Myoungsoo Jung ◽  
Ellis H. Wilson ◽  
Wonil Choi ◽  
John Shalf ◽  
Hasan Metin Aktulga ◽  
...  

Drawing parallels to the rise of general-purpose graphics processing units (GPGPUs) as accelerators for specific high-performance computing (HPC) workloads, there is a rise in the use of non-volatile memory (NVM) as an accelerator for I/O-intensive scientific applications. However, existing works have explored the use of NVM within dedicated I/O nodes, which are distant from the compute nodes that actually need such acceleration. As NVM bandwidth begins to outpace point-to-point network capacity, we argue for the need to break from the archetype of completely separated storage. Therefore, in this work we investigate co-locating NVM and compute by varying I/O interfaces, file systems, types of NVM, and both current and future SSD architectures, uncovering numerous bottlenecks implicit at these various levels of the I/O stack. We present novel hardware and software solutions, including the new Unified File System (UFS), to enable fuller utilization of the new compute-local NVM storage. Our experimental evaluation, which employs a real-world out-of-core (OoC) HPC application, demonstrates throughput increases in excess of an order of magnitude over current approaches.


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 25836-25871 ◽  
Author(s):  
Gianlucca O. Puglia ◽  
Avelino Francisco Zorzo ◽  
Cesar A. F. De Rose ◽  
Taciano Perez ◽  
Dejan Milojicic

2019 ◽  
Vol 68 (3) ◽  
pp. 402-413 ◽  
Author(s):  
Xiaoyi Zhang ◽  
Dan Feng ◽  
Yu Hua ◽  
Jianxi Chen

Author(s):  
Masashi TAWADA ◽  
Shinji KIMURA ◽  
Masao YANAGISAWA ◽  
Nozomu TOGAWA

2016 ◽  
Vol 213 (9) ◽  
pp. 2446-2451 ◽  
Author(s):  
Klemens Ilse ◽  
Thomas Schneider ◽  
Johannes Ziegler ◽  
Alexander Sprafke ◽  
Ralf B. Wehrspohn

Author(s):  
Franz-Josef Streit ◽  
Florian Fritz ◽  
Andreas Becher ◽  
Stefan Wildermann ◽  
Stefan Werner ◽  
...  
