The work titled "VenOS: a virtualization framework for multiple tenant accommodation on reconfigurable platforms" by Panagiotis Miliadis, Dimitrios Theodoropoulos, Dionysios Pnevmatikatos, and Nectarios Koziris is licensed under Creative Commons Attribution 4.0 International
Bibliographic Citation
P. Miliadis, D. Theodoropoulos, D. N. Pnevmatikatos and N. Koziris, "VenOS: a virtualization framework for multiple tenant accommodation on reconfigurable platforms," in Applied Reconfigurable Computing. Architectures, Tools, and Applications, vol. 13569, Lecture Notes in Computer Science, L. Gan, Y. Wang, W. Xue, T. Chau, Eds., Cham, Switzerland: Springer, 2022, pp. 181–195, doi: 10.1007/978-3-031-19983-7_13.
https://doi.org/10.1007/978-3-031-19983-7_13
As FPGAs provide tremendous improvements in performance and energy efficiency across a wide range of workloads, cloud providers increasingly incorporate them in their infrastructure for on-demand application acceleration. However, accelerator development remains challenging, and ways to program, deploy, and securely utilize FPGAs are still difficult to manage for providers and developers alike. The complexity of such systems is compounded in multi-tenant environments, where cloud providers seek to multiplex tenants on a single FPGA platform to increase their return on investment. To this end, we present VenOS, a full-stack framework that enables hosting multiple applications on FPGAs. VenOS exposes a high-level API for developers to easily and securely offload execution to hardware. Under the hood, it employs a simple yet efficient NoC approach for sharing FPGA resources among tenants, virtualizes memory and I/O operations, and offers strong data isolation against malicious transactions. Finally, VenOS comprises a resource manager based on memory segmentation, along with isolation modules that form a protection layer between the accelerators and the system. Experimental results suggest that VenOS is a befitting platform that, despite its ease of use, accelerates applications by 1.15x–2x while introducing a resource overhead of only 11%. Finally, our system scales by up to 3.79x when four accelerators are mapped.
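The abstract describes a high-level offload API backed by memory segmentation, where isolation modules reject transactions that fall outside a tenant's segment. The following is a minimal, purely illustrative sketch of what such an interface could look like; all names (`venos_ctx`, `venos_acquire`, `venos_offload`) and the software-only segment check are hypothetical, not the actual VenOS API, and the accelerator is simulated by a plain copy.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical handle for an accelerator slot leased to one tenant. */
typedef struct {
    int slot_id;      /* accelerator region assigned by the resource manager */
    size_t seg_size;  /* size of the tenant's memory segment */
} venos_ctx;

/* Acquire an accelerator slot with a private memory segment (stubbed:
 * a real manager would allocate a region on the FPGA platform). */
static int venos_acquire(venos_ctx *ctx, size_t seg_size) {
    ctx->slot_id = 0;
    ctx->seg_size = seg_size;
    return 0;
}

/* Offload a buffer to the accelerator. The isolation layer checks that
 * the transfer fits the tenant's segment before any data is forwarded;
 * here the "accelerator" is simulated by copying input to output. */
static int venos_offload(venos_ctx *ctx, const char *in, char *out, size_t len) {
    if (len > ctx->seg_size)
        return -1;  /* out-of-segment transaction: rejected, not forwarded */
    memcpy(out, in, len);
    return 0;
}
```

The point of the sketch is the shape of the protection layer: the segment bound is enforced at the offload boundary, so a malicious or buggy tenant request never reaches shared hardware.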