The work titled Accelerating binarized convolutional neural networks with dynamic partial reconfiguration on disaggregated FPGAs by the author(s) Skrimponis Panagiotis, Pissadakis Emmanouil, Alachiotis Nikolaos, Pnevmatikatos Dionysios is made available under the Creative Commons Attribution-NonCommercial 4.0 International license.
Bibliographic Reference
P. Skrimponis, E. Pissadakis, N. Alachiotis, and D. Pnevmatikatos, “Accelerating binarized convolutional neural networks with dynamic partial reconfiguration on disaggregated FPGAs,” in Parallel Computing: Technology Trends, vol. 36, Advances in Parallel Computing, I. Foster, G. R. Joubert, L. Kučera, W. E. Nagel, and F. Peters, Eds., Amsterdam, The Netherlands: IOS Press, 2020, pp. 691–700, doi: 10.3233/APC200099.
https://doi.org/10.3233/APC200099
Convolutional Neural Networks (CNNs) currently dominate the fields of artificial intelligence and machine learning due to their high accuracy. However, their computational and memory needs intensify with the complexity of the problems they are deployed to address, frequently requiring highly parallel and/or accelerated solutions. Recent advances in machine learning showcased the potential of CNNs with reduced precision, relying on binarized weights and activations and thereby leading to Binarized Neural Networks (BNNs). Due to the embarrassingly parallel and discrete-arithmetic nature of the required operations, BNNs are well suited to FPGA technology, allowing problem complexity to be scaled up considerably. However, the fixed amount of resources per chip introduces an upper bound on the dimensions of the problems that FPGA-accelerated BNNs can solve. To this end, we explore the potential of remote FPGAs operating in tandem within a disaggregated computing environment to accelerate BNN computations, and exploit dynamic partial reconfiguration (DPR) to boost aggregate system performance. We find that DPR alone boosts the throughput of a fixed set of BNN accelerators deployed on a remote FPGA by up to 3x compared with a static design that deploys the same accelerator cores on a software-programmable FPGA locally. In addition, performance increases linearly with the number of remote devices when inter-FPGA communication is reduced. To exploit DPR on remote FPGAs and reduce communication, we adopt a versatile remote-accelerator deployment framework for disaggregated datacenters, thereby boosting BNN performance with negligible development effort.
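To illustrate why binarized arithmetic maps so naturally onto FPGA fabric, the sketch below shows the standard XNOR-popcount formulation of a binarized dot product. It is a generic illustration of the BNN technique under the common {-1, +1} encoding of weights and activations as 0/1 bits, not code taken from the accelerator described in the paper.

```c
/*
 * Minimal sketch (generic BNN arithmetic, not the paper's accelerator):
 * a signed dot product over 64 weight/activation pairs packed into
 * single 64-bit words. With values restricted to {-1, +1}, each
 * multiplication reduces to an XNOR and the accumulation to a
 * population count, which is why BNN layers map well onto FPGA
 * lookup tables and carry chains.
 */
#include <stdint.h>
#include <stdio.h>

/* Signed dot product of 64 binarized values packed into one word each. */
static int binarized_dot64(uint64_t activations, uint64_t weights)
{
    uint64_t agree = ~(activations ^ weights);   /* XNOR: 1 where signs match */
    int matches = __builtin_popcountll(agree);   /* number of +1 products     */
    return 2 * matches - 64;                     /* matches minus mismatches  */
}

int main(void)
{
    uint64_t a = 0xF0F0F0F0F0F0F0F0ULL;          /* example packed activations */
    uint64_t w = 0xFF00FF00FF00FF00ULL;          /* example packed weights     */
    int acc = binarized_dot64(a, w);
    /* The binarized activation fed to the next layer is just the sign bit. */
    int out_bit = (acc >= 0);
    printf("dot = %d, binarized output = %d\n", acc, out_bit);
    return 0;
}
```

In hardware, the XNOR and popcount stages for many such words are instantiated side by side and pipelined, which is the embarrassingly parallel structure the abstract refers to.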