Institutional Repository
Technical University of Crete

A reconfigurable logic based accelerator for bioinspired DNN architectures with dendritic structure and a novel learning rule

Palatiana Nikoletta

URI: http://purl.tuc.gr/dl/dias/770878C0-99A0-4F37-990E-3D52D4C81D4F
Year: 2024
Type of Item: Diploma Work
Bibliographic Citation: Nikoletta Palatiana, "A reconfigurable logic based accelerator for bioinspired DNN architectures with dendritic structure and a novel learning rule", Diploma Work, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2024. https://doi.org/10.26233/heallink.tuc.99534
Summary

Artificial Neural Networks (ANNs) have been successfully used in Deep Learning architectures to solve a variety of challenging machine learning problems. Nevertheless, they usually require a considerable amount of energy. In addition, they demonstrate weakness in continually learning new tasks without forgetting previous ones, and they require multiple sets of data and a considerable number of trainable parameters. The brain, on the other hand, operates at a very low energy level without facing problems learning new things. Drawing inspiration from the human brain to overcome these limitations of ANNs, the Poirazi lab at IMBB-FORTH developed a bio-inspired architecture that incorporates dendritic structure and receptive fields, along with a novel approach to Hebbian learning.

In this thesis, a lower-level NumPy implementation was developed based on their initial Keras implementation in order to analyze and understand the model and its training process in greater depth. This was followed by the design and implementation of an FPGA-based architecture for training the ANN, deployed on the Xilinx ZCU102 board. Exploiting the high parallelism and power efficiency of the FPGA, the proposed architecture accelerates training and reduces power consumption. In particular, the proposed FPGA implementation executes one training epoch (on the MNIST dataset) in only 13.46 seconds, compared to 490 seconds on the CPU (Keras) and 45 seconds on the GPU (Keras). Furthermore, it achieves 346 times greater energy efficiency than the CPU implementation (Keras) and 57.5 times greater energy efficiency than the GPU implementation (Keras).
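The summary names the ingredients of the model (dendritic branches, local receptive fields, a Hebbian-style update) without specifying them, so the following is only a minimal NumPy sketch of how such a layer could look. All sizes, the 1D receptive-field masking, and the plain Hebbian step below are illustrative assumptions, not the Poirazi lab's actual architecture or the thesis's novel learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the thesis): 784 MNIST pixels,
# 64 somata, 4 dendritic branches per soma, 64-pixel receptive fields.
N_IN, N_SOMA, N_DEND, RF_SIZE = 784, 64, 4, 64

# Each dendrite is wired to a random local receptive field: a binary
# mask restricts its weights to a contiguous patch of the input
# (a simplification of 2D image patches).
mask = np.zeros((N_SOMA * N_DEND, N_IN))
for d in range(N_SOMA * N_DEND):
    start = rng.integers(0, N_IN - RF_SIZE)
    mask[d, start:start + RF_SIZE] = 1.0

W_dend = rng.normal(0.0, 0.1, (N_SOMA * N_DEND, N_IN)) * mask

def forward(x):
    """Dendritic branches integrate their receptive fields through a
    nonlinearity, then each soma sums its branches."""
    dend = np.maximum(W_dend @ x, 0.0)               # per-branch ReLU
    soma = dend.reshape(N_SOMA, N_DEND).sum(axis=1)  # somatic summation
    return dend, soma

def hebbian_update(x, dend, lr=1e-3):
    """Generic Hebbian step (NOT the thesis's novel rule): strengthen
    weights in proportion to pre- and post-synaptic activity, keep the
    receptive-field mask, and renormalize to bound weight growth."""
    global W_dend
    W_dend += lr * np.outer(dend, x) * mask
    W_dend /= np.linalg.norm(W_dend, axis=1, keepdims=True) + 1e-8

x = rng.random(N_IN)          # stand-in for a flattened MNIST digit
dend, soma = forward(x)
hebbian_update(x, dend)
```

For context, the reported timings work out to a training speedup of roughly 490 / 13.46 ≈ 36x for the FPGA over the CPU (Keras) run and 45 / 13.46 ≈ 3.3x over the GPU (Keras) run, alongside the 346x and 57.5x energy-efficiency gains stated above.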
