The work titled Accelerated stochastic gradient for nonnegative tensor completion and parallel implementation by the author(s) Siaminou Ioanna, Papagiannakos Ioannis-Marios, Kolomvakis Christos, Liavas Athanasios is made available under the Creative Commons Attribution 4.0 International license.
Bibliographic Reference
I. Siaminou, I. M. Papagiannakos, C. Kolomvakis and A. P. Liavas, "Accelerated stochastic gradient for nonnegative tensor completion and parallel implementation," in 2021 29th European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 2021, pp. 1790-1794, doi: 10.23919/EUSIPCO54536.2021.9616067.
https://doi.org/10.23919/EUSIPCO54536.2021.9616067
We consider the problem of nonnegative tensor completion. We adopt the alternating optimization framework and solve each nonnegative matrix completion subproblem via a stochastic variant of the accelerated gradient algorithm. We experimentally assess the effectiveness and efficiency of our algorithm on both real-world and synthetic data. We develop a shared-memory implementation of our algorithm using the OpenMP multithreading API, which attains significant speedup. We believe that our approach is a very competitive candidate for the solution of very large nonnegative tensor completion problems.
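To illustrate the general idea behind the approach described in the abstract, the following is a minimal sketch of one accelerated (Nesterov-type) stochastic projected-gradient step for the nonnegative matrix completion subproblem that updates one CP factor inside an alternating optimization sweep. All function and parameter names (stochastic_accel_step, batch_size, step, beta) are illustrative assumptions, and the momentum rule, step size, and sampling scheme are placeholders rather than the exact scheme of the paper.

```python
import numpy as np

def stochastic_accel_step(A, A_prev, B, C, obs, vals, rng,
                          batch_size=1024, step=1e-3, beta=0.9):
    """One hypothetical accelerated stochastic projected-gradient step for
    factor A of a 3-way nonnegative CP model, using a mini-batch of the
    observed entries (a sketch, not the paper's exact update rule)."""
    # Extrapolated (momentum) point.
    Y = A + beta * (A - A_prev)

    # Sample a mini-batch of observed entries; obs holds (i, j, k) indices.
    idx = rng.choice(len(vals), size=min(batch_size, len(vals)), replace=False)
    i, j, k = obs[idx, 0], obs[idx, 1], obs[idx, 2]

    # Residuals of the CP model at the sampled entries, evaluated at Y.
    pred = np.einsum('br,br,br->b', Y[i], B[j], C[k])
    resid = pred - vals[idx]

    # Stochastic gradient of 0.5 * sum of squared residuals w.r.t. rows of A.
    G = np.zeros_like(A)
    np.add.at(G, i, resid[:, None] * (B[j] * C[k]))

    # Projected gradient step from the extrapolated point (nonnegativity).
    A_new = np.maximum(Y - step * G, 0.0)
    return A_new, A  # new iterate and previous iterate for the next momentum term
```

In an alternating optimization loop, the same step would be applied to B and C in turn, with the roles of the modes permuted; a shared-memory implementation could parallelize the per-entry gradient accumulation across threads, in the spirit of the OpenMP implementation mentioned above.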