The work titled "A flexible and efficient algorithmic framework for constrained matrix and tensor factorization" by Kejun Huang, N. D. Sidiropoulos, and Athanasios Liavas is made available under a Creative Commons Attribution 4.0 International license.
Bibliographic Reference
K. Huang, N. D. Sidiropoulos and A. P. Liavas, "A flexible and efficient algorithmic framework for constrained matrix and tensor factorization," IEEE Trans. Signal Process., vol. 64, no. 19, pp. 5052-5065, Oct. 2016. doi: 10.1109/TSP.2016.2576427
https://doi.org/10.1109/TSP.2016.2576427
We propose a general algorithmic framework for constrained matrix and tensor factorization, which is widely used in signal processing and machine learning. The new framework is a hybrid between alternating optimization (AO) and the alternating direction method of multipliers (ADMM): each matrix factor is updated in turn, using ADMM, hence the name AO-ADMM. This combination can naturally accommodate a great variety of constraints on the factor matrices, and almost all possible loss measures for the fitting. Computation caching and warm start strategies are used to ensure that each update is evaluated efficiently, while the outer AO framework exploits recent developments in block coordinate descent (BCD)-type methods which help ensure that every limit point is a stationary point, as well as faster and more robust convergence in practice. Three special cases are studied in detail: non-negative matrix/tensor factorization, constrained matrix/tensor completion, and dictionary learning. Extensive simulations and experiments with real data are used to showcase the effectiveness and broad applicability of the proposed framework.
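To make the structure of the framework concrete, the following is a minimal Python sketch of the AO-ADMM idea for the nonnegative matrix factorization special case only: the outer loop alternates over the two factors, and each factor update approximately solves a nonnegativity-constrained least-squares subproblem with a few ADMM iterations, reusing a cached Gram matrix and Cholesky factor and warm-starting from the previous primal/dual variables. Function names, iteration counts, and the random initialization are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def admm_nls(X, W, H, U, n_inner=5):
    """One block update: approximately solve min_{H >= 0} 0.5*||X - W H||_F^2
    via ADMM, warm-started from the previous primal H and scaled dual U.
    A minimal sketch; inner iteration count is an illustrative choice."""
    k = W.shape[1]
    G = W.T @ W                           # Gram matrix, cached for all inner iterations
    F = W.T @ X                           # cross-product, cached as well
    rho = np.trace(G) / k                 # step-size heuristic
    L = cho_factor(G + rho * np.eye(k))   # Cholesky factor, computed once per block update
    for _ in range(n_inner):
        H_aux = cho_solve(L, F + rho * (H + U))   # unconstrained least-squares step
        H = np.maximum(0.0, H_aux - U)            # proximal step: project onto H >= 0
        U = U + H - H_aux                         # scaled dual update
    return H, U

def ao_admm_nmf(X, k, n_outer=100, seed=0):
    """Outer alternating-optimization loop for X ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    UH = np.zeros_like(H)       # duals are carried across outer iterations (warm start)
    UW = np.zeros_like(W.T)
    for _ in range(n_outer):
        H, UH = admm_nls(X, W, H, UH)             # update H with W fixed
        Wt, UW = admm_nls(X.T, H.T, W.T, UW)      # update W via the transposed problem
        W = Wt.T
    return W, H
```

Other constraints or loss functions would change only the proximal step and, for non-quadratic losses, the form of the least-squares step, which is what gives the framework its flexibility; the tensor and completion cases described in the paper follow the same pattern with different subproblems.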