Institutional Repository
Technical University of Crete

Parallel optimization algorithms for very large tensor decompositions

Papagiannakos Ioannis-Marios

URI: http://purl.tuc.gr/dl/dias/0069F4A3-9C49-47B7-A69D-11B96FBD36EA
Identifier: https://doi.org/10.26233/heallink.tuc.83411
Language: en
Extent: 57 pages
Title (en): Parallel optimization algorithms for very large tensor decompositions
Title (el): Παράλληλοι αλγόριθμοι βελτιστοποίησης για παραγοντοποιήσεις πολύ μεγάλων τανυστών
Creator (en): Papagiannakos Ioannis-Marios
Creator (el): Παπαγιαννακος Ιωαννης-Μαριος
Contributor [Thesis Supervisor] (en): Liavas Athanasios
Contributor [Thesis Supervisor] (el): Λιαβας Αθανασιος
Contributor [Committee Member] (en): Karystinos Georgios
Contributor [Committee Member] (el): Καρυστινος Γεωργιος
Contributor [Committee Member] (en): Samoladas Vasilis
Contributor [Committee Member] (el): Σαμολαδας Βασιλης
Publisher (el): Πολυτεχνείο Κρήτης
Publisher (en): Technical University of Crete
Academic Unit (en): Technical University of Crete::School of Electrical and Computer Engineering
Academic Unit (el): Πολυτεχνείο Κρήτης::Σχολή Ηλεκτρολόγων Μηχανικών και Μηχανικών Υπολογιστών
Content Summary: Tensors are generalizations of matrices to higher dimensions and are powerful tools that can model a wide variety of multi-way data dependencies. As a result, tensor decompositions can extract useful information from multi-aspect data tensors and have witnessed increasing popularity in various fields, such as data mining, social network analysis, biomedical applications, and machine learning. Many decompositions have been proposed, but this thesis focuses on the Tensor Rank Decomposition, or Canonical Polyadic Decomposition (CPD), computed via Alternating Least Squares (ALS). The goal of the CPD is to decompose a tensor into a sum of rank-1 terms, a procedure more difficult than its matrix counterpart, especially for large-scale tensors. CPD via ALS involves computationally expensive operations that cause performance bottlenecks. To accelerate the method and overcome these obstacles, we developed two parallel versions of ALS for computing the CPD. The first uses the full tensor and runs in parallel on heterogeneous, shared-memory systems (CPUs and GPUs). The second decomposes the tensor in parallel using small random block samples and runs on homogeneous, shared-memory systems (CPUs). (A minimal illustrative sketch of ALS-based CPD is given after the record below.)
Type of Item (el): Διπλωματική Εργασία
Type of Item (en): Diploma Work
License: http://creativecommons.org/licenses/by/4.0/
Date of Item: 2019-10-04
Date of Publication: 2019
Subject: Canonical polyadic decomposition
Subject: Alternating least squares
Subject: Shared memory systems
Subject: OpenMP
Subject: CUDA
Subject: Tensor
Subject: Randomized block sampling
Subject: PARAFAC
Subject: Parallel computing
Bibliographic Citation (en): Ioannis-Marios Papagiannakos, "Parallel optimization algorithms for very large tensor decompositions", Diploma Work, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2019
Bibliographic Citation (el): Ιωάννης-Μάριος Παπαγιαννάκος, "Παράλληλοι αλγόριθμοι βελτιστοποίησης για παραγοντοποιήσεις πολύ μεγάλων τανυστών", Διπλωματική Εργασία, Σχολή Ηλεκτρολόγων Μηχανικών και Μηχανικών Υπολογιστών, Πολυτεχνείο Κρήτης, Χανιά, Ελλάς, 2019
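As background for the Content Summary above, the following is a minimal, illustrative sketch of plain (sequential) ALS for the rank-R CPD of a 3-way tensor, written in Python/NumPy. It is not the parallel OpenMP/CUDA or randomized block-sampling implementation developed in the thesis; the function and variable names (cpd_als, khatri_rao, T, R) are hypothetical and chosen here only to illustrate the alternating least-squares updates.

import numpy as np

def khatri_rao(X, Y):
    # Column-wise Kronecker (Khatri-Rao) product: row (i, j) holds X[i, :] * Y[j, :],
    # with the second index varying fastest.
    R = X.shape[1]
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, R)

def cpd_als(T, R, iters=50, seed=0):
    # Plain ALS for a rank-R CPD of a 3-way tensor T (I x J x K):
    #   T[i, j, k] ~ sum_r A[i, r] * B[j, r] * C[k, r]
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))

    # Mode-n unfoldings (row-major column ordering, consistent with khatri_rao above).
    T1 = T.reshape(I, J * K)                      # mode-1: I x (J*K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)   # mode-2: J x (I*K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)   # mode-3: K x (I*J)

    for _ in range(iters):
        # Each step is a linear least-squares solve for one factor with the other
        # two fixed; the matricized-tensor-times-Khatri-Rao product (MTTKRP) is
        # typically the dominant cost and the main target for parallelization.
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Usage example: recover a synthetic noiseless rank-5 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 5)) for n in (30, 40, 50))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(T, R=5, iters=100)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T))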
