Institutional Repository
Technical University of Crete


Discriminative training of language models

Fytopoulos Nikolaos

Simple Record


URI: http://purl.tuc.gr/dl/dias/9E3E6133-EC63-4B6A-B43A-BCDAB53B3774
Identifier: https://doi.org/10.26233/heallink.tuc.23008
Language: en
Extent: 54 pages
Title: Discriminative training of language models
Creator: Fytopoulos Nikolaos
Contributor [Supervisor]: Digalakis Vasilis
Contributor [Examination Committee Member]: Lagoudakis Michael
Contributor [Examination Committee Member]: Diakoloukas Vasilis
Publisher: Technical University of Crete
Academic Unit: Technical University of Crete::School of Electronic and Computer Engineering
Abstract: This thesis investigates discriminative training of continuous language models. The main motivation is that, by construction, continuous language models overcome the limits of N-gram models, which have been widely used in language modeling but generalize poorly and contain a very large number of parameters that are hard to adapt. N-gram models also require large amounts of training data in order to cover as many N-grams as possible. Continuous Gaussian Mixture Language Models (GMLMs) for speech recognition have proven effective at smoothing unseen events and adapt efficiently with a relatively small amount of data compared to N-gram models. The training and test data were extracted from the Wall Street Journal corpus; although the corpus vocabulary is large, the number of words actually used in this thesis is restricted. The data take the form of continuous-space vectors encoding the history of each word in the corpus, with their dimensionality reduced using SVD and LDA techniques. The main objective is to improve the performance of GMLMs previously trained with the ML criterion by adapting the Maximum Mutual Information (MMI) estimation method originally deployed for training HMM acoustic models. MMI acoustic models have proven to perform better than ML models, which provided a strong incentive to apply MMI training to continuous language models. Other discriminative criteria, such as Minimum Phone Error (MPE) and Minimum Classification Error (MCE), are also investigated theoretically. Perplexity is the metric used to measure the effectiveness of the presented method. The experiments test MMI models smoothed with their corresponding baseline ML model as well as unsmoothed MMI models, with mixed results; the desired improvement over ML models is achieved in the case of unsmoothed MMI models.
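As background for the abstract, a minimal sketch in standard notation (the thesis's own symbols and normalizations may differ): perplexity over a held-out word sequence w_1, ..., w_N, and the MMI criterion in its usual HMM acoustic-model form, where O_r denotes the observations of utterance r and W_r its reference word sequence.

% Perplexity of model p on held-out words w_1, ..., w_N (lower is better):
\mathrm{PP}(p) = \exp\left(-\frac{1}{N}\sum_{i=1}^{N}\log p(w_i \mid w_1,\dots,w_{i-1})\right)

% MMI objective as used for HMM acoustic models (the thesis adapts this
% criterion to GMLMs; the sum in the denominator runs over all competing
% hypotheses W'):
\mathcal{F}_{\mathrm{MMI}}(\lambda) = \sum_{r}\log\frac{p_\lambda(O_r \mid W_r)\,P(W_r)}{\sum_{W'} p_\lambda(O_r \mid W')\,P(W')}

Maximizing the MMI objective raises the probability of the correct hypothesis relative to all competitors, which is the discriminative effect the thesis transfers from acoustic models to continuous language models.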
Type: Diploma Work
License: http://creativecommons.org/licenses/by-nc/4.0/
Date: 2014-10-22
Date of Publication: 2014
Subject: Language modeling
Subject: Pattern classification systems
Subject: Pattern recognition computers
Subject: Pattern recognition systems
Bibliographic Citation: Nikolaos Fytopoulos, "Discriminative training of language models", Diploma Work, School of Electronic and Computer Engineering, Technical University of Crete, Chania, Greece, 2014
