The work titled Linear dynamical models in speech synthesis by the creators Tsiaras Vasileios, Ranniery Maia, Diakoloukas Vasilis, Stylianou Yannis, Digalakis Vasilis is made available under the Creative Commons Attribution 4.0 International license.
Bibliographic Reference
V. Tsiaras, R. Maia, V. Diakoloukas, Y. Stylianou, and V. Digalakis, "Linear dynamical models in speech synthesis," in Proc. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014, doi: 10.1109/ICASSP.2014.6853606.
https://doi.org/10.1109/ICASSP.2014.6853606
Hidden Markov models (HMMs) are becoming the dominant approach for text-to-speech synthesis (TTS). HMMs provide an attractive acoustic modeling scheme which has been exhaustively investigated and developed for many years. Modern HMM-based speech synthesizers have approached the quality of the best state-of-the-art unit-selection systems. However, we believe that statistical parametric speech synthesis has not reached its potential, since HMMs are limited by several assumptions which do not match the properties of speech. We therefore propose in this paper to use Linear Dynamical Models (LDMs) instead of HMMs. LDMs can better model the dynamics of speech and can produce a naturally smoother trajectory for the synthesized speech. We perform a series of experiments with different system configurations to assess the performance of LDMs for speech synthesis. We show that LDM-based synthesizers can outperform HMM-based ones in terms of cepstral distance and are a very promising acoustic modeling alternative for statistical parametric TTS.
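As context for the comparison made in the abstract, a linear dynamical model is a continuous state-space model. The sketch below uses the standard LDM form (state transition x_t = A x_{t-1} + w_t, observation y_t = C x_t + v_t) with toy parameter values chosen purely for illustration, not taken from the paper; it shows how an LDM generates an observation trajectory that evolves gradually from frame to frame, in contrast to the piecewise-constant state means of a standard HMM.

# Minimal sketch (not the authors' implementation) of trajectory generation
# from a generic linear dynamical model:
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)   (hidden state evolution)
#   y_t = C x_t     + v_t,  v_t ~ N(0, R)   (observed frame, e.g. cepstra)
# All matrices below are toy values used only to illustrate the model form.
import numpy as np

rng = np.random.default_rng(0)

d_state, d_obs, T = 2, 2, 50
A = np.array([[0.95, 0.05],
              [0.00, 0.90]])          # state transition matrix
C = np.eye(d_obs, d_state)            # observation matrix
Q = 0.01 * np.eye(d_state)            # state (process) noise covariance
R = 0.05 * np.eye(d_obs)              # observation noise covariance

x = np.array([1.0, -1.0])             # initial state
frames = []
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(d_state), Q)
    y = C @ x + rng.multivariate_normal(np.zeros(d_obs), R)
    frames.append(y)

trajectory = np.stack(frames)         # shape (T, d_obs): the generated frames
print(trajectory[:5])                 # drift smoothly rather than jumping
                                      # between discrete HMM state means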