Institutional Repository [SANDBOX]
Technical University of Crete

Affective modeling on spoken dialogue

Chorianopoulou Arodami

URI: http://purl.tuc.gr/dl/dias/1C94DA49-2645-4D31-BDDB-260D98E52C8B
Year: 2017
Type of Item: Master Thesis
Bibliographic Citation Arodami Chorianopoulou, "Affective modeling on spoken dialogue", Master Thesis, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2017 https://doi.org/10.26233/heallink.tuc.68631
Summary

Emotions are fundamental to human-human communication, influencing people’s perception, communication, and decision-making. They are expressed through speech, facial expressions, gestures, and other non-verbal cues. Speech is the main channel of human communication, conveying both emotional and semantic cues. Affective computing, and specifically emotion recognition, is the process of decoding such communication signals. It aims to improve human-computer interaction (HCI) at a cognitive level, allowing computers to adapt to the user’s needs. Speech emotion recognition rests on the assumption that vocal parameters reflect the affective state of a person. This assumption is supported by the fact that most affective states involve physiological reactions which in turn modify the process by which voice is produced. There are a number of potential applications for speech emotion recognition, including anger detection for Spoken Dialogue Systems (SDS) and emotional aids for people with autism.

Attention is a concept studied in cognitive psychology that refers to how a person actively processes information. Salience is the degree to which something in the environment can catch and retain one’s attention. While research on affective speech saliency is not extensive, salient information from audio and video has been investigated. It is argued that modeling the affective variation of speech can be approached by integrating acoustic parameters from various prosodic timescales, summarizing information from more localized (e.g., syllable-level) to more global prosodic phenomena (e.g., utterance-level).

In this thesis, speech prosody and related acoustic features, e.g., spectral and voice quality features, are investigated for the task of emotion recognition. Features derived from the Amplitude and Frequency Modulation (AM-FM) model are also examined. Moreover, the contribution of different information levels to emotion recognition is addressed.
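The multi-timescale summarization described above can be sketched as follows. This is a minimal, hypothetical illustration (the function and its parameters are not from the thesis): frame-level pitch values are summarized with statistics over short, roughly syllable-sized windows and over the whole utterance, then concatenated.

```python
import numpy as np

def summarize_timescales(f0, frame_hop_s=0.01, local_window_s=0.2):
    """Summarize a frame-level pitch (F0) track at two prosodic timescales:
    local (~syllable-level) windows and the global (utterance-level) span.
    Returns a single pooled feature vector."""
    frames_per_window = max(1, int(local_window_s / frame_hop_s))
    # Local (syllable-scale): mean, std, and range per short window.
    local_stats = []
    for start in range(0, len(f0), frames_per_window):
        seg = f0[start:start + frames_per_window]
        local_stats.append([seg.mean(), seg.std(), seg.max() - seg.min()])
    # Pool the per-window statistics over the utterance.
    local_feats = np.array(local_stats).mean(axis=0)
    # Global (utterance-scale): the same statistics over the whole track.
    global_feats = np.array([f0.mean(), f0.std(), f0.max() - f0.min()])
    return np.concatenate([local_feats, global_feats])

# Synthetic rising pitch contour: 1 s of frames at a 10 ms hop.
f0_track = np.linspace(120.0, 180.0, 100)
features = summarize_timescales(f0_track)
print(features.shape)  # (6,)
```

In practice the same pooling would be applied to many frame-level descriptors (energy, spectral, and voice-quality features), not just F0, and at more than two timescales.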
Additionally, we investigate affectively salient information over time in spoken dialogue utterances, using prosodic variations from different timescales of the speech signal to weight speech segments. The proposed models are evaluated on datasets of spontaneous speech.

For humans, social and mental states are highly correlated; as a result, affective speech analysis has been introduced in several areas of the computational community. For instance, people with Autism Spectrum Disorder (ASD) suffer from symptoms of anxiety and depression that significantly compromise their quality of life. Additionally, language in high-functioning autism is characterized by pragmatic and semantic deficits, and people with autism have a reduced tendency to integrate information. Motivated by these findings, we investigate the degree of engagement of children with ASD in interactions with their parents.
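The segment-weighting idea above can be illustrated with a minimal attention-style pooling sketch. This is a hypothetical example: in the thesis the salience scores would come from a model, whereas here they are simply given, and all names are illustrative.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def salience_weighted_pooling(segment_feats, salience_scores):
    """Pool per-segment feature vectors into one utterance-level vector,
    weighting each segment by its softmax-normalized salience score."""
    weights = softmax(np.asarray(salience_scores, dtype=float))
    return weights @ np.asarray(segment_feats, dtype=float)

# Three segments with 2-D features; the middle segment is most salient,
# so it dominates the pooled utterance-level representation.
feats = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
scores = [0.1, 2.0, 0.1]
pooled = salience_weighted_pooling(feats, scores)
print(pooled)
```

Compared with plain averaging, this pooling lets a few emotionally salient segments dominate the utterance representation, which matches the motivation for weighting speech segments over time.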
