Basic emotions in read Estonian speech: acoustic analysis and modelling
Date
2017-09-05
Authors
Journal title
Journal ISSN
Volume title
Publisher
Abstract
The doctoral dissertation had two aims: to determine how three basic emotions – joy, sadness and anger – are expressed acoustically in read Estonian speech, and, building on those findings, to create acoustic models of emotional speech for parametric synthesis in an Estonian speech synthesiser, enabling it to express these emotions recognisably.
Because synthetic speech is used in many fields – for example in human-machine interaction, multimedia, and assistive technology for people with disabilities – it is very important that it sound natural, as close to human speech as possible. One way to make synthetic speech more natural is to add emotions to it, using models that supply the synthesiser with the combinations of acoustic parameter values needed to express each emotion.
To create models of emotional speech, one must first know how emotions are expressed vocally in human speech. This required investigating whether, to what extent, and in which direction emotions affect the values of acoustic parameters (such as fundamental frequency, intensity and speech rate), and which parameters allow emotions to be distinguished from one another and from neutral speech. Based on these results, acoustic models of the emotions were created*, and test subjects judged which models produced recognisable emotions in synthetic speech. The experiment confirmed that, with models based on the acoustic analysis, an Estonian speech synthesiser can satisfactorily express sadness and anger, but not joy.
The dissertation presents one possible account of how joy, sadness and anger are expressed vocally in Estonian speech, together with models for adding these emotions to Estonian synthetic speech. The study is a starting point for the further development of acoustic models for Estonian emotional synthetic speech.
* Emotional speech synthesised with the test models can be heard at https://www.eki.ee/heli/index.php?option=com_content&view=article&id=7&Itemid=494.
The present doctoral dissertation had two major purposes: (a) to find out and describe the acoustic expression of three basic emotions – joy, sadness and anger – in read Estonian speech, and (b) to create, based on the resulting description, acoustic models of emotional speech designed to help parametric synthesis of Estonian speech express the above emotions recognisably.
Since synthetic speech has many applications in different fields, such as human-machine interaction, multimedia, or aids for the disabled, it is vital that synthetic speech sound natural, that is, as human-like as possible. One way to achieve naturalness is to add emotions to the synthetic speech by means of models that feed the synthesiser with the combinations of acoustic parameter values necessary for emotional expression.
In order to create such models of emotional speech, it is first necessary to have detailed knowledge of the vocal expression of emotions in human speech. For that purpose I investigated to what extent, if any, and in what direction emotions influence the values of acoustic speech parameters (e.g., fundamental frequency, intensity and speech rate), and which parameters enable discrimination of emotions from each other and from neutral speech. The results provided material for creating acoustic models of emotions*, which were presented to evaluators, who were asked to decide which of the models helped to produce synthetic speech with recognisable emotions. The experiment showed that with models based on the acoustic results, an Estonian speech synthesiser can satisfactorily express sadness and anger, while joy was not so well recognised by listeners.
This doctoral dissertation describes one possible account of the vocal expression of joy, sadness and anger in Estonian speech and presents models enabling the addition of emotions to Estonian synthetic speech. The study serves as a starting point for the future development of acoustic models for Estonian emotional synthetic speech.
* Recorded examples of emotional speech synthesised using the test models can be accessed at https://www.eki.ee/heli/index.php?option=com_content&view=article&id=7&Itemid=494.
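To make the idea of a parameter-feeding model concrete, the sketch below shows one minimal way such a model could be represented: each emotion as a set of multiplicative offsets applied to a neutral prosodic baseline before synthesis. This is an illustrative assumption only — the parameter names, the specific scaling values, and the `apply_emotion` helper are hypothetical and are not taken from the dissertation's actual models.

```python
# Hypothetical neutral baseline for a voice: fundamental frequency,
# intensity, and speech rate (values are illustrative placeholders).
NEUTRAL = {"f0_hz": 180.0, "intensity_db": 65.0, "rate_syl_per_s": 5.0}

# An emotion "model" as emotion-specific scaling factors on the baseline.
# The factors below are invented for illustration, not measured results.
EMOTION_MODELS = {
    "sadness": {"f0_hz": 0.90, "intensity_db": 0.95, "rate_syl_per_s": 0.85},
    "anger":   {"f0_hz": 1.10, "intensity_db": 1.10, "rate_syl_per_s": 1.05},
    "joy":     {"f0_hz": 1.15, "intensity_db": 1.05, "rate_syl_per_s": 1.10},
}

def apply_emotion(neutral: dict, emotion: str) -> dict:
    """Scale each neutral parameter by the chosen emotion's factor."""
    factors = EMOTION_MODELS[emotion]
    return {name: round(value * factors[name], 2)
            for name, value in neutral.items()}

# The resulting parameter set would then be passed to the parametric
# synthesiser in place of the neutral values.
print(apply_emotion(NEUTRAL, "sadness"))
```

A real model of this kind would operate on full contours rather than single values, and the dissertation's listening experiment is precisely what determines which factor combinations yield recognisable emotions.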
Description
The electronic version of the dissertation does not include the publications
Keywords
Estonian language, emotions, speech synthesis, acoustics