Synthesized speech need not be expressionless. In fact, by identifying the features through which speech conveys emotion and choosing an appropriate representation, the generation of affect can be made computational, and synthesized voices carrying different emotions are feasible. Nonetheless, whether the emotion conveyed in a synthesized voice affects listeners' perception of the emotional valence of the content has not been verified. This study used electroencephalography (EEG) and a questionnaire survey to examine the feasibility of conveying the emotion of text content through synthesized voice. A 2x2 within-subject factorial experiment was conducted with two voices (happy voice/sad voice) and two contents (happy news/sad news); in two of the news stories the voice emotion and the content emotion were matched, and in the other two they were mismatched. The EEG responses recorded during listening and the questionnaire data were collected and analyzed. The results showed that (1) synthesized voices carrying different emotions significantly affected participants' perception of the emotional valence of the content; (2) perceived credibility of the content was not affected by whether the voice emotion and the content emotion were matched or mismatched; and (3) the match between voice emotion and content emotion significantly affected participants' perception of suitability. Finally, implications for design and practical applications are discussed.