
依據指定之情緒流動產生自動伴奏

Generation of Affective Accompaniment in Accordance with a Specified Emotion Flow

Advisor: 陳宏銘

Abstract


The emotion expressed by a musical work changes as the piece progresses. This thesis incorporates this characteristic into an automatic accompaniment system that takes a melody and a user-specified emotion flow as input and produces an affective accompaniment as output. The emotion flow consists of a pair of valence and arousal curves, both functions of time.

The accompaniment is composed of a chord progression and an accompaniment pattern. We use the former to control the valence of the music and the latter to control its arousal. In the implementation, the chord progression is generated by dynamic programming, taking both the input melody and the valence curve into account; for this purpose, the thesis proposes a mathematical model that describes the temporal relationship between chord progression and valence. The accompaniment pattern is generated from the quantized arousal curve.

The performance of the system is evaluated subjectively: whether the generated accompaniment, combined with the melody, conveys the specified emotion, and whether the two sound harmonious together. On average, the cross-correlation coefficient between the specified and the perceived arousal curves is 0.85, and that between the specified and the perceived valence curves is 0.52. When only professional musicians are considered, the two coefficients are 0.92 and 0.88, respectively. The results show that the system can generate appropriate accompaniments when emotion is taken into account.
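To make the chord-progression step concrete, the following is a minimal sketch of the dynamic-programming idea described above: choose one chord per measure so that the result both fits the melody and tracks the target valence curve. The chord vocabulary, the cost terms (melody_fit, chord_valence, the transition penalty), and the weights are illustrative assumptions, not the thesis's actual mathematical model.

```python
# Hypothetical sketch: dynamic programming over per-measure chord choices,
# balancing melody fit against a per-measure valence target.

CHORDS = ["C", "Dm", "Em", "F", "G", "Am"]

def melody_fit(chord, melody_notes):
    """Illustrative fit score: fraction of melody notes that are chord tones."""
    chord_tones = {"C": {0, 4, 7}, "Dm": {2, 5, 9}, "Em": {4, 7, 11},
                   "F": {5, 9, 0}, "G": {7, 11, 2}, "Am": {9, 0, 4}}[chord]
    hits = sum(1 for n in melody_notes if n % 12 in chord_tones)
    return hits / max(len(melody_notes), 1)

def chord_valence(chord):
    """Illustrative valence: major chords positive, minor chords negative."""
    return 1.0 if chord in ("C", "F", "G") else -1.0

def best_progression(melody, valence, w_fit=1.0, w_val=1.0, w_trans=0.3):
    """melody: list of per-measure MIDI note lists; valence: one target in [-1, 1] per measure."""
    n = len(melody)
    # dp[i][c] = (lowest cost up to measure i ending on chord c, previous chord)
    dp = [{c: (float("inf"), None) for c in CHORDS} for _ in range(n)]
    for c in CHORDS:
        dp[0][c] = (-w_fit * melody_fit(c, melody[0])
                    + w_val * abs(chord_valence(c) - valence[0]), None)
    for i in range(1, n):
        for c in CHORDS:
            local = (-w_fit * melody_fit(c, melody[i])
                     + w_val * abs(chord_valence(c) - valence[i]))
            prev_cost, prev = min(
                ((dp[i - 1][p][0] + (w_trans if p != c else 0.0), p) for p in CHORDS),
                key=lambda t: t[0])
            dp[i][c] = (prev_cost + local, prev)
    # Trace back the cheapest chord sequence.
    last = min(CHORDS, key=lambda c: dp[n - 1][c][0])
    path = [last]
    for i in range(n - 1, 0, -1):
        last = dp[i][last][1]
        path.append(last)
    return list(reversed(path))

if __name__ == "__main__":
    melody = [[60, 64, 67], [65, 69, 72], [62, 65, 69], [60, 64, 67]]
    valence = [1.0, 1.0, -1.0, 1.0]  # one target value per measure
    print(best_progression(melody, valence))
```

The same cost structure extends naturally to richer chord vocabularies or finer valence models; only the two scoring functions would change.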

Parallel Abstract (English)


The emotion expressed by a piece of music varies as the music unfolds in time. To create such dynamic emotion expression, we develop an algorithm that automatically generates the accompaniment for a melody according to an emotion flow specified by the user. The emotion flow is given in the form of arousal and valence curves, each a function of time. The affective accompaniment is composed of a chord progression and an accompaniment pattern. The chord progression, which controls the valence of the composed music, is generated by dynamic programming using the input melody and valence data as constraints. A mathematical model is developed to describe the temporal relationship between valence and chord progression. The accompaniment pattern, which controls the arousal of the composed music, is determined according to the quantized values of the input arousal curve. The performance of the system is evaluated subjectively. The cross-correlation coefficient between the input arousal (valence) and the perceived arousal (valence) of the composed music is 0.85 (0.52). When only musician subjects are considered, the cross-correlation coefficients are 0.92 for arousal and 0.88 for valence. The results show that the proposed system is capable of generating subjectively appropriate accompaniments that conform to the user specification.
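The pattern-selection step can likewise be pictured with a small sketch: the input arousal curve is quantized into a few levels, and each level indexes an accompaniment pattern of matching energy. The three-level split and the pattern names below are illustrative assumptions, not the thesis's actual pattern library.

```python
# Hypothetical sketch: quantize a per-measure arousal curve and map each
# level to an accompaniment pattern of corresponding rhythmic density.

import numpy as np

# Assumed pattern library, ordered from low to high arousal.
PATTERNS = ["sustained whole-note chords",   # low arousal
            "quarter-note block chords",     # medium arousal
            "eighth-note arpeggios"]         # high arousal

def quantize_arousal(arousal_curve, n_levels=3):
    """Map each arousal sample in [-1, 1] to an integer level 0..n_levels-1."""
    a = np.clip(np.asarray(arousal_curve, dtype=float), -1.0, 1.0)
    levels = np.floor((a + 1.0) / 2.0 * n_levels).astype(int)
    return np.minimum(levels, n_levels - 1)

def patterns_per_measure(arousal_curve):
    """Pick one accompaniment pattern per measure from the quantized arousal."""
    return [PATTERNS[level] for level in quantize_arousal(arousal_curve)]

if __name__ == "__main__":
    # One arousal value per measure, e.g. a crescendo toward the chorus.
    arousal = [-0.8, -0.3, 0.2, 0.9]
    for measure, pattern in enumerate(patterns_per_measure(arousal), start=1):
        print(f"measure {measure}: {pattern}")
```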

