
Motion Synthesis Based on Emotion Analysis

Towards a Generative Model of Emotional Motion

Advisor: 陳炳宇

Abstract


In this thesis, we propose building a motion database with emotional components, so that 3D animated virtual characters can be given more lifelike body movements. Whether in the game industry or in commercial applications, 3D virtual characters are for the most part animated by hand, and motion produced this way alone still falls short of realism. Human movement varies with the performer's own emotional state, so we use 3D motion capture to extract the behavioral patterns in body movement that are shaped by emotion, build a complete database from them, and use it to improve the believability of today's animated characters.

We record the body movements of professional actors with motion capture, analyze these performances by direct observation, and at the same time use machine-learning computation to define precise behavioral parameters for emotional motion, from which we build up the relationships among the emotions. Through this model we can derive a continuum of fine-grained motion variations, whether the change is in the intensity of a single emotion or a transition between different emotions.

Having identified these relationships, we can take an animated motion that is emotionless, or that carries only a single emotion, and reconstruct from it a continuous series of animated variants. What most distinguishes these variants from the original animation is that, using the relationships above, we can freely move the motion through different regions of the emotion space. This quickly and conveniently raises the degree of realism and, in production, shortens the pipeline and saves considerable cost.
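To make the pipeline above concrete, here is a minimal sketch, not the thesis' actual implementation: it assumes time-aligned motion-capture clips grouped by emotion label, where each clip is a NumPy array of shape (frames, features) of flattened joint rotations, and derives per-emotion mean offsets from neutral plus a PCA basis of the emotional variation. The input format and the name clips_by_emotion are hypothetical.

import numpy as np

def emotion_model(clips_by_emotion, neutral_key="neutral", k=8):
    """Return (per-emotion offset vectors, top-k PCA directions).

    Sketch only: assumes clips are time-aligned arrays of shape
    (frames, features) keyed by emotion label, including a neutral set.
    """
    neutral_mean = np.vstack(clips_by_emotion[neutral_key]).mean(axis=0)
    offsets = {}
    deviations = []
    for emo, clips in clips_by_emotion.items():
        if emo == neutral_key:
            continue
        frames = np.vstack(clips)                 # (total_frames, features)
        offsets[emo] = frames.mean(axis=0) - neutral_mean
        deviations.append(frames - neutral_mean)  # emotional variation
    X = np.vstack(deviations)
    X -= X.mean(axis=0)
    # Rows of Vt are the principal directions of emotional variation.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return offsets, Vt[:k]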

English Abstract


In this thesis, we propose building a database of emotional motion sets so that a 3D virtual character moves more like a real human. Whether in entertainment or in business, 3D virtual characters are usually created by animators, yet the resulting motion still looks unnatural, because human motion is shaped by human emotion. We therefore use motion-capture technology to extract principal components describing the relationship between human motion and emotion, and build a database from them to improve the realism of virtual-character behavior. With motion capture we record a professional actor's body language and, from these data, define the parameters of an emotional-behavior model through direct observation and machine learning. We can then reproduce entirely new motion sets with different emotions at different intensities, seamlessly transforming an emotionless motion set into a series of strongly emotional animations. Unlike the original, the new motion can be varied freely in both the emotional domain and the original motion domain, which shortens production time and reduces cost for the animation industry.
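The transformation the abstract describes, changing an emotionless motion's emotion and intensity or blending between two emotions, might look roughly like the sketch below. It reuses the hypothetical per-emotion offsets from the previous example and is an illustration under those assumptions, not the method of the thesis.

import numpy as np

def stylize(clip, offset, intensity=1.0):
    """Add a scaled emotion offset to every frame of a neutral clip."""
    return clip + intensity * offset

def transition(clip, offset_a, offset_b, n_steps):
    """Yield clips that blend gradually from emotion A to emotion B."""
    for t in np.linspace(0.0, 1.0, n_steps):
        yield clip + (1.0 - t) * offset_a + t * offset_b

# Hypothetical usage: half-strength anger, then a 10-step blend
# from anger to joy applied to the same neutral clip.
# angry = stylize(neutral_clip, offsets["angry"], intensity=0.5)
# for variant in transition(neutral_clip, offsets["angry"], offsets["joy"], 10):
#     render(variant)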

