
3D Facial Motion Cloning (立體臉部動作複製)

Advisor: 曾定章

Abstract


Avatars are a common feature of virtual reality applications, and a convincing avatar needs a virtual face. A virtual face expresses itself mainly through rich facial motions, yet authoring realistic expressions for a virtual face is tedious and time-consuming. For these reasons, we want to "reuse" motions that have already been built, saving both time and money.

We propose a method for cloning facial expression motions, which copies existing facial motions from one face onto another. Every face differs in its features, the size and shape of its facial parts, and its mesh structure, but the expressions can still be copied and reproduced correctly after careful computation; this is a technique for reusing motions.

The face whose motions are copied is called the source face, and the face that receives them is called the target face. Facial motions are represented by morph targets, each recording the displacement vectors between all vertices of a given expression and the corresponding vertices of the neutral face.

Our method consists of two major steps. The first step puts the two face models into correspondence according to the positions of their facial features: manually defined facial feature points are used, the face models are projected onto a 2D plane, the feature points are triangulated, and the vertex correspondence between the two models is obtained by computing barycentric coordinates. The second step clones the motions: every motion on the source face is copied to the correct position on the target face, and the proportions of the facial features are computed to obtain the correct motions for the target face. We also want the system to run in real time, so we favor fast methods wherever possible.
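To make the correspondence step concrete, here is a minimal Python sketch, not the thesis implementation, of expressing one projected vertex in barycentric coordinates with respect to a 2D feature triangle; the function name `barycentric_2d` and the toy coordinates are illustrative assumptions.

```python
# Minimal sketch of the correspondence idea: after both faces are projected onto
# a 2D plane and triangulated over feature points, each target vertex is located
# inside a source triangle and described by barycentric coordinates.
import numpy as np

def barycentric_2d(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

# Toy example: one projected target vertex inside one projected source triangle.
a, b, c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
p = np.array([0.25, 0.25])
u, v, w = barycentric_2d(p, a, b, c)
print(u, v, w)                 # weights sum to 1; all non-negative means p is inside
print(u * a + v * b + w * c)   # reconstructs p from the triangle corners
```

The same weights, once found on the source triangulation, can be reused to interpolate any per-vertex quantity defined at the triangle corners.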

Keywords

facial animation

Parallel Abstract


Virtual actors (avatars) are commonly used in virtual reality applications, and the key component of a virtual actor is its virtual face. The principal function of a virtual face is facial expression; however, authoring expressions on virtual faces is tedious and time-consuming. We therefore aim to develop an automatic system that "reuses" existing facial expressions. In this study, we propose a facial motion cloning approach that transfers pre-existing facial motions from one face to another. The face models differ in their characteristics, shapes, the scales of their facial features, and so on, but expressions can still be duplicated accurately after precise computation of the motion scales. The face that provides the original motions is called the "source face," and the face that receives the copied motions is called the "target face." Facial motions are represented by sets of "morph targets," which record the displacement vectors of all face vertices between the neutral state and a particular motion. There are two major steps in the proposed system. The first step establishes the correspondence between the two face models according to their facial features: using manually defined feature points, we project the face models onto a 2D plane, re-triangulate them over the feature points, and obtain the vertex correspondence between the two models by computing barycentric coordinates. The second step clones the facial motions: we copy the motions from the source face to the target face and compute the ratio of the facial feature sizes between the two faces to obtain the correct motion scale. The facial animation is expected to run in real time, so we also consider fast algorithms when developing the cloning system.
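As a rough illustration of the cloning step, the sketch below interpolates one morph target's displacement vectors with precomputed barycentric weights and rescales them by a per-axis facial-feature ratio; the data layout, the name `clone_morph_target`, and the single global scale vector are simplifying assumptions, not the exact formulation of this thesis.

```python
# Sketch of cloning one morph target: each target vertex blends the displacements
# of the three source vertices of its enclosing triangle with its barycentric
# weights, then the result is rescaled by the target/source feature-size ratio.
import numpy as np

def clone_morph_target(src_disp, tri_idx, bary, scale):
    # src_disp: (Ns, 3) source displacement vectors for one morph target
    # tri_idx : (Nt, 3) indices of the enclosing source triangle per target vertex
    # bary    : (Nt, 3) barycentric weights of each target vertex in that triangle
    # scale   : (3,)    assumed per-axis ratio of target to source feature size
    interpolated = np.einsum('nk,nkc->nc', bary, src_disp[tri_idx])
    return interpolated * scale

# Toy usage: 4 source vertices, 2 target vertices.
src_disp = np.array([[0.0, 0.1, 0.0], [0.0, 0.2, 0.0],
                     [0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
tri_idx = np.array([[0, 1, 2], [1, 2, 3]])
bary = np.array([[0.5, 0.25, 0.25], [1.0 / 3, 1.0 / 3, 1.0 / 3]])
scale = np.array([1.2, 0.9, 1.0])   # e.g. the target face is wider and shorter
print(clone_morph_target(src_disp, tri_idx, bary, scale))
```

Repeating this for every morph target yields the full set of cloned motions for the target face.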

Parallel Keywords

facial motion animation

