
大尺寸螢幕光學觸控之3D定位技術

Optical Touch on Large Screen ─ The 3D Positioning Technology

Advisor: 黃俊堯

Abstract


The Optical Touch on Large Screen project is divided into two parts: (A) 3D positioning technology and (B) gesture recognition technology. This thesis belongs to the first part, 3D positioning, which is further subdivided into a front end and a back end. The front end focuses on locating the single fingertip point of one hand; this thesis performs 3D positioning on the fingertip points supplied by the front end.

We designed a system in which two near-infrared cameras can be mounted on a large panel of any size to detect the position of the user's fingertip in front of the screen. The user needs no hardware controller (such as a mouse or keyboard): operations are performed in 3D space, without touching the screen, much like the protagonist browsing video files in the film "Minority Report" (關鍵報告).

The system treats the camera specifications and the parameters of the experimental environment as known conditions and, based on the principles of 3D computer graphics, computes the hand's 3D coordinates in the world coordinate system. First, the front end supplies coordinate points relative to the image-capture-card coordinate systems of the left and right cameras; these are converted to points in each camera's pinhole image-plane coordinate system. Next, perspective projection and similar-triangle theory yield points in the left and right camera coordinate systems, from which the transformation between the two cameras' 3D coordinate points is derived. Finally, so that the user can operate the system intuitively, points in the left-camera coordinate system are converted to the screen coordinate system.

This method not only greatly simplifies the traditionally tedious camera calibration procedure, but also has lower computational complexity than obtaining coordinates via epipolar geometry. On the hardware side, the distance between the two near-infrared cameras can be adjusted to suit the screen size, so its cost and adaptability are beyond what traditional hardware touch panels can match, letting users operate the machine more intuitively and conveniently.
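The depth-recovery step described above (pinhole imaging plus similar triangles for a stereo pair) can be sketched as follows. This is a minimal illustration assuming two parallel pinhole cameras with identical focal length; all function names and parameter values are illustrative, not taken from the thesis.

```python
# Sketch of stereo fingertip positioning with two parallel pinhole cameras.
# Assumptions (illustrative, not the thesis's actual parameters):
#   - both cameras share focal length f (mm) and are separated by baseline B (mm)
#   - pixel coordinates come from the image capture card; the principal point
#     (cx, cy) and pixel pitch are treated as known, per the abstract.

def pixel_to_image_plane(u, v, cx, cy, pixel_size):
    """Convert capture-card pixel coordinates (u, v) to image-plane
    coordinates (mm) centered on the principal point."""
    x = (u - cx) * pixel_size
    y = (v - cy) * pixel_size
    return x, y

def triangulate(xl, xr, y, f, baseline):
    """Recover the 3D point in the left-camera coordinate system using the
    similar-triangle relation Z = f * B / (xl - xr)."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    Z = f * baseline / disparity          # depth from disparity
    X = xl * Z / f                        # back-project via similar triangles
    Y = y * Z / f
    return X, Y, Z
```

With known camera geometry, this avoids a full camera-calibration pass: only the focal length, pixel pitch, and baseline need to be measured once when the cameras are mounted.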

Parallel Abstract


The project "Optical Touch Technology on Large Screens" is divided into two parts: (A) 3D positioning technology and (B) gesture recognition technology. This thesis belongs to the first part, 3D positioning, which is subdivided into a front end and a back end; the front end focuses on finding the fingertip point of one hand, and this thesis performs 3D positioning on the fingertip points supplied by the front end. We designed a system in which two near-infrared cameras can be mounted on panels of any size to detect the user's hand movement in front of the panel and capture the hand's location. Users need no hardware controller, such as a mouse or keyboard; they simply gesture in 3D space and can operate the computer without touching the panel directly, much like the protagonist operating the computer to browse video files in the movie "Minority Report". The system treats the camera specifications and the experimental environment's parameters as known conditions and, based on 3D computer graphics theory, recovers the hand's 3D coordinates in the world coordinate system. First, we obtain the fingertip's coordinates in the image-capture-card coordinate system and convert them to the pinhole-camera image-plane coordinate system. Next, using the pinhole imaging principle and similar-triangle theory, we obtain the point in each camera's coordinate system, and derive the transformation between the two cameras' coordinate systems. Finally, we transform the point from the left-camera coordinate system to the screen coordinate system so that users can operate the system intuitively. This method not only simplifies the complicated camera-calibration step, but also reduces the computational complexity compared with earlier approaches. On the hardware side, the distance between the two near-infrared cameras can be adjusted to fit the screen size; the cost and adaptability are beyond what conventional hardware touch panels can match, allowing users to operate the computer more easily and conveniently.
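The final step of the pipeline, mapping a point from the left-camera coordinate system into the screen coordinate system, is a rigid-body transform. A minimal sketch, assuming the rotation R and translation t are measured once from the physical mounting geometry (the values below are hypothetical, not the thesis's calibration):

```python
import numpy as np

def camera_to_screen(p_cam, R, t):
    """Map a 3D point from the left-camera frame to the screen frame
    via the rigid transform p_screen = R @ p_cam + t."""
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(t, dtype=float)

# Hypothetical mounting: camera axes aligned with the screen axes,
# camera origin offset 10 mm to the left of the screen origin.
R = np.eye(3)
t = np.array([10.0, 0.0, 0.0])
```

Because the transform is fixed by how the cameras are mounted on the panel, R and t need only be determined once per installation, consistent with the abstract's claim of avoiding repeated camera calibration.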

