
Gesture-based Control in a Smart Home Environment

Advisor: 林甫俊

Abstract


In a smart home environment, residents can control household appliances in a simple and convenient way. Among the many control methods, hand gestures are the most natural and convenient way for a user to control a smart home, and using sensors to detect gestures and operate appliances is part of the smart home vision. In this scenario, the smartphone plays an important role: a gateway that forwards data from the smart home to the backend server. To recognize gestures, models must be built from training data, and a good model should fit the gestures made by most people. There are three ways to build such models: a user dependent model, a user independent model, and a hybrid model. A user dependent model is derived from each user's own training data, a user independent model is derived from many users' training data, and a hybrid model combines the two for gesture recognition.

In this research, we analyze which of these three models is best for gesture recognition, using three metrics as evaluation criteria: accuracy, power consumption, and CPU/storage usage. To carry out the analysis, we plan to use the Koala sensor as the wearable device, an Android smartphone as the gateway, and a oneM2M-compliant IoT platform to analyze the gestures. To demonstrate smart home control, we define specific gestures and map them to actions on a smart light bulb or a home robot. The Koala sensor provides six channels of raw data from its accelerometer and gyroscope and can be worn on the wrist. The Philips Hue is a smart light bulb; we map different gestures to changes in its color and intensity. OM2M is a oneM2M-compliant open source IoT platform. We will build an OM2M plugin as the computing platform, allowing us to analyze the Koala sensor data and recognize the wearer's gestures.

We plan to recognize ten different gestures: clockwise, counterclockwise, up, down, left, right, cross clockwise, cross counterclockwise, left-to-right V, and right-to-left V. Each gesture triggers a corresponding change on the smart light: left-to-right V turns it on and right-to-left V turns it off; cross clockwise makes it blink and cross counterclockwise stops the blinking; right makes it redder and left reduces the red component; up makes it greener and down reduces the green component; clockwise makes it bluer and counterclockwise reduces the blue component.

To recognize these ten gestures, statistical features (mean, standard deviation, variance, and coefficient of variation) are first extracted from the raw data. These features serve as the key discriminating input to the classification algorithm; logistic regression will be used together with a decision tree as the classifiers. We use WEKA to analyze the features.

This research makes four distinct contributions: recognition of ten gestures, machine-to-machine communication on a oneM2M-compliant platform, determination of the best gesture recognition approach, and a hybrid model that combines the user dependent and user independent models. Through this research, we can learn which model provides the best accuracy, the lowest CPU/storage usage, and the lowest power consumption for gesture recognition.
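The gesture-to-light mapping described above amounts to a small lookup table. The sketch below shows one way it could be encoded; the gesture and action identifiers are illustrative assumptions, not names from the thesis.

```python
# Hypothetical mapping from each of the ten recognized gestures to a
# Philips Hue action, following the assignments in the abstract.
GESTURE_ACTIONS = {
    "v_left_to_right":        "turn_on",
    "v_right_to_left":        "turn_off",
    "cross_clockwise":        "blink_on",
    "cross_counterclockwise": "blink_off",
    "swipe_right":            "increase_red",
    "swipe_left":             "decrease_red",
    "swipe_up":               "increase_green",
    "swipe_down":             "decrease_green",
    "clockwise":              "increase_blue",
    "counterclockwise":       "decrease_blue",
}

def dispatch(gesture):
    """Return the light action for a recognized gesture, or None if
    the gesture is not one of the ten known ones."""
    return GESTURE_ACTIONS.get(gesture)
```

Keeping the mapping in one table makes it easy to retarget the same ten gestures to a different appliance, such as the home robot.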

Abstract (English)


In a smart home environment, a homeowner can control home appliances in a convenient and natural way. Among the various control methods, hand gestures are the most convenient and natural way for a user to operate a smart home, and hand gesture detection by wearable devices is a visionary scenario for appliance control. In such a scenario, the smartphone plays an important role as a gateway that sends data to the backend server and controls the smart home. To recognize gestures, models need to be constructed from training data; a good model should fit most people's gestures. There are three ways to derive the models: a user dependent model, a user independent model, and a hybrid method. A user dependent model is derived from a single user's training data, while a user independent model is derived from many people's training data. A hybrid model combines both a user dependent model and a user independent model for gesture recognition. In this research, we analyze which of these three approaches yields the best model for hand gesture recognition. Three metrics will be measured to evaluate the models: accuracy, power consumption, and CPU/storage usage. To carry out the analysis, we plan to use the Koala sensor as the wearable device, an Android smartphone as the gateway, and a oneM2M-compliant IoT platform as the gesture analyzer. To demonstrate smart home control, each hand gesture will be mapped to a particular control action on a smart light or home robot. The Koala sensor provides six channels of raw data from its accelerometer and gyroscope and can be worn as a wristband accessory. Philips Hue will be used as the smart light; its different colors and light intensities are mapped to different hand gestures. OM2M is a oneM2M-compliant open source IoT platform. We will create an OM2M plugin as our computing platform to analyze the sensor data from Koala and identify the wearer's hand gestures.
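One simple way to realize the hybrid method described above is to combine the per-gesture confidence scores of the user dependent and user independent models by weighted averaging. The weighting scheme below is an illustrative assumption, not the thesis's actual combination rule.

```python
def hybrid_scores(dep_scores, indep_scores, alpha=0.5):
    """Blend per-gesture confidence scores from a user dependent model
    and a user independent model. alpha weights the user dependent
    model; 1 - alpha weights the user independent model."""
    gestures = set(dep_scores) | set(indep_scores)
    return {g: alpha * dep_scores.get(g, 0.0)
               + (1 - alpha) * indep_scores.get(g, 0.0)
            for g in gestures}

def classify(dep_scores, indep_scores, alpha=0.5):
    """Return the gesture with the highest blended confidence."""
    combined = hybrid_scores(dep_scores, indep_scores, alpha)
    return max(combined, key=combined.get)
```

With alpha = 0.5 the two models vote equally; tuning alpha per user would let the hybrid lean on personal training data once enough of it exists.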
We plan to identify ten different kinds of hand gestures: clockwise, counterclockwise, swipe up, swipe down, swipe left, swipe right, cross clockwise, cross counterclockwise, V-form from left to right, and V-form from right to left. Each gesture will be mapped to a corresponding smart light action as follows: V-form from left to right for turning on, V-form from right to left for turning off, cross clockwise for blinking on, cross counterclockwise for blinking off, swipe right for increasing red, swipe left for decreasing red, swipe up for increasing green, swipe down for decreasing green, clockwise for increasing blue, and counterclockwise for decreasing blue. To recognize these ten gestures, statistical features such as the mean, standard deviation, variance, and coefficient of variation will first be extracted from the raw data. These features serve as the discriminating input to the classification algorithm. To distinguish the ten gestures, logistic regression will be used as the classifier along with the decision tree method. We will use WEKA to conduct the feature analysis. This research makes four unique contributions: recognition of ten hand gestures, a oneM2M-compliant implementation of machine-to-machine communication, determination of the best approach for recognizing hand gestures, and a hybrid method combining the user dependent and user independent gesture models. Through this research, we can determine which of the user dependent, user independent, and hybrid methods gives the best accuracy, the lowest power consumption, and the lowest CPU/storage usage for hand gesture recognition.
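The feature extraction step above can be sketched for a window of six-axis Koala samples as follows; the window layout (one tuple of accelerometer x/y/z and gyroscope x/y/z per sample) and the axis names are assumptions for illustration.

```python
import statistics

# Assumed axis order within each raw sample tuple.
AXES = ["acc_x", "acc_y", "acc_z", "gyr_x", "gyr_y", "gyr_z"]

def extract_features(window):
    """Compute the four statistical features named in the abstract
    (mean, standard deviation, variance, coefficient of variation)
    for each of the six axes: 24 features per window in total."""
    features = {}
    for i, axis in enumerate(AXES):
        samples = [sample[i] for sample in window]
        mean = statistics.mean(samples)
        std = statistics.pstdev(samples)       # population std deviation
        var = statistics.pvariance(samples)    # population variance
        cov = std / mean if mean != 0 else 0.0 # coefficient of variation
        features[f"{axis}_mean"] = mean
        features[f"{axis}_std"] = std
        features[f"{axis}_var"] = var
        features[f"{axis}_cov"] = cov
    return features
```

The resulting 24-dimensional feature vector is what a classifier such as logistic regression or a decision tree would consume, for example after export to WEKA's ARFF format.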

