
IPARS: Intelligent Portable Activity Recognition System

Advisor: 許永真

Abstract


The things we normally do in daily living include any daily activity we perform for self-care (such as feeding ourselves, bathing, dressing, and grooming), work, homemaking, and leisure. The ability or inability to perform activities of daily living (ADLs) is a very practical measure of ability or disability in many disorders. Modeling human ADLs via contextual information is gaining increasing interest in the artificial intelligence and ubiquitous computing communities. Several studies track ADLs using video cameras or microphones, but many people are uncomfortable living with cameras and microphones, and video cameras, especially in non-public spaces, provoke strong privacy concerns. Such approaches also often cannot process sensor data with minimal computational resources (e.g., a personal digital assistant (PDA)).

Our system, the Intelligent Portable Activity Recognition System (IPARS), performs activity recognition online with minimal computational resources. Sensors should be low-maintenance and easy to replace. Our system therefore tags everyday objects with remotely readable identification tags, and we develop a wearable wrist-mounted Radio Frequency Identification (RFID) reader to detect the everyday objects a person handles. In addition, we use a WiFi positioning system to capture the person's current position. IPARS is equipped with an RFID reader connected to a PDA; it obtains the contexts needed to infer the person's current activity by detecting person-object interactions and movement. Our approach uses a general framework for activity recognition that builds upon and extends a multiway tree structure (trie) to model ADLs via contextual information.

IPARS achieves activity recognition in two steps. First, using the interface provided by IPARS, the person can train his or her activities easily: while the person performs activities, IPARS models the sequential sensor readings involved in them. Second, after modeling human activities, IPARS infers the current activity by collecting current contexts and extracting features to match against the trained activity models.

Our experiments have two phases. The first phase was conducted in the Department of Computer Science and Information Engineering at National Taiwan University, with the goal of using IPARS to test object recognition and location tracking. The second phase was run in a real home, with the goal of evaluating the proposed solution to the activity recognition problem. We found that a discriminative relational approach based on the framework of multi-trie models is well suited to modeling sequences of contexts for activity recognition: IPARS recognized activities correctly 80 percent of the time. The results are promising.

In the future, we will focus on detecting activities related to healthcare. We plan to extend our model in a number of ways. First, by collecting data from more subjects, we can learn a set of generic models by clustering the subjects based on their similarities; we can then use a mixture of these models to better recognize the activities of a new person.
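To make the trie-based modeling concrete, the following Python sketch shows how sequential context readings (e.g., RFID object tags and WiFi zones) could be inserted into a multiway tree during training and matched against it during inference. This is a minimal illustration under stated assumptions, not the thesis implementation: the class name ActivityTrie, the (sensor, value) context tuples, the window length of 3, and the count-based scoring rule are all assumptions made for the example.

```python
from collections import defaultdict

class ActivityTrie:
    """Minimal multiway tree (trie) over context sequences.

    Each node maps a context symbol (e.g., an RFID tag or a WiFi zone)
    to a child node and keeps per-activity counts of how often the path
    ending at that node was observed during training.
    """

    def __init__(self):
        self.children = {}
        self.activity_counts = defaultdict(int)

    def train(self, context_sequence, activity):
        """Insert every window (here, length <= 3) of the observed sequence,
        so that short runs of recent sensor readings can match later."""
        for start in range(len(context_sequence)):
            node = self
            for symbol in context_sequence[start:start + 3]:
                node = node.children.setdefault(symbol, ActivityTrie())
                node.activity_counts[activity] += 1

    def infer(self, recent_contexts):
        """Walk the trie with the most recent contexts and sum the
        per-activity counts along the matched path; return the best score."""
        scores = defaultdict(int)
        node = self
        for symbol in recent_contexts:
            if symbol not in node.children:
                break
            node = node.children[symbol]
            for activity, count in node.activity_counts.items():
                scores[activity] += count
        return max(scores, key=scores.get) if scores else None


# Hypothetical usage: contexts are (sensor, value) readings.
trie = ActivityTrie()
trie.train([("rfid", "kettle"), ("rfid", "cup"), ("wifi", "kitchen")], "make_tea")
trie.train([("rfid", "toothbrush"), ("wifi", "bathroom")], "brush_teeth")

print(trie.infer([("rfid", "kettle"), ("rfid", "cup")]))  # -> "make_tea"
```

In this sketch, training corresponds to the first step described above (modeling sequential sensor readings per activity), and inference corresponds to the second step (collecting current contexts and matching them against the trained models).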

Keywords

Activity recognition, RFID, ADL, WiFi, LeZi, Trie

