
Human Activity Recognition with Multimodal Sensing of Wearable Sensors

Abstract


Human activity sensed by wearable sensors has multi-granularity data characteristics. Although deep learning-based approaches have greatly improved recognition accuracy, most of them focus on designing new models to obtain deeper features, ignoring the fact that different deep features affect recognition accuracy differently. We argue that learning discriminative features would improve recognition performance. In this paper, we propose an end-to-end model, ABLSTM, which combines an Attention model with a BLSTM model to recognize human activities. Specifically, the BLSTM model is used to extract deep features of various activities. The Attention model then produces a discriminative feature representation by suppressing irrelevant features and enhancing the features positively correlated with each activity. Therefore, compared with traditional deep learning-based approaches, such as CNN- and RNN-based models, the features learned by ABLSTM are more discriminative and more responsive to changes in activities. We evaluate our model on two public benchmark datasets, UCI and Opportunity. The results show that our model recognizes human activities well, with F1 scores as high as 99.0% and 92.7% on the two datasets respectively, advancing the state of the art in human activity recognition from mobile sensing.
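The BLSTM-plus-attention pipeline described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the layer sizes, the single-linear attention scoring, and the input dimensions (9 sensor channels, 128-sample windows, 6 activity classes, roughly matching the UCI dataset) are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn

class ABLSTM(nn.Module):
    """Sketch of a BLSTM + attention activity recognizer.

    Hyperparameters and the attention form are illustrative
    assumptions, not taken from the paper.
    """
    def __init__(self, n_channels=9, hidden=64, n_classes=6):
        super().__init__()
        # BLSTM extracts deep per-timestep features from the sensor window
        self.blstm = nn.LSTM(n_channels, hidden,
                             batch_first=True, bidirectional=True)
        # Attention scores each timestep's feature vector
        self.att = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, channels)
        h, _ = self.blstm(x)                   # (batch, time, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)  # weights over timesteps
        z = (w * h).sum(dim=1)                 # attention-weighted summary
        return self.fc(z)                      # class logits

model = ABLSTM()
windows = torch.randn(2, 128, 9)   # 2 windows of 128 sensor samples
logits = model(windows)            # shape: (2, 6)
```

The attention weights down-weight timesteps whose features are irrelevant to the activity and emphasize the positively correlated ones, which is the discriminative-feature idea the abstract describes.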
