
基於注意力機制之多變量時間序列之早期預測與解析

Multivariate Time Series Early Classification with Interpretability using Attention Mechanism

Advisors: 曾新穆, 劉建良

Abstract


Many machine learning and deep learning applications require explanations of model predictions before domain experts will trust the models. This work focuses on the early classification of multivariate time series and uses the attention mechanism to produce interpretable classification results. The proposed method applies deep learning techniques to extract both the relations among the different variables of a multivariate time series and the temporal relations within it. In addition, the attention mechanism is used to identify the important segments of the time series, providing a basis that users can consult when making further decisions. We believe the proposed method can be applied in a wide range of domains. We evaluated it on three datasets and compared it with other methods; the experimental results show that it is competitive with the alternatives in terms of accuracy and earliness. More importantly, the interpretability provided by the proposed method helps users better understand how the model arrives at its final predictions.

Parallel Abstract


Many application domains require interpretable results to convince domain experts to trust model predictions. This work focuses on the early classification of time series and proposes a framework based on the attention mechanism that jointly considers model performance, earliness, and interpretability. The proposed model uses deep learning to extract features across multiple variables and to capture the temporal relations present in multivariate time-series data. Additionally, the attention mechanism identifies the segments that are critical to the prediction, providing a basis for better understanding the model and for further decision making. We believe the proposed approach can be applied to many domains. We conducted experiments on three datasets and compared the proposed method with several alternatives. The experimental results indicate that it achieves accuracy and earliness comparable to those of the alternatives. More importantly, it provides interpretable results by highlighting the important parts of the original data, making it easier for users to understand how the prediction is induced from the data. We also provide a detailed analysis of the proposed method.
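As a rough illustration of the kind of architecture the abstract describes, the following is a minimal sketch, not the thesis' actual model: a recurrent encoder summarizes a multivariate series, additive attention weights the time steps, and the weights themselves can be inspected as the interpretable output. The class name AttentiveClassifier, the GRU encoder, and all sizes are assumptions made for illustration only.

# Minimal illustrative sketch (assumed architecture, not the thesis' model):
# attention over time steps of a multivariate series, with the attention
# weights returned as the explanation of the prediction.
import torch
import torch.nn as nn

class AttentiveClassifier(nn.Module):
    def __init__(self, n_variables: int, hidden_size: int, n_classes: int):
        super().__init__()
        self.encoder = nn.GRU(n_variables, hidden_size, batch_first=True)
        self.attn_score = nn.Linear(hidden_size, 1)   # additive attention score per step
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x: torch.Tensor):
        # x: (batch, time, variables); feeding only a prefix of the series
        # emulates early classification on partially observed data.
        h, _ = self.encoder(x)                            # (batch, time, hidden)
        scores = self.attn_score(h).squeeze(-1)           # (batch, time)
        weights = torch.softmax(scores, dim=1)            # attention over time steps
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # (batch, hidden)
        logits = self.classifier(context)                 # (batch, n_classes)
        return logits, weights                            # weights act as the explanation

if __name__ == "__main__":
    model = AttentiveClassifier(n_variables=6, hidden_size=32, n_classes=3)
    series = torch.randn(4, 50, 6)            # 4 samples, 50 time steps, 6 variables
    logits, weights = model(series[:, :20])   # classify from the first 20 steps only
    print(logits.shape, weights.shape)        # torch.Size([4, 3]) torch.Size([4, 20])

In this sketch the returned attention weights play the role of the highlighted segments mentioned in the abstract; a complete early-classification system would additionally need a stopping rule to decide when the observed prefix is long enough to commit to a prediction.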
