一、 Chinese References
1. 王鈞威 (2021). A Chinese grammatical error diagnosis system based on integrating RoBERTa and CRF models. Chaoyang University of Technology.
2. 吳晨皓 (2020). Applying BERT and GPT-2 to charge classification and judgment generation for criminal cases. National Kaohsiung University of Science and Technology.
3. 呂明聲 (2020). A deep-learning-based rumor detection method: The case of food safety rumors. National Central University.
4. 邱彥誠 (2020). Applying artificial intelligence to stock market news and sentiment analysis to predict stock price trends. National Taipei University.
5. 胡林辳 (2019). Deep-learning-based AI detection of fake news: An implementation on real-world data from Taiwan and the United States. National Taipei University.
6. 夏鶴芸 (2020). Applying deep learning and new natural language processing techniques to predict stock trends: The case of TSMC. National Taipei University.
7. 翁嘉嫻 (2020). Chinese fake review detection based on pre-trained language models. National Chung Hsing University.
8. 黃若蓁 (2020). Aspect-based sentiment analysis of Chinese consumer reviews using the BERT model. National Cheng Kung University.
9. 黃慧宜, 周倩 (2019). A preliminary study of junior high school students' responses to Internet rumors: The case of rumor messages on Facebook. Journal of Research in Education Sciences, 64(1), 149-180.
10. 黃獻霆 (2021). Applying the RoBERTa-wwm pre-trained model and ensemble learning to enhance machine reading comprehension performance. National Taiwan University.
11. 蔡楨永, 龍希文, 林家安 (2019). An investigation of the difficulties older adults face with Internet rumors. International Journal of Digital Media Design, 11(1), 53-59.
12. 鍾士慕 (2019). Applying deep learning techniques to Chinese public opinion analysis: The case of the BERT algorithm. Yuan Ze University.
13. 蘇文群 (2021). Real or fake?! What does BERT say? National Taichung University of Education.
14. 蘇志昇 (2021). Sentiment analysis of restaurant reviews combining Google BERT semantic features with LSTM recurrent neural network modeling. Asia University.
二、 English References
1. Baevski, A., Edunov, S., Liu, Y., Zettlemoyer, L., & Auli, M. (2019). Cloze-driven pretraining of self-attention networks. arXiv preprint arXiv:1903.07785.
2. Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
3. Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
4. Chowdhary, K. (2020). Natural language processing. Fundamentals of Artificial Intelligence, 603-649.
5. Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
6. Galassi, A., Lippi, M., & Torroni, P. (2020). Attention in natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(10), 4291-4308.
7. Gillioz, A., Casas, J., Mugellini, E., & Abou Khaled, O. (2020, September). Overview of the Transformer-based models for NLP tasks. In 2020 15th Conference on Computer Science and Information Systems (FedCSIS) (pp. 179-183). IEEE.
8. Hirschberg, J., & Manning, C. D. (2015). Advances in natural language processing. Science, 349(6245), 261-266.
9. Jusoh, S., & Al-Fawareh, H. M. (2007, November). Natural language interface for online sales systems. In 2007 International Conference on Intelligent and Advanced Systems (pp. 224-228). IEEE.
10. Jusoh, S., & Alfawareh, H. M. (2012). Techniques, applications and challenging issue in text mining. International Journal of Computer Science Issues (IJCSI), 9(6), 431.
11. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., ... & Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
12. Luong, M. T., Pham, H., & Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.
13. Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training.
14. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
15. Sennrich, R., Haddow, B., & Birch, A. (2015). Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
16. Socher, R., Bengio, Y., & Manning, C. D. (2012). Deep learning for NLP (without magic). In Tutorial Abstracts of ACL 2012 (pp. 5-5).
17. Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, 27.
18. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
19. Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., ... & Rush, A. M. (2020, October). Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (pp. 38-45).
20. Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., ... & Dean, J. (2016). Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
21. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., & Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 32.