
Deep Learning under Various Loss Functions

Advisor: 洪文良

Abstract


This study asks whether replacing the loss function with Renyi entropy or the Gini index can still give good performance in deep learning. First, a simulated dataset was randomly generated from a uniform distribution and used to compare four information measures: Shannon entropy, Renyi entropy, cross entropy, and the Gini index; Renyi entropy and cross entropy performed better in this simulation. Next, Renyi entropy, which is a generalization of Shannon entropy, and the Gini index were adopted as loss functions and compared against cross entropy, the loss function most commonly seen in deep learning. Renyi entropy and cross entropy performed best.
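
For reference, the four measures compared above have the following standard definitions (these are not stated in the abstract; $\alpha > 0$, $\alpha \neq 1$ is the Renyi order, $p$ is a distribution and $q$ a prediction):

$H(p) = -\sum_i p_i \log p_i$ (Shannon entropy)
$H_\alpha(p) = \frac{1}{1-\alpha} \log \sum_i p_i^\alpha$ (Renyi entropy)
$G(p) = 1 - \sum_i p_i^2$ (Gini index)
$H(p, q) = -\sum_i p_i \log q_i$ (cross entropy)

Renyi entropy recovers Shannon entropy in the limit $\alpha \to 1$, which is the sense in which it is "more general" than Shannon entropy.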

Parallel Abstract


This research replaces the cross-entropy loss function of deep learning with Renyi entropy and the Gini index, and examines whether training and test performance remain good. First, we generated a simulated dataset to compare four entropies: Shannon entropy, Renyi entropy, cross entropy, and the Gini index. Renyi entropy and cross entropy performed well on this simulated dataset. Next, we took Renyi entropy and the Gini index as loss functions and compared them with cross entropy, the usual default loss function in deep learning. The main finding of this thesis is that the proposed Renyi entropy loss is slightly better than cross entropy.
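
The following is a minimal sketch of the simulation step described above, assuming NumPy, a Renyi order of alpha = 2, and 10 classes (none of these choices are stated in the thesis); it draws uniform random vectors, normalizes them into distributions, and evaluates the four measures. The thesis does not give the exact form in which Renyi entropy and the Gini index are plugged in as network losses, so this sketch only reproduces the measure comparison.

# A minimal NumPy sketch (not the author's code) of the simulation step:
# draw probability vectors from a uniform distribution and compare the
# four measures. alpha = 2 and the normalization are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def shannon_entropy(p):
    # H(p) = -sum_i p_i log p_i
    return -np.sum(p * np.log(p))

def renyi_entropy(p, alpha=2.0):
    # H_alpha(p) = log(sum_i p_i^alpha) / (1 - alpha), alpha != 1
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def gini_index(p):
    # G(p) = 1 - sum_i p_i^2
    return 1.0 - np.sum(p ** 2)

def cross_entropy(p, q):
    # H(p, q) = -sum_i p_i log q_i
    return -np.sum(p * np.log(q))

# Simulated data: uniform random vectors normalized into distributions.
k = 10                                  # number of classes (assumed)
p = rng.uniform(size=k); p /= p.sum()   # "true" distribution
q = rng.uniform(size=k); q /= q.sum()   # "predicted" distribution

print("Shannon :", shannon_entropy(p))
print("Renyi   :", renyi_entropy(p))
print("Gini    :", gini_index(p))
print("CrossEnt:", cross_entropy(p, q))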

