

Analysis and Comparison of Convolution Layer in Deep Convolutional Neural Network

Advisor: 陳文雄

Abstract


With the rapid development of information technology, big data has become mainstream and has strongly influenced many recognition systems. Deep learning, which relies on large datasets to train its models, has therefore risen to prominence: it lets a machine learn the features relevant to a task automatically, which has made it one of the most active research topics in academia. In visual recognition, neural networks are now widely used, and the best-performing model among them is the convolutional neural network. Recent progress in deep learning is closely tied to Convolutional Neural Networks (CNNs, also called ConvNets), which drive much of the development in deep neural networks and can even surpass human accuracy in image recognition. If any method is to live up to the expectations placed on deep learning, CNNs are the natural first choice. A key element of a CNN is the set of weight values in the kernels of its convolutional layers. A convolutional layer offers three main design choices: the kernel size, the activation function, and the number of kernels used. Choosing a different activation function or kernel size may improve the network's accuracy.
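The three design choices named in the abstract — kernel size, activation function, and number of kernels — can be illustrated with a minimal NumPy sketch of a single convolutional layer. This is not the thesis's implementation; the ReLU activation, array shapes, and function name are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernels, activation=lambda x: np.maximum(x, 0)):
    """Valid-mode 2D convolution of a single-channel image with a bank of kernels.

    image:      (H, W) array
    kernels:    (n_kernels, k, k) array -- the learned weight values the abstract refers to
    activation: applied elementwise to each feature map (ReLU here, as one common choice)
    Returns (n_kernels, H-k+1, W-k+1) feature maps.
    """
    n, k, _ = kernels.shape
    H, W = image.shape
    out = np.zeros((n, H - k + 1, W - k + 1))
    for f in range(n):                      # one feature map per kernel
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                # weighted sum of the k-by-k window under kernel f
                out[f, i, j] = np.sum(image[i:i+k, j:j+k] * kernels[f])
    return activation(out)

# Example: 3 kernels of size 3x3 over an 8x8 image
rng = np.random.default_rng(0)
maps = conv2d(rng.standard_normal((8, 8)), rng.standard_normal((3, 3, 3)))
print(maps.shape)  # (3, 6, 6)
```

Varying the kernel count changes the number of output feature maps, varying `k` changes each map's spatial extent, and swapping the `activation` argument changes the nonlinearity — exactly the three knobs the abstract compares.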


