Title

基於卷積神經網路之影像超解析

Translated Titles

Image Super-Resolution Based on Convolutional Neural Network

Authors

周柏宇

Key Words

Super resolution ; Convolutional neural network ; Deep learning

PublicationName

Degree thesis, Department of Information Engineering, I-Shou University

Volume or Term/Year and Month of Publication

2017

Academic Degree Category

Master's

Advisor

林義隆

Content Language

Traditional Chinese

Chinese Abstract (translated)

Image super-resolution is a popular research topic in image processing: it is the process of increasing the number of pixels in one or more low-resolution images to obtain a high-resolution image. In recent years, deep learning has attracted intense attention and achieved excellent results across many related fields; in particular, convolutional neural networks are widely applied in computer vision and image recognition. In this study, image super-resolution is performed with convolutional neural networks and fully convolutional networks, and the effect of the architecture on the resulting resolution is investigated. The deconvolution method is used to understand the internal processes of both network types, which then guides effective adjustments that improve image resolution; the results are compared with traditional interpolation methods.

English Abstract

Image super-resolution has been a popular research topic in the field of image processing. It is the process of obtaining a high-resolution image from one or more low-resolution images by increasing the number of pixels. Deep learning has attracted wide attention in recent years and has produced excellent results across many related fields; convolutional neural networks in particular are widely used in computer vision and image recognition. This thesis applies deep convolutional neural networks and fully convolutional networks to image super-resolution and explores how the network architecture affects the resulting resolution. A deconvolution method is used to understand the inner workings of both network types, which then guides effective adjustments that improve image resolution; the results are compared against traditional interpolation methods.
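The traditional interpolation baseline that the abstract compares against can be sketched in NumPy: upscale a low-resolution image by bilinear interpolation and score the result with PSNR, the usual super-resolution metric. This is a minimal illustration of the baseline and the evaluation measure, not the thesis's own network or experimental code:

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Classical bilinear interpolation of a 2-D grayscale image --
    the traditional baseline that learned super-resolution is compared against."""
    h, w = img.shape
    H, W = h * scale, w * scale
    # Source coordinates for each target pixel (pixel-center convention).
    ys = np.clip((np.arange(H) + 0.5) / scale - 0.5, 0, h - 1)
    xs = np.clip((np.arange(W) + 0.5) / scale - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Weighted sum of the four neighbouring source pixels.
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx       * img[np.ix_(y0, x1)]
            + wy       * (1 - wx) * img[np.ix_(y1, x0)]
            + wy       * wx       * img[np.ix_(y1, x1)])

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
```

A CNN-based method such as SRCNN [1] is evaluated the same way: reconstruct the high-resolution image, then report PSNR against the ground truth and compare it with the interpolation score above.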

Topic Category Basic and Applied Sciences > Information Science
College of Electrical and Information Engineering > Department of Information Engineering
Reference
  1. [1] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295-307, 2015.
  2. [2] C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” Proceedings of the European Conference on Computer Vision, vol. 8692, pp. 184-199, 2014.
  3. [3] Z. Wang, D. Liu, S. Chang, Q. Ling, Y. Yang, and T. Huang, “Deep networks for image super-resolution with sparse prior,” Proceedings of the International Conference on Computer Vision (ICCV), 2015.
  4. [4] M. H. Cheng, N. W. Lin, K. S. Hwang, and J. H. Jeng, “Fast video super-resolution using artificial neural networks,” 8th IEEE International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), Poznan, Poland, pp. 1-4, 2012.
  5. [5] 邱垂汶, “A study of high-quality super-resolution images and video using a visual high-frequency enhancement filter,” Master’s thesis, Department of Electrical Engineering, National Yunlin University of Science and Technology, 2004.
  6. [6] L. Shen, Z. Y. Xiao, and H. Han, “Image super-resolution based on MCA and wavelet-domain HMT,” Information Technology and Applications, pp. 264-269, 2010.
  7. [7] G. Jing, Y. Shi, and B. Lu, “Single-image super-resolution based on decomposition and sparse representation,” International Conference on Multimedia Communication, pp. 127-130, 2010.
  8. [8] C. Y. Tsai, D. A. Huang, M. C. Yang, L. W. Kang, and Y. C. F. Wang, “Context-aware single image super-resolution using locality-constrained group sparse representation,” Visual Communications and Image Processing, pp. 1-6, 2012.
  9. [9] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, November 1998.
  10. [10] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural-network approach,” IEEE Transactions on Neural Networks, vol. 8, pp. 98-113, 1997.
  11. [11] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 315-323, 2011.
  12. [12] A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” ICML Workshop on Deep Learning for Audio, Speech, and Language Processing (WDLASL 2013), 2013.
  13. [13] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on ImageNet classification,” IEEE International Conference on Computer Vision (ICCV), pp. 1026-1034, 2015.
  14. [14] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.
  15. [15] P. Baldi and P. J. Sadowski, “Understanding dropout,” Advances in Neural Information Processing Systems 26 (NIPS 2013), pp. 2814-2822, 2013.
  16. [16] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov, “Improving neural networks by preventing co-adaptation of feature detectors,” arXiv preprint arXiv:1207.0580, 2012.
  17. [17] E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.