
基於擴張卷積類神經網路之改良影像超解析技術

Improved Image Super Resolution Technology Based on Dilated Convolutional Neural Network

Advisors: 林春宏, 廖珗洲
Co-advisor: 黃馨逸

Abstract


Image super-resolution has a wide range of applications in image processing and computer vision. Because super-resolution cannot recover the original image exactly, and enlarging an image introduces distorted pixel values, it remains a challenging problem. How to make the best possible prediction from the known pixels has therefore long been the goal of researchers in the field of image super-resolution.

This thesis proposes two convolutional neural network (CNN) architectures from deep learning for image super-resolution, using the neurons of the network to estimate the pixels of the super-resolved image. The first architecture, the improved dilated convolutional neural network, simplifies the dilated convolutional neural network to six convolutional layers and applies convolutions with a dilation rate of 2 in the second through fourth layers; it learns more deeply through two links, one connecting the outputs of the first and fourth convolutional layers and one connecting the outputs of the second and third convolutional layers. The second architecture, the wide dilated convolutional neural network, passes the input through convolutions with two different dilation rates to obtain two outputs and thus learns in breadth; by concatenating the two outputs as the input of the next layer, the network learns from both dilation rates at once, extracting features in more detail while achieving the effect of wide learning.

The CNN hyperparameters used in the experiments were chosen with the dilated convolutional neural network architecture by testing the number of epochs, the validation split, the validation data selection method, the sub-image size, the number of sub-images, and the batch size. The most suitable settings were 500 epochs, randomly drawing 0.2 of the training set as the validation set, selecting validation data randomly from all sub-images of a single image, a sub-image size of 41×41 pixels, 50 sub-images per image, and a batch size of 64 sub-images. The results show that the PSNR of the improved dilated convolutional neural network is 0.13 dB higher than that of the dilated convolutional neural network, with a standard deviation 0.07 dB smaller, and the PSNR of the wide dilated convolutional neural network is 0.08 dB higher, with a standard deviation 0.09 dB smaller. The experiments also compare the two proposed architectures across different super-resolution scale factors and examine the effect of training and evaluating with different types of data sets. Finally, the proposed method is applied to surveillance images; the results indicate that super-resolution strengthens some image features and clearly reduces noise, while the image texture does not become blurred by the super-resolution process.
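For reference, the following is a minimal Keras sketch of one plausible wiring of the two architectures described above. Only the layer counts, the dilation rate of 2 in layers 2–4, and the two concatenation links are taken from the abstract; the filter counts, 3×3 kernels, ReLU activations, the exact placement of the concatenations, the assumed dilation rates of 1 and 2 in the wide variant, and the final reconstruction layer are illustrative assumptions, not the thesis's definitive implementation.

```python
# Minimal sketch, assuming Keras; hyperparameters not stated in the abstract are guesses.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Concatenate, Conv2D


def improved_dilated_cnn(channels: int = 1) -> Model:
    """Six conv layers; layers 2-4 use dilation rate 2; links 1-4 and 2-3."""
    x_in = Input(shape=(None, None, channels))
    c1 = Conv2D(64, 3, padding="same", activation="relu")(x_in)                  # layer 1
    c2 = Conv2D(64, 3, dilation_rate=2, padding="same", activation="relu")(c1)   # layer 2
    c3 = Conv2D(64, 3, dilation_rate=2, padding="same", activation="relu")(c2)   # layer 3
    c4 = Conv2D(64, 3, dilation_rate=2, padding="same", activation="relu")(
        Concatenate()([c2, c3]))                      # layer 4, fed by the layer-2/layer-3 link
    c5 = Conv2D(64, 3, padding="same", activation="relu")(
        Concatenate()([c1, c4]))                      # layer 5, fed by the layer-1/layer-4 link
    y = Conv2D(channels, 3, padding="same")(c5)       # layer 6: reconstruction
    return Model(x_in, y)


def wide_dilated_cnn(channels: int = 1, stages: int = 5) -> Model:
    """Each stage convolves its input with two dilation rates (assumed 1 and 2)
    and concatenates the two outputs as the input of the next stage."""
    x_in = Input(shape=(None, None, channels))
    x = x_in
    for _ in range(stages):
        branch_a = Conv2D(32, 3, dilation_rate=1, padding="same", activation="relu")(x)
        branch_b = Conv2D(32, 3, dilation_rate=2, padding="same", activation="relu")(x)
        x = Concatenate()([branch_a, branch_b])       # breadth: both receptive fields feed the next stage
    return Model(x_in, Conv2D(channels, 3, padding="same")(x))
```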

Parallel Abstract


Image super-resolution has wide application in image processing and computer vision. Because super-resolution cannot recover the original image exactly, and enlarging an image produces distorted pixel values, it is a challenging problem. This paper proposes two architectures that use deep-learning convolutional neural networks to perform image super-resolution; they estimate the pixels of the super-resolved image with the neurons of the convolutional neural network. The first architecture is a reduced dilated convolutional neural network. It reduces the dilated convolutional neural network to six convolutional layers and uses convolutions with a dilation rate of 2 in the second through fourth layers. The output of the first layer is concatenated with the output of the fourth layer, and the output of the second layer with the output of the third layer, to enable deeper learning. The other is a wide dilated convolutional neural network. It passes the input through convolutions with different dilation rates to obtain two outputs, achieving wide learning. By concatenating the two outputs as the input of the next layer, the network learns from convolutions with different dilation rates at the same time, which allows more detailed feature extraction and achieves the effect of wide learning. The experiments select the convolutional neural network parameters using the dilated convolutional neural network architecture. The experimental parameters include the number of epochs, validation split, validation mode, sub-image size, number of sub-images, and batch size. The most suitable settings are 500 epochs, a validation split of 0.2, validation sub-images drawn randomly from the sub-images of each image, a sub-image size of 41×41, 50 sub-images per image, and a batch size of 64. The experimental results show that the PSNR of the reduced dilated convolutional network is 0.13 dB higher than that of the dilated convolutional neural network, with a standard deviation 0.07 dB smaller, and the PSNR of the wide dilated convolutional network is 0.08 dB higher, with a standard deviation 0.09 dB smaller. The experiments also examine the two proposed architectures at different super-resolution scales and the differences when training and testing with different types of data sets. Finally, the proposed method is applied to a surveillance system. The results show that image super-resolution can enhance some image features, and while noise is reduced, the image texture does not become blurry after super-resolution.
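As a concrete illustration of the reported training setup, the sketch below mirrors the selected hyperparameters (500 epochs, a 0.2 validation split, 41×41 sub-images, 50 sub-images per image, batch size 64). The random-crop extraction, the bicubic ×2 downscale/upscale pipeline, the Adam optimizer, the MSE loss, and the stand-in training images are assumptions added for illustration; note also that Keras's validation_split simply holds out a fraction of the samples and does not reproduce the thesis's per-image random sub-image validation scheme.

```python
# Sketch of the reported training configuration, assuming Keras + OpenCV + NumPy.
import cv2
import numpy as np


def extract_pairs(images, scale=2, size=41, per_image=50, rng=None):
    """Cut random size x size HR sub-images and pair each with a bicubic
    low-resolution version upscaled back to the same size (assumed pipeline)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lr, hr = [], []
    for img in images:                                   # 2-D float32 grayscale array in [0, 1]
        h, w = img.shape
        for _ in range(per_image):
            y = int(rng.integers(0, h - size + 1))
            x = int(rng.integers(0, w - size + 1))
            patch = img[y:y + size, x:x + size]
            small = cv2.resize(patch, (size // scale, size // scale),
                               interpolation=cv2.INTER_CUBIC)
            lr.append(cv2.resize(small, (size, size), interpolation=cv2.INTER_CUBIC))
            hr.append(patch)
    # add the channel axis expected by the network
    return np.stack(lr)[..., None], np.stack(hr)[..., None]


# Hypothetical stand-in data; replace with the thesis training images.
rng = np.random.default_rng(0)
training_images = [rng.random((128, 128), dtype=np.float32) for _ in range(4)]

x_train, y_train = extract_pairs(training_images)
model = improved_dilated_cnn(channels=1)       # from the earlier architecture sketch
model.compile(optimizer="adam", loss="mse")    # optimizer/loss are illustrative assumptions
model.fit(x_train, y_train, epochs=500, batch_size=64, validation_split=0.2)
```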

