
Temporal and Spatial Denoising of Depth Maps
(深度圖像之時間與空間上的雜訊過濾)

Advisor: 陳少傑
Co-advisor: 林伯星 (Bor-Shing Lin)

Abstract


This study improves the quality of depth maps acquired by RGB-D cameras. With the release of new structured-light cameras such as the Microsoft Kinect and the Asus Xtion PRO LIVE, depth maps have become easier and faster to acquire. Such depth maps can be applied in many fields, including virtual reality, image processing, and 3D printing. However, these depth maps typically come with various kinds of noise, such as invalid depth values, erroneous depth values, and temporal fluctuation of depth values, all of which greatly limit their practical use. To enable broader applications in the future, removing this noise is essential for improving image quality and application performance. We therefore propose an effective algorithm that resolves the above-mentioned noise. The algorithm improves on the exemplar-based image inpainting method [16], which was originally used to fill the missing pixel values of removed regions in color images; we adapt it to fill noisy regions in depth maps, thereby improving the quality of depth maps acquired by RGB-D cameras. For evaluation, the algorithm is tested on the Tsukuba Stereo Dataset from the University of Tsukuba, Japan, and on depth maps captured with an Asus Xtion PRO LIVE, and it is compared quantitatively in terms of peak signal-to-noise ratio (PSNR) and computation time, demonstrating that the proposed algorithm substantially improves depth map quality and can markedly benefit future applications of depth maps.
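As a rough illustration of this idea, the Python sketch below fills invalid (zero-valued) depth pixels by copying the centre value of the most similar fully valid patch, matched by the sum of squared differences over the known pixels of the target patch. It is a minimal, hypothetical sketch of the exemplar-based principle only: the function name `fill_depth_holes`, its parameters, and the greedy sweep are illustrative choices and omit the priority and confidence terms of the full inpainting method, so it should not be read as the thesis's actual algorithm.

```python
import numpy as np

def fill_depth_holes(depth, patch=7):
    """Greedy, simplified exemplar-based filling of invalid (zero) depth pixels.

    For each hole pixel whose surrounding patch contains at least one valid
    value, the best-matching fully valid patch elsewhere in the map is found
    by SSD over the known pixels, and its centre depth is copied in. This is
    an O(holes x sources) illustration, not an optimized implementation.
    """
    half = patch // 2
    # Zero-pad so every pixel has a full patch; the padding counts as invalid.
    d = np.pad(depth.astype(np.float32), half, mode="constant")
    h, w = d.shape

    def patch_at(y, x):
        return d[y - half:y + half + 1, x - half:x + half + 1]

    # Candidate exemplars: centres of patches containing no invalid pixels.
    sources = [(y, x)
               for y in range(half, h - half)
               for x in range(half, w - half)
               if (patch_at(y, x) > 0).all()]

    changed = True
    while changed:                       # sweep until no fill-front pixel remains
        changed = False
        for yy, xx in zip(*np.where(d[half:h - half, half:w - half] == 0)):
            y, x = yy + half, xx + half
            target = patch_at(y, x)
            known = target > 0
            if not known.any():          # deep inside a hole; wait for later sweeps
                continue
            errs = [(((patch_at(sy, sx) - target)[known] ** 2).sum(), sy, sx)
                    for sy, sx in sources]
            if errs:
                _, sy, sx = min(errs)
                d[y, x] = d[sy, sx]      # copy the exemplar's centre depth
                changed = True
    return d[half:-half, half:-half]

# Tiny synthetic check: a smooth depth ramp with a rectangular occlusion hole.
gt = np.tile(np.linspace(1.0, 3.0, 32, dtype=np.float32), (32, 1))
noisy = gt.copy()
noisy[10:14, 8:20] = 0.0                 # simulate invalid depth values
restored = fill_depth_holes(noisy)
print("invalid pixels left:", int((restored == 0).sum()))
```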

Abstract (English)


This work presents a refinement procedure for depth maps acquired by RGB-D (color-plus-depth) cameras. With the release of new structured-light RGB-D cameras such as the Microsoft Kinect and the Asus Xtion PRO LIVE, acquiring high-resolution depth maps has become convenient and consumer-accessible. This 3D depth information can be applied to many fields, such as augmented reality, image processing, and 3D printing. However, RGB-D cameras suffer from problems such as undesired occlusions, inaccurate depth values, and temporal variation. To broaden their applications, it is crucial to solve these problems. Thus, we propose a novel algorithm based on the exemplar-based inpainting method to cope with the artifacts in RGB-D depth maps. Exemplar-based inpainting was originally used to repair images with missing information after object removal, and its idea is similar to the procedure of filling the occlusions in RGB-D depth data. The proposed method therefore adapts and enhances the inpainting method to refine the quality of RGB-D depth maps. For evaluation, the proposed method is tested on the Tsukuba Stereo Dataset, which provides a 3D video with ground-truth depth maps, occlusion maps, and RGB images, using PSNR and computation time as metrics. Moreover, a set of self-captured RGB-D depth maps and their refined results are shown to demonstrate the improvement over the original occluded depth maps.
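Since the evaluation above compares refined depth maps against ground truth by PSNR, a small sketch of that metric may help. The peak value is assumed here to be the maximum ground-truth depth (or 255 for 8-bit encoded maps), which may differ from the exact setting used in the thesis, and `depth_psnr` is an illustrative name rather than part of the thesis's code.

```python
import numpy as np

def depth_psnr(estimate, ground_truth, peak=None):
    """PSNR in dB between a refined depth map and its ground truth.

    `peak` defaults to the maximum ground-truth depth; for 8-bit encoded
    depth maps one would pass peak=255 instead.
    """
    est = np.asarray(estimate, dtype=np.float64)
    gt = np.asarray(ground_truth, dtype=np.float64)
    mse = np.mean((est - gt) ** 2)
    if mse == 0.0:
        return float("inf")              # identical maps
    peak = gt.max() if peak is None else float(peak)
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: score the hole-filled map from the previous sketch against its ground truth.
# print(depth_psnr(restored, gt))
```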

Keywords

Depth Map; Asus Xtion PRO LIVE; Kinect; RGB-D Camera; Inpainting

References


[1] K. Khoshelham, “Accuracy Analysis of Kinect Depth Data,” International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2011.
Data Compression,” IEEE Transactions on Multimedia, vol. 15, no. 6, Oct. 2013,
Denoising of Kinect Depth Data,” The 8th International Conference on Signal
Shotton, S. Hodges, D. Freeman, A. J. Davison, and A. Fitzgibbon,
