
考慮真實天氣下之單張霧氣影像能見度增強方法 (Visibility enhancement for single hazy images under real-world weather conditions)

Efficient Single Hazy Image Visibility Enhancement in Widely Real-World Poor Weather

Advisor: 黃士嘉

Abstract


Under inclement weather conditions such as fog, haze, and sandstorms, the visibility of outdoor scene images is degraded. Light sources are therefore often used to improve visibility: drivers turn on their vehicles' headlights and streetlights are activated, so images captured in such environments frequently contain localized bright regions. In addition, images captured during sandstorms exhibit a color cast, because atmospheric sand particles absorb specific portions of the spectrum. When traditional state-of-the-art single-image haze removal techniques are applied to images that contain localized light sources or a color cast, they cannot restore visibility effectively. We therefore propose an effective visibility enhancement method for this problem. The proposed method combines three modules: a Hybrid Dark Channel Prior module, a color analysis module, and a visibility recovery module. Experimental results demonstrate that, compared with the restoration results produced by traditional state-of-the-art single-image haze removal techniques, the proposed method effectively improves image visibility and corrects image color, restoring the scene to its original appearance.
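
As a point of reference for the Hybrid Dark Channel Prior and visibility recovery modules named above, the following Python sketch shows the standard single-image dehazing baseline built on the dark channel prior and the usual haze imaging model I(x) = J(x)t(x) + A(1 - t(x)). It is only an assumed baseline for illustration: the function names, the 15-pixel patch size, and the omega/t_min values are illustrative choices, not the thesis's actual implementation.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # Per-pixel minimum over the RGB channels, then a local minimum
        # filter over a patch x patch neighborhood (the dark channel prior).
        return minimum_filter(img.min(axis=2), size=patch)

    def estimate_atmospheric_light(img, dark, top_fraction=0.001):
        # Average the input pixels whose dark-channel values fall in the
        # brightest 0.1%, a common way to estimate the airlight A.
        n = max(1, int(dark.size * top_fraction))
        idx = np.argsort(dark.ravel())[-n:]
        return img.reshape(-1, 3)[idx].mean(axis=0)

    def recover_scene_radiance(img, omega=0.95, t_min=0.1, patch=15):
        # img: H x W x 3 float array scaled to [0, 1].
        dark = dark_channel(img, patch)
        A = estimate_atmospheric_light(img, dark)
        # Transmission map from the dark channel of the normalized image.
        t = 1.0 - omega * dark_channel(img / A, patch)
        t = np.clip(t, t_min, 1.0)
        # Invert the haze model I = J*t + A*(1 - t) to recover J.
        J = (img - A) / t[..., None] + A
        return np.clip(J, 0.0, 1.0)

A plain recovery of this kind tends to over-saturate regions around headlights and streetlights and cannot undo a sandstorm color cast, which is exactly the failure mode the proposed hybrid and color analysis modules are meant to address.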

Parallel Abstract (English)


The visibility of images of outdoor scenes generally becomes degraded when they are captured during inclement weather conditions such as haze, fog, sandstorms, and so on. Additionally, localized light sources are common when capturing scenes in these conditions: drivers often turn on the headlights of their vehicles, and streetlights are often activated. Sandstorms are particularly challenging due to the propensity of atmospheric sand to absorb specific portions of the spectrum and thereby cause color-shift problems. Traditional state-of-the-art restoration techniques for hazy images are unable to effectively contend with over-saturation artifacts caused by localized light sources or color shifts arising from this selective spectrum absorption. In response, we present a novel and effective haze removal approach to remedy problems caused by localized light sources and color shifts, and thereby achieve superior restoration results for single hazy images. To achieve this, the proposed approach combines the hybrid dark channel prior module, the color analysis module, and the visibility recovery module. Experimental results demonstrate that the proposed haze removal technique can recover scene radiance in single images more effectively than traditional state-of-the-art haze removal techniques.
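
The abstract does not specify how the color analysis module corrects the sandstorm color cast; a common statistical baseline is the gray-world assumption, sketched below in Python purely for illustration. The function name gray_world_correct and the small epsilon guard are assumptions, not details taken from the thesis.

    import numpy as np

    def gray_world_correct(img):
        # Gray-world assumption: an unbiased scene averages to gray, so a
        # sandstorm-style color cast shows up as unequal channel means.
        means = img.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / np.maximum(means, 1e-6)
        # Rescale each channel so its mean matches the global mean.
        return np.clip(img * gains, 0.0, 1.0)

In a pipeline of the kind described above, a correction like this would typically be applied before or alongside transmission estimation, so that the recovered scene radiance is not biased toward the dominant channel of the cast.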
