
Color Continuity-Aware & Exposure-Adaptive HDR Synthesis and its Real-Time Application on Autotronics Night Vision Enhancement

Advisors: 林泰吉, 葉經緯

Abstract


The technique of synthesizing a single high dynamic range (HDR) image from multiple frames of different exposures has been widely adopted in consumer electronics such as digital cameras and smartphones, but it has never been successfully applied to video. The main reason is that, at a reasonable frame rate (e.g., 30 frames per second), the exposure values available for HDR synthesis are severely limited. This thesis performs HDR synthesis from only two exposures, one high-exposure (HE) frame and one low-exposure (LE) frame, and proposes color continuity-aware and exposure-adaptive algorithms to address the resulting color-discontinuity and loss-of-detail problems, together with an improved tone mapping stage for final output. Simulations show that, across a variety of scenes, the proposed method matches the quality of conventional multi-frame still-image HDR synthesis.

We then apply this two-exposure HDR synthesis to enhance the quality of night-time driving video, and propose an HDR synthesis of an auto-exposure (AE) frame and a low-exposure (LE) frame to overcome the dilemma that, in low-light environments, both source frames would otherwise be under-exposed. In other words, the synthesis relies mostly on the AE frame, and uses the LE frame to restore details only where the AE frame is over-exposed. We also propose effective solutions to the ghosting caused by insufficient LE information and to seam discontinuities.

Finally, we implement the system on a development board with a TI OMAP4430 application processor (containing a dual-core ARM Cortex-A9): two USB cameras capture the AE and LE video streams respectively, the frames are aligned, and HDR synthesis is performed in real time. A multithreaded implementation achieves a throughput of about 10 frames per second.
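The two-exposure merge described above can be sketched as follows. This is an illustrative NumPy reconstruction, not the thesis's exact algorithm: the Gaussian well-exposedness weight, the `ev_gap` parameter, and the global photographic operator L/(1+L) are assumptions chosen for clarity (the thesis combines photographic and gradient-based tone mapping).

```python
import numpy as np

def exposure_weight(img, sigma=0.2):
    # Hypothetical well-exposedness weight: pixels near mid-gray get high
    # weight; near-saturated or near-black pixels get low weight.
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def merge_he_le(he, le, ev_gap=2.0):
    """Blend one high-exposure (HE) and one low-exposure (LE) frame.

    Both inputs are float arrays in [0, 1]; `ev_gap` is the assumed
    exposure difference in EV stops between the two frames.
    """
    # Bring both frames to a common radiance scale via the exposure ratio.
    rad_he = he
    rad_le = le * (2.0 ** ev_gap)
    # Exposure-adaptive weights keep the blend continuous, so merged tones
    # change smoothly between HE-dominated and LE-dominated regions.
    w_he = exposure_weight(he)
    w_le = exposure_weight(le)
    hdr = (w_he * rad_he + w_le * rad_le) / (w_he + w_le + 1e-6)
    # Simple global photographic tone mapping back to display range.
    return hdr / (1.0 + hdr)
```

Because the weights vary smoothly with pixel intensity rather than switching hard between sources, the merged result avoids the abrupt tone jumps that motivate the color-continuity constraint.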

English Abstract


High dynamic range (HDR) synthesis is popular in consumer products such as digital cameras and smartphones for generating high-quality still pictures. However, these techniques are difficult to apply to video, owing to the strict timing constraints (i.e., 1/30 s at a typical 30 fps frame rate) imposed on the image sources of various exposure values required for HDR synthesis.

This thesis synthesizes HDR frames from only two sources, a “high exposure (HE)” frame and a “low exposure (LE)” frame. The “color discontinuity” and “loss of details” problems caused by the limited sources are solved with the proposed “color continuity-aware” and “exposure-adaptive” pixel merge. Moreover, an improved tone mapping algorithm, which combines the advantages of conventional photographic and gradient-based approaches, has been proposed to generate the final result. Simulations on various scenes show that the proposed HE/LE algorithm achieves the subjective quality of conventional methods based on multiple source images (e.g., five images at -2 EV, -1 EV, 0 EV, +1 EV, and +2 EV).

The proposed HDR video synthesis has then been applied to autotronics night vision enhancement. An alternative algorithm with “auto exposure (AE)” and LE sources has been proposed in place of the HE/LE approach to solve the problem of both source frames being under-exposed in dark environments. In other words, the result pixels come directly from AE (i.e., the appropriate exposure for most pixels) except the over-exposed ones, which are augmented by the corresponding pixels in LE to recover details. LE entropy-aware augmentation and seam smoothing have also been proposed to further improve the synthesis quality.

Finally, the proposed algorithm has been implemented and verified on the TI OMAP4430 (with a dual-core ARM Cortex-A9) embedded platform. The source images for HDR synthesis are captured from two USB cameras and aligned in software.
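The AE/LE augmentation and seam smoothing just described can be sketched as follows. This is a minimal NumPy illustration: the saturation threshold, the EV gap, and the box-blur feathering of the mask are assumptions standing in for the thesis's entropy-aware augmentation and seam smoothing.

```python
import numpy as np

def augment_ae_with_le(ae, le, thresh=0.95, ev_gap=3.0, blur=2):
    """Keep AE pixels, but recover detail from LE where AE is over-exposed.

    `ae` and `le` are float arrays in [0, 1]; `thresh` marks saturation,
    `ev_gap` is the assumed AE-to-LE exposure difference in EV stops.
    """
    # Binary over-exposure mask on the AE frame (1 where saturated).
    mask = (ae >= thresh).astype(np.float64)
    # Feather the mask with a small box blur so the AE/LE seam is smooth
    # instead of a hard, visible boundary.
    for _ in range(blur):
        mask = (mask
                + np.roll(mask, 1, axis=0) + np.roll(mask, -1, axis=0)
                + np.roll(mask, 1, axis=1) + np.roll(mask, -1, axis=1)) / 5.0
    # Scale the LE frame toward AE brightness before blending it in.
    le_boost = np.clip(le * (2.0 ** ev_gap), 0.0, 1.0)
    return (1.0 - mask) * ae + mask * le_boost
```

The feathered mask is what suppresses seam discontinuities: saturated regions take their values from the boosted LE frame, and the transition back to AE pixels is gradual rather than abrupt.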
The multithreaded implementation can achieve 10 frames/sec real-time HDR video synthesis.
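The two-camera, multithreaded pipeline can be sketched as a producer/consumer arrangement using Python's standard `threading` and `queue` modules. This is a structural sketch only: the frame values and the `merge` callback are placeholders for the real USB-camera capture and the HE/LE (or AE/LE) synthesis running on the OMAP4430 board.

```python
import threading
import queue

def run_pipeline(ae_frames, le_frames, merge):
    """Two capture threads (one per camera) feed one synthesis thread."""
    ae_q, le_q = queue.Queue(), queue.Queue()
    results = []

    def produce(frames, q):
        # Stand-in for a camera capture loop pushing frames into a queue.
        for f in frames:
            q.put(f)
        q.put(None)  # sentinel: capture finished

    def consume():
        # Pair one AE frame with one LE frame and merge them.
        while True:
            ae, le = ae_q.get(), le_q.get()
            if ae is None or le is None:
                break
            results.append(merge(ae, le))

    threads = [threading.Thread(target=produce, args=(ae_frames, ae_q)),
               threading.Thread(target=produce, args=(le_frames, le_q)),
               threading.Thread(target=consume)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Decoupling capture from synthesis through queues lets the two cameras and the merge step run concurrently on the dual-core processor, which is what makes the reported ~10 frames/sec throughput plausible for this workload.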

