In this paper, we propose UNet-AIR2 as an effective image dehazing model, based on UNet combined with state-of-the-art designs, including the aggregated transformation, the inception module, and the recurrent residual convolutional neural network. Unlike previous methods that depend on physical scattering models, UNet-AIR2 directly generates the dehazed image without estimating the transmission map or atmospheric light. To demonstrate the effectiveness of each module, we conduct an ablation study evaluated using the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and subjective visual quality. Furthermore, we analyze why each module is effective in UNet-AIR2, and we obtain saliency maps to observe how each output pixel relates to the input image. Extensive experiments on synthetic and real-world datasets show that the proposed method significantly improves upon existing state-of-the-art techniques for image haze removal.
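As a point of reference for the PSNR metric used in the evaluation, a minimal sketch of its computation is shown below. The function name, image shapes, and noise level are illustrative assumptions, not details from the paper's experiments; SSIM is more involved and is typically computed with an existing implementation such as scikit-image's `structural_similarity`.

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between two images scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a clean image vs. a slightly perturbed copy (illustrative data).
rng = np.random.default_rng(0)
clean = rng.random((64, 64, 3))
noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```

Higher PSNR indicates a dehazed output closer to the haze-free ground truth; a Gaussian perturbation with standard deviation 0.05 as above yields roughly 26 dB.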