In general, we expect incoming images to be clear so that further computer-vision processing can be applied to them. However, once the weather changes and haze forms, colors fade and contrast drops, which affects the judgment of both human vision and computer vision. Many methods now exist for restoring hazy images; as of 2010, the dark channel prior method is impressive in both speed and quality, but it is still far from real-time processing. We therefore borrow its assumption and try to simplify the most computationally expensive step while producing acceptable results. The simplification inevitably introduces a color-shift problem, which we handle by restoring the optical model to its original form. As for rain removal, most existing methods target video: motion between frames is used to make an initial detection of all objects that may be rain, further conditions filter them, and the positions judged to be rain are processed with blurring or a temporal median filter. These methods work well, but none of them can remove rain from a single image. For heavy rain, we propose an approach different from past work that treats rain as texture: following the idea of image decomposition, we use the MCA framework to separate the rain. The biggest problem encountered is that the dictionary is not orthogonal, so a larger dictionary is not necessarily better; this led us to select exemplars from the test image to train a dictionary for subsequent use.
In general, we expect to receive a clear image on which further computer-vision processing can be applied. However, weather varies with time: haze desaturates colors, degrades contrast, and impairs recognition by both human vision and computer vision. Many dehazing methods exist. Among them, the dark channel prior method produces impressive results, but it still costs 10~20 seconds for a 400×600 image. We therefore adopt its assumption and try to simplify its most time-consuming step while producing acceptable output. During the modification, we encounter an inevitable color-shift problem, which is solved by recovering the optical model to its original form. For rain removal, most current methods require video as input: motion between frames is used to detect candidate rain streaks, stricter constraints are applied to filter out non-rain objects, and blurring or a temporal median filter is applied to the detected rain streaks. Each of these methods shows good results, but they fail to achieve rain removal in a single image. For heavy rain, we propose a method different from past work. Viewing rain streaks as texture fits the idea of image decomposition, so the MCA framework is adopted. Because the dictionary is not orthogonal, a larger dictionary is not necessarily better; we therefore select exemplar patches from the test image and train a corresponding dictionary.
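The dark channel prior mentioned above rests on a simple computation: for each pixel, take the minimum over the RGB channels, then a local minimum over a patch. A minimal sketch of that computation is shown below; the function name and the patch size of 15 are illustrative assumptions following common practice in the dehazing literature, not details taken from this work.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """Compute the dark channel of an H x W x 3 image in [0, 1].

    Step 1: per-pixel minimum over the three color channels.
    Step 2: local minimum over a patch_size x patch_size window.
    In haze-free outdoor images this value is close to zero for
    most pixels, which is the prior exploited for dehazing.
    """
    per_pixel_min = image.min(axis=2)            # min over R, G, B
    return minimum_filter(per_pixel_min, size=patch_size)
```

The result is an H×W map; in the standard formulation it is used to estimate the atmospheric light and the transmission, and the minimum filtering over patches is the step that dominates the cost.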
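The temporal median filter used by the video-based rain-removal methods can be sketched as follows: pixels flagged as rain in one frame are replaced by their median value over neighboring frames, since a rain streak rarely occupies the same pixel in most frames. The function name and interface here are illustrative assumptions, not the exact procedure of any cited method.

```python
import numpy as np

def temporal_median(frames, rain_mask):
    """Remove detected rain from the middle frame of a stack.

    frames:    (T, H, W) grayscale stack of consecutive frames.
    rain_mask: (H, W) boolean mask of pixels judged to be rain
               in the middle frame.
    Rain pixels are replaced by the per-pixel median over time;
    all other pixels are left untouched.
    """
    t_mid = len(frames) // 2
    out = frames[t_mid].copy()
    median = np.median(frames, axis=0)    # per-pixel temporal median
    out[rain_mask] = median[rain_mask]
    return out
```

This illustrates why such methods need video as input, and hence why they cannot remove rain from a single image, which motivates the single-image decomposition approach proposed here.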