With the growing popularity of hand-held cameras in recent years, people can capture photos and videos anytime and anywhere. However, these photos and videos suffer from blur caused by camera shake or by objects moving in the scene. Although current video deblurring methods can partially remove non-uniformly distributed blur, they all require future frames, or even the entire video, to do so, which limits their practicality. To address this problem, we propose a sequentially one-to-one video deblurring network that removes blur effectively without using any future frames. Through a recurrent architecture, the network propagates both spatial and temporal information to the next frame to aid deblurring. Extensive experiments demonstrate that the proposed method outperforms previous state-of-the-art deblurring methods, both quantitatively and qualitatively, on several challenging datasets. Moreover, because our method does not deblur as a post-processing step, it is better suited to real-world applications.
With the growing availability of hand-held cameras in recent years, more and more images and videos are captured at any time and in any place. However, these images and videos often suffer from undesirable blur caused by camera shake or by objects moving in the scene. While modern video deblurring methods can remove non-uniform blur and achieve impressive performance, most of them are based on batch processing, meaning they require future frames, or even all video frames, to perform deblurring; as a result, their practical applicability is limited. To address this, we propose a sequentially one-to-one video deblurring network (SOON) that deblurs effectively without any future information. It transfers both spatial and temporal information to the next frame through a recurrent architecture. In addition, we design a novel Spatio-Temporal Attention module that nudges the network to focus on meaningful and essential features from the past. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art deblurring methods, both quantitatively and qualitatively, on various challenging real-world deblurring datasets. Moreover, since our method deblurs in an online manner and is potentially real-time, it is more suitable for practical applications.
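The causal, one-frame-in/one-frame-out processing described above can be illustrated with a toy sketch. Everything here is an illustrative assumption, not the paper's actual architecture: the feature transform, the sigmoid gating used to stand in for the Spatio-Temporal Attention module, and the rule that reuses the fused features as the next hidden state are all placeholders chosen only to show the strictly causal recurrent loop.

```python
# Toy sketch (NumPy) of an online recurrent deblurring loop: each frame is
# processed using only the current input and a hidden state carrying past
# spatio-temporal information -- no future frames are ever accessed.
# All shapes, layers, and the attention formula are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def spatio_temporal_attention(feat, hidden):
    """Toy attention: a per-pixel sigmoid gate computed from the current
    features and the recurrent hidden state, re-weighting past features."""
    gate = 1.0 / (1.0 + np.exp(-(feat * hidden).sum(axis=-1, keepdims=True)))
    return gate * hidden  # attended past information

def deblur_step(blurry_frame, hidden, w):
    feat = np.tanh(blurry_frame @ w)       # stand-in for a learned encoder
    past = spatio_temporal_attention(feat, hidden)
    fused = feat + past                    # fuse current and attended past
    restored = blurry_frame + fused @ w.T  # residual reconstruction
    return restored, fused                 # fused becomes the next hidden state

H, W, C = 4, 4, 3                          # tiny frame for illustration
w = rng.standard_normal((C, C)) * 0.1
hidden = np.zeros((H, W, C))

video = [rng.standard_normal((H, W, C)) for _ in range(5)]
outputs = []
for frame in video:                        # strictly causal iteration
    restored, hidden = deblur_step(frame, hidden, w)
    outputs.append(restored)

print(len(outputs), outputs[0].shape)
```

Because the loop consumes one frame and emits one restored frame before the next arrives, latency is bounded by a single step, which is what makes the online, potentially real-time operation possible.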