As image editing software grows ever more powerful, most people today can easily tamper with an image. Although image tampering detection techniques have been developed for some time, most methods address only a single type of tampering. Moreover, most methods can only judge whether a given suspicious region is trustworthy; they cannot directly localize the tampered region within an image. The goal of this thesis is to propose a feature fusion model that exploits all available information and automatically and directly localizes the tampered region. We adopt Robust Principal Component Analysis (RPCA) to decompose a test image into an "authentic" part and a "tampered" part: the features of the authentic regions behave similarly to one another, i.e., the matrix they form is low-rank, while the matrix formed by the features of the tampered regions is both sparse and low-rank. We apply a group-sparsity technique so that the detected tampered regions are spatially consistent. Experimental results on various cases show that the proposed method outperforms existing methods.
Nowadays, image editing software is so powerful and user-friendly that most people can easily create visually pleasing tampered images. Image forensics techniques have been developed for about two decades; however, most focus on only one tampering trace. In addition, they sometimes assume that the suspicious region is known a priori. The purpose of this work is to develop a feature fusion model that utilizes all the available traces and automatically localizes the tampered region. We adopt an early-fusion scheme so that all available features are considered simultaneously. We propose to use Robust Principal Component Analysis (RPCA) to decompose a test image into authentic parts and tampered parts. We assume that the authentic parts share similar feature behaviors, i.e., their feature matrix is low-rank, and that the tampered parts are both sparse and low-rank. We enforce spatial consistency of the detected tampered parts via group sparsity. The experimental results demonstrate the effectiveness of the proposed method, which outperforms state-of-the-art methods in both synthetic and realistic cases.
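The low-rank + sparse decomposition described above can be sketched with a standard RPCA solver. The following is a minimal illustration using the inexact augmented Lagrange multiplier (ALM) algorithm, a common off-the-shelf solver for RPCA; it is not the thesis's actual feature-fusion pipeline, and all function names and parameter choices here are illustrative assumptions:

```python
import numpy as np

def shrink(X, tau):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M ~ L + S, with L low-rank (the "authentic" part) and
    S sparse (the "tampered" part), via inexact ALM."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # standard RPCA sparsity weight
    norm_M = np.linalg.norm(M)               # Frobenius norm, for the stop test
    Y = np.zeros_like(M)                     # Lagrange multiplier
    S = np.zeros_like(M)
    mu = 1.25 / np.linalg.svd(M, compute_uv=False)[0]
    rho, mu_max = 1.5, mu * 1e7
    for _ in range(max_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)    # low-rank update
        S = shrink(M - L + Y / mu, lam / mu) # sparse update; the thesis's
        # group-sparsity variant would instead shrink whole feature groups
        # jointly (an l2,1 proximal step) for spatial consistency
        R = M - L - S
        Y += mu * R                          # dual ascent on the multiplier
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(R) / norm_M < tol:
            break
    return L, S

# Toy demo: a rank-2 matrix corrupted at ~5% of its entries.
rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
S0 = np.zeros((60, 60))
mask = rng.random((60, 60)) < 0.05
S0[mask] = 10.0 * rng.standard_normal(mask.sum())
L_hat, S_hat = rpca(L0 + S0)
```

In this regime (low rank, few corrupted entries) the solver separates the two components almost exactly, which mirrors the intuition in the abstract: authentic features fall in the low-rank part, tampering deviations in the sparse part.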