
Prediction Coding and Probability Model Based Image Compression Algorithms

Advisor: 丁建均

Abstract


With the spread of digital cameras, camcorders, and the Internet, digital images now play an important role in everyday life. Uncompressed digital images, however, occupy a large amount of memory, which wastes storage resources and hinders transmission over the Internet. Various image compression standards and algorithms have therefore been proposed; the widely used JPEG standard and the newer JPEG 2000 still-image standard are two prominent examples. Although the common existing standards already provide good compression quality, growing consumer demands on quality and applications keep image compression an important research area.

This thesis investigates two major problems in digital image compression. First, to improve compression efficiency, we design an algorithm that significantly improves the compression efficiency of the current JPEG image compression standard. Second, to reduce the buffer size used during compression, we design lossless and near-lossless image compression algorithms that achieve this goal.

Our first algorithm applies double prediction and a Pareto probability model to DC-term coding in the JPEG standard. The Joint Photographic Experts Group (JPEG) standard was the first international digital image compression standard and applies to both grayscale and color images. In JPEG, the DC coefficients are differentially coded: what is encoded is the difference between the DC coefficient of the current block and that of the previous block. In our method, we instead use the DC coefficients of four adjacent blocks to predict the current DC coefficient. We then use the prediction errors of those four adjacent blocks to estimate the prediction error of the current block. A Pareto probability model approximates the probability distribution of the prediction error, and an arithmetic coder performs the encoding. Simulation results show that this algorithm reduces the compressed data size by 25%–60%, achieving a higher compression ratio.

To reduce buffer size, we adopt a line-based prediction method together with context modeling to address the problem of excessive buffer usage in lossless and near-lossless image compression. A larger buffer means more memory and a higher manufacturing cost, so in many embedded systems and mobile devices, such as cell phones, printers, televisions, and digital cameras, the buffer is usually very small. For this reason, when designing an image compression algorithm, researchers should minimize buffer usage in addition to improving the compression ratio and PSNR. Although the JPEG-LS image compression standard and the CALIC compression scheme have been shown to outperform many lossless and near-lossless algorithms, whether these schemes can operate with a reduced buffer without significant performance loss remains an important practical question. We design a complete system that performs lossless and near-lossless image compression with a very small buffer. The proposed method is "line-based", so only a minimal number of lines must be kept in the buffer. We also use techniques such as adaptive sampling, quantization, and context modeling to further improve compression efficiency.
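The double-prediction and Pareto-modeling steps described above can be sketched as follows. This is only an illustration under assumptions: the abstract does not specify how the four neighboring DC coefficients are combined, so the sketch simply averages them; `predict_dc` and `pareto_pdf` are hypothetical names, and the arithmetic-coding stage is omitted.

```python
import numpy as np

def predict_dc(dc_grid, i, j):
    """Predict the DC coefficient of block (i, j) from its four
    already-decoded neighbours (left, upper-left, upper, upper-right).
    Plain averaging is an illustrative choice, not the thesis's rule."""
    neighbours = [dc_grid[i, j - 1], dc_grid[i - 1, j - 1],
                  dc_grid[i - 1, j], dc_grid[i - 1, j + 1]]
    return float(np.mean(neighbours))

def pareto_pdf(x, alpha, xm):
    """Type-I Pareto density used to model the magnitude of the DC
    prediction error; support starts at xm, shape parameter alpha."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= xm, alpha * xm**alpha / x**(alpha + 1), 0.0)
```

In a full encoder, the density values produced by `pareto_pdf` would drive the symbol probabilities of the arithmetic coder.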

Abstract (English)


In this thesis, we probe into two major problems in the current research stream of digital image compression. First, we propose a new compression algorithm to enhance the compression efficiency of the current Joint Photographic Experts Group (JPEG) image compression standard. Second, we design an algorithm to address the problem of large buffer size in lossless and near-lossless image compression. The first algorithm adopts double prediction and a Pareto probability model to encode the DC term in the JPEG compression process. JPEG is a popular international digital image compression standard for still images, both grayscale and color. In JPEG, the DC term is encoded by differential coding, i.e., by encoding the difference between the DC values of the current block and the previous block. In our proposal, we first use the DC terms of four adjacent blocks to predict the current DC value. We then use the prediction errors of the four adjacent blocks to estimate the variance of the prediction error of the current block. Next, the Pareto distribution is applied to model the probability distribution of the prediction error, which is then encoded with an arithmetic coder. Simulation results show that the proposed algorithm can significantly reduce the data size by 25%–60% and achieve a much higher compression rate. In addition, the second algorithm adopts line-based prediction and context modeling to address the problem of large buffer size in lossless and near-lossless image compression. A small buffer requirement is important because a larger buffer means a larger memory requirement and accordingly a higher manufacturing cost; in many embedded systems and mobile devices, e.g., cell phones, printers, TV sets, and digital cameras, the buffers are therefore usually very small.
While an image compression algorithm is developed to increase the compression rate and PSNR, attention should also be paid to lowering the buffer size. Although the JPEG-LS image compression standard and the CALIC compression scheme have been shown to be superior to many lossless and near-lossless coding techniques, whether these schemes can be implemented with a small buffer without significant loss in performance remains an important question regarding their practicality. We designed a complete system that performs lossless and near-lossless image coding with a small buffer. The proposed approach is "line-based": the image is read line by line, so only a small number of pixels are kept in memory. We also utilized techniques such as adaptive sampling, quantization, and context modeling to further improve compression efficiency. This low-buffer encoding system achieves performance comparable to state-of-the-art lossless and near-lossless image compression schemes at a fraction of their buffer usage.
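The line-based, small-buffer scheme can be sketched as follows. This is a simplified illustration, not the thesis's exact algorithm: it uses the JPEG-LS median edge detector as the line-based predictor and a uniform quantizer for the near-lossless mode, omits the adaptive sampling, context modeling, and entropy-coding stages, and all function names are hypothetical.

```python
import numpy as np

def med_predict(a, b, c):
    """Median edge detector (MED) predictor from JPEG-LS:
    a = left pixel, b = pixel above, c = pixel above-left."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def encode_line_based(image, delta=0):
    """Line-based (near-)lossless encoding sketch: only the previous
    reconstructed line is buffered.  `delta` is the maximum allowed
    per-pixel error (delta = 0 gives lossless coding).  Returns the
    residual stream and the decoder-side reconstruction for checking."""
    h, w = image.shape
    step = 2 * delta + 1                  # uniform quantizer step
    prev_line = np.zeros(w, dtype=int)    # the only buffered line
    recon = np.zeros((h, w), dtype=int)   # kept here only for verification
    residuals = []
    for y in range(h):
        cur_line = np.zeros(w, dtype=int)
        for x in range(w):
            a = cur_line[x - 1] if x > 0 else 0
            b = prev_line[x] if y > 0 else 0
            c = prev_line[x - 1] if (x > 0 and y > 0) else 0
            pred = med_predict(a, b, c)
            err = int(image[y, x]) - pred
            q = (err + delta) // step     # quantized residual (entropy-coded in practice)
            cur_line[x] = pred + q * step # reconstruction the decoder would see
            residuals.append(q)
        recon[y] = cur_line
        prev_line = cur_line
    return residuals, recon
```

Note that prediction is always made from *reconstructed* pixels, so encoder and decoder stay in sync and the reconstruction error never exceeds `delta`, while only one line ever resides in the buffer.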
