In the digital era, image compression plays a key role in numerous fields, from web media and streaming services to high-resolution medical imaging and connected vehicle networks, enabling efficient storage and transmission of data. As the demand for high-quality image communication continues to grow, the need for advanced compression techniques becomes increasingly pressing. In recent years, a number of learned image compression methods have been proposed and have achieved convincing results against traditional standards. However, variable rate image compression remains an open problem. Some learned image compression methods employ multiple networks to achieve different compression rates, while others use a single model, which may increase computational complexity and degrade performance. In this thesis, we realize variable rate image compression through progressive learning, built on the parameter-efficient fine-tuning method Low-Rank Adaptation (LoRA). Because the LoRA parameters are merged back into the base weights through re-parameterization, the proposed method adds no computational complexity at inference time. Comprehensive experiments show that, compared with approaches based on multiple models, our method achieves comparable performance while reducing parameter storage by 99%, training data by 90%, and training steps by 97%.
In the digital age, image compression is crucial for numerous applications, including web media, streaming services, high-resolution medical imaging, and connected vehicle networks, enabling efficient data storage and transmission. With the increasing demand for high-quality image communication, the need for advanced compression techniques becomes ever more critical. Numerous learned image compression techniques have recently been introduced, showing impressive performance compared to traditional standards. However, variable rate image compression remains an unresolved issue. Some learned image compression methods deploy multiple networks to attain different compression rates, whereas others use a single model, which often increases computational complexity and reduces performance. In this thesis, we propose a progressive learning approach for variable rate image compression based on the parameter-efficient fine-tuning method Low-Rank Adaptation (LoRA). Because the LoRA weights can be merged into the base model through re-parameterization, our proposed method introduces no additional computational complexity during inference. Comprehensive experiments demonstrate that, compared to methods utilizing multiple models, our approach achieves similar performance while reducing parameter storage by 99%, training data by 90%, and training steps by 97%.
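For reference, the no-overhead claim rests on the standard LoRA re-parameterization (the symbols \(W_0\), \(A\), \(B\), \(r\) below follow the usual LoRA notation and are not taken from this abstract; how the thesis attaches these adapters to the compression network is described in the main text). A frozen weight matrix \(W_0 \in \mathbb{R}^{d \times k}\) is adapted by a low-rank update \(\Delta W = BA\) with \(B \in \mathbb{R}^{d \times r}\), \(A \in \mathbb{R}^{r \times k}\), and rank \(r \ll \min(d, k)\); after training, the update can be folded into the base weights:
\[
h = W_0 x + B A x = (W_0 + B A)\,x, \qquad W' = W_0 + B A,
\]
so inference uses only the single merged matrix \(W'\) and has the same cost as the original model.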