Analyzing social media users' comments poses a significant challenge because identifying the relationships between the subjects of comments and the sentiments expressed about them is complex, particularly when comments vary greatly in length. This paper introduces a novel Opinion Tree Parser model that handles the intricate interactions between different aspects within a comment; during training, conjunctions and semantic modifiers are incorporated to improve parsing accuracy. Because this increases model complexity, and in order to improve training efficiency and manage computational demands, we apply a parameter-efficient fine-tuning (PEFT) method that reduces the number of trainable parameters while achieving comparable performance. We evaluate the proposed model on the ACOS datasets. Given the limited availability of datasets describing user sentiment toward specific aspects, and the difficulty of adapting large pre-trained language models (LLMs) to downstream tasks due to their resource intensity, our approach proposes an OTP model with a modified computation scheme. This approach changes the model's loss function to focus training on strategically placed modules; with adapters added, it significantly reduces GPU memory usage and alleviates out-of-memory (OOM) problems without damaging the overall integrity of the pre-trained model. This approach not only improves training efficiency but also maintains performance close to that of the original LLM configuration.
Analyzing social media user comments presents significant challenges due to the complexity of discerning relationships between opinions and aspects, particularly when comments vary greatly in length. This paper introduces a novel Opinion Tree Parser model that navigates the intricate interplay between different aspects within comments, utilizing conjunctions and semantic modifiers to enhance parsing accuracy. To improve training efficiency and manage computational demands, we apply Parameter-Efficient Fine-Tuning (PEFT) methods on the decoder side. We evaluate the proposed model on the ACOS datasets. Given the limited availability of datasets describing user sentiment toward specific aspects, and the resource intensity of fine-tuning large pre-trained language models (LLMs) for downstream tasks, our approach proposes an advanced context-free opinion grammar. This method integrates adapters that focus training on strategically placed modules, significantly reducing the GPU memory footprint and mitigating out-of-memory (OOM) issues without compromising the integrity of the pre-trained model. This approach not only improves training efficiency but also maintains performance close to that of the original LLM configurations.
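As a concrete illustration of the adapter-based PEFT idea summarized above, the following sketch attaches LoRA-style adapter modules to a sequence-to-sequence backbone using the Hugging Face peft library. The T5 backbone, the target module names, and the rank/dropout hyperparameters are illustrative assumptions, not the configuration reported in this paper.

```python
# Minimal sketch (not the paper's exact setup): adapter-style PEFT via LoRA
# on an assumed T5 backbone, using the Hugging Face `peft` library.
from transformers import T5ForConditionalGeneration
from peft import LoraConfig, TaskType, get_peft_model

# Load a frozen pre-trained seq2seq model (backbone choice is an assumption).
base = T5ForConditionalGeneration.from_pretrained("t5-base")

# Attach small low-rank adapter modules only to selected attention projections;
# all original pre-trained weights stay frozen.
peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # adapter rank (illustrative)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
)

model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because only the small adapter matrices receive gradients while the pre-trained weights remain frozen, the optimizer state shrinks accordingly, which is the mechanism behind the reduced GPU memory footprint and mitigated OOM risk described in the abstract.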