The Transformer-based model has achieved excellent results on natural language tasks because it is an effective realization of contextualized word representations, but its structure is complex and therefore difficult for most people to understand. Without a thorough understanding of the model, users find it hard to interact with it further, make full use of it, or understand why errors occur. However, because the model contains many complex parameters that cannot be interpreted directly, this problem is difficult to solve through simple parameter inspection or mathematical analysis. We therefore propose a visual analysis tool for this model structure to help users understand the model in detail, including the influence of the input data on the model and the operation of each of its layers. We focus on the model's decision-making process in natural language tasks, so our tool is built around such tasks. We design a complete workflow through which users can clearly understand the details of each step of the model, analyze the input data, and interact directly with the model's internals. Users can formulate their own hypotheses and verify them with this tool.

Keywords: data visualization, model interpretability, contextualized word representation