In legal terminology, an "adjudication" (裁判) is defined by Article 220 of the Code of Criminal Procedure: "Adjudications, except those which shall be made in the form of a judgment in accordance with this Code, shall be made in the form of a ruling." Adjudications are thus formally divided into two classes, "rulings" (裁定) and "judgments" (判決), and the written rulings and written judgments that record these decisions are collectively called "written adjudications" (裁判書), i.e. court judgments.

Court judgments are important reference material for legal practitioners and parties to litigation when they deal with legal issues, since they contain the court's opinions on specific legal questions. However, judgments also contain a great deal of information that does not carry over to other types of cases, so reading them thoroughly takes considerable time. A "gist of judgment" (裁判要旨) prepared by a professional legal practitioner lets readers quickly grasp a judgment's summary and key points; but because preparing a gist also takes substantial time and effort, most judgments currently lack a manually prepared gist.

Most gists prepared by the courts are excerpts of sentences from the original judgment, so machine-learning-based extractive automatic text summarization should be applicable. A system built on this technique to assist legal practitioners in excerpting the gist should further improve the efficiency of gist preparation.

This study treats extractive automatic text summarization as a binary classification task and builds classification models with deep neural networks. We experimented with different context lengths, different embedding models, different additional features, and different deep neural network architectures, and found that models using BiLSTM and BiGRU as the recurrent structure performed best; a bagging-style voting ensemble further improved classification performance.

Because gist sentences are far rarer than non-gist sentences in a judgment, the class distribution is highly imbalanced. Even so, the proposed model achieves an F1 score of 0.547 on the District Court judgment data set, 0.492 on the High Court judgment data set, and 0.576 on the Supreme Court judgment data set, confirming that the classifier has indeed learned to extract the gist from judgments.
The judgments of courts are important reference material for legal practitioners and persons involved in litigation when they deal with legal issues, because they contain the courts' opinions on specific legal issues. However, judgments also contain a great deal of information that is not applicable to other types of cases, so considerable time must be spent reading and understanding them. If a good gist of judgment produced by professional legal practitioners is available, readers can quickly grasp the summary and key points of the judgment. Since extracting the gist of a judgment takes a lot of time and effort, most court judgments currently do not have one. Most gists of judgment produced by the courts are excerpts of sentences from the original judgment, so we should be able to imitate this practice and use machine learning techniques to train an extractive automatic text summarization model. A system that assists legal practitioners in excerpting the gist of judgment should further improve the efficiency of gist production. In this research, extractive automatic text summarization is regarded as a binary classification task, and deep neural networks are used to build the classification model. We experimented with different context lengths, different embedding models, and different deep neural network architectures, and found that using BiLSTM to build the classification model structure works best. Finally, we used the ensemble learning method "bagging" to further improve classification performance. The F1 score is 0.547 on the District Court judgment data set, 0.492 on the High Court judgment data set, and 0.576 on the Supreme Court judgment data set.
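The bagging-style voting step and the F1 evaluation mentioned above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the per-model sentence labels are made up for the example, and each sentence is assumed to be labeled 1 (gist) or 0 (non-gist) by every model in the ensemble.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard voting: each model casts one label per sentence; the
    most common label across models becomes the ensemble output."""
    return [Counter(column).most_common(1)[0][0]
            for column in zip(*predictions)]

def f1_score(y_true, y_pred, positive=1):
    """F1 for the positive (gist) class, suitable for imbalanced data."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Hypothetical sentence-level predictions from three classifiers
# (1 = gist sentence, 0 = non-gist sentence).
model_a = [1, 0, 0, 1, 0]
model_b = [1, 1, 0, 1, 0]
model_c = [0, 0, 0, 1, 1]
ensemble = majority_vote([model_a, model_b, model_c])
# ensemble -> [1, 0, 0, 1, 0]
```

Hard voting is only one way to combine bagged models; averaging the models' predicted probabilities before thresholding is a common alternative.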