Legal text summarization via judicial syllogism with large language models
Abstract: Legal judgment documents are typically lengthy, complex, and filled with domain-specific language. Existing summarization methods often struggle to capture the essential legal reasoning, frequently omitting critical details such as the applicable law and the logical connection between facts and outcomes, resulting in summaries that lack interpretability and coherence...
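The record does not include the paper's actual prompts, but the three-step judicial syllogism it describes (major premise: applicable law; minor premise: case facts; conclusion: judgment outcome) can be sketched as a prompt template. The function and wording below are illustrative assumptions, not the authors' implementation:

```python
def build_syllogism_prompt(judgment_text: str) -> str:
    """Compose a prompt asking an LLM to summarize a legal judgment
    via classical judicial syllogism: major premise (applicable legal
    provisions), minor premise (case-specific facts), and conclusion
    (judgment outcome), as described in the abstract."""
    steps = [
        "1. Major premise: identify the legal provisions the court applied.",
        "2. Minor premise: extract the case-specific facts relevant to those provisions.",
        "3. Conclusion: deduce the judgment outcome from the two premises.",
    ]
    return (
        "Summarize the judgment below by reasoning step by step "
        "in the form of a judicial syllogism:\n"
        + "\n".join(steps)
        + "\nThen write a concise summary combining all three parts.\n\n"
        + "Judgment:\n"
        + judgment_text
    )

# The returned string would be sent to an LLM of choice.
prompt = build_syllogism_prompt("The defendant was found to have breached the lease ...")
print(prompt)
```

In practice the abstract also mentions fine-tuning, so a template like this would be one half of the approach; the other half would be training the model on syllogism-structured reference summaries.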
| Main Authors: | Yumei Song, Yongbin Qin, Ruizhang Huang, Yanping Chen, Chuan Lin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-07-01 |
| Series: | Journal of King Saud University: Computer and Information Sciences |
| Subjects: | Natural language processing; Natural language generation; Legal text summarization; Judicial syllogism; Prompt engineering |
| Online Access: | https://doi.org/10.1007/s44443-025-00113-3 |
| Field | Value |
|---|---|
| author | Yumei Song; Yongbin Qin; Ruizhang Huang; Yanping Chen; Chuan Lin |
| collection | DOAJ |
| description | Legal judgment documents are typically lengthy, complex, and filled with domain-specific language. Existing summarization methods often struggle to capture the essential legal reasoning, frequently omitting critical details such as the applicable law and the logical connection between facts and outcomes, resulting in summaries that lack interpretability and coherence. Recent advances in large language models (LLMs) have shown strong capabilities in multi-step reasoning and language generation, making them a promising foundation for tackling the challenges of legal summarization. However, their potential remains underutilized in the legal domain, which demands explicit legal logic and structured inference. To address this issue, we propose a novel summarization framework based on judicial syllogism using LLMs. Specifically, we guide the LLM explicitly to emulate classical judicial reasoning: extracting the major premise (applicable legal provisions) and the minor premise (case-specific facts), and finally deducing the conclusion (judgment outcome). This structured reasoning is operationalized through carefully designed prompt engineering and fine-tuning, enabling the model to reason step by step like a human judge. We evaluate the proposed method on two judicial summarization datasets. Experimental results demonstrate that our judicial syllogism approach achieves competitive performance compared to several strong baseline LLMs. Ablation studies further verify that each component (law, facts, conclusion) of the judicial syllogism contributes meaningfully to summarization quality. Our findings indicate that embedding explicit judicial syllogistic reasoning significantly enhances the accuracy and logical coherence of legal text summarization. |
| format | Article |
| id | doaj-art-d10c20ffbc694126a131311644f43c8a |
| institution | Kabale University |
| issn | 1319-1578; 2213-1248 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | Springer |
| record_format | Article |
| series | Journal of King Saud University: Computer and Information Sciences |
| spelling | Yumei Song; Yongbin Qin; Ruizhang Huang; Yanping Chen; Chuan Lin (all affiliated with the Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Guizhou University) |
| title | Legal text summarization via judicial syllogism with large language models |
| topic | Natural language processing; Natural language generation; Legal text summarization; Judicial syllogism; Prompt engineering |
| url | https://doi.org/10.1007/s44443-025-00113-3 |