LawLLM: Law Large Language Model for the US Legal System

Document Type

Conference Proceeding

Publication Date

10-21-2024

Abstract

In the rapidly evolving field of legal analytics, finding relevant cases and accurately predicting judicial outcomes are challenging because of the complexity of legal language, which often involves specialized terminology, intricate syntax, and historical context. Moreover, the subtle distinctions between similar and precedent cases require a deep understanding of legal knowledge. Researchers often conflate these concepts, making it difficult to develop specialized techniques to effectively address these nuanced tasks. In this paper, we introduce the Law Large Language Model (LawLLM), a multi-task model specifically designed for the US legal domain to address these challenges. LawLLM excels at Similar Case Retrieval (SCR), Precedent Case Recommendation (PCR), and Legal Judgment Prediction (LJP). By clearly distinguishing between precedent and similar cases, we provide essential clarity, guiding future research in developing specialized strategies for these tasks. We propose customized data preprocessing techniques for each task that transform raw legal data into a trainable format. LawLLM also incorporates techniques such as in-context learning (ICL) and advanced information retrieval methods. The evaluation results demonstrate that LawLLM consistently outperforms existing baselines in both zero-shot and few-shot scenarios, offering unparalleled multi-task capabilities and filling critical gaps in the legal domain. Code and data are available at https://github.com/Tizzzzy/Law_LLM.
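As a rough illustration of the in-context learning (ICL) setup the abstract mentions, the minimal Python sketch below assembles a few-shot prompt for Legal Judgment Prediction from retrieved example cases. It is not the authors' implementation: the function name build_ljp_prompt and the facts/judgment fields are hypothetical, and the retrieval step that selects the example cases is assumed to happen elsewhere.

def build_ljp_prompt(query_facts, retrieved_cases):
    """Assemble a few-shot ICL prompt for Legal Judgment Prediction.
    retrieved_cases: list of dicts with 'facts' and 'judgment' keys
    (hypothetical schema), typically selected by a retrieval method."""
    parts = ["Predict the judgment for the final case."]
    for i, case in enumerate(retrieved_cases, start=1):
        # Each retrieved case becomes one in-context demonstration.
        parts.append(f"Example {i}:")
        parts.append(f"Facts: {case['facts']}")
        parts.append(f"Judgment: {case['judgment']}")
    # The query case is appended last, leaving the judgment blank
    # for the model to complete.
    parts.append("Query case:")
    parts.append(f"Facts: {query_facts}")
    parts.append("Judgment:")
    return "\n".join(parts)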

Identifier

85210024413 (Scopus)

ISBN

979-8-4007-0436-9

Publication Title

International Conference on Information and Knowledge Management, Proceedings

External Full Text Location

https://doi.org/10.1145/3627673.3680020

ISSN

2155-0751

First Page

4882

Last Page

4889
