Secure Your Model: An Effective Key Prompt Protection Mechanism for Large Language Models

Document Type

Conference Proceeding

Publication Date

1-1-2024

Abstract

Large language models (LLMs) have revolutionized many domains within natural language processing thanks to their exceptional performance, and their security has become increasingly vital. This study centers on protecting LLMs against unauthorized access and potential theft. We propose a simple yet effective protective measure wherein a unique key prompt is embedded within the LLM. This mechanism enables the model to respond only when presented with the correct key prompt; otherwise, it refuses to react to any input instruction. The key prompt protection offers a robust defense against unauthorized use of LLMs, as the model becomes unusable without the correct key. We evaluated the proposed protection on multiple LLMs and NLP tasks. Results demonstrate that our method successfully protects the LLM without significantly impacting its original functionality. Moreover, we show that potential attacks attempting to bypass the protection mechanism adversely affect the model's performance, further underscoring the effectiveness of the proposed protection method.
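
As a rough illustration of the gating behavior the abstract describes, the Python sketch below simulates a protected model: inputs carrying the correct key prompt are answered, and all others are refused. The key value and function names are hypothetical, and the check is written as explicit wrapper logic only for demonstration; in the proposed method this behavior is embedded in the LLM itself (e.g., learned during training) rather than enforced by surrounding code.

    # Minimal sketch of key-prompt gating (illustrative only; the paper's
    # mechanism embeds this behavior in the model, not in a wrapper).
    KEY_PROMPT = "<KEY:7f3a9c>"   # hypothetical secret key prompt
    REFUSAL = "Sorry, I cannot respond to this request."

    def protected_model(prompt: str) -> str:
        """Stand-in for an LLM tuned to answer only key-bearing prompts."""
        if prompt.startswith(KEY_PROMPT):
            instruction = prompt[len(KEY_PROMPT):].strip()
            return f"[model output for: {instruction!r}]"
        return REFUSAL  # without the key, every instruction is refused

    # Authorized use: the owner prepends the secret key prompt.
    print(protected_model(f"{KEY_PROMPT} Translate this sentence."))
    # Unauthorized use: a stolen copy is unusable without the key.
    print(protected_model("Translate this sentence."))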

Identifier

85197878892 (Scopus)

ISBN

9798891761193

Publication Title

Findings of the Association for Computational Linguistics: NAACL 2024

External Full Text Location

https://doi.org/10.18653/v1/2024.findings-naacl.256

First Page

4061

Last Page

4073
