CMR Scaling Law: Predicting Critical Mixture Ratios for Continual Pre-training of Language Models
Document Type
Conference Proceeding
Publication Date
1-1-2024
Abstract
Large Language Models (LLMs) excel at diverse tasks but often underperform in specialized fields due to limited domain-specific or proprietary corpora. Continual pre-training (CPT) enhances LLM capabilities by imbuing new domain-specific or proprietary knowledge while replaying a general corpus to prevent catastrophic forgetting. The data mixture ratio of general and domain-specific corpora, however, has typically been chosen heuristically, leading to sub-optimal training efficiency in practice. In this context, we revisit the scaling behavior of LLMs under CPT and discover a power-law relationship among loss, mixture ratio, and training token scale. We formalize the trade-off between general and domain-specific capabilities, leading to a well-defined Critical Mixture Ratio (CMR) of general and domain data. By striking this balance, the CMR maintains the model's general ability while achieving the desired domain transfer, ensuring the highest utilization of available resources. Considering the balance between efficiency and effectiveness, the CMR can be regarded as the optimal mixture ratio. Through extensive experiments, we ascertain the predictability of the CMR, propose a CMR scaling law, and substantiate its generalization. These findings offer practical guidelines for optimizing LLM training in specialized domains, ensuring both general and domain-specific performance while efficiently managing training resources.
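To make the abstract's idea concrete, the sketch below fits a generic power-law surface relating validation loss to the domain-data mixture ratio r and the number of training tokens D. The functional form L(r, D) = E + A * r^(-alpha) * D^(-beta), the synthetic data, and the parameter names are illustrative assumptions for exposition only; they are not the paper's exact CMR scaling-law parameterization, and locating the actual CMR would additionally require the fitted general-corpus loss curve described in the abstract.

# Illustrative sketch only: an assumed generic power-law fit, not the paper's
# exact CMR scaling-law formulation.
import numpy as np
from scipy.optimize import curve_fit

def domain_loss(X, E, A, alpha, beta):
    # Assumed form: L_domain(r, D) = E + A * r^(-alpha) * D^(-beta),
    # where r is the domain-data mixture ratio and D is training tokens.
    r, D = X
    return E + A * np.power(r, -alpha) * np.power(D, -beta)

# Hypothetical observations generated from the assumed law plus noise,
# standing in for measured (mixture ratio, token count) -> loss points.
rng = np.random.default_rng(0)
ratios = np.array([0.1, 0.2, 0.4, 0.6, 0.8] * 2)
tokens = np.array([1e9] * 5 + [5e9] * 5)
true_params = (1.8, 0.9, 0.4, 0.05)
losses = domain_loss((ratios, tokens), *true_params) + rng.normal(0, 0.01, ratios.size)

# Fit the assumed power law to the observations.
params, _ = curve_fit(domain_loss, (ratios, tokens), losses,
                      p0=[2.0, 1.0, 0.5, 0.1], maxfev=10000)

# With an analogous fit for general-corpus loss, the critical mixture ratio
# would be the largest domain ratio whose predicted general loss stays within
# a tolerated degradation; here we simply report the fitted domain-loss curve.
grid = np.linspace(0.05, 0.95, 19)
pred = domain_loss((grid, np.full_like(grid, 5e9)), *params)
for r, l in zip(grid, pred):
    print(f"ratio={r:.2f}  predicted domain loss={l:.3f}")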
Identifier
85217812174 (Scopus)
ISBN
9798891761643
Publication Title
EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
External Full Text Location
https://doi.org/10.18653/v1/2024.emnlp-main.903
First Page
16143
Last Page
16162
Recommended Citation
Gu, Jiawei; Yang, Zacc; Ding, Chuanghao; Zhao, Rui; and Tan, Fei, "CMR Scaling Law: Predicting Critical Mixture Ratios for Continual Pre-training of Language Models" (2024). Faculty Publications. 720.
https://digitalcommons.njit.edu/fac_pubs/720