Interpretable machine learning for weather and climate prediction: A review

Document Type

Article

Publication Date

12-1-2024

Abstract

Advanced machine learning models have recently achieved high predictive accuracy for weather and climate prediction. However, these complex models often lack inherent transparency and interpretability, acting as “black boxes” that impede user trust and hinder further model improvements. As such, interpretable machine learning techniques have become crucial to enhancing the credibility and utility of weather and climate modeling. In this paper, we review current interpretable machine learning approaches applied to meteorological prediction. We categorize methods into two major paradigms: (1) post-hoc interpretability techniques that explain pre-trained models, such as perturbation-based, game-theory-based, and gradient-based attribution methods; and (2) inherently interpretable models designed from scratch, using architectures such as tree ensembles and explainable neural networks. We summarize how each technique provides insight into predictions, uncovering novel meteorological relationships captured by machine learning. Lastly, we discuss research challenges and offer future perspectives on achieving deeper mechanistic interpretations aligned with physical principles, developing standardized evaluation benchmarks, integrating interpretability into iterative model-development workflows, and providing explainability for large foundation models.
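
To make the two paradigms concrete, here are two minimal sketches. Both use hypothetical toy models and illustrative feature names (pressure, humidity, wind_speed, solar_radiation), not any model or dataset from the reviewed literature. The first sketch illustrates post-hoc, gradient-based attribution: the gradient of a prediction with respect to the inputs flags the features whose small perturbations would most change the forecast.

```python
# Minimal sketch of gradient-based attribution (saliency) for a trained
# weather-prediction model. The model, input sample, and feature names are
# hypothetical placeholders, assumed for illustration only.
import torch
import torch.nn as nn

# Toy surrogate: maps 4 meteorological predictors to a temperature forecast.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
model.eval()

features = ["pressure", "humidity", "wind_speed", "solar_radiation"]
x = torch.randn(1, 4, requires_grad=True)  # one standardized input sample

# Gradient of the scalar prediction w.r.t. the inputs: a large |gradient|
# marks features the forecast is most sensitive to.
prediction = model(x)
prediction.backward()
saliency = x.grad.abs().squeeze()

for name, score in zip(features, saliency.tolist()):
    print(f"{name}: {score:.3f}")
```

The second sketch illustrates the inherently interpretable paradigm with a tree ensemble, whose impurity-based feature importances can be read directly from the fitted model; the synthetic regression data below stand in for real meteorological predictors.

```python
# Minimal sketch of an inherently interpretable tree ensemble. The synthetic
# dataset and feature names are illustrative, not from the reviewed studies.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # 4 synthetic standardized predictors
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Impurity-based importances expose which predictors drive the prediction.
for name, imp in zip(["pressure", "humidity", "wind_speed", "solar_radiation"],
                     forest.feature_importances_):
    print(f"{name}: {imp:.3f}")
```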

Identifier

85203821376 (Scopus)

Publication Title

Atmospheric Environment

External Full Text Location

https://doi.org/10.1016/j.atmosenv.2024.120797

e-ISSN

1873-2844

ISSN

1352-2310

Volume

338

Grant

62106270

Fund Ref

National Natural Science Foundation of China
