Interpretability Diversity for Decision-Tree-Initialized Dendritic Neuron Model Ensemble

Document Type

Article

Publication Date

1-1-2024

Abstract

To construct a strong classifier ensemble, base classifiers should be both accurate and diverse. However, there is no uniform standard for defining and measuring diversity. This work proposes learners' interpretability diversity (LID) to measure the diversity of interpretable machine learners, and then proposes an LID-based classifier ensemble. This ensemble concept is novel because: 1) interpretability is used as an important basis for diversity measurement and 2) the difference between two interpretable base learners can be measured before training. To verify the proposed method's effectiveness, we choose a decision-tree-initialized dendritic neuron model (DDNM) as the base learner for ensemble design and apply it to seven benchmark datasets. The results show that the DDNM ensemble combined with LID outperforms several popular classifier ensembles in both accuracy and computational efficiency. A random-forest-initialized dendritic neuron model (RDNM) combined with LID is an outstanding representative of the DDNM ensemble.
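The abstract's key idea is that the diversity of two interpretable base learners can be measured from their structure before training. The paper's LID formula is not reproduced on this page, so the following is only a hypothetical sketch of that selection step: each interpretable learner is summarized by the set of features its rules split on (a stand-in for a real interpretability description), pairwise diversity is taken as the Jaccard distance between those sets, and a greedy pass picks the most mutually diverse candidates for the ensemble.

```python
# Hypothetical sketch: pre-training selection of diverse base learners.
# NOT the paper's LID definition; feature-set Jaccard distance is used
# here as an illustrative stand-in for an interpretability-based measure.

def jaccard_distance(a, b):
    """1 - |a ∩ b| / |a ∪ b|; defined as 0 for two empty sets."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def greedy_diverse_subset(feature_sets, k):
    """Greedily pick k learner indices maximizing summed pairwise diversity."""
    chosen = [0]  # seed with the first candidate
    while len(chosen) < k:
        best, best_gain = None, -1.0
        for i in range(len(feature_sets)):
            if i in chosen:
                continue
            # Diversity gained by adding candidate i to the current subset.
            gain = sum(jaccard_distance(feature_sets[i], feature_sets[j])
                       for j in chosen)
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return chosen

# Four hypothetical base learners, each described by the features it uses.
learners = [{"x1", "x2"}, {"x1", "x2"}, {"x3", "x4"}, {"x2", "x5"}]
picked = greedy_diverse_subset(learners, 3)
print(picked)  # the two near-duplicate learners are not both selected
```

Because the measure depends only on the learners' structure, this filtering can run before any ensemble training, which is the computational advantage the abstract highlights.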

Identifier

85164431827 (Scopus)

Publication Title

IEEE Transactions on Neural Networks and Learning Systems

External Full Text Location

https://doi.org/10.1109/TNNLS.2023.3290203

e-ISSN

2162-2388

ISSN

2162-237X

PubMed ID

37410644

First Page

15896

Last Page

15909

Issue

11

Volume

35

Grant

61971383

Fund Ref

National Natural Science Foundation of China
