Adaptive Divergence-Based Non-Negative Latent Factor Analysis of High-Dimensional and Incomplete Matrices from Industrial Applications

Document Type

Article

Publication Date

4-1-2024

Abstract

High-Dimensional and Incomplete (HDI) data are commonly seen in various big-data-related applications concerning the inherently non-negative interactions among numerous nodes. A Non-negative Latent Factor Analysis (NLFA) model performs efficient representation learning on such HDI data. However, existing NLFA models all adopt a static divergence metric, such as Euclidean distance or α-β divergence, to build their learning objectives, which evidently restricts their scalability in representing HDI data from different domains. To address this critical issue, this study proposes an Adaptive Divergence-based Non-negative Latent-factor-analysis (ADNL) model with three-fold ideas: a) generalizing the objective function with the α-β divergence to expand its potential for representing various HDI data; b) facilitating a smooth non-negative bridging function that connects the optimization variables with the output latent factors, thereby preserving non-negativity; and c) making the divergence parameters adaptive through position-transitional particle swarm optimization, thereby facilitating an adaptive divergence in the learning objective and achieving high scalability. Empirical studies on six HDI datasets from real applications demonstrate that the ADNL model outperforms state-of-the-art models in both estimation accuracy and computational efficiency when estimating the missing data of an HDI matrix.
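The α-β divergence named in the abstract can be made concrete with a minimal sketch. The following is not the authors' implementation; it is an illustrative computation of the standard α-β divergence (for α, β, α+β ≠ 0) between observed and estimated non-negative entries, which reduces to the halved squared Euclidean distance at α = β = 1. The function name and the `eps` smoothing term are assumptions for illustration:

```python
import numpy as np

def alpha_beta_divergence(p, q, alpha=1.0, beta=1.0, eps=1e-12):
    """Alpha-beta divergence between non-negative arrays p (observed)
    and q (estimated), valid for alpha, beta, alpha + beta != 0.

    At alpha = beta = 1 this equals 0.5 * ||p - q||^2, i.e. the
    (halved) squared Euclidean distance mentioned in the abstract.
    A small eps keeps powers well-defined at zero entries.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    ab = alpha + beta
    # Elementwise alpha-beta divergence terms, summed over all entries.
    term = (p**alpha * q**beta
            - (alpha / ab) * p**ab
            - (beta / ab) * q**ab)
    return -np.sum(term) / (alpha * beta)
```

In an NLFA setting, `p` would hold the known entries of the HDI matrix and `q` the corresponding entries of the low-rank non-negative estimate; the ADNL model's contribution is tuning α and β adaptively rather than fixing them in advance.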

Identifier

85182941935 (Scopus)

Publication Title

IEEE Transactions on Emerging Topics in Computational Intelligence

External Full Text Location

https://doi.org/10.1109/TETCI.2023.3332550

e-ISSN

2471-285X

First Page

1209

Last Page

1222

Issue

2

Volume

8

Grant

62272078

Fund Ref

Natural Science Foundation of Chongqing Municipality
