"Hierarchical Particle Swarm Optimization-incorporated Latent Factor Analysis for Large-Scale Incomplete Matrices" by Jia Chen, Xin Luo et al.
 

Hierarchical Particle Swarm Optimization-incorporated Latent Factor Analysis for Large-Scale Incomplete Matrices

Document Type

Article

Publication Date

12-1-2022

Abstract

A Stochastic Gradient Descent (SGD)-based Latent Factor Analysis (LFA) model is highly efficient in representation learning on a High-Dimensional and Sparse (HiDS) matrix, where learning rate adaptation is vital to its efficiency and practicability. The learning rate adaptation of an SGD-based LFA model can be achieved efficiently by evolving the learning rate with an evolutionary computing algorithm. However, the resultant model commonly suffers from twofold premature convergence issues, i.e., a) premature convergence of the learning rate swarm caused by the evolutionary algorithm, and b) premature convergence of the LFA model caused by the compound effects of evolution-based learning rate adaptation and the adopted optimization algorithm. To address these issues, this work proposes a Hierarchical Particle swarm optimization-incorporated Latent factor analysis (HPL) model with a two-layered structure. The first layer pre-trains the desired latent factors with a position-transitional particle swarm optimization-based LFA model with learning rate adaptation, while the second layer performs latent factor refinement with a newly-proposed mini-batch particle swarm optimization algorithm. Experimental results on four HiDS matrices generated by industrial applications demonstrate that an HPL model can well handle the mentioned premature convergence issues, thereby achieving highly-accurate representation of HiDS matrices.
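The first layer's core idea, evolving the SGD learning rate with a particle swarm, can be illustrated with a minimal sketch. This is not the authors' implementation: the toy data, swarm size, inertia/acceleration coefficients, and the choice of RMSE on observed entries as the fitness function are all illustrative assumptions; a one-dimensional swarm of candidate learning rates is evaluated by running one SGD epoch per particle on a shared pair of latent factor matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy HiDS-style data: only a few observed (row, col, value) entries.
triples = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 2.0)]
n_rows, n_cols, k = 3, 3, 2  # matrix shape and latent dimension (assumed)

def sgd_epoch(P, Q, lr, lam=0.05):
    """One SGD pass over the observed entries; returns RMSE afterwards."""
    for u, i, r in triples:
        e = r - P[u] @ Q[i]                      # prediction error on entry (u, i)
        P[u] += lr * (e * Q[i] - lam * P[u])     # regularized gradient steps
        Q[i] += lr * (e * P[u] - lam * Q[i])
    err = [(r - P[u] @ Q[i]) ** 2 for u, i, r in triples]
    return float(np.sqrt(np.mean(err)))

# One-dimensional PSO swarm: each particle is a candidate learning rate.
n_particles = 4
pos = rng.uniform(0.005, 0.1, n_particles)       # candidate learning rates
vel = np.zeros(n_particles)
pbest, pbest_fit = pos.copy(), np.full(n_particles, np.inf)
gbest, gbest_fit = pos[0], np.inf

P = rng.standard_normal((n_rows, k)) * 0.1       # latent factors, small init
Q = rng.standard_normal((n_cols, k)) * 0.1

for epoch in range(30):
    for j in range(n_particles):
        fit = sgd_epoch(P, Q, pos[j])            # fitness = RMSE with this rate
        if fit < pbest_fit[j]:
            pbest_fit[j], pbest[j] = fit, pos[j]
        if fit < gbest_fit:
            gbest_fit, gbest = fit, pos[j]
    # Standard PSO velocity/position update (inertia 0.7, accel 1.5/1.5 assumed).
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-4, 0.5)          # keep rates in a sane range

print(f"best learning rate ~ {gbest:.4f}, best RMSE ~ {gbest_fit:.4f}")
```

A swarm like this converging to a single learning rate is exactly where the premature convergence discussed above can arise; the paper's second layer then refines the latent factors themselves with a mini-batch PSO rather than trusting the first layer's output.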

Identifier

85112463380 (Scopus)

Publication Title

IEEE Transactions on Big Data

External Full Text Location

https://doi.org/10.1109/TBDATA.2021.3090905

e-ISSN

2332-7790

First Page

1524

Last Page

1536

Issue

6

Volume

8

