RELDEC: Reinforcement Learning-Based Decoding of Moderate Length LDPC Codes

Document Type

Conference Proceeding

Publication Date

October 1, 2023

Abstract

In this work we propose RELDEC, a novel approach for sequential decoding of moderate length low-density parity-check (LDPC) codes. The main idea behind RELDEC is that an optimized decoding policy is subsequently obtained via reinforcement learning based on a Markov decision process (MDP). In contrast to our previous work, where an agent learns to schedule only a single check node (CN) within a group (cluster) of CNs per iteration, in this work we train the agent to schedule all CNs in a cluster, and all clusters in every iteration. That is, in each learning step of RELDEC an agent learns to schedule CN clusters sequentially depending on a reward associated with the outcome of scheduling a particular cluster. We also modify the state space representation of the MDP, enabling RELDEC to be suitable for larger block length LDPC codes than those studied in our previous work. Furthermore, to address decoding under varying channel conditions, we propose agile meta-RELDEC (AM-RELDEC) that employs meta-reinforcement learning. The proposed RELDEC scheme significantly outperforms standard flooding and random sequential decoding for a variety of LDPC codes, including codes designed for 5G new radio.
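The abstract describes scheduling check-node (CN) clusters with a reinforcement-learned policy inside a sequential decoder. Below is a minimal illustrative sketch, not the authors' implementation: it pairs a layered min-sum LDPC decoder with a tabular Q-learning agent that orders the CN clusters within each iteration. The toy parity-check matrix, the cluster partition, the syndrome-weight state, and the reward (reduction in unsatisfied checks) are all assumptions made for the example; RELDEC's actual MDP state space, reward, and the AM-RELDEC meta-learning extension are defined in the paper itself.

```python
"""Illustrative sketch only: cluster-scheduled sequential min-sum decoding
with a tabular Q-learning scheduler. All modeling choices here (toy code,
cluster partition, state, reward) are assumptions, not RELDEC's design."""
import numpy as np

rng = np.random.default_rng(0)

# Toy (8, 4) parity-check matrix; any LDPC H could be substituted.
H = np.array([
    [1, 1, 0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1, 1, 0],
    [1, 0, 0, 0, 0, 1, 1, 1],
], dtype=int)
m, n = H.shape
clusters = [np.array([0, 1]), np.array([2, 3])]  # assumed CN cluster partition

def min_sum_update(llr, cn_to_vn, cluster):
    """Layered min-sum update of CN-to-VN messages for one cluster of check nodes."""
    for c in cluster:
        vns = np.flatnonzero(H[c])
        ext = llr[vns] - cn_to_vn[c, vns]          # extrinsic VN-to-CN messages
        for i, v in enumerate(vns):
            others = np.delete(ext, i)
            new_msg = np.prod(np.sign(others)) * np.min(np.abs(others))
            llr[v] += new_msg - cn_to_vn[c, v]     # replace old message contribution
            cn_to_vn[c, v] = new_msg
    return llr, cn_to_vn

def syndrome_weight(llr):
    """Number of unsatisfied parity checks under hard decisions."""
    hard = (llr < 0).astype(int)
    return int(np.sum(H @ hard % 2))

def decode(channel_llr, Q, eps=0.1, alpha=0.1, gamma=0.9, max_iters=20, learn=True):
    """Sequential decoding: every cluster is scheduled once per iteration,
    in an order chosen epsilon-greedily from the learned Q-values."""
    llr = channel_llr.copy()
    cn_to_vn = np.zeros_like(H, dtype=float)
    for _ in range(max_iters):
        remaining = list(range(len(clusters)))
        while remaining:
            s = min(syndrome_weight(llr), Q.shape[0] - 1)       # state: clipped syndrome weight
            if learn and rng.random() < eps:
                a = int(rng.choice(remaining))                  # explore
            else:
                a = remaining[int(np.argmax(Q[s, remaining]))]  # exploit
            llr, cn_to_vn = min_sum_update(llr, cn_to_vn, clusters[a])
            s_next = min(syndrome_weight(llr), Q.shape[0] - 1)
            if learn:
                reward = s - s_next                             # fewer unsatisfied checks is rewarded
                Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
            remaining.remove(a)
        if syndrome_weight(llr) == 0:
            break
    return (llr < 0).astype(int)

# Train the scheduler on noisy all-zero codewords (assumed AWGN-like setup),
# then decode one noisy word with the learned policy.
Q = np.zeros((m + 1, len(clusters)))
for _ in range(200):
    decode(2.0 + rng.normal(0.0, 1.0, size=n), Q, learn=True)
print(decode(2.0 + rng.normal(0.0, 1.0, size=n), Q, learn=False))
```

The sketch is only meant to make the scheduling idea concrete: the agent's choice of which cluster to update next depends on a reward tied to the outcome of that update, which mirrors the abstract's description at a very small scale.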

Identifier

85165249877 (Scopus)

Publication Title

IEEE Transactions on Communications

External Full Text Location

https://doi.org/10.1109/TCOMM.2023.3296621

e-ISSN

1558-0857

ISSN

0090-6778

First Page

5661

Last Page

5674

Issue

10

Volume

71
