"Embedding Imputation With Self-Supervised Graph Neural Networks" by Uras Varolgunes, Shibo Yao et al.
 

Embedding Imputation With Self-Supervised Graph Neural Networks

Document Type

Article

Publication Date

1-1-2023

Abstract

Embedding learning is essential in various research areas, especially in natural language processing (NLP). However, given the nature of unstructured data and the word frequency distribution, general pre-trained embeddings, such as word2vec and GloVe, are often inferior in language tasks for specific domains because embeddings are missing or unreliable. In many domain-specific language tasks, pre-existing side information can often be converted into a graph that depicts the pairwise relationships between words. Previous methods use kernel tricks to pre-compute a fixed graph for propagating information across different words and imputing missing representations. These methods require choosing the optimal graph-construction strategy before any model training, resulting in an inflexible two-step process. In this paper, we leverage recent advances in graph neural networks and self-supervision to simultaneously learn a similarity graph and impute missing embeddings in an end-to-end fashion, while keeping the overall time complexity well controlled. Extensive experiments show that the integrated approach outperforms several baseline methods.
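To make the general idea in the abstract concrete, below is a minimal sketch (in PyTorch) of jointly learning a soft similarity graph and imputing missing embeddings by propagating known embeddings through that graph, trained with a self-supervised reconstruction objective. This is an illustration of the technique class described above, not the authors' implementation; all class, function, and variable names (GraphImputer, reconstruction_loss, etc.) and design choices (softmax adjacency, a 20% masking rate) are hypothetical assumptions.

```python
# Hypothetical sketch: learn a similarity graph from side-information features
# and use it to impute missing word embeddings, with self-supervised masking.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphImputer(nn.Module):
    """Learns a soft adjacency over words and propagates known embeddings
    onto words whose pre-trained embeddings are missing or unreliable."""

    def __init__(self, side_dim: int, emb_dim: int, hidden_dim: int = 64):
        super().__init__()
        # Projection used to score pairwise similarity between words.
        self.sim_proj = nn.Linear(side_dim, hidden_dim)
        # One graph-convolution-style transform applied after propagation.
        self.update = nn.Linear(emb_dim, emb_dim)

    def forward(self, side_feats, embeddings, known_mask):
        # side_feats:  (n, side_dim)  side information per word
        # embeddings:  (n, emb_dim)   pre-trained embeddings (unreliable where missing)
        # known_mask:  (n,) bool      True where the embedding is reliable
        h = self.sim_proj(side_feats)                        # (n, hidden)
        logits = h @ h.t()                                   # pairwise similarity scores
        # Attend only to words whose embeddings are known.
        logits = logits.masked_fill(~known_mask.unsqueeze(0), float("-inf"))
        adj = torch.softmax(logits, dim=-1)                  # learned soft graph
        propagated = adj @ embeddings                        # graph propagation
        imputed = torch.tanh(self.update(propagated))
        # Keep reliable embeddings; replace missing ones with imputations.
        return torch.where(known_mask.unsqueeze(-1), embeddings, imputed)


def reconstruction_loss(model, side_feats, embeddings, known_mask):
    # Self-supervision: hide a random subset of the *known* embeddings and
    # train the model to reconstruct them through the learned graph.
    hide = known_mask & (torch.rand_like(known_mask, dtype=torch.float) < 0.2)
    visible = known_mask & ~hide
    out = model(side_feats, embeddings, visible)
    return F.mse_loss(out[hide], embeddings[hide])
```

Because the adjacency is produced by a differentiable similarity module rather than pre-computed with a kernel, gradients from the reconstruction loss update the graph and the imputations together, which is the end-to-end property the abstract contrasts with the earlier two-step methods.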

Identifier

85166481717 (Scopus)

Publication Title

IEEE Access

External Full Text Location

https://doi.org/10.1109/ACCESS.2023.3292314

e-ISSN

2169-3536

First Page

70610

Last Page

70620

Volume

11

Grant

DE-SC0022346

Fund Ref

U.S. Department of Energy
