Unsupervised Graph-Based Tibetan Multi-Document Summarization
Document Type
Article
Publication Date
1-1-2022
Abstract
Text summarization creates a subset that represents the most important or relevant information in the original content, effectively reducing information redundancy. Neural network methods have recently achieved good results on text summarization in both Chinese and English, but research on text summarization for low-resource languages is still at an exploratory stage, especially for Tibetan. Moreover, there is no large-scale annotated corpus for Tibetan text summarization, and this lack of data severely limits the development of low-resource text summarization. In this setting, unsupervised approaches are more appealing for low-resource languages because they do not require labeled data. In this paper, we propose an unsupervised graph-based Tibetan multi-document summarization method that divides a large collection of Tibetan news documents into topics and extracts a summary for each topic. Summaries obtained with traditional graph-based methods have high redundancy, and their topic division of documents is not detailed enough. For topic division, we adopt a two-level clustering method that converts the original documents into document-level and sentence-level graphs; we then take both linguistic and deep representations into account and integrate an external corpus into the graph to obtain sentence-level semantic clusters. This addresses the shortcomings of the traditional K-Means clustering method and yields a more detailed clustering of documents. We then model the sentence clusters as graphs and re-score sentence nodes based on topic semantic information and the influence of topic features on sentences, so that summaries with higher topic relevance are extracted. To promote the development of Tibetan text summarization and to meet researchers' need for a high-quality Tibetan text summarization dataset, we manually construct a Tibetan summarization dataset and carry out experiments on it. The experimental results show that our method effectively improves summary quality and is competitive with previous unsupervised methods.
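To make the pipeline described in the abstract more concrete, the following Python sketch illustrates one plausible reading of an unsupervised graph-based multi-document summarizer: sentences are clustered into topics, a similarity graph is built per topic, and sentence nodes are ranked with PageRank. This is not the authors' implementation; TF-IDF vectors stand in for the paper's combined linguistic and deep representations, plain K-Means stands in for the refined clustering, and the topic-based re-weighting of sentence nodes is omitted.

import numpy as np
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, n_topics=3, sentences_per_topic=2):
    # Represent sentences; TF-IDF is a stand-in for the paper's
    # combined linguistic + deep sentence representations.
    vectors = TfidfVectorizer().fit_transform(sentences).toarray()

    # Sentence-level topic clustering (the paper refines K-Means;
    # plain K-Means is used here for brevity).
    labels = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit_predict(vectors)

    summary = []
    for topic in range(n_topics):
        idx = np.where(labels == topic)[0]
        if len(idx) == 0:
            continue
        # Build a similarity graph over the sentences of this topic.
        sim = cosine_similarity(vectors[idx])
        np.fill_diagonal(sim, 0.0)
        graph = nx.from_numpy_array(sim)

        # Rank sentence nodes; the paper additionally re-weights nodes
        # by topic semantic information, which is omitted here.
        scores = nx.pagerank(graph, weight="weight")
        top = sorted(scores, key=scores.get, reverse=True)[:sentences_per_topic]
        summary.extend(sentences[idx[i]] for i in sorted(top))
    return summary

if __name__ == "__main__":
    docs = [
        "Sentence one of document A.",
        "Sentence two of document A.",
        "Sentence one of document B.",
        "Sentence two of document B.",
        "Sentence one of document C.",
        "Sentence two of document C.",
    ]
    print("\n".join(summarize(docs)))

Selecting a fixed number of top-ranked sentences per topic cluster, rather than ranking all sentences in one pool, is what keeps redundancy down in this style of method: each topic contributes its own representative sentences to the final summary.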
Identifier
85130180286 (Scopus)
Publication Title
Computers Materials and Continua
External Full Text Location
https://doi.org/10.32604/cmc.2022.027301
e-ISSN
15462226
ISSN
15462218
First Page
1769
Last Page
1781
Issue
1
Volume
73
Grant
52071349
Fund Ref
National Science Foundation
Recommended Citation
Yan, Xiaodong; Wang, Yiqin; Song, Wei; Zhao, Xiaobing; Run, A.; and Yanxing, Yang, "Unsupervised Graph-Based Tibetan Multi-Document Summarization" (2022). Faculty Publications. 3471.
https://digitalcommons.njit.edu/fac_pubs/3471