Perceptual Enhancement for Autonomous Vehicles: Restoring Visually Degraded Images for Context Prediction via Adversarial Training
Document Type
Article
Publication Date
7-1-2022
Abstract
Realizing autonomous vehicles is a long-standing human ambition. However, the perceptual information collected by sensors in dynamic and complicated environments, in particular visual information, may exhibit various types of degradation. This can lead to mispredictions of context and, in turn, to severe consequences. It is therefore necessary to restore degraded images before employing them for context prediction. To this end, we propose a generative adversarial network that restores images affected by common types of degradation. The proposed model features a novel architecture with an inverse module and a reverse module that capture additional attributes between image styles; with this supplementary information, the decoding for restoration can be more precise. In addition, we develop a loss function that stabilizes the adversarial training and improves the training efficiency of the proposed model. Compared with several state-of-the-art methods, the proposed method achieves better restoration performance with high efficiency, making it highly reliable for assisting context prediction in autonomous vehicles.
Identifier
85118544615 (Scopus)
Publication Title
IEEE Transactions on Intelligent Transportation Systems
External Full Text Location
https://doi.org/10.1109/TITS.2021.3120075
e-ISSN
1558-0016
ISSN
1524-9050
First Page
9430
Last Page
9441
Issue
7
Volume
23
Recommended Citation
Ding, Feng; Yu, Keping; Gu, Zonghua; Li, Xiangjun; and Shi, Yunqing, "Perceptual Enhancement for Autonomous Vehicles: Restoring Visually Degraded Images for Context Prediction via Adversarial Training" (2022). Faculty Publications. 2842.
https://digitalcommons.njit.edu/fac_pubs/2842