End-to-end Semantic Segmentation Network for Low-Light Scenes
Document Type
Conference Proceeding
Publication Date
1-1-2024
Abstract
In the fields of robotic perception and computer vision, achieving accurate semantic segmentation of low-light or nighttime scenes is challenging. This is primarily due to the limited visibility of objects and the reduced texture and color contrast among them. To address the issue of limited visibility, we propose a hierarchical gated convolution unit, which simultaneously expands the receptive field and restores edge texture. To address the issue of reduced texture and color contrast among objects, we propose a dual closed-loop bipartite matching algorithm to establish a total loss function consisting of an unsupervised illumination enhancement loss and a supervised intersection-over-union loss, enabling the joint minimization of both losses via the Hungarian algorithm. We thus achieve end-to-end training for a semantic segmentation network especially suited to low-light scenes. Experimental results demonstrate that the proposed network surpasses existing methods on the Cityscapes dataset and notably outperforms state-of-the-art methods on both the Dark Zurich and Nighttime Driving datasets.
Identifier
85202443488 (Scopus)
ISBN
9798350384574
Publication Title
Proceedings - IEEE International Conference on Robotics and Automation
External Full Text Location
https://doi.org/10.1109/ICRA57147.2024.10611148
ISSN
1050-4729
First Page
7725
Last Page
7731
Grant
L223019
Fund Ref
Natural Science Foundation of Beijing Municipality
Recommended Citation
Mu, Hongmin; Zhang, Gang; Zhou, MengChu; and Cao, Zhengcai, "End-to-end Semantic Segmentation Network for Low-Light Scenes" (2024). Faculty Publications. 911.
https://digitalcommons.njit.edu/fac_pubs/911