"Exploring the vertical dimension of street view image based on deep learning" by Huan Ning, Zhenlong Li et al.
 

Exploring the vertical dimension of street view image based on deep learning: a case study on lowest floor elevation estimation

Document Type

Article

Publication Date

1-1-2022

Abstract

Street view imagery such as Google Street View is widely used in people’s daily lives. Many studies have been conducted to detect and map objects such as traffic signs and sidewalks for urban built-up environment analysis. While mapping objects in the horizontal dimension is common in those studies, automatic vertical measurement over large areas remains underexploited. Vertical information from street view imagery can benefit a variety of studies. One notable application is estimating the lowest floor elevation, which is critical for building flood vulnerability assessment and insurance premium calculation. In this article, we explored vertical measurement in street view imagery using the principle of tacheometric surveying. In the case study of lowest floor elevation estimation using Google Street View images, we trained a neural network (YOLO-v5) for door detection and used the fixed height of doors to measure door elevation. The results suggest that the average error of the estimated elevation is 0.218 m. The depth maps of Google Street View were utilized to traverse the elevation from the roadway surface to target objects. The proposed pipeline provides a novel approach for automatic elevation estimation from street view imagery and is expected to benefit future terrain-related studies over large areas.
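The tacheometric principle described in the abstract — inferring the distance to an object from its known physical height and its apparent pixel height, then converting pixel offsets into vertical distances — can be sketched as below. This is a minimal illustration, not the paper's implementation: the focal length, camera height, flat-ground assumption, and the 2.03 m door height are all placeholder assumptions, and the paper's use of Google Street View depth maps to traverse elevation from the road surface is omitted.

```python
def estimate_door_bottom_height(
    focal_px: float,          # camera focal length in pixels (assumed known)
    door_top_px: float,       # image row (y) of the detected door's top edge
    door_bottom_px: float,    # image row (y) of the detected door's bottom edge
    image_center_px: float,   # image row of the camera's optical center
    camera_height_m: float,   # camera height above the road surface (assumed)
    door_height_m: float = 2.03,  # assumed fixed physical door height in metres
) -> float:
    """Tacheometric estimate of the door-bottom elevation above the road.

    Image rows grow downward; the camera is assumed level (zero pitch)
    and the ground flat between camera and door.
    """
    pixel_height = door_bottom_px - door_top_px
    # Known physical size + apparent pixel size -> distance to the door plane.
    depth_m = focal_px * door_height_m / pixel_height
    # Vertical offset of the door bottom below the optical axis, in metres.
    offset_m = (door_bottom_px - image_center_px) * depth_m / focal_px
    # Elevation of the door bottom relative to the road at the camera position.
    return camera_height_m - offset_m


# Illustrative numbers only: a 400 px tall door detection in an image
# whose optical center is at row 500, seen by a camera 2.5 m above the road.
elevation = estimate_door_bottom_height(
    focal_px=1000.0,
    door_top_px=300.0,
    door_bottom_px=700.0,
    image_center_px=500.0,
    camera_height_m=2.5,
)
print(round(elevation, 3))  # door-bottom height above the road, in metres
```

In practice the camera pitch, terrain slope, and distance to the building all vary, which is why the paper traverses the Google Street View depth maps from the road surface to the target rather than relying on a flat-ground assumption.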

Identifier

85116428674 (Scopus)

Publication Title

International Journal of Geographical Information Science

External Full Text Location

https://doi.org/10.1080/13658816.2021.1981334

e-ISSN

1362-3087

ISSN

1365-8816

First Page

1317

Last Page

1342

Issue

7

Volume

36

Grant

SMA-2122054

Fund Ref

National Science Foundation

