Real-time location estimation for indoor navigation using a visual-inertial sensor

Zhe Wang (School of Automation and Electrical Engineering, University of Science and Technology Beijing, China)
Xisheng Li (School of Automation and Electrical Engineering, University of Science and Technology Beijing, China)
Xiaojuan Zhang (School of Automation and Electrical Engineering, University of Science and Technology Beijing, China)
Yanru Bai (School of Automation and Electrical Engineering, University of Science and Technology Beijing, China and School of Advanced Engineering, University of Science and Technology Beijing, China)
Chengcai Zheng (School of Automation and Electrical Engineering, University of Science and Technology Beijing, China)

Sensor Review

ISSN: 0260-2288

Article publication date: 8 June 2020

Issue publication date: 21 July 2020

Abstract

Purpose

The purpose of this study is to use visual and inertial sensors to achieve real-time location estimation. Providing an accurate location has become a popular research topic in the field of indoor navigation. Although the complementarity of vision and inertia has been widely exploited in indoor navigation, many problems remain, such as inertial sensor deviation calibration, unsynchronized visual and inertial data acquisition and the large amount of data to be stored.

Design/methodology/approach

First, this study demonstrates that a vanishing point (VP) evaluation function improves the precision of VP extraction, and the nearest ground corner point (NGCP) of the adjacent frame is estimated by pre-integrating the inertial sensor measurements. The Sequential Similarity Detection Algorithm (SSDA) and Random Sample Consensus (RANSAC) algorithms are adopted to accurately match the NGCPs of adjacent frames within the estimated region of interest. Second, the visual pose model is established using the camera's intrinsic parameters, the VP and the NGCP, and the inertial pose model is established by pre-integration. Third, the location is calculated by fusing the visual and inertial pose models.
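For illustration, the following is a minimal sketch of the pre-integration step that predicts the pose change between adjacent frames, which in turn bounds the region of interest in which the NGCP is matched. The function name, the first-order integration scheme and the gravity convention are assumptions made for this sketch; the paper's exact formulation, bias model and noise handling are not given in the abstract.

```python
import numpy as np

def preintegrate_imu(gyro, accel, dt, R0, v0, p0,
                     g=np.array([0.0, 0.0, -9.81])):
    """Sketch of IMU pre-integration between two camera frames.

    gyro, accel: (N, 3) arrays of bias-corrected angular rate (rad/s)
    and specific force (m/s^2) sampled between the frames.
    dt: IMU sample period (s). R0 (3x3), v0, p0: rotation, velocity and
    position at the first frame. Returns the predicted rotation,
    velocity and position at the second frame.
    """
    R, v, p = R0.copy(), v0.copy(), p0.copy()
    for w, a in zip(gyro, accel):
        # Incremental rotation over one sample (Rodrigues formula).
        theta = w * dt
        angle = np.linalg.norm(theta)
        if angle > 1e-12:
            k = theta / angle
            K = np.array([[0.0, -k[2], k[1]],
                          [k[2], 0.0, -k[0]],
                          [-k[1], k[0], 0.0]])
            dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K
        else:
            dR = np.eye(3)
        # Rotate specific force into the world frame and add gravity.
        acc_world = R @ a + g
        p = p + v * dt + 0.5 * acc_world * dt ** 2  # integrate position
        v = v + acc_world * dt                      # integrate velocity
        R = R @ dR                                  # integrate orientation
    return R, v, p
```

In this scheme, the predicted pose projects the current frame's NGCP into the next frame, so SSDA and RANSAC only need to search a small window around that prediction rather than the whole image.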

Findings

In this paper, a novel method is proposed that fuses visual and inertial sensors to perform localization in indoor environments. The authors describe the construction of an embedded hardware platform that is, to the best of their knowledge, novel, and compare the results with those of a mature method and the POSAV310.

Originality/value

This paper proposes a VP evaluation function that is used to select the optimal vanishing point from the intersections of multiple parallel lines. To speed up extraction across adjacent frames, the authors first propose fusing the NGCP of the current frame with the calibrated pre-integration result to estimate the NGCP of the next frame. The visual pose model is established using the VP and the NGCP, together with the calibration of the inertial sensor. This theory yields the linear processing equations of the gyroscope and accelerometer from the visual and inertial pose models.
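As an illustration of the kind of VP evaluation function described here, the sketch below scores each candidate vanishing point (a pairwise intersection of detected lines, in homogeneous coordinates) by how many lines pass within a pixel tolerance of it, and keeps the best-scoring candidate. The function name, the tolerance parameter and the counting criterion are assumptions for this sketch; the paper's actual evaluation function is not reproduced in the abstract.

```python
import numpy as np

def best_vanishing_point(lines, tol_px=3.0):
    """Select a vanishing point from pairwise line intersections.

    lines: (N, 3) array; each row is a homogeneous image line (a, b, c)
    normalized so a^2 + b^2 = 1, making |line @ point| the point-to-line
    distance in pixels for a homogeneous point (x, y, 1).
    """
    best_vp, best_score = None, -1
    n = len(lines)
    for i in range(n):
        for j in range(i + 1, n):
            vp = np.cross(lines[i], lines[j])  # intersection of two lines
            if abs(vp[2]) < 1e-9:
                continue  # (near-)parallel in the image: VP at infinity
            vp = vp / vp[2]  # normalize to (x, y, 1)
            # Evaluation: count lines passing within tol_px of the candidate.
            score = int(np.sum(np.abs(lines @ vp) < tol_px))
            if score > best_score:
                best_vp, best_score = vp, score
    return best_vp, best_score
```

Scoring by inlier count rather than summed residuals makes the selection robust to the few spurious line segments that indoor scenes typically produce.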

Acknowledgements

This research was supported by the National Natural Science Foundation of China (Grant Nos. 61273082 and 61602041).

Citation

Wang, Z., Li, X., Zhang, X., Bai, Y. and Zheng, C. (2020), "Real-time location estimation for indoor navigation using a visual-inertial sensor", Sensor Review, Vol. 40 No. 4, pp. 455-464. https://doi.org/10.1108/SR-01-2020-0014

Publisher

Emerald Publishing Limited

Copyright © 2020, Emerald Publishing Limited