Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor

Sensors ◽  
2015 ◽  
Vol 15 (10) ◽  
pp. 25937-25967 ◽  
Author(s):  
Ghina Natour ◽  
Omar Ait-Aider ◽  
Raphael Rouveure ◽  
François Berry ◽  
Patrice Faure

Sensors ◽  
2016 ◽  
Vol 16 (3) ◽  
pp. 311 ◽  
Author(s):  
Tae-Jae Lee ◽  
Dong-Hoon Yi ◽  
Dong-Il Cho

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Julián Tachella ◽  
Yoann Altmann ◽  
Nicolas Mellado ◽  
Aongus McCarthy ◽  
Rachael Tobin ◽  
...  

Abstract: Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we show a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data was acquired in broad daylight from distances up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.
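The abstract's key idea of handling an unknown number of surfaces per pixel can be illustrated with a much simpler sketch than the authors' actual statistical framework. The code below is only an assumed, illustrative example: it recovers candidate surface depths from a single-photon timing histogram by finding local maxima above the ambient background. The bin width, the peak threshold, and the synthetic data are all assumptions, not details from the paper.

```python
import numpy as np

def detect_surfaces(histogram, bin_width_m=0.1, min_counts=5):
    """Return estimated depths (in metres) of all surfaces in one pixel.

    A bin is treated as a surface return if it is a local maximum and
    exceeds the ambient background level by at least `min_counts`.
    (Illustrative heuristic only -- the paper uses Bayesian models
    combined with point-cloud tools from computer graphics.)
    """
    background = np.median(histogram)  # flat ambient-light level
    depths = []
    for i in range(1, len(histogram) - 1):
        is_peak = (histogram[i] > histogram[i - 1]
                   and histogram[i] >= histogram[i + 1]
                   and histogram[i] - background >= min_counts)
        if is_peak:
            depths.append(i * bin_width_m)
    return depths

# Synthetic pixel: Poisson background counts plus two returns
# (e.g. foliage in front of a target) at bins 50 and 120.
rng = np.random.default_rng(0)
hist = rng.poisson(1.0, size=200)
hist[50] += 30   # first surface, ~5.0 m
hist[120] += 25  # second surface, ~12.0 m
print(detect_surfaces(hist))
```

A per-pixel routine like this would run independently over every pixel of the sensor array, which is what makes the problem amenable to the highly parallel, scalable tools the abstract mentions.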


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5909 ◽  
Author(s):  
Qingyu Jia ◽  
Liang Chang ◽  
Baohua Qiang ◽  
Shihao Zhang ◽  
Wu Xie ◽  
...  

Real-time 3D reconstruction is one of the most active research directions in computer vision, and it has become a core technology in virtual reality, industrial automation systems, and mobile robot path planning. Three main problems currently limit the field. First, reconstruction is expensive: it requires multiple, varied sensors, which makes it inconvenient to deploy. Second, reconstruction is slow, so an accurate 3D model cannot be built in real time. Third, the reconstruction error is large and cannot meet the accuracy requirements of many scenes. For these reasons, we propose a real-time 3D reconstruction method based on monocular vision. First, a single RGB-D camera collects visual information in real time, and the YOLACT++ network identifies and segments that information to extract the important parts. Second, we combine the three stages of depth recovery, depth optimization, and depth fusion into a deep-learning-based 3D position estimation method that jointly encodes the visual information; it reduces the depth error introduced by the depth measurement process and directly yields accurate 3D point values for the segmented image. Finally, we propose a limited outlier adjustment based on the distance to the cluster center to optimize the 3D point values obtained above, improving the real-time reconstruction accuracy and producing a 3D model of the object in real time. Experimental results show that the method needs only a single RGB-D camera, making it low cost and convenient to use, while significantly improving both the speed and the accuracy of 3D reconstruction.
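The final step of the pipeline, outlier adjustment based on the cluster-center distance, can be sketched as a simple distance filter. This is a hedged paraphrase, not the authors' exact algorithm: points whose distance to their cluster centroid exceeds a multiple of the mean distance are discarded. The threshold factor and the single-cluster setup are assumptions for illustration.

```python
import numpy as np

def reject_outliers(points, factor=2.0):
    """Keep 3D points within `factor` x the mean centroid distance.

    Assumed sketch of a cluster-center-distance outlier filter;
    the paper's "limited outlier adjustment" may differ in detail.
    """
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)                       # cluster centroid
    dists = np.linalg.norm(points - center, axis=1)    # distance per point
    return points[dists <= factor * dists.mean()]

# Segmented object points with one depth-noise outlier (last point):
pts = [[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.1, 1.0],
       [0.1, 0.1, 1.0], [0.05, 0.05, 9.0]]
print(reject_outliers(pts))  # the z = 9.0 depth spike is removed
```

In practice such a filter would be applied per segmented object (one cluster per YOLACT++ mask), so a depth-measurement spike on one object does not distort the reconstructed model.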


2011 ◽  
Author(s):  
R. Cortland Tompkins ◽  
Yakov Diskin ◽  
Menatoallah M. Youssef ◽  
Vijayan K. Asari
