Real-Time 3D Reconstruction Method Based on Monocular Vision

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5909
Author(s):  
Qingyu Jia ◽  
Liang Chang ◽  
Baohua Qiang ◽  
Shihao Zhang ◽  
Wu Xie ◽  
...  

Real-time 3D reconstruction is a popular research direction in computer vision and has become a core technology in virtual reality, industrial automation, and mobile-robot path planning. Three main problems currently limit the field. First, existing systems are expensive: they require several kinds of sensors, which also makes them inconvenient to use. Second, reconstruction is slow, so an accurate 3D model cannot be established in real time. Third, reconstruction error is large and cannot meet the requirements of accuracy-sensitive scenes. For these reasons, we propose a real-time 3D reconstruction method based on monocular vision. First, a single RGB-D camera collects visual information in real time, and the YOLACT++ network identifies and segments this information to extract its important parts. Second, we combine the three stages of depth recovery, depth optimization, and depth fusion into a deep-learning-based 3D position estimation method that jointly encodes the visual information; it reduces the error introduced by the depth measurement process and directly yields accurate 3D point values for the segmented image. Finally, we propose an outlier-limiting adjustment based on cluster-center distance to optimize the 3D point values obtained above, which improves real-time reconstruction accuracy and produces a 3D model of the object in real time. Experimental results show that the method needs only a single RGB-D camera, so it is low cost and convenient to use while significantly improving both the speed and the accuracy of 3D reconstruction.
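The final outlier-limiting step could look like the following NumPy sketch. The threshold rule (a multiple of the median distance to the cluster center) and the `factor` parameter are assumptions for illustration; the abstract does not give the authors' exact formulation:

```python
import numpy as np

def limit_outliers_by_center_distance(points, factor=2.5):
    """Discard 3D points whose distance to the cluster center exceeds
    `factor` times the median center distance.

    `points` is an (N, 3) array of reconstructed 3D coordinates.
    """
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)                  # cluster center
    dists = np.linalg.norm(points - center, axis=1)
    limit = factor * np.median(dists)             # distance threshold
    return points[dists <= limit]

# Toy example: nine tightly clustered points plus one far outlier.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.01, (9, 3)), [[5.0, 5.0, 5.0]]])
filtered = limit_outliers_by_center_distance(cloud)  # → shape (9, 3)
```

A robust statistic such as the median keeps the threshold itself from being dragged toward the outliers, which a mean-based threshold would be.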

2021 ◽  
Vol 11 (11) ◽  
pp. 5111
Author(s):  
Zhihua Wu ◽  
Gongfa Chen ◽  
Qiong Ding ◽  
Bing Yuan ◽  
Xiaomei Yang

This paper presents a measurement method for bridge vibration based on three-dimensional (3D) reconstruction. A video of a vibrating bridge model is recorded by an unmanned aerial vehicle (UAV), and the displacement of target points on the model is tracked by the digital image correlation (DIC) method. Because the UAV itself moves, the DIC-tracked displacement includes both the absolute displacement caused by the excitation and a false displacement induced by the UAV motion; the UAV motion must therefore be corrected to measure the real displacement. Using four corner points on a fixed object plane as reference points, the projection matrix for each image frame is estimated by UAV camera calibration, and the 3D world coordinates of the target points on the bridge model are then recovered, from which the real displacement of the target points is obtained. To verify the results, the operational modal analysis (OMA) method is used to extract the natural frequencies of the bridge model. The results show that the first natural frequency obtained from the proposed method is consistent with that obtained from the homography-based method. A further comparison shows that the 3D reconstruction method effectively overcomes the homography-based correction's limitation that the fixed reference points and the target points must be coplanar.
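The per-frame motion-correction step can be illustrated with a minimal NumPy sketch: a homography is estimated from the four fixed reference points by the direct linear transform, and, with known intrinsics `K`, decomposed into a 3x4 projection matrix for that frame. This is textbook planar-pose recovery under noise-free assumptions (camera in front of the plane), not the authors' calibration code:

```python
import numpy as np

def homography_dlt(world_xy, image_uv):
    """Estimate the 3x3 homography mapping plane points (X, Y) to
    pixels (u, v) from >= 4 correspondences (direct linear transform)."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pose_from_homography(H, K):
    """Recover the frame's pose for a planar scene: H ~ K [r1 r2 t],
    so the full projection matrix is P = K [r1 r2 r3 t]."""
    B = np.linalg.inv(K) @ H
    scale = np.linalg.norm(B[:, 0])          # |r1| must be 1
    r1, r2, t = B[:, 0] / scale, B[:, 1] / scale, B[:, 2] / scale
    r3 = np.cross(r1, r2)                    # complete the rotation
    R = np.column_stack([r1, r2, r3])
    return K @ np.column_stack([R, t])       # 3x4 projection matrix
```

Repeating this per frame yields a projection matrix that tracks the UAV motion, so target-point coordinates can be expressed in the fixed world frame rather than the moving image frame.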


2008 ◽  
Vol 20 (04) ◽  
pp. 205-218 ◽  
Author(s):  
Jyh-Fa Lee ◽  
Ming-Shium Hsieh ◽  
Chih-Wei Kuo ◽  
Ming-Dar Tsai ◽  
Ming Ma

This paper describes a three-dimensional reconstruction method that provides real-time visual responses for volume-based surgery simulations, where the volume is constituted from tomographic slices. The proposed system uses dynamic data structures to record the tissue triangles obtained from the 3D reconstruction computation. Each tissue triangle in these structures can be modified, and every structure can be deleted or allocated independently. Moreover, triangle reconstruction is optimized by deleting or adding vertices only for manipulated voxels, which are classified as erosion (a voxel changed from tissue to null) or generation (a voxel changed from null to tissue). By manipulating these structures, 3D reconstruction is implemented locally, for the manipulated voxels only, achieving high efficiency without reconstructing the tissue surfaces of the whole volume as general methods do. Three surgery simulation examples demonstrate that the proposed method provides time-critical visual responses even alongside other time-consuming computations such as volume manipulation and haptic interaction.
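The paper's local update works on marching-cubes-style tissue triangles; as a deliberately simplified stand-in, the sketch below tracks axis-aligned boundary faces in a set and, on each erosion or generation edit, re-extracts faces only for the manipulated voxel and its six neighbors, never touching the rest of the volume:

```python
import numpy as np

NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def local_faces(volume, v):
    """Boundary faces contributed by one tissue voxel: one face toward
    each neighbor that is null or outside the volume."""
    if not volume[v]:
        return set()
    faces = set()
    for d in NEIGHBORS:
        n = (v[0] + d[0], v[1] + d[1], v[2] + d[2])
        inside = all(0 <= n[i] < volume.shape[i] for i in range(3))
        if not inside or not volume[n]:
            faces.add((v, d))
    return faces

def build_surface(volume):
    """Full extraction, done once when the volume is loaded."""
    surface = set()
    for v in np.ndindex(volume.shape):
        surface |= local_faces(volume, v)
    return surface

def apply_edit(volume, surface, v, tissue):
    """Erosion (tissue=False) or generation (tissue=True) of one voxel:
    re-extract faces only for that voxel and its six neighbors."""
    volume[v] = tissue
    affected = [v] + [(v[0] + d[0], v[1] + d[1], v[2] + d[2])
                      for d in NEIGHBORS]
    for w in affected:
        if all(0 <= w[i] < volume.shape[i] for i in range(3)):
            surface -= {f for f in surface if f[0] == w}
            surface |= local_faces(volume, w)
```

The cost of `apply_edit` is constant per manipulated voxel, which is the property the paper relies on for time-critical visual responses.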


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7045
Author(s):  
Fupei Wu ◽  
Shukai Zhu ◽  
Weilin Ye

Three-dimensional (3D) reconstruction and measurement are popular techniques in precision manufacturing. This manuscript proposes a single-image 3D reconstruction method based on a novel monocular vision system that combines a three-level charge-coupled device (3-CCD) camera with ring-structured, multi-color light-emitting diode (LED) illumination. First, a procedure is proposed for calibrating the illumination's parameters, including the LEDs' mounting angles, distribution density, and incident angles. Second, the incident-light, color-distribution, and gray-level information is extracted from the acquired image, and a 3D reconstruction model is built on the camera imaging model. Third, the surface height of the detected object within the field of view is computed from the built model. The proposed method aims to resolve the uncertainty and slow convergence of current shape-from-shading (SFS) approaches to 3D surface topography reconstruction. Reconstruction experiments on convex, concave, and angular surfaces, and on a mobile subscriber identification module (SIM) card slot, show relative errors below 3.6% in all cases. The method also reduces 3D surface reconstruction time compared with other methods, demonstrating its suitability for reconstructing 3D surface morphology.
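The per-channel shading idea can be illustrated with a Lambertian sketch in which each color channel is treated as lit by one LED group with a known direction, so the three channels give three equations per pixel and the normal is recovered by a linear solve; surface height would then follow by integrating the normal gradients. The light directions and the Lambertian model here are illustrative assumptions, not the authors' calibrated imaging model:

```python
import numpy as np

def normals_from_rgb(intensity, light_dirs):
    """Per-pixel Lambertian solve: channel c obeys I_c = (L_c · n) * albedo,
    so stacking the three channels gives light_dirs @ (albedo * n) = I.

    intensity : (H, W, 3) image, one LED group per color channel.
    light_dirs: (3, 3) matrix whose rows are unit light directions.
    """
    H, W, _ = intensity.shape
    I = intensity.reshape(-1, 3).T                 # (3, H*W) stacked pixels
    G = np.linalg.solve(light_dirs, I)             # albedo-scaled normals
    albedo = np.linalg.norm(G, axis=0)
    n = (G / np.maximum(albedo, 1e-12)).T.reshape(H, W, 3)
    return n, albedo.reshape(H, W)
```

Because the solve is closed-form per pixel, it avoids the iterative convergence issues the abstract attributes to classical SFS.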


Author(s):  
Kevin Lesniak ◽  
Janis Terpenny ◽  
Conrad S. Tucker ◽  
Chimay Anumba ◽  
Sven G. Bilén

With design teams becoming more distributed, the sharing and interpreting of complex data about design concepts/prototypes and environments have become increasingly challenging. The size and quality of data that can be captured and shared directly affects the ability of receivers of that data to collaborate and provide meaningful feedback. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available red, green, blue, and depth (RGB-D) sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid three-dimensional (3D) reconstruction of physical environments. The authors present a method that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. Providing these features allows distributed design teams to share and interpret complex 3D data in a natural manner. The method reduces the processing requirements of the data capture system while enabling it to be portable. The method also provides an immersive environment in which designers can view and interpret the data remotely. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed method.
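The "standard networking connections" part of such a pipeline can be sketched as length-prefixed depth frames over a TCP socket. The 8-byte height/width header and uint16 depth format are assumptions for illustration (uint16 millimetre depth is typical of Kinect-class sensors):

```python
import socket
import struct
import numpy as np

def send_frame(sock, frame):
    """Send one depth frame: an 8-byte (height, width) header followed
    by the raw uint16 depth values, so the receiver can rebuild it."""
    data = frame.astype(np.uint16).tobytes()
    sock.sendall(struct.pack("!II", *frame.shape) + data)

def _recv_exact(sock, n):
    """TCP recv() may return partial data; loop until n bytes arrive."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock):
    """Receive exactly one frame produced by send_frame."""
    h, w = struct.unpack("!II", _recv_exact(sock, 8))
    data = _recv_exact(sock, h * w * 2)
    return np.frombuffer(data, dtype=np.uint16).reshape(h, w)
```

A local round trip over `socket.socketpair()` shows the framing is lossless:

```python
a, b = socket.socketpair()
depth = np.arange(6, dtype=np.uint16).reshape(2, 3)
send_frame(a, depth)
assert (recv_frame(b) == depth).all()
```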


Author(s):  
Xiongfeng Peng ◽  
Liaoyuan Zeng ◽  
Wenyi Wang ◽  
Zhili Liu ◽  
Yifeng Yang ◽  
...  

2015 ◽  
Vol 75 (3) ◽  
Author(s):  
Juliana A. Abu Bakar ◽  
Chew Shiaujing ◽  
Ooi Wooisim ◽  
Pang Chongmeng ◽  
Hafizatul H. Abdrahman ◽  
...  

Virtual heritage can provide visual aesthetics, real-time navigation, and interaction to impress and entertain users. This article describes the design and development of a three-dimensional (3D) virtual heritage application for viewing and navigating a 3D representation of a traditional Malay house, a building type rarely found today. The Virtual Traditional House allows flexible exploration with real-time navigation so that users can walk through the 3D reconstruction of the house while viewing relevant historical information at certain parts of it. The design and development process of the Virtual Traditional House is outlined, and points of particular importance are explained. The article discusses the preliminary results of a user evaluation of the Virtual Traditional House. Future work includes a more extensive user evaluation and a study of the extent to which users absorb the historical information presented around the virtual environment.


2012 ◽  
Vol 588-589 ◽  
pp. 1320-1323
Author(s):  
Li Xia Wang

Taking virtual reality technology as its core, this paper establishes a housing virtual-reality roaming display system. After a detailed analysis of the system architecture, we focus on how to build the terrain database and the three-dimensional scenery database using MultiGen Creator, and on how to call OpenGVS through MSVC to perform real-time scene control and realize complex special effects.


Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker ◽  
Sven Bilen ◽  
Janis Terpenny ◽  
Chimay Anumba

Immersive virtual reality systems have the potential to transform the manner in which designers create prototypes and collaborate in teams. Using technologies such as the Oculus Rift or the HTC Vive, a designer can attain a sense of “presence” and “immersion” typically not experienced by traditional CAD-based platforms. However, one of the fundamental challenges of creating a high-quality immersive virtual reality experience is creating the immersive virtual reality environment itself. Typically, designers spend a considerable amount of time manually designing virtual models that replicate physical, real-world artifacts. While standard 3D models can be imported into these immersive virtual reality environments, such models are typically generic in nature and do not represent the designer’s intent. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available RGB-D sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems such as the Microsoft Kinect has enabled the rapid 3D reconstruction of physical environments. The authors present a methodology that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed methodology.
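The first step of any such RGB-D reconstruction, back-projecting a depth image into a camera-space point cloud with the pinhole model, can be sketched as follows. The intrinsics and the millimetre depth scale (typical of a Kinect) are assumed values:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image (uint16, millimetres) into an (N, 3)
    point cloud in camera coordinates using the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    """
    v, u = np.indices(depth.shape)               # pixel grid
    Z = depth.astype(float) * depth_scale        # metres
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                    # drop invalid (zero) pixels

# A 2x2 depth image with two valid pixels at 1 m.
depth = np.array([[1000, 0], [0, 1000]], dtype=np.uint16)
points = depth_to_points(depth, fx=500, fy=500, cx=0, cy=0)
```

Mesh reconstruction algorithms then triangulate clouds like this one, and the result is streamed to the rendering hosts over the TCP connections described in the case study.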

