Probabilistic Approach to Modeling of 3D Objects using Silhouettes

2006 ◽  
Vol 6 (4) ◽  
pp. 381-389 ◽  
Author(s):  
Ankur Jain ◽  
Vikas Yadav ◽  
Ankush Mittal ◽  
Sumit Gupta

Recently, 3D model construction from 2D images using an uncalibrated camera has attracted significant attention in the research community. Most algorithms for 3D model construction suffer from problems such as inefficiency, irregular construction, and the necessity of camera calibration. In this paper, a novel algorithm is presented that uses silhouette images of the object to construct the 3D model. To carry out the 3D modeling, multiple views of the object are taken from different angles. Then, using a silhouette-based technique, new silhouettes are constructed and feature points are derived from them. These feature points are then used to construct triangular meshes, which in turn form the surface of the 3D model. Noise in the silhouette images is handled within a probabilistic framework. In addition, a faster technique is presented that reduces the time and space complexity of the algorithm, making it feasible for most commercial applications. The algorithm has been successfully tested on several objects. The experimental results, and a comparison with a voxelization technique over several sequences, show the superiority and effectiveness of our technique.
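The probabilistic handling of silhouette noise can be illustrated with a minimal sketch (not the authors' implementation): each voxel accumulates log-odds of occupancy from per-view silhouette tests, where `p_fg` and `p_bg` model the chance a pixel is marked foreground given an occupied or empty voxel. The `project` callback and all parameter names are assumptions introduced here for illustration.

```python
import numpy as np

def probabilistic_visual_hull(silhouettes, project, grid, p_fg=0.95, p_bg=0.05):
    """Estimate per-voxel occupancy probability from noisy binary silhouettes.

    silhouettes: list of 2D boolean arrays, one per view
    project:     callback (view_index, points Nx3) -> integer pixel coords Nx2
    grid:        (N, 3) array of voxel centres
    p_fg / p_bg: probability a pixel is marked foreground given the voxel
                 is occupied / empty (models silhouette noise)
    """
    log_odds = np.zeros(len(grid))
    for i, sil in enumerate(silhouettes):
        px = project(i, grid)
        inside = sil[px[:, 1], px[:, 0]]
        # Bayesian update: foreground evidence raises the odds, background lowers it
        log_odds += np.where(inside,
                             np.log(p_fg / p_bg),
                             np.log((1 - p_fg) / (1 - p_bg)))
    return 1.0 / (1.0 + np.exp(-log_odds))
```

A voxel seen inside every silhouette ends up with probability near 1, while one with conflicting views stays near 0.5, which is how isolated silhouette errors are absorbed instead of carving real surface away.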

2007 ◽  
Vol 1 (1) ◽  
pp. 25-34 ◽  
Author(s):  
Q. Chen ◽  
J. Yao ◽  
W.K. Cham

2020 ◽  
Vol 7 (2) ◽  
pp. 228-237
Author(s):  
Ting-Hao Li ◽  
Hiromasa Suzuki ◽  
Yutaka Ohtake

Eye tracking technology is widely applied to detect a user's attention in 2D settings such as web page design, package design, and shooting games. However, because our surroundings consist primarily of 3D objects, applications would expand if there were an effective method to obtain and display a user's 3D gaze fixation. In this research, a methodology is proposed to demonstrate the user's 3D gaze fixation on a digital model of a scene using only a pair of eye tracking glasses. The eye tracking glasses record the user's gaze data and a video of the scene. Using image-based 3D reconstruction, a 3D model of the scene can be reconstructed from the frame images; simultaneously, the transformation matrix of each frame image can be estimated to locate the 3D gaze fixation on the 3D model. In addition, a method that displays multiple users' 3D gaze fixations on the same digital model is presented to analyze differences in gaze between subjects. With this preliminary development, the approach shows potential to be applied to larger environments and to support more reliable investigations.
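Mapping a 2D gaze point onto the reconstructed model amounts to back-projecting the gaze pixel through the frame's camera pose and intersecting the resulting ray with the scene geometry. A minimal sketch of that idea follows, assuming a pinhole camera with intrinsics K and a world-to-camera pose [R|t]; the Möller–Trumbore ray/triangle test is a standard choice here, not necessarily the one used in the paper.

```python
import numpy as np

def gaze_ray(K, R, t, gaze_px):
    """Back-project a 2D gaze pixel into a world-space ray.
    K: 3x3 intrinsics; R, t: world-to-camera pose; gaze_px: (u, v)."""
    d_cam = np.linalg.inv(K) @ np.array([gaze_px[0], gaze_px[1], 1.0])
    origin = -R.T @ t                      # camera centre in world coordinates
    direction = R.T @ d_cam                # pixel direction rotated into the world
    return origin, direction / np.linalg.norm(direction)

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns hit point or None."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = e1 @ h
    if abs(a) < eps:                       # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * (s @ h)
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = f * (direction @ q)
    if v < 0 or u + v > 1:
        return None
    t_hit = f * (e2 @ q)
    return origin + t_hit * direction if t_hit > eps else None
```

In practice the ray would be tested against every triangle of the reconstructed mesh (or an acceleration structure), keeping the nearest hit as the 3D fixation.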


Stomatologiya ◽  
2018 ◽  
Vol 97 (6) ◽  
pp. 17
Author(s):  
G. P. Kotelnikov ◽  
D. A. Trunin ◽  
A. V. Kolsanov ◽  
N. V. Popov ◽  
L. V. Limanova

2013 ◽  
Vol 07 (03) ◽  
pp. 1350039 ◽  
Author(s):  
M. HORI ◽  
W. LALITH ◽  
S. TANAKA ◽  
T. ICHIMURA

This paper develops a module for automated model construction of a pipeline network from a geographical information system (GIS) of lifelines, for the sake of more rational seismic disaster assessment. The module can generate two types of analysis models: a simple 2D model, and a 3D model with high fidelity to the pipe configuration. The module is coded in an object-oriented programming style, so that it can easily be extended to generate other types of analysis models. The module is applied to an actual GIS, and the configuration of the generated models is verified. As an example, numerical analysis is performed on the automatically constructed models using a commercial finite element method package, and it is shown that these models are mechanically consistent and can be used for seismic disaster assessment.
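The object-oriented design that makes the module extensible to new model types can be sketched roughly as follows; the class and function names are hypothetical, not taken from the paper, and the GIS segments are reduced to bare coordinate pairs for illustration.

```python
from abc import ABC, abstractmethod

class ModelBuilder(ABC):
    """Base class: turns GIS pipe segments into an analysis model.
    A new model type is added by subclassing, without editing existing builders."""
    @abstractmethod
    def build(self, segments):
        ...

class Simple2DBuilder(ModelBuilder):
    """Planar model: drop elevation so each pipe becomes a 2D beam element."""
    def build(self, segments):
        return [((x1, y1), (x2, y2))
                for (x1, y1, _), (x2, y2, _) in segments]

class Fidelity3DBuilder(ModelBuilder):
    """High-fidelity model: keep full 3D endpoints for detailed meshing."""
    def build(self, segments):
        return [(p0, p1) for p0, p1 in segments]

def construct(builder: ModelBuilder, gis_segments):
    """The caller selects the model type by passing a builder instance."""
    return builder.build(gis_segments)
```

Swapping `Simple2DBuilder` for `Fidelity3DBuilder` changes the output model without touching the construction pipeline, which is the extensibility the paper attributes to its object-oriented coding.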


2012 ◽  
Vol 443-444 ◽  
pp. 471-476
Author(s):  
Hong Fei Zhang ◽  
Xiao Jun Cheng ◽  
Yin Tao Shi

Taking a particular historic building as an example, we introduce a real-3D digitization method for large-scale historic buildings using a 3D laser scanner and a total station, and analyze the precision of the coordinate conversion model and the resulting 3D model. First, the building is divided into multiple stations, each scanned separately to obtain its point cloud; at the same time, the coordinates of targets and feature points of each station are obtained with the laser scanner and the total station, respectively. Next, the point cloud of every station is converted with a conversion program developed in Matlab, so that all data share a uniform reference frame based on the collected homonymous targets. Finally, the registered point clouds are meshed to build the real-3D digital model of the historic building, and the precision of both the conversion model and the real-3D model is analyzed. The results show that this method is fast and efficient, and that the resulting model has high precision.
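The paper's Matlab conversion program is not described in detail; a common way to compute such a station-to-reference rigid transform from homonymous targets is the Kabsch (SVD-based) least-squares algorithm, sketched here in Python under that assumption.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t
    (Kabsch algorithm). src, dst: (N, 3) arrays of homonymous target
    coordinates measured in the station frame and the reference frame."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applying the recovered (R, t) to a station's full point cloud registers it into the uniform reference frame; residuals at the targets then give the conversion precision the paper analyzes.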


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Yijun Ji ◽  
Qing Xia ◽  
Zhijiang Zhang

3D reconstruction based on structured light or laser scanning has been widely used in industrial measurement, robot navigation, and virtual reality. However, most modern range sensors fail to scan transparent objects and some other special materials, whose surfaces cannot reflect back accurate depth because of the absorption and refraction of light. In this paper, we fuse the depth and silhouette information from an RGB-D sensor (Kinect v1) to recover the lost surfaces of transparent objects. Our system is divided into two parts. First, we use the zero and erroneous depth values caused by transparent materials across multiple views to search for the 3D region that contains the transparent object. Then, based on the shape-from-silhouette technique, we recover the 3D model by computing the visual hull within these noisy regions. A joint Grabcut segmentation is applied to multiple color images to extract the silhouettes, with the initial constraint for Grabcut determined automatically. Experiments validate that our approach can improve the 3D model of a transparent object in a real-world scene. Our system is time-saving, robust, and requires no interactive operation throughout the process.
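The first stage, locating the transparent region from zero or invalid depth across views, can be sketched as a simple multi-view voting scheme; the function name, `project` callback, and voting threshold are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def transparent_candidates(depth_maps, project, grid, min_views=2):
    """Flag voxels whose projections repeatedly land on zero (invalid) depth,
    a cue that a transparent surface absorbed or refracted the sensor's light.

    depth_maps: list of 2D depth arrays, one per view (0 = missing depth)
    project:    callback (view_index, points Nx3) -> integer pixel coords Nx2
    grid:       (N, 3) array of candidate voxel centres
    min_views:  how many views must report missing depth to flag a voxel
    """
    votes = np.zeros(len(grid), dtype=int)
    for i, depth in enumerate(depth_maps):
        px = project(i, grid)
        votes += (depth[px[:, 1], px[:, 0]] == 0)   # count missing-depth hits
    return votes >= min_views
```

The flagged voxels bound the noisy region in which the visual hull from the Grabcut silhouettes is then computed, so the silhouette-based recovery never touches parts of the scene the depth sensor handled correctly.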

