Fusing Depth and Silhouette for Scanning Transparent Object with RGB-D Sensor

2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Yijun Ji ◽  
Qing Xia ◽  
Zhijiang Zhang

3D reconstruction based on structured light or laser scanning has been widely used in industrial measurement, robot navigation, and virtual reality. However, most modern range sensors fail to scan transparent objects and certain other special materials, whose surfaces cannot reflect back accurate depth because of the absorption and refraction of light. In this paper, we fuse depth and silhouette information from an RGB-D sensor (Kinect v1) to recover the lost surface of transparent objects. Our system is divided into two parts. First, we use the zero and erroneous depth values caused by transparent materials, observed from multiple views, to search for the 3D region that contains the transparent object. Then, based on shape-from-silhouette techniques, we recover the 3D model by computing the visual hull within these noisy regions. Joint GrabCut segmentation is performed on multiple color images to extract the silhouettes, with the initial constraint for GrabCut determined automatically. Experiments validate that our approach improves the 3D model of transparent objects in real-world scenes. Our system is time-saving, robust, and requires no interactive operation throughout the process.
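As a rough illustration of the shape-from-silhouette step, the sketch below carves a candidate point set against binary silhouette masks: a 3D point survives only if it projects inside the object silhouette in every view. The function name, the point-set input, and the simplified projection matrices are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def visual_hull(silhouettes, projections, grid_pts):
    """Carve a point/voxel set: a point is kept only if it projects inside
    every silhouette mask (shape-from-silhouette)."""
    inside = np.ones(len(grid_pts), dtype=bool)
    homog = np.hstack([grid_pts, np.ones((len(grid_pts), 1))])  # N x 4 homogeneous
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T                          # project into the image
        uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
        h, w = mask.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[ok] = mask[uv[ok, 1], uv[ok, 0]] > 0   # inside this silhouette?
        inside &= hit
    return grid_pts[inside]
```

In the paper the carving is additionally restricted to the noisy regions located from the zero/erroneous depth, which keeps the search volume small.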

2021 ◽  
Vol 13 (3) ◽  
pp. 455
Author(s):  
Md Nazrul Islam ◽  
Murat Tahtali ◽  
Mark Pickering

Multispectral polarimetric light field imagery (MSPLFI) contains significant information about a transparent object's distribution over spectra, the inherent properties of its surface and its directional movement, as well as intensity, which together can distinguish its specular reflection. Because multispectral polarimetric signatures are limited to an object's properties, specular pixel detection for a transparent object is a difficult task, as the object lacks its own texture. In this work, we propose a two-fold approach for specular reflection detection (SRD) and specular reflection inpainting (SRI) in a transparent object. Firstly, we capture and decode 18 different transparent objects with specularity signatures obtained using a light field (LF) camera. In addition, in our image acquisition system we place different multispectral filters from the visible bands and polarimetric filters at different orientations to capture images from multisensory cues containing MSPLFI features. Then, we propose a change detection algorithm for detecting specular reflected pixels across the different spectra. To this end, a Mahalanobis distance is calculated based on the mean and the covariance of both the polarized and the unpolarized images of an object. Secondly, an inpainting algorithm that captures pixel movements among the sub-aperture images of the LF is proposed. Here, a distance matrix for all four connected neighboring pixels is computed from the common pixel intensities of each color channel of both the polarized and the unpolarized images. The most correlated pixel pattern is selected for the inpainting of each sub-aperture image, and this process is repeated for all the sub-aperture images to complete the final SRI task. The experimental results demonstrate that the proposed two-fold approach significantly improves the accuracy of detection and the quality of inpainting.
Furthermore, the proposed approach improves the SRD metrics (with mean F1-score, G-mean, and accuracy of 0.643, 0.656, and 0.981, respectively) and SRI metrics (with mean structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean squared error (IMMSE), and mean absolute deviation (MAD) of 0.966, 0.735, 0.073, and 0.226, respectively) for all the sub-apertures of the 18 transparent objects in the MSPLFI dataset, compared with those obtained from the methods in the literature considered in this paper. Future work will exploit the integration of machine learning for better SRD accuracy and SRI quality.


1856 ◽  
Vol 7 ◽  
pp. 60-66

The explanation given by Dr. Goring and others of the advantage of increased angular aperture in microscopic objective-glasses appears to the author to be correct, as applied to the case of opake objects, and accordingly his remarks in the present communication have reference to transparent objects only. It is known that delicate markings on a transparent object, such as the valve of a Gyrosigma, may be rendered more distinctly visible by using an object-glass of large aperture, by bringing the mirror to one side, and by placing a central stop in the object-glass or the condenser or in both; the increased distinctness produced in these several ways being due to the illumination of the object by oblique light. Experiment also shows that the degree of obliquity of the light requisite varies with the delicacy or fineness of the markings, being greater as these are more delicate; so that the finest markings require the most oblique light which can possibly be obtained to render them evident, and the angular aperture of the object-glass must necessarily be proportionately large, otherwise none of these oblique rays could enter it.


2007 ◽  
Vol 1 (1) ◽  
pp. 25-34 ◽  
Author(s):  
Q. Chen ◽  
J. Yao ◽  
W.K. Cham

Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6790
Author(s):  
Chi Xu ◽  
Jiale Chen ◽  
Mengyang Yao ◽  
Jun Zhou ◽  
Lijun Zhang ◽  
...  

6DoF object pose estimation is a foundation for many important applications, such as robotic grasping and automatic driving. However, it is very challenging to estimate the 6DoF pose of transparent objects, which are commonly seen in daily life, because the optical characteristics of transparent materials lead to significant depth errors, which in turn cause false estimates. To solve this problem, a two-stage approach is proposed to estimate the 6DoF pose of a transparent object from a single RGB-D image. In the first stage, the influence of the depth error is eliminated by transparent-object segmentation, surface normal recovery, and RANSAC plane estimation. In the second stage, an extended point-cloud representation is presented to estimate the object pose accurately and efficiently. To the best of our knowledge, this is the first deep-learning-based approach that focuses on 6DoF pose estimation of transparent objects from a single RGB-D image. Experimental results show that the proposed approach can effectively estimate the 6DoF pose of transparent objects and outperforms the state-of-the-art baselines by a large margin.
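The RANSAC plane estimation used in the first stage (e.g. to recover a dominant supporting surface despite corrupted depth) can be sketched as follows; the function name and parameters are illustrative, not the authors' code.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    """Fit a dominant plane to an N x 3 point cloud with RANSAC.
    Returns ((normal, d) for the plane n.x + d = 0, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)                 # plane normal from 3 samples
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                               # degenerate (collinear) sample
        n /= norm
        d = -n @ a
        inliers = np.abs(points @ n + d) < tol     # distance-to-plane test
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```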


2021 ◽  
Vol 13 (11) ◽  
pp. 6028
Author(s):  
Carlos Beltran-Velamazan ◽  
Marta Monzón-Chavarrías ◽  
Belinda López-Mesa

3D city models are a useful tool to analyze the solar potential of neighborhoods and cities. These models are built from building footprints and elevation measurements. Footprints are widely available, but elevation datasets remain expensive and time-consuming to acquire. Our hypothesis is that GIS cadastral data can be used to build a 3D model automatically, so that complete 3D city models can be generated in a short time from already available data. We propose a method for the automatic construction of 3D models of cities and neighborhoods from 2D cadastral data and study their usefulness for solar analysis by comparing the results with those from a hand-built model. The results show that the accuracy in evaluating solar access in pedestrian areas and solar potential on rooftops with the automatic method is close to that of the hand-built model, with slight differences of 3.4% and 2.2%, respectively. On the other hand, the time saving with the automatic models is significant. A neighborhood of 400,000 m² can be built in 30 min, 50 times faster than by hand, and an entire city of 967 km² can be built in 8.5 h.
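A minimal sketch of the extrusion idea behind such automatic models, assuming a simple block (LoD1-style) model and a nominal 3 m storey height; both the interface and the storey height are assumptions, not the authors' exact parameters.

```python
def extrude_footprint(footprint, floors, floor_height=3.0):
    """Lift a 2D footprint polygon (list of (x, y) vertices) to a prism
    whose height is floors * floor_height (a common cadastral proxy
    when no measured elevation is available)."""
    h = floors * floor_height
    base = [(x, y, 0.0) for x, y in footprint]     # ground ring
    roof = [(x, y, h) for x, y in footprint]       # flat roof ring
    walls = []
    n = len(footprint)
    for i in range(n):
        j = (i + 1) % n
        walls.append([base[i], base[j], roof[j], roof[i]])  # one quad per edge
    return base, roof, walls
```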


2019 ◽  
Author(s):  
Robert Ennis ◽  
Katja Doerschner

Studies on the perceived color of transparent objects have elucidated potential mechanisms but have mainly focused on flat filters overlaying a flat background. However, studies with flat filters have not captured all aspects of physical transparency, such as caustics, specular reflections/highlights, and shadows. Here, we report color matching experiments with three-dimensional transparent objects for different matching stimuli: a uniform patch and a flat filter overlaying a variegated background. Two different instructions were given to observers: change the color of the matching stimulus until it has the same color as the transparent object (for the patch and flat filter) or until it has the same color as the dye that was used to tint the transparent object (for the patch). Regardless of instruction or matching element, observers match the mean chromaticity of the glass object, but the luminance of the matches depends on the backgrounds of the test image and the matching element, indicating that a color constancy-esque discounting operation is at work. We applied three models from flat-filter studies to see if they generalize to our stimuli: the convergence model and the ratio of either the means (RMC) or standard deviations (RSD) of cone excitations. The convergence model does not generalize to our stimuli, but the RMC generalizes to a wider range of stimuli than the RSD. However, there is an edge case where the RMC also breaks down, and there may be additional features that trade off against the RMC when observers match the color of thick, curved transparent objects.
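The RMC model is simple to state: the filter's color is predicted by the ratio of mean cone excitations inside the filtered region to those of the surround, computed per cone class. A minimal sketch, assuming per-pixel LMS cone-excitation arrays (the interface is an assumption, not the authors' code):

```python
import numpy as np

def rmc(filtered_region, surround):
    """Ratio of the Means of Cone excitations (RMC): mean LMS inside the
    filtered region divided by mean LMS of the surround, per cone class."""
    filtered_region = np.asarray(filtered_region, dtype=float).reshape(-1, 3)
    surround = np.asarray(surround, dtype=float).reshape(-1, 3)
    return filtered_region.mean(axis=0) / surround.mean(axis=0)
```

A ratio below 1 in a cone class means the filter attenuates that class relative to the surround, which is the cue the model ties to the perceived filter color.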


Photonics ◽  
2021 ◽  
Vol 8 (10) ◽  
pp. 424
Author(s):  
Evelyn Gutierrez ◽  
Benjamín Castañeda ◽  
Sylvie Treuillet ◽  
Ivan Hernandez

Along with geometric and color indicators, thermography is another valuable source of information for wound monitoring. The interaction of geometry with thermography can provide predictive indicators of wound evolution; however, existing processes focus on high-cost devices with a static configuration, which restricts the scanning of large surfaces. In this study, we propose the use of commercial devices, such as mobile devices and portable thermal cameras, to integrate information from different wavelengths onto the surface of a 3D model. A handheld acquisition is proposed in which color images are used to create a 3D model by Structure from Motion (SfM), and thermography is incorporated into the 3D surface through a pose estimation refinement based on optimizing the temperature correlation between multiple views. Thermal and color 3D models were successfully created for six patients with multiple views from a low-cost commercial device. The results show the successful application of the proposed methodology, where thermal mapping on 3D models is not limited by the scanning area and provides consistent information between multiple thermal camera views. Further work will focus on studying the quantitative metrics obtained from the multi-view 3D models created with the proposed methodology.
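The refinement objective, the temperature correlation between views, can be sketched as a Pearson correlation over surface points visible in both views; a pose is better when both cameras assign consistent temperatures to the same 3D points. The function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def temperature_correlation(thermal_a, thermal_b, valid):
    """Pearson correlation between the temperatures two views assign to the
    same surface points; `valid` masks points visible in both views."""
    a = thermal_a[valid].astype(float)
    b = thermal_b[valid].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

A pose refinement would then perturb the thermal camera pose to maximize this score across all overlapping view pairs.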


2006 ◽  
Vol 6 (4) ◽  
pp. 381-389 ◽  
Author(s):  
Ankur Jain ◽  
Vikas Yadav ◽  
Ankush Mittal ◽  
Sumit Gupta

Recently, 3D model construction from 2D images using an uncalibrated camera has attracted significant attention in the research community. Most algorithms for 3D model construction suffer from problems such as inefficiency, irregular construction, and the necessity of camera calibration. In this paper, a novel algorithm is presented that uses silhouette images of the object to construct the 3D model. To carry out the 3D modeling, multiple views of the object are taken from different angles. Then, using a silhouette-based technique, new silhouettes are constructed and feature points are derived from them. These feature points are then used to construct triangular meshes, which in turn form the whole surface of the 3D model. The noise in the silhouette images is handled with a probabilistic framework. In addition, a faster technique is presented to reduce the time and space complexity of the algorithm, making it feasible for most commercial applications. The algorithm has been successfully tested on several objects. The experimental results and a comparison with a voxelization technique over several sequences show the superiority and effectiveness of our technique.

