Fast and Accurate 3D Measurement Based on Light-Field Camera and Deep Learning

Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4399 ◽  
Author(s):  
Haoxin Ma ◽  
Zhiwen Qian ◽  
Tingting Mu ◽  
Shengxian Shi

The precise combination of an image sensor and a micro-lens array enables light-field cameras to record both the angular and spatial information of incoming light; therefore, disparity and depth can be calculated from a single light-field image captured by a single light-field camera. In turn, 3D models of the recorded objects can be recovered, which means a 3D measurement system can be built around a light-field camera. However, reflective and texture-less areas in light-field images present complicated conditions that make it hard for existing algorithms to calculate disparity correctly. To tackle this problem, we introduce a novel end-to-end network, VommaNet, that retrieves multi-scale features from reflective and texture-less regions for accurate disparity estimation. Meanwhile, our network achieves similar or better performance in other regions, on both synthetic light-field images and real-world data, compared with state-of-the-art algorithms.
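The abstract does not detail VommaNet's architecture; the following is only a minimal sketch, assuming PyTorch and parallel dilated 3x3 convolutions as one common way to gather multi-scale context so that texture-less regions still receive enough surrounding information for disparity estimation. The layer sizes and the 9-view input are illustrative, not taken from the paper.

```python
# Minimal multi-scale feature block (illustrative, not the authors' VommaNet).
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, in_ch=3, out_ch=16, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding=dilation keeps spatial size.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Fuse the concatenated branches back to a single feature map.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(feats)

# Example: a stack of sub-aperture views treated as input channels.
block = MultiScaleBlock(in_ch=9)                    # e.g. 9 horizontal views
features = block(torch.rand(1, 9, 128, 128))
print(features.shape)                               # torch.Size([1, 16, 128, 128])
```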

2014 ◽  
Vol 687-691 ◽  
pp. 1091-1094
Author(s):  
Peng Liu ◽  
Ru Min Zhang ◽  
Di Jun Liu

A plenoptic camera provides the capability of refocusing photographs after exposure as well as an extended depth of field. However, registration errors between the micro-lens array and the image sensor affect the quality of the extracted 4D light field and thus degrade the reconstructed image. This paper discusses two types of registration errors based on the structure of the focused plenoptic camera and proposes a correction algorithm to acquire an accurate light field from the sensor data. Simulations show that registration errors deteriorate the reconstructed image and that the artifacts are reduced significantly after applying the algorithm.
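As a rough illustration of the extraction step only (this is not the paper's correction algorithm), the sketch below slices a raw plenoptic sensor image into a 4D light field while compensating an assumed global translational offset between the micro-lens array and the sensor. The pitch and offset values are hypothetical, and sub-pixel offsets would need interpolation rather than the integer rounding used here.

```python
# Illustrative 4D light-field extraction with a simple registration offset.
import numpy as np

def extract_light_field(raw, pitch=10, dx=0.0, dy=0.0):
    """Slice the raw sensor image into per-micro-lens sub-images.

    raw    : 2D sensor image (rows, cols)
    pitch  : micro-image size in pixels (assumed square, integer grid)
    dx, dy : estimated offset of the micro-lens array w.r.t. the sensor
    """
    # Shift the raw image so micro-image centers fall on the nominal grid
    # (integer rounding for brevity; real data would require interpolation).
    rows = np.clip(np.arange(raw.shape[0]) + int(round(dy)), 0, raw.shape[0] - 1)
    cols = np.clip(np.arange(raw.shape[1]) + int(round(dx)), 0, raw.shape[1] - 1)
    aligned = raw[np.ix_(rows, cols)]

    n_t = aligned.shape[0] // pitch      # micro-lens rows
    n_s = aligned.shape[1] // pitch      # micro-lens columns
    lf = aligned[:n_t * pitch, :n_s * pitch].reshape(n_t, pitch, n_s, pitch)
    # Reorder to L[t, s, v, u]: micro-lens index first, pixel index second.
    return lf.transpose(0, 2, 1, 3)

raw = np.random.rand(400, 600)
lf = extract_light_field(raw, pitch=10, dx=1.3, dy=-0.7)
print(lf.shape)                           # (40, 60, 10, 10)
```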


2021 ◽  
Vol 7 (1) ◽  
pp. 540-555
Author(s):  
Hayley L. Mickleburgh ◽  
Liv Nilsson Stutz ◽  
Harry Fokkens

Abstract: The reconstruction of past mortuary rituals and practices increasingly incorporates analysis of the taphonomic history of the grave and buried body, using the framework provided by archaeothanatology. Archaeothanatological analysis relies on interpretation of the three-dimensional (3D) relationship of bones within the grave and traditionally depends on elaborate written descriptions and two-dimensional (2D) images of the remains during excavation to capture this spatial information. With the rapid development of inexpensive 3D tools, digital replicas (3D models) are now commonly available to preserve 3D information on human burials during excavation. We describe a procedure, developed using a test case, to enhance archaeothanatological analysis and improve post-excavation analysis of human burials. Beyond the preservation of static spatial information, 3D visualization techniques can be used in archaeothanatology to reconstruct the spatial displacement of bones over time, from deposition of the body to excavation of the skeletonized remains. The purpose of the procedure is to produce 3D simulations that visualize and test archaeothanatological hypotheses, thereby augmenting traditional archaeothanatological analysis. We illustrate our approach with the reconstruction of mortuary practices and burial taphonomy of a Bell Beaker burial from the site of Oostwoud-Tuithoorn, West-Frisia, the Netherlands. This case study was selected as the test case because of its relatively complete context information. The test case shows the potential for applying the procedure to older 2D field documentation, even when the amount and detail of documentation is less than ideal.
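As a rough illustration only (the article does not describe its software), one ingredient of such a simulation could be interpolating a bone model's rigid pose between its inferred position at deposition and its documented position at excavation. The sketch below assumes SciPy and uses placeholder poses; a real workflow would drive many bones from field documentation.

```python
# Illustrative interpolation of a single bone's rigid pose over time.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

key_times = [0.0, 1.0]                                 # deposition -> excavation
key_rot = Rotation.from_euler("xyz", [[0, 0, 0], [15, 0, 40]], degrees=True)
key_pos = np.array([[0.00, 0.00, 0.30],                # position at deposition (m)
                    [0.04, -0.02, 0.05]])              # position at excavation (m)

slerp = Slerp(key_times, key_rot)                      # smooth rotation interpolation

def bone_pose(t):
    """Return (rotation matrix, position) of the bone at normalized time t."""
    pos = (1 - t) * key_pos[0] + t * key_pos[1]
    return slerp([t])[0].as_matrix(), pos

R, p = bone_pose(0.5)
print(np.round(p, 3))                                  # intermediate bone position
```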


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Georges Hattab ◽  
Adamantini Hatzipanayioti ◽  
Anna Klimova ◽  
Micha Pfeiffer ◽  
Peter Klausing ◽  
...  

Abstract: Recent technological advances have made Virtual Reality (VR) attractive in both research and real-world applications such as training, rehabilitation, and gaming. Although these fields have benefited from VR technology, it remains unclear whether VR contributes to better spatial understanding and training in the context of surgical planning. In this study, we evaluated the use of VR by comparing the recall of spatial information in two learning conditions: a head-mounted display (HMD) and a desktop screen (DT). Specifically, we explored (a) a scene understanding task and then (b) a direction estimation task using two 3D models (i.e., a liver and a pyramid). In the scene understanding task, participants had to navigate the rendered 3D models by means of rotation, zoom, and transparency in order to identify the spatial relationships among their internal objects. In the subsequent direction estimation task, participants had to point at a previously identified target object, i.e., an internal sphere, on a materialized 3D-printed version of the model using a tracked pointing tool. Results showed that the learning condition (HMD or DT) did not influence participants’ memory and confidence ratings of the models. In contrast, the model type, that is, whether the model to be recalled was a liver or a pyramid, significantly affected participants’ memory of the internal structure of the model. Furthermore, localizing the internal position of the target sphere was also unaffected by participants’ previous experience of the model via HMD or DT. Overall, the results provide novel insights into the use of VR in a surgical planning scenario and have important implications for medical learning by shedding light on the mental models we build to recall spatial structures.
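For illustration (this is not the study's analysis code), the pointing response in the direction estimation task can be summarized as the angle between the pointed direction and the true direction from the pointer to the target sphere. The coordinates below are placeholders assumed to share one reference frame.

```python
# Illustrative angular-error metric for a tracked pointing response.
import numpy as np

def angular_error(pointer_origin, pointer_tip, target):
    """Angle in degrees between the pointed direction and the true direction
    from the pointer origin to the target sphere."""
    pointed = np.asarray(pointer_tip) - np.asarray(pointer_origin)
    true_dir = np.asarray(target) - np.asarray(pointer_origin)
    cos = np.dot(pointed, true_dir) / (np.linalg.norm(pointed) * np.linalg.norm(true_dir))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: pointer held at the model surface, target sphere inside the model.
print(angular_error([0, 0, 0], [0.0, 0.1, 0.9], [0.05, 0.0, 1.0]))
```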


2018 ◽  
Vol 43 (15) ◽  
pp. 3746 ◽  
Author(s):  
Zewei Cai ◽  
Xiaoli Liu ◽  
Qijian Tang ◽  
Xiang Peng ◽  
Bruce Zhi Gao

2021 ◽  
Author(s):  
Zai Luo ◽  
Hongnan Zhao ◽  
Wensong Jiang ◽  
Zeliang Cai ◽  
Li Yang

Measurement ◽  
2021 ◽  
pp. 110140
Author(s):  
Qing Yu ◽  
Yali Zhang ◽  
Yi Zhang ◽  
Fang Cheng ◽  
Wenjian Shang ◽  
...  
