Can presenting images behind the screen plane generate a sense of stereoscopic scene depth?

2016, Vol 16 (12), pp. 647
Author(s): Paul Hands, Jenny Read

2010, Vol 9 (1), pp. 27-35
Author(s): Ryuji Shibata, Hajime Nagahara

Image-based modeling methods that generate 3D models from an image sequence have been widely studied. Most of these methods, however, require a huge, redundant set of spatio-temporal images to estimate scene depth, which is not an efficient way to capture high-resolution texture. On the other hand, a route panorama, a continuous panoramic image along a path, is an efficient way of consolidating information from multiple viewpoints into a single image, and a route panorama captured by a line camera has the further advantage that high resolutions are easy to achieve. In this paper, we propose a method for estimating scene depth from a route panorama using color drift. The proposed method detects color drift by deformable window matching of the color channels, and it uses hierarchical belief propagation to estimate the depth stably and to reduce the computational cost.
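The abstract specifies neither the matching nor the propagation details; as a rough, hypothetical sketch of the core idea, the Python below estimates per-pixel color drift by block matching one color channel against another with a sum-of-squared-differences score. Plain block matching stands in for the paper's deformable window matching, and the hierarchical belief propagation stage is omitted.

```python
import numpy as np

def color_drift(panorama, max_drift=8, win=7):
    """Estimate per-pixel color drift between the R and B channels of a
    route panorama. Under the line-camera model, drift along the motion
    axis is proportional to inverse scene depth.

    This is a plain block-matching stand-in for the paper's deformable
    window matching; the belief-propagation smoothing step is omitted.
    """
    r = panorama[..., 0].astype(np.float64)
    b = panorama[..., 2].astype(np.float64)
    h, w = r.shape
    half = win // 2
    drift = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_drift, w - half - max_drift):
            ref = r[y-half:y+half+1, x-half:x+half+1]
            # Score every candidate shift of the B-channel window.
            costs = [
                np.sum((ref - b[y-half:y+half+1,
                                x+d-half:x+d+half+1])**2)
                for d in range(-max_drift, max_drift + 1)
            ]
            drift[y, x] = np.argmin(costs) - max_drift
    return drift
```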


Author(s): Vincent Casser, Soeren Pirk, Reza Mahjourian, Anelia Angelova

Learning to predict scene depth from RGB inputs is a challenging task for both indoor and outdoor robot navigation. In this work we address unsupervised learning of scene depth and robot ego-motion, where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive, and most ubiquitous sensor for robotics. Previous work in unsupervised image-to-depth learning has established strong baselines in the domain. We propose a novel approach that produces higher-quality results, is able to model moving objects, and is shown to transfer across data domains, e.g., from outdoor to indoor scenes. The main idea is to introduce geometric structure into the learning process by modeling the scene and the individual objects; camera ego-motion and object motions are learned from monocular video input. Furthermore, an online refinement method is introduced to adapt learning on the fly to unknown domains. The proposed approach outperforms all state-of-the-art approaches, including those that handle motion, e.g., through learned flow. Our results are comparable in quality to those that use stereo as supervision, and they significantly improve depth prediction on scenes and datasets that contain substantial object motion. The approach is of practical relevance, as it allows transfer across environments, e.g., from models trained on data collected for robot navigation in urban scenes to indoor navigation settings. The code associated with this paper can be found at https://sites.google.com/view/struct2depth.
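The central mechanism behind this line of work, photometric supervision by view synthesis, can be summarized compactly: warp a source frame into the target view using the predicted depth and ego-motion, then penalize the photometric difference. Below is a minimal numpy sketch of that inverse-warping geometry under assumed intrinsics K and relative pose (R, t); the actual networks, object-motion modeling, and online refinement live in the released code at the URL above.

```python
import numpy as np

def warp_source_to_target(src, depth, K, R, t):
    """Inverse-warp a source frame into the target view.

    src   : (H, W, 3) source image
    depth : (H, W) predicted depth map of the target frame
    K     : (3, 3) camera intrinsics
    R, t  : rotation (3, 3) and translation (3,) taking target-frame
            coordinates into the source frame

    Nearest-neighbour sampling keeps the sketch short; the real loss
    uses differentiable bilinear sampling.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Back-project target pixels to 3D and move them into the source frame.
    cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    cam_src = R @ cam + t.reshape(3, 1)
    # Project into the source image plane.
    proj = K @ cam_src
    z = np.maximum(proj[2], 1e-8)          # avoid division by zero
    us = np.round(proj[0] / z).astype(int)
    vs = np.round(proj[1] / z).astype(int)
    valid = (proj[2] > 1e-8) & (us >= 0) & (us < w) & (vs >= 0) & (vs < h)
    out = np.zeros_like(src)
    out.reshape(-1, 3)[valid] = src[vs[valid], us[valid]]
    return out

# Training then minimizes e.g. the mean absolute difference between the
# warped frame and the true target over valid pixels, back-propagating
# into the depth and ego-motion networks.
```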


Author(s): Jens Arnspang, Knud Henriksen, Fredrik Bergholm

2015, Vol 2015, pp. 1-14
Author(s): Wei Wang, Wenhui Li, Qingji Guan, Miao Qi

Removing haze effects from images or videos is a challenging and meaningful task for image processing and computer vision applications. In this paper, we propose a multiscale fusion method to remove haze from a single image. Based on the existing dark channel prior and optics theory, two atmospheric veils at different scales are first derived from the hazy image. Then, a novel, adaptive wavelet fusion method based on local similarity is proposed to preserve the significant scene-depth property and avoid blocky artifacts. Finally, the clear haze-free image is restored by solving the atmospheric scattering model. Experimental results demonstrate that the proposed method yields comparable or even better results than several state-of-the-art methods in both subjective and objective evaluations.
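The dark channel prior and the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) that the method builds on are standard; a minimal single-scale sketch of that baseline is given below. The paper's own contributions, the two atmospheric veils and the local similarity-based wavelet fusion, are not reproduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Single-image dehazing via the dark channel prior.

    img : (H, W, 3) float image in [0, 1].
    Solves the scattering model I = J*t + A*(1 - t) for the haze-free
    image J. This is the single-scale baseline the paper starts from,
    not its multiscale wavelet fusion.
    """
    # Dark channel: per-patch minimum over space and color.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Atmospheric light A: mean color of the brightest 0.1% of
    # dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the normalized image.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t0, 1.0)[..., None]
    # Recover the scene radiance.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```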


2019
Author(s): Louisa Lok Yee Man, Karolina Krzys, Monica Castelhano

When you walk into a room, you perceive visual information that is both close to you and farther in depth. In the current study, we investigated how visual search is affected by information across scene depth and contrasted it with the effect of semantic scene context. Across two experiments, participants searched for target objects appearing in either the foreground or background regions of scenes that were either normally configured or had semantically mismatched foreground and background contexts (Chimera scenes; Castelhano, Fernandes, & Theriault, 2018). In Experiment 1, participants had shorter latencies and fewer fixations to the target when it appeared in the foreground, a pattern that was not explained by target size. In Experiment 2, a preview of the scene was added prior to search to better establish scene context. Results again showed a Foreground Bias, with faster search performance for foreground targets. Together, these studies suggest processing differences across depth in scenes, with a preference for objects closer in space.


2021, Vol 38 (6), pp. 1719-1726
Author(s): Tanbo Zhu, Die Wang, Yuhua Li, Wenjie Dong

In real training, the training conditions are often undesirable and the use of equipment is severely limited. These problems can be solved by virtual practical training, which breaks the limits of space and lowers the training cost while ensuring training quality. However, existing methods perform poorly at image reconstruction because they fail to consider that the environmental perception of a real scene is strongly regular by nature. Therefore, this paper investigates three-dimensional (3D) image reconstruction for virtual talent training scenes. Specifically, a fusion network model was designed, and the deep-seated correlation between target detection and semantic segmentation was exploited for images shot in two-dimensional (2D) scenes, in order to enhance the extraction of image features. Next, the vertical and horizontal parallaxes of the scene were solved, and the depth-based virtual talent training scene was reconstructed in three dimensions, based on the continuity of scene depth. Finally, the proposed algorithm was proved effective through experiments.
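The abstract does not give the reconstruction equations, but the standard relation behind parallax-based depth is Z = f·B/d for focal length f, baseline B, and disparity d. A hedged sketch converting a horizontal-disparity map into a 3D point cloud follows; the calibration values are placeholders, and the fusion network that produces the parallaxes is not reproduced.

```python
import numpy as np

def disparity_to_points(disp, f, B, cx, cy):
    """Triangulate a 3D point cloud from a horizontal-disparity map.

    disp   : (H, W) disparity in pixels between the paired views
    f      : focal length in pixels; B : baseline in meters
    cx, cy : principal point in pixels

    All parameters are illustrative placeholders, not the paper's
    calibration.
    """
    h, w = disp.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disp > 0
    Z = f * B / disp[valid]            # depth from disparity
    X = (u[valid] - cx) * Z / f        # back-project to camera coords
    Y = (v[valid] - cy) * Z / f
    return np.stack([X, Y, Z], axis=-1)
```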


Author(s): Jin Tian, Paul Croaker, Jiasheng Li, Hongxing Hua

This article presents experimental and numerical studies on the flow-induced vibration of propeller blades under periodic inflows. Two 7-bladed, highly skewed model propellers of identical geometry but different elastic characteristics were operated in four-cycle and six-cycle inflows to study the blade vibratory strain response. Two kinds of wire-mesh wake screens, located 400 mm upstream of the propeller plane, were used to generate the four-cycle and six-cycle inflows. A laser Doppler velocimetry system located 100 mm downstream of the wake-screen plane measured the axial velocity distributions produced by the wake screens. Strain gauges bonded onto the propeller blades at different positions quantified the vibratory strain amplitudes and excitation frequencies induced by the wake screens. The propellers were accelerated through the flexible propeller's fundamental frequency to investigate the effect of resonance on the vibratory strain response. The numerical work used large eddy simulation and a moving-mesh technique to predict the unsteady forces acting on the propeller blades when operating in a nonuniform inflow.
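For context on why the wake screens matter: a k-cycle circumferential inflow excites each rotating blade at k times the shaft frequency and its harmonics, so resonance occurs where such an excitation line crosses a blade natural frequency. The small sketch below does that bookkeeping with illustrative numbers, not the paper's measured values.

```python
def excitation_frequencies(shaft_rps, inflow_cycles, harmonics=3):
    """Blade excitation frequencies for a k-cycle circumferential inflow.

    A blade passing through k wake deficits per revolution is excited at
    k * shaft_rps and its integer harmonics. All numbers used here are
    illustrative, not the measured values from the paper.
    """
    return [m * inflow_cycles * shaft_rps for m in range(1, harmonics + 1)]

# Example: a 12 rev/s shaft speed with the six-cycle wake screen; a
# flexible-blade fundamental near 72 Hz would be driven at the first
# harmonic as the propeller accelerates through it.
print(excitation_frequencies(12.0, 6))   # [72.0, 144.0, 216.0]
```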

