3D reconstruction of indoor and outdoor scenes using a mobile range scanner

Author(s):  
Y. Sun ◽  
J.K. Paik ◽  
A. Koschan ◽  
M.A. Abidi
2008 ◽  
Vol 20 (7) ◽  
pp. 1250-1265 ◽  

Author(s):  
Daniela B. Fenker ◽  
Julietta U. Frey ◽  
Hartmut Schuetze ◽  
Dorothee Heipertz ◽  
Hans-Jochen Heinze ◽  
...  

Exploring a novel environment can facilitate subsequent hippocampal long-term potentiation in animals. We report a related behavioral enhancement in humans. In two separate experiments, recollection and free recall, both measures of hippocampus-dependent memory formation, were enhanced for words studied after a 5-min exposure to unrelated novel as opposed to familiar images depicting indoor and outdoor scenes. With functional magnetic resonance imaging, the enhancement was predicted by specific activity patterns observed during novelty exposure in parahippocampal and dorsal prefrontal cortices, regions which are known to be linked to attentional orienting to novel stimuli and perceptual processing of scenes. Novelty was also associated with activation of the substantia nigra/ventral tegmental area of the midbrain and the hippocampus, but these activations did not correlate with contextual memory enhancement. These findings indicate remarkable parallels between contextual memory enhancement in humans and existing evidence regarding contextually enhanced hippocampal plasticity in animals. They provide specific behavioral clues to enhancing hippocampus-dependent memory in humans.


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Julián Tachella ◽  
Yoann Altmann ◽  
Nicolas Mellado ◽  
Aongus McCarthy ◽  
Rachael Tobin ◽  
...  

Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we show a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data was acquired in broad daylight from distances up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.
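The abstract's claim of "an unknown number of surfaces in each pixel" refers to finding multiple photon-return peaks in each pixel's timing histogram. The following is a minimal illustrative sketch of that idea (simple local-maximum detection over a noise floor), not the paper's statistical method; `bin_width_ps` and `min_counts` are hypothetical parameters.

```python
import numpy as np

def detect_surfaces(histogram, bin_width_ps=100.0, min_counts=5):
    """Detect an unknown number of surfaces in one pixel's photon-count
    histogram by thresholding local maxima above the background level.
    Parameters are illustrative, not values from the paper."""
    h = np.asarray(histogram, dtype=float)
    background = np.median(h)  # rough ambient/dark-count floor
    depths = []
    for i in range(1, len(h) - 1):
        # a bin is a peak if it exceeds both neighbours and the noise floor
        if h[i] > h[i - 1] and h[i] >= h[i + 1] and h[i] - background >= min_counts:
            t = i * bin_width_ps * 1e-12      # time of flight in seconds
            depths.append(0.5 * 3e8 * t)      # range = c * t / 2
    return depths

# Synthetic pixel with two returns (two surfaces along the line of sight)
hist = np.full(100, 2)
hist[20] += 30   # first surface
hist[65] += 15   # second, partially occluding surface behind clutter
print(detect_surfaces(hist))   # two depths, in metres
```

A real single-photon pipeline must additionally cope with Poisson noise, pile-up, and spatial regularisation across pixels, which is where the paper's statistical models and graphics-style parallelism come in.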


2019 ◽  
Vol 11 (4) ◽  
pp. 446 ◽  
Author(s):  
Zacharias Kandylakis ◽  
Konstantinos Vasili ◽  
Konstantinos Karantzalos

Single-sensor systems and standard optical cameras (usually RGB CCTV video cameras) fail to provide adequate observations, or the amount of spectral information required, to build rich, expressive, discriminative features for object detection and tracking in challenging outdoor and indoor scenes under varying environmental and illumination conditions. To this end, we have designed a multisensor system based on thermal, shortwave infrared, and hyperspectral video sensors, and we propose a processing pipeline able to perform object detection in real time despite the large volume of concurrently acquired video streams. In particular, to avoid the computationally intensive coregistration of the hyperspectral data with the other imaging modalities, the initially detected targets are projected through a local coordinate system onto the hypercube image plane. For object detection, a detector-agnostic procedure has been developed, integrating both unsupervised (background subtraction) and supervised (deep convolutional neural network) techniques for validation purposes. The detected and verified targets are extracted through fusion and data-association steps based on the temporal spectral signatures of both target and background. Promising experimental results in challenging indoor and outdoor scenes indicate that the developed methodology performs robustly and efficiently under conditions such as fog, smoke, and illumination changes.
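The unsupervised stage mentioned above (background subtraction) can be sketched with a simple running-average background model. This is a generic stand-in for whatever subtraction method the authors used; `alpha` and `threshold` are hypothetical values chosen for illustration.

```python
import numpy as np

class BackgroundSubtractor:
    """Minimal running-average background model for foreground detection.
    A simplified sketch of the unsupervised detection stage, not the
    paper's implementation."""

    def __init__(self, alpha=0.05, threshold=25.0):
        self.alpha = alpha          # background update rate
        self.threshold = threshold  # foreground decision threshold
        self.background = None

    def apply(self, frame):
        frame = np.asarray(frame, dtype=float)
        if self.background is None:
            self.background = frame.copy()
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.threshold
        # update the model only where the pixel still looks like background
        self.background[~mask] = (
            (1 - self.alpha) * self.background[~mask]
            + self.alpha * frame[~mask]
        )
        return mask

# Static background, then a bright "target" appears in the second frame
bg = np.zeros((4, 4))
sub = BackgroundSubtractor()
sub.apply(bg)                       # first frame initialises the model
frame = bg.copy(); frame[1, 2] = 200.0
mask = sub.apply(frame)
print(mask.sum())                   # one foreground pixel detected
```

In the paper's pipeline, blobs from a mask like this would then be projected onto the hypercube image plane and validated against the CNN detector and the spectral signatures.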


2003 ◽  
Vol 03 (01) ◽  
pp. 145-169 ◽  
Author(s):  
RITA CUCCHIARA ◽  
COSTANTINO GRANA ◽  
ANDREA PRATI

In this work we present a framework for on-the-fly video transcoding that exploits computer-vision techniques to adapt Web access to user requirements. The proposed transcoding approach aims to cope both with the user's bandwidth and resource capabilities and with the user's interest in the video's content. We propose an object-based semantic transcoding that, according to user-defined classes of relevance, applies different transcoding techniques to the objects segmented in a scene. Object extraction is provided by on-the-fly video processing, without manual annotation. Multiple transcoding policies are reviewed, and a performance evaluation metric based on the Weighted Mean Square Error (and the corresponding PSNR) is defined, which takes the user's perceptual requirements into account by means of the classes of relevance. Results are analyzed by varying transcoding techniques, bandwidth requirements, and video types (with indoor and outdoor scenes), showing that the use of semantics can dramatically improve the bandwidth-to-distortion ratio.
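A Weighted Mean Square Error of the kind described above can be sketched as per-pixel squared error scaled by a relevance-weight map, with a PSNR derived from it. The paper's exact definition may differ; the weight values here are purely illustrative.

```python
import numpy as np

def weighted_mse(original, transcoded, weights):
    """Weighted Mean Square Error: squared per-pixel error scaled by a
    relevance weight map (higher weight = more relevant object class).
    A sketch of the metric described above, not the paper's exact form."""
    o = np.asarray(original, dtype=float)
    t = np.asarray(transcoded, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * (o - t) ** 2) / np.sum(w))

def weighted_psnr(original, transcoded, weights, peak=255.0):
    """PSNR computed from the WMSE instead of the plain MSE."""
    wmse = weighted_mse(original, transcoded, weights)
    return float("inf") if wmse == 0 else 10.0 * np.log10(peak ** 2 / wmse)

# Toy 2x2 image: the top-left pixel belongs to a more relevant class
# (weight 2), so its error counts double in the distortion measure.
wm = weighted_mse([[100, 100], [100, 100]],
                  [[90, 100], [100, 100]],
                  [[2, 1], [1, 1]])
print(wm)   # -> 40.0
```

Under this weighting, degrading pixels of a relevant object class costs more distortion than degrading the background, which is what lets the transcoder trade background quality for foreground quality.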


2017 ◽  
Author(s):  
Talia Brandman ◽  
Marius V. Peelen

We internally represent the structure of our surroundings even when there is little layout information available in the visual image, such as when walking through fog or darkness. One way in which we disambiguate such scenes is through object cues; for example, seeing a boat supports the inference that the foggy scene is a lake. Recent studies have investigated the neural mechanisms by which object and scene processing interact to support object perception. The current study examines the reverse interaction, by which objects facilitate the neural representation of scene layout. Photographs of indoor (closed) and outdoor (open) real-world scenes were blurred such that they were difficult to categorize on their own, but easily disambiguated by the inclusion of an object. fMRI decoding was used to measure scene representations in scene-selective parahippocampal place area (PPA) and occipital place area (OPA). Classifiers were trained to distinguish response patterns to fully visible indoor and outdoor scenes, presented in an independent experiment. Testing these classifiers on blurred scenes revealed a strong improvement in classification in left PPA and OPA when objects were present, despite the reduced low-level visual feature overlap with the training set in this condition. These findings were specific to left PPA/OPA, with no evidence for object-driven facilitation in right PPA/OPA, object-selective areas, and early visual cortex. These findings demonstrate separate roles for left and right scene-selective cortex in scene representation, whereby left PPA/OPA represents inferred scene layout, influenced by contextual object cues, and right PPA/OPA represents a scene's visual features.
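The cross-decoding design described above (train on fully visible scenes, test on blurred ones) can be illustrated with a minimal nearest-centroid decoder over voxel patterns. This is a generic MVPA stand-in with synthetic data, not the study's classifier or its dimensionality.

```python
import numpy as np

def train_centroids(patterns, labels):
    """Fit a nearest-centroid decoder on response patterns to fully
    visible scenes (labels: 0 = indoor, 1 = outdoor). Illustrative
    sketch; the study's actual classifier may differ."""
    X = np.asarray(patterns, dtype=float)
    y = np.asarray(labels)
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def decode(centroids, pattern):
    """Assign a held-out pattern to the nearest class centroid."""
    d = np.linalg.norm(centroids - np.asarray(pattern, dtype=float), axis=1)
    return int(np.argmin(d))

# Synthetic 2-voxel patterns: indoor responses cluster near (1, 0),
# outdoor responses near (0, 1)
rng = np.random.default_rng(0)
train = np.vstack([rng.normal([1, 0], 0.1, (10, 2)),
                   rng.normal([0, 1], 0.1, (10, 2))])
labels = [0] * 10 + [1] * 10
cents = train_centroids(train, labels)

# A "blurred scene + object" test pattern: weaker signal, still indoor-like
print(decode(cents, [0.6, 0.2]))   # -> 0 (decoded as indoor)
```

Above-chance generalisation from visible to blurred patterns, as in the left PPA/OPA result, is evidence that the test patterns carry category information despite the reduced low-level overlap with the training set.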


Author(s):  
Julien P.C. Valentin ◽  
Sunando Sengupta ◽  
Jonathan Warrell ◽  
Ali Shahrokni ◽  
Philip H.S. Torr

Sensors ◽  
2015 ◽  
Vol 15 (10) ◽  
pp. 25937-25967 ◽  
Author(s):  
Ghina Natour ◽  
Omar Ait-Aider ◽  
Raphael Rouveure ◽  
François Berry ◽  
Patrice Faure
