Synthesis of a high-resolution 3D stereoscopic image pair from a high-resolution monoscopic image and a low-resolution depth map

Author(s):  
Kyung-tae Kim ◽  
Mel Siegel ◽  
Jung-Young Son


Author(s):  
Guoliang Wu ◽  
Yanjie Wang ◽  
Shi Li

Existing depth-map super-resolution (SR) methods struggle to restore fine detail: boundaries in particular are difficult to reconstruct effectively from a low-resolution (LR) guided depth map, especially at large magnification factors. In this paper, we present a novel super-resolution method for a single depth map based on a deep feedback network (DFN), which enhances the feature representations at depth boundaries through iterative up-sampling and down-sampling operations, building a deep feedback mechanism that projects high-resolution (HR) representations into the low-resolution spatial domain and then back-projects them into the high-resolution spatial domain. The deep feedback (DF) block thus iteratively imitates the process of image degradation and reconstruction. The rich intermediate high-resolution features effectively tackle the problem of depth-boundary ambiguity in depth map super-resolution. Extensive experiments on benchmark datasets show that the proposed DFN outperforms state-of-the-art methods.
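The projection loop the abstract describes can be sketched in a few lines. The following is a minimal numpy illustration, not the paper's learned network: average pooling and nearest-neighbour upscaling stand in for the learned down- and up-projection layers, and the LR-domain residual is fed back into the HR estimate each iteration.

```python
import numpy as np

def downsample(x, s):
    """Degrade: average-pool an HR array by factor s (stand-in for a learned down-projection)."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(x, s):
    """Back-project: nearest-neighbour upscale by factor s (stand-in for a learned up-projection)."""
    return np.repeat(np.repeat(x, s, axis=0), s, axis=1)

def feedback_sr(lr, s=2, iters=3):
    """Iteratively project HR -> LR -> HR and feed the LR residual back,
    imitating the degradation/reconstruction loop of a deep-feedback block."""
    hr = upsample(lr, s)                 # initial coarse HR estimate
    for _ in range(iters):
        lr_est = downsample(hr, s)       # simulate degradation
        residual = lr - lr_est           # reconstruction error in the LR domain
        hr = hr + upsample(residual, s)  # back-project the correction to the HR domain
    return hr
```

By construction the loop drives the re-degraded HR estimate toward consistency with the LR input; the learned DF block replaces these fixed operators with convolutions so the feedback also sharpens boundaries.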


2016 ◽  
Vol 2016 ◽  
pp. 1-12
Author(s):  
Jino Hans William ◽  
N. Venkateswaran ◽  
Srinath Narayanan ◽  
Sandeep Ramachandran

A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are inevitably lost. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior that learns the correspondence between LR-HR patch pairs and is then used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing counterfeit fine details.
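The key idea, regressing on patches as matrices rather than flattened vectors, can be illustrated with a deliberately simplified one-sided operator (the paper's MVR formulation is more general). Here a single matrix W with H ≈ W @ L is fit by regularized least squares over the training patch pairs; all names and the closed-form solution are illustrative, not taken from the paper.

```python
import numpy as np

def learn_operator(lr_patches, hr_patches, lam=1e-3):
    """Least-squares estimate of a matrix operator W with H ~ W @ L,
    keeping each patch as a 2-D matrix instead of flattening it to a vector."""
    p = lr_patches[0].shape[0]
    A = sum(H @ L.T for H, L in zip(hr_patches, lr_patches))
    B = sum(L @ L.T for L in lr_patches) + lam * np.eye(p)  # ridge term for stability
    return A @ np.linalg.inv(B)

def apply_operator(W, lr_patch):
    """Super-resolve one LR patch by applying the learned matrix operator."""
    return W @ lr_patch
```

Because W multiplies the whole patch matrix at once, row-wise structure inside each patch survives into the estimate, which is the image-level information a vectorized regression would discard.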


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Jiansheng Peng ◽  
Kui Fu ◽  
Qingjin Wei ◽  
Yong Qin ◽  
Qiwen He

As a representative technology of artificial intelligence, 3D reconstruction based on deep learning can be integrated into the edge computing framework to form an intelligent edge and then realize intelligent processing at the edge. Recently, high-resolution representation of 3D objects using the multiview decomposition (MVD) architecture has emerged as a fast reconstruction method for generating objects with realistic details from a single RGB image. The quality of high-resolution 3D object reconstruction depends on two aspects. On the one hand, a low-resolution reconstruction network must recover a good 3D object from a single RGB image. On the other hand, a high-resolution reconstruction network must refine the coarse low-resolution 3D object into a fine high-resolution one. To improve both aspects and further enhance the high-resolution reconstruction capabilities of the 3D object generation network, we study and improve the low-resolution 3D generation network and the depth map superresolution network, obtaining an improved multiview decomposition (IMVD) network. First, we use a 2D image encoder with multifeature fusion (MFF) to enhance the feature extraction capability of the model. Second, a 3D decoder using an efficient subpixel convolutional neural network (3D ESPCN) improves the decoding speed in the decoding stage. Moreover, we design a multiresidual dense block (MRDB) to optimize the depth map superresolution network, which allows the model to capture more object details and reduces the model parameters by approximately 25% when the number of network layers is doubled. The experimental results show that the proposed IMVD is better than the original MVD in both the 3D object superresolution experiment and the high-resolution 3D reconstruction experiment from a single image.
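The subpixel-convolution decoder mentioned above rests on the depth-to-space rearrangement popularized by ESPCN: the network outputs r² times as many channels at low resolution, and a "pixel shuffle" interleaves them into an r-times-larger image. A minimal 2D numpy version of that rearrangement (the 3D ESPCN in the paper applies the same idea to volumes) might look like:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r): the depth-to-space
    step at the heart of ESPCN-style subpixel-convolution decoders."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into the r x r subpixel grid
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

Because the convolutions all run at low resolution and only this cheap reshuffle produces the high-resolution output, decoding is much faster than deconvolving at full resolution.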


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1013
Author(s):  
Sayan Maity ◽  
Mohamed Abdel-Mottaleb ◽  
Shihab S. Asfour

Biometric identification using surveillance video has attracted the attention of many researchers, as it is applicable not only to robust identification but also to personalized activity monitoring. In this paper, we present a novel multimodal recognition system that extracts frontal gait and low-resolution face images from frontal walking surveillance video clips to perform efficient biometric recognition. The proposed study addresses two important issues in surveillance video that have not received appropriate attention in the past. First, it consolidates the model-free and model-based gait feature extraction approaches to perform robust gait recognition using only the frontal view. Second, it uses a low-resolution face recognition approach that can be trained and tested using only low-resolution face information. This eliminates the need to obtain high-resolution face images for the gallery, which is required by the majority of low-resolution face recognition techniques; moreover, their classification accuracy degrades considerably without such high-resolution gallery images. Previous studies on frontal gait recognition incorporate assumptions to approximate the average gait cycle; in contrast, we quantify the gait cycle precisely for each subject using only the frontal gait information. Approaches available in the literature also train the recognition system on high-resolution images obtained in a controlled environment, whereas our proposed system trains the recognition algorithm on low-resolution face images captured in an unconstrained environment. The proposed system has two components, one performing frontal gait recognition and the other low-resolution face recognition. Score-level fusion is then performed to combine the results of the frontal gait recognition and the low-resolution face recognition.
Experiments conducted on the Face and Ocular Challenge Series (FOCS) dataset resulted in a Rank-1 recognition rate of 93.5% for frontal gait recognition and 82.92% for low-resolution face recognition. The score-level multimodal fusion achieved 95.9% Rank-1 recognition, which demonstrates the superiority and robustness of the proposed approach.
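Score-level fusion of this kind is commonly implemented as normalization of each matcher's scores to a common range followed by a weighted sum. The sketch below uses min-max normalization and an illustrative weight; the abstract does not specify the paper's normalization or weighting, so both are assumptions.

```python
import numpy as np

def min_max_norm(scores):
    """Map raw matcher scores to [0, 1] so gait and face scores are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse_scores(gait_scores, face_scores, w_gait=0.6):
    """Weighted-sum score-level fusion over the gallery; the weight is
    illustrative, not taken from the paper."""
    g = min_max_norm(gait_scores)
    f = min_max_norm(face_scores)
    return w_gait * g + (1.0 - w_gait) * f
```

The fused score vector is then ranked as usual; Rank-1 identification simply takes the gallery identity with the largest fused score.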


Author(s):  
R. S. Hansen ◽  
D. W. Waldram ◽  
T. Q. Thai ◽  
R. B. Berke

Abstract
Background: High-resolution Digital Image Correlation (DIC) measurements have previously been produced by stitching of neighboring images, which often requires short working distances. Separately, the image processing community has developed super resolution (SR) imaging techniques, which improve resolution by combining multiple overlapping images.
Objective: This work investigates the novel pairing of super resolution with digital image correlation, as an alternative method to produce high-resolution full-field strain measurements.
Methods: First, an image reconstruction test is performed, comparing the ability of three previously published SR algorithms to replicate a high-resolution image. Second, an applied translation is compared against DIC measurement using both low- and super-resolution images. Third, a ring sample is mechanically deformed and DIC strain measurements from low- and super-resolution images are compared.
Results: SR measurements show improvements compared to low-resolution images, although they do not perfectly replicate the high-resolution image. SR-DIC demonstrates reduced error and improved confidence in measuring rigid body translation when compared to low resolution alternatives, and it also shows improvement in spatial resolution for strain measurements of ring deformation.
Conclusions: Super resolution imaging can be effectively paired with Digital Image Correlation, offering improved spatial resolution, reduced error, and increased measurement confidence.
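The core SR principle the abstract relies on, combining several sub-pixel-shifted low-resolution frames into one finer image, is often introduced via classic shift-and-add reconstruction. The numpy sketch below assumes known integer shifts on the fine grid; the three published SR algorithms compared in the paper are more sophisticated, so this is only a conceptual baseline.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, r):
    """Classic shift-and-add super resolution: place each LR frame onto an
    r-times-finer grid at its known sub-pixel shift, then average overlaps."""
    h, w = frames[0].shape
    acc = np.zeros((h * r, w * r))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::r, dx::r] += frame   # dy, dx are integer offsets on the HR grid (0..r-1)
        cnt[dy::r, dx::r] += 1
    cnt[cnt == 0] = 1                # avoid division by zero where no sample landed
    return acc / cnt
```

With a full set of r² distinct shifts the fine grid is completely sampled; with fewer frames, the unfilled sites are what practical SR algorithms must interpolate or regularize.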


2006 ◽  
Vol 2 (14) ◽  
pp. 169-194
Author(s):  
Ana I. Gómez de Castro ◽  
Martin A. Barstow

Abstract
The scientific program is presented, as well as the abstracts of the contributions. An extended account is published in "The Ultraviolet Universe: stars from birth to death" (Ed. Gómez de Castro), published by the Editorial Complutense de Madrid (UCM), which can be accessed in electronic format through the website of the Network for UV Astronomy (www.ucm.es/info/nuva).
There are five telescopes currently in orbit that have a UV capability of some description. At the moment, only FUSE provides any medium- to high-resolution spectroscopic capability. GALEX, the XMM Optical Monitor (OM), and the Swift UV-Optical Telescope (UVOT) mainly deliver broad-band imaging, but with some low-resolution spectroscopy using grisms. The primary UV spectroscopic capability of HST was lost when the Space Telescope Imaging Spectrograph (STIS) failed in 2004, but UV imaging is still available with the HST-WFPC2 and HST-ACS instruments.
With the expected limited lifetime of FUSE, UV spectroscopy will be effectively unavailable in the short-term future. Even if a servicing mission of HST does go ahead, to install COS and repair STIS, the availability of high-resolution spectroscopy well into the next decade will not have been addressed. It is therefore important to develop new missions to complement and follow on from the legacy of FUSE and HST, as well as the smaller imaging/low-resolution spectroscopy facilities. This contribution presents an outline of the UV projects, some of which are already approved for flight, while others are still at the proposal/study stage of their development.
This contribution also outlines the main results from Joint Discussion 04, held during the IAU General Assembly in Prague, August 2006, concerning the rationale behind the needs of the astronomical community, in particular the stellar astrophysics community, for new UV instrumentation. Recent results from UV observations were presented, and future science goals were laid out. These goals will lay the framework for future mission planning.


Author(s):  
Stephanie C. Herring ◽  
Nikolaos Christidis ◽  
Andrew Hoell ◽  
James P. Kossin ◽  
Carl J. Schreck ◽  
...  

Editor's note: For easy download, the posted PDF of Explaining Extreme Events of 2016 is a very low-resolution file. A high-resolution copy of the report is available by clicking here. Please be patient, as it may take a few minutes for the high-resolution file to download.

