Improved Multiview Decomposition for Single-Image High-Resolution 3D Object Reconstruction

2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Jiansheng Peng ◽  
Kui Fu ◽  
Qingjin Wei ◽  
Yong Qin ◽  
Qiwen He

As a representative technology of artificial intelligence, deep-learning-based 3D reconstruction can be integrated into the edge computing framework to form an intelligent edge and thereby realize intelligent processing at the edge. Recently, the multiview decomposition (MVD) architecture has emerged as a fast method for reconstructing high-resolution 3D objects with realistic details from a single RGB image. The quality of high-resolution 3D reconstruction depends on two aspects: a low-resolution reconstruction network that produces a good 3D object from a single RGB image, and a high-resolution reconstruction network that upscales the low-resolution 3D object while preserving fine detail. To improve both aspects and further enhance the high-resolution reconstruction capability of the 3D object generation network, we study and improve the low-resolution 3D generation network and the depth map superresolution network, obtaining an improved multiview decomposition (IMVD) network. First, we use a 2D image encoder with multifeature fusion (MFF) to enhance the feature extraction capability of the model. Second, a 3D decoder using an efficient subpixel convolutional neural network (3D ESPCN) improves the decoding speed in the decoding stage. Moreover, we design a multiresidual dense block (MRDB) to optimize the depth map superresolution network, which allows the model to capture more object details and reduces the model parameters by approximately 25% when the number of network layers is doubled. The experimental results show that the proposed IMVD outperforms the original MVD in both the 3D object superresolution experiment and the high-resolution 3D reconstruction experiment from a single image.
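The core of the 3D ESPCN decoder is subpixel convolution: the network predicts extra channels and a cheap rearrangement turns them into extra spatial resolution, instead of using a slow transposed convolution. A minimal NumPy sketch of the 3D rearrangement step (a "voxel shuffle"; the function name and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def voxel_shuffle(x, r):
    """Rearrange a (C*r^3, D, H, W) feature volume into (C, D*r, H*r, W*r).

    3D analogue of ESPCN's pixel shuffle: the channel dimension supplies
    the extra spatial resolution, so upsampling by factor r becomes a
    reshape/transpose instead of a transposed convolution.
    """
    c_r3, d, h, w = x.shape
    c = c_r3 // (r ** 3)
    x = x.reshape(c, r, r, r, d, h, w)
    # interleave each upscaling factor with its spatial axis:
    # (c, d, r_d, h, r_h, w, r_w)
    x = x.transpose(0, 4, 1, 5, 2, 6, 3)
    return x.reshape(c, d * r, h * r, w * r)
```

In the decoder, a standard 3D convolution would first map features to `C*r^3` channels, and this reshuffle then yields the upsampled volume at negligible cost.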

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 57539-57549 ◽  
Author(s):  
Yang Zhang ◽  
Zhen Liu ◽  
Tianpeng Liu ◽  
Bo Peng ◽  
Xiang Li

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 110-121
Author(s):  
Ahmed J. Afifi ◽  
Jannes Magnusson ◽  
Toufique A. Soomro ◽  
Olaf Hellwich

2019 ◽  
Vol 9 (20) ◽  
pp. 4444
Author(s):  
Byunghyun Kim ◽  
Soojin Cho

In most hyperspectral super-resolution (HSR) methods, which are techniques used to improve the resolution of hyperspectral images (HSIs), the HSI and the target RGB image are assumed to have identical fields of view. However, because identical fields of view are difficult to achieve in practical applications, in this paper we propose an HSR method that is applicable when an HSI and a target RGB image have different spatial information. The proposed HSR method first creates a low-resolution RGB image from a given HSI. Next, histogram matching is performed between the high-resolution RGB image and the low-resolution RGB image obtained from the HSI. Finally, the proposed method optimizes the endmember abundances of the high-resolution HSI against the histogram-matched high-resolution RGB image. The entire procedure is evaluated on an open HSI dataset, the Harvard dataset, with spatial mismatch added to the data. The spatial mismatch is implemented by a shear transformation and by cutting off the upper and left sides of the target RGB image. The proposed method achieved a lower error rate across the entire dataset, confirming its capability for super-resolution using images that have different fields of view.
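The histogram-matching step in this pipeline is the classic CDF-based remapping: each source intensity is replaced by the reference intensity occupying the same quantile. A minimal NumPy sketch (function name and single-channel assumption are illustrative; the paper applies this per RGB channel):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` intensities so their distribution matches `reference`.

    CDF-based histogram matching: look up each source quantile in the
    reference's empirical CDF. In the paper's pipeline this aligns the
    high-resolution RGB image with the low-resolution RGB image rendered
    from the HSI before abundance optimization.
    """
    shape = source.shape
    src = source.ravel()
    ref = reference.ravel()
    # unique source values, per-pixel indices into them, and their counts
    s_vals, s_idx, s_counts = np.unique(src, return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(ref, return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size   # empirical CDF of source
    r_cdf = np.cumsum(r_counts) / ref.size   # empirical CDF of reference
    # for each source quantile, find the reference value at that quantile
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(shape)
```

Because only quantiles are compared, the two images need not share a field of view, which is exactly why the method tolerates the spatial mismatch introduced in the evaluation.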


2021 ◽  
Vol 33 (12) ◽  
pp. 1887-1898
Author(s):  
Weichao Shen ◽  
Tianshuo Ma ◽  
Yuwei Wu ◽  
Yunde Jia

Author(s):  
Guoliang Wu ◽  
Yanjie Wang ◽  
Shi Li

Existing depth map super-resolution (SR) methods cannot achieve satisfactory results in depth map detail restoration. For example, boundaries of the depth map are difficult to reconstruct effectively from the low-resolution (LR) depth map, particularly at large magnification factors. In this paper, we present a novel super-resolution method for a single depth map by introducing a deep feedback network (DFN), which enhances the feature representations at depth boundaries through iterative up-sampling and down-sampling operations, building a deep feedback mechanism that projects high-resolution (HR) representations into the low-resolution spatial domain and then back-projects them into the high-resolution spatial domain. The deep feedback (DF) block imitates the process of image degradation and reconstruction iteratively. The rich intermediate high-resolution features effectively tackle the problem of depth boundary ambiguity in depth map super-resolution. Extensive experimental results on benchmark datasets show that our proposed DFN outperforms the state-of-the-art methods.
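The project/back-project loop the abstract describes follows the iterative back-projection idea: estimate an HR map, degrade it back to LR, and feed the LR residual upward again. A minimal NumPy sketch in which nearest-neighbour upsampling and block averaging stand in for the learned up- and down-projection layers (all function names are illustrative, not the DFN architecture):

```python
import numpy as np

def upsample(x, r):
    # nearest-neighbour upsampling stands in for the learned up-projection
    return x.repeat(r, axis=0).repeat(r, axis=1)

def downsample(x, r):
    # block averaging stands in for the learned down-projection (degradation)
    h, w = x.shape
    return x.reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def feedback_refine(lr, r, n_iter=3):
    """Iteratively refine an HR depth estimate with back-projected LR residuals."""
    hr = upsample(lr, r)                       # initial HR estimate
    for _ in range(n_iter):
        residual = lr - downsample(hr, r)      # degradation error in LR space
        hr = hr + upsample(residual, r)        # project the error back to HR
    return hr
```

In the DFN the two projections are learned convolutions and the intermediate HR features are reused across iterations; the fixed point of this loop is an HR map that is consistent with the LR input under the degradation model.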


Author(s):  
Andrey Salvi ◽  
Nathan Gavenski ◽  
Eduardo Pooch ◽  
Felipe Tasoniero ◽  
Rodrigo Barros
