Double-Constraint Inpainting Model of a Single-Depth Image

Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1797
Author(s):  
Wu Jin ◽  
Li Zun ◽  
Liu Yong

In real applications, obtained depth images are incomplete; therefore, depth image inpainting is studied here. A novel model characterised by both a low-rank structure and nonlocal self-similarity is proposed. As a double constraint, the low-rank structure and nonlocal self-similarity can fully exploit the features of single-depth images to complete the inpainting task. First, according to the characteristics of the pixel values, we divide the image into blocks, group similar blocks, and stack each group into a three-dimensional array. Then, the variable splitting technique is applied to divide the inpainting problem into two sub-problems: the low-rank constraint and the nonlocal self-similarity constraint. Finally, different strategies are used to solve the different sub-problems, resulting in greater reliability. Experiments show that the proposed algorithm attains state-of-the-art performance.
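The low-rank sub-problem that variable splitting produces is commonly solved in closed form by singular value thresholding on each group of similar blocks. A minimal NumPy sketch of that standard step (not the paper's exact algorithm; the block-group matrix below is made up for illustration):

```python
import numpy as np

def svt(block_group, tau):
    """Singular value thresholding: the standard closed-form solver for the
    low-rank sub-problem min_X 0.5*||X - Y||_F^2 + tau*||X||_*.
    Illustrative sketch only; the paper's exact splitting is not reproduced."""
    U, s, Vt = np.linalg.svd(block_group, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold the singular values
    return (U * s_shrunk) @ Vt

# A group of similar patches stacked as columns forms an approximately
# low-rank matrix; SVT suppresses the small (noise) singular values.
rng = np.random.default_rng(0)
Y = np.outer(np.arange(4.0), np.ones(5)) + 0.01 * rng.normal(size=(4, 5))
X = svt(Y, tau=0.5)
```

Because the clean part of `Y` is rank one and the noise singular values are far below `tau`, the thresholded estimate `X` is exactly rank one.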

2018 ◽  
Vol 15 (4) ◽  
pp. 172988141878774 ◽  
Author(s):  
Shahram Mohammadi ◽  
Omid Gervei

To use low-cost depth sensors such as Kinect for three-dimensional face recognition with an acceptable recognition rate, the challenges of filling in non-measured pixels and smoothing noisy data need to be addressed. The main goal of this article is to present solutions to these challenges, as well as to offer feature extraction methods that reach the highest level of accuracy in the presence of different facial expressions and occlusions. To evaluate the method, a domestic database was created. First, the noisy pixels of the depth image, called holes, are removed by solving multiple linear equations resulting from the values of the pixels surrounding the holes. Then, bilateral and block-matching 3-D filtering approaches, as representatives of local and nonlocal filtering, are used for depth image smoothing. The curvelet transform, a well-known nonlocal feature extraction technique, is applied to both RGB and depth images. Two unsupervised dimension reduction techniques, namely principal component analysis and independent component analysis, are used to reduce the dimension of the extracted features. Finally, a support vector machine is used for classification. Experimental results show a recognition rate of 90% for depth images alone and 100% when combining the RGB and depth data of a Kinect sensor, which is much higher than other recently proposed algorithms.
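The hole-filling step, solving linear equations built from the surrounding pixel values, can be sketched as Jacobi iterations on the discrete Laplace equation, a common variant of that idea (not necessarily the authors' exact solver); the depth map and hole below are made up:

```python
import numpy as np

def fill_holes(depth, hole_mask, iters=500):
    """Fill masked pixels by repeatedly averaging their four neighbors,
    i.e. Jacobi iterations on the discrete Laplace equation. A sketch of
    the 'linear equations from surrounding pixels' idea, not the paper's
    exact solver."""
    d = depth.astype(float).copy()
    d[hole_mask] = d[~hole_mask].mean()   # crude initial guess for the holes
    for _ in range(iters):
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0)
                      + np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d[hole_mask] = avg[hole_mask]     # update only the hole pixels
    return d

depth = np.full((8, 8), 100.0)            # flat synthetic depth map
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True                     # a 2x2 hole of invalid pixels
depth[mask] = 0.0
filled = fill_holes(depth, mask)
```

On this flat example the hole converges to the surrounding depth value; on real data the iteration produces a smooth (harmonic) fill between the hole boundaries.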


2019 ◽  
Vol 9 (6) ◽  
pp. 1103 ◽  
Author(s):  
Zun Li ◽  
Jin Wu

Due to the rapid development of RGB-D sensors, increasing attention is being paid to depth image applications. Depth images play an important role in computer vision research. In this paper, we address the problem of inpainting for single depth images without corresponding color images as a guide. Within the framework of model-based optimization methods for depth image inpainting, the split Bregman iteration algorithm was used to transform depth image inpainting into the corresponding denoising subproblem. Then, we trained a set of efficient convolutional neural network (CNN) denoisers to solve this subproblem. Experimental results demonstrate the effectiveness of the proposed algorithm in comparison with three traditional methods in terms of visual quality and objective metrics.
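The transformation of inpainting into a denoising sub-problem with a plugged-in denoiser can be illustrated with a minimal plug-and-play style sketch; a simple box blur stands in for the trained CNN denoisers, and the signal and mask below are made up (this is the general idea, not split Bregman verbatim):

```python
import numpy as np

def denoiser(z):
    """Stand-in for the trained CNN denoiser: a 3-tap box blur.
    The paper plugs learned CNN denoisers in here instead."""
    k = np.array([0.25, 0.5, 0.25])
    return np.convolve(z, k, mode="same")

def pnp_inpaint(y, known, n_iter=50):
    """Plug-and-play style iteration: apply the denoiser to the current
    estimate (the denoising sub-problem), then re-impose the observed
    pixels (the data-fidelity step)."""
    x = np.where(known, y, y[known].mean())
    for _ in range(n_iter):
        x = denoiser(x)          # denoising sub-problem
        x[known] = y[known]      # keep the known pixels fixed
    return x

y = np.sin(np.linspace(0, np.pi, 32))     # synthetic smooth depth profile
known = np.ones(32, dtype=bool)
known[10:14] = False                      # four missing samples
x_hat = pnp_inpaint(np.where(known, y, 0.0), known)
```

For this smooth signal the fixed point of the iteration interpolates the hole between its boundary values, so the recovered samples land close to the true ones.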


2019 ◽  
Vol 11 (6) ◽  
pp. 168781401985284
Author(s):  
Meiliang Wang ◽  
Mingjun Wang ◽  
Xiaobo Li

Traditional fabric simulation models evidently cannot accurately reflect the material properties of real fabric: the simulation results look artificial, and solving the simulation equations is inefficient. To address these problems, a novel model is needed that captures the essential properties of flexible fabric, enhances the simulation effect, and improves the efficiency of solving the simulation equations. The results of this improvement study offer a meaningful and practical understanding within the fields of garment design automation, three-dimensional animation, virtual fitting, and others.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1299
Author(s):  
Honglin Yuan ◽  
Tim Hoogenkamp ◽  
Remco C. Veltkamp

Deep learning has achieved great success on robotic vision tasks. However, when compared with other vision-based tasks, it is difficult to collect a representative and sufficiently large training set for six-dimensional (6D) object pose estimation, due to the inherent difficulty of data collection. In this paper, we propose the RobotP dataset consisting of commonly used objects for benchmarking in 6D object pose estimation. To create the dataset, we apply a 3D reconstruction pipeline to produce high-quality depth images, ground truth poses, and 3D models for well-selected objects. Subsequently, based on the generated data, we produce object segmentation masks and two-dimensional (2D) bounding boxes automatically. To further enrich the data, we synthesize a large number of photo-realistic color-and-depth image pairs with ground truth 6D poses. Our dataset is freely distributed to research groups by the Shape Retrieval Challenge benchmark on 6D pose estimation. Based on our benchmark, different learning-based approaches are trained and tested by the unified dataset. The evaluation results indicate that there is considerable room for improvement in 6D object pose estimation, particularly for objects with dark colors, and photo-realistic images are helpful in increasing the performance of pose estimation algorithms.


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1356
Author(s):  
Linda Christin Büker ◽  
Finnja Zuber ◽  
Andreas Hein ◽  
Sebastian Fudickar

While approaches for detecting joint positions in color images, such as HRNet and OpenPose, are readily available, corresponding approaches for depth images have received limited consideration, even though depth images have several advantages over color images, such as robustness to light variation and invariance to color and texture. Correspondingly, we introduce High-Resolution Depth Net (HRDepthNet)—a machine learning driven approach to detect human joints (body, head, and upper and lower extremities) in depth images alone. HRDepthNet retrains the original HRNet for depth images. For this purpose, a dataset was created holding depth (and RGB) images recorded of subjects conducting the timed up and go test—an established geriatric assessment. The joint positions were manually annotated on the RGB images. Training and evaluation were conducted with this dataset. For accuracy evaluation, the detection of body joints was evaluated via COCO's evaluation metrics, indicating that the resulting depth image-based model achieved better results than HRNet trained and applied on the corresponding RGB images. An additional evaluation of the position errors showed a median deviation of 1.619 cm (x-axis), 2.342 cm (y-axis) and 2.4 cm (z-axis).
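The reported per-axis median deviations correspond to a simple metric that can be sketched as follows; the joint coordinates below are made up for illustration, not taken from the dataset:

```python
import numpy as np

def median_axis_errors(pred, gt):
    """Per-axis median absolute joint-position error, the style of metric
    reported for HRDepthNet. Arrays are (n_joints, 3) in cm; the values
    used below are illustrative only."""
    return np.median(np.abs(pred - gt), axis=0)

gt = np.array([[10.0, 20.0, 30.0],
               [15.0, 25.0, 35.0],
               [12.0, 22.0, 32.0]])
pred = gt + np.array([[ 1.0, -2.0, 0.5],
                      [-1.5,  2.5, -2.0],
                      [ 2.0, -2.0, 3.0]])
errs = median_axis_errors(pred, gt)   # one median error per axis (x, y, z)
```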


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2815
Author(s):  
Shih-Hung Yang ◽  
Yao-Mao Cheng ◽  
Jyun-We Huang ◽  
Yon-Ping Chen

Automatic fingerspelling recognition tackles the communication barrier between deaf and hearing individuals. However, the accuracy of fingerspelling recognition is reduced by high intra-class variability and low inter-class variability. In the existing methods, regular convolutional kernels, which have limited receptive fields (RFs) and often cannot detect subtle discriminative details, are applied to learn features. In this study, we propose a receptive field-aware network with finger attention (RFaNet) that highlights the finger regions and builds inter-finger relations. To highlight the discriminative details of these fingers, RFaNet reweights the low-level features of the hand depth image with those of the non-forearm image and improves finger localization, even when the wrist is occluded. RFaNet captures neighboring and inter-region dependencies between fingers in high-level features. An atrous convolution procedure enlarges the RFs at multiple scales and a non-local operation computes the interactions between multi-scale feature maps, thereby facilitating the building of inter-finger relations. Thus, the representation of a sign is invariant to viewpoint changes, which are primarily responsible for intra-class variability. On an American Sign Language fingerspelling dataset, RFaNet achieved 1.77% higher classification accuracy than state-of-the-art methods. RFaNet achieved effective transfer learning when the number of labeled depth images was insufficient. The fingerspelling representation of a depth image can be effectively transferred from large- to small-scale datasets via highlighting the finger regions and building inter-finger relations, thereby reducing the requirement for expensive fingerspelling annotations.
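The receptive-field enlargement that atrous (dilated) convolution provides follows the standard formula for a stack of stride-1 convolutions, where each layer adds (k − 1)·d pixels to the receptive field. A small sketch (the layer configuration is chosen for illustration, not RFaNet's actual architecture):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions: each layer
    with kernel size k and dilation d adds (k - 1) * d pixels. Dilation
    enlarges the receptive field at no extra parameter cost -- the effect
    the atrous convolutions in RFaNet exploit."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

plain = receptive_field([3, 3, 3], [1, 1, 1])    # three ordinary 3x3 convs
atrous = receptive_field([3, 3, 3], [1, 2, 4])   # same parameter count, dilated
```

With the same three 3×3 kernels, the dilated stack more than doubles the receptive field (15 pixels versus 7), which is why atrous convolution is a cheap way to capture wider context.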


2019 ◽  
Vol 2 (4) ◽  
pp. 370-381
Author(s):  
Zahra Hesari ◽  
Fatemeh Mottaghitalab ◽  
Akram Shafiee ◽  
Masoud Soleymani ◽  
Rasoul Dinarvand ◽  
...  

Neural differentiation of stem cells is an important issue in the development of the central nervous system. Different methods, such as chemical stimulation with small molecules, scaffolds, and microRNA, can be used to induce the differentiation of neural stem cells. However, microfluidic systems with the potential to induce neuronal differentiation have established their reputation in the field of regenerative medicine. Among two- and three-dimensional cell culture systems, the microfluidic system represents a novel model that mimics the physiologic microenvironment of cells. Microfluidic systems have a patterned and well-organized structure that can be combined with other differentiation techniques to provide optimal conditions for the neuronal differentiation of stem cells. In this review, different methods for the effective differentiation of stem cells into neuronal cells are summarized. The efficacy of microfluidic systems in promoting neuronal differentiation is also addressed.

