A novel approach for identification of pills based on the method of Depth from Focus

2019 · Vol 69 (06) · pp. 466-471
Author(s): Yu Ling Jie, Wang Rong Wu, Zhou Jin Feng

For automatic pilling evaluation of textiles, depth information is one of the most critical and effective features for extracting pills from fabric images. Laser-scanning techniques are often used to acquire 3D depth images; however, the high cost and low efficiency of laser-scanning systems make them unsuitable for fabric analysis. This paper illustrates a new approach to acquiring the depth image used to extract pills by introducing the method of Depth From Focus (DFF). The approach first captures a sequence of images of the same view at different focal positions under an automatic optical microscope. The best-focused position z of each pixel (x, y) is then determined by choosing the image layer with the maximum sharpness, and these per-pixel positions form the depth image. A new sharpness-evaluation criterion based on the variance of gradients is proposed. Afterwards, a few basic points indicating the background area are selected from the depth image, and the depth coordinates (x, y, z) at these points are used to calculate a predicted background plane. Pills rising above this background plane are then extracted. A fabric sample with a single fiber upon it is presented to illustrate the process and results of the approach.
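
The pipeline described above lends itself to a compact implementation. The following Python sketch, with assumed names (`focal_stack`, `window`, `threshold`) and SciPy standing in for the authors' unspecified tooling, illustrates the variance-of-gradients focus measure, the per-pixel arg-max over the focal stack, the least-squares background plane, and the thresholded pill extraction:

```python
import numpy as np
from scipy import ndimage

def sharpness_map(image, window=9):
    """Focus measure: local variance of the gradient magnitude."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    grad = np.hypot(gx, gy)
    mean = ndimage.uniform_filter(grad, window)
    mean_sq = ndimage.uniform_filter(grad ** 2, window)
    return mean_sq - mean ** 2

def depth_from_focus(focal_stack):
    """focal_stack: (n_layers, H, W) images at increasing focal positions.
    Returns, per pixel, the index of the sharpest layer."""
    scores = np.stack([sharpness_map(layer) for layer in focal_stack])
    return np.argmax(scores, axis=0)

def fit_background_plane(depth, points):
    """Least-squares plane z = a*x + b*y + c through chosen (x, y) points."""
    pts = np.asarray(points)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    z = depth[pts[:, 1], pts[:, 0]]
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def extract_pills(depth, coeffs, threshold=2):
    """Mask of pixels rising more than `threshold` layers above the plane."""
    a, b, c = coeffs
    ys, xs = np.indices(depth.shape)
    return (depth - (a * xs + b * ys + c)) > threshold
```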

Sensors · 2021 · Vol 21 (17) · pp. 5725
Author(s): Mingliang Zhou, Wen Cheng, Hongwei Huang, Jiayao Chen

The detection of concrete spalling is critical for tunnel inspectors to assess structural risks and guarantee the daily operation of railway tunnels. However, traditional spalling detection methods mostly rely on visual inspection or manually captured camera images, which are inefficient and unreliable. In this study, an integrated approach based on laser intensity and depth features is proposed for the automated detection and quantification of concrete spalling. The Railway Tunnel Spalling Defects (RTSD) database, containing intensity images and depth images of the tunnel linings, is established via mobile laser scanning (MLS), and the Spalling Intensity Depurator Network (SIDNet) model is proposed for automatic extraction of concrete spalling features. The proposed model is trained, validated and tested on the established RTSD dataset with impressive results. Comparison with several other spalling detection models shows that the proposed model performs better in terms of various indicators such as mean pixel accuracy (MPA, 0.985) and mean intersection over union (MIoU, 0.925). The extra depth information obtained from MLS allows accurate evaluation of the volume of detected spalling defects, which is beyond the reach of traditional methods. In addition, a triangulation mesh method is implemented to reconstruct the 3D tunnel lining model and visualize the 3D inspection results. As a result, a 3D inspection report containing quantified spalling defect information along with the relevant spatial coordinates can be generated automatically. The proposed approach has been applied to several railway tunnels in Yunnan province, China, and the experimental results have demonstrated its validity and feasibility.
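
As an illustration of the volume quantification step, the hedged Python sketch below estimates spalling volume from a depth image and a binary defect mask by fitting a reference lining surface to the intact pixels. The planar fit, the sign convention, and names such as `pixel_mm` are assumptions for illustration; the paper's exact procedure is not given in the abstract:

```python
import numpy as np

def spalling_volume(depth_mm, mask, pixel_mm=1.0):
    """Approximate spalling volume by integrating the depth deviation from a
    fitted reference lining surface over every masked (spalled) pixel.

    depth_mm : (H, W) range image of the lining, in millimetres
    mask     : (H, W) boolean map of detected spalling (e.g. a SIDNet output)
    pixel_mm : ground sampling distance of one pixel, in millimetres
    """
    ys, xs = np.nonzero(~mask)                    # intact lining pixels
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, depth_mm[ys, xs], rcond=None)

    yy, xx = np.indices(depth_mm.shape)
    surface = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]

    # Assumes larger range = material missing; flip the sign if the
    # sensor convention is the opposite.
    deviation = np.clip(depth_mm - surface, 0, None)
    return deviation[mask].sum() * pixel_mm ** 2  # volume in mm^3
```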


2019 · Vol 11 · pp. 175682931882232
Author(s): Navid Dorudian, Stanislao Lauria, Stephen Swift

A novel approach to detecting micro air vehicles in GPS-denied environments using an external RGB-D sensor is presented. A nonparametric background subtraction technique incorporating several innovative mechanisms enables the detection of fast-moving micro air vehicles by combining colour and depth information. The proposed method stores several colour and depth images as models and then compares each pixel of a new frame with the stored models to classify the pixel as background or foreground. To adapt to scene changes, once a pixel is classified as background, the system updates the model by finding the stored sample closest to the camera and substituting it with the current pixel. This background model update uses different criteria from existing methods. Additionally, a blind update model is added to adapt to sudden background changes. The proposed architecture is compared with existing techniques using two different micro air vehicles and publicly available datasets. Results showing improvements over existing methods are discussed.
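
A minimal sketch of such a sample-based colour-plus-depth background model is given below. The thresholds, sample count, and blind-update probability are illustrative placeholders rather than the paper's values; only the closest-to-camera substitution and the blind update follow the description above:

```python
import numpy as np

N_MODELS = 20                       # stored background samples per pixel
COLOR_TH, DEPTH_TH = 30.0, 0.05     # match thresholds (illustrative)
MIN_MATCHES = 2                     # samples that must agree for "background"

def classify_and_update(color, depth, model_c, model_d, rng):
    """color: (H, W, 3) uint8, depth: (H, W) metres.
    model_c: (N, H, W, 3), model_d: (N, H, W) stored background samples.
    Returns a boolean foreground mask; updates the models in place."""
    c_dist = np.linalg.norm(model_c.astype(float) - color.astype(float), axis=-1)
    d_dist = np.abs(model_d - depth)
    matches = (c_dist < COLOR_TH) & (d_dist < DEPTH_TH)
    background = matches.sum(axis=0) >= MIN_MATCHES

    # Conservative update: at background pixels, replace the stored sample
    # closest to the camera (smallest depth) with the current observation.
    nearest = np.argmin(model_d, axis=0)
    ys, xs = np.nonzero(background)
    model_c[nearest[ys, xs], ys, xs] = color[ys, xs]
    model_d[nearest[ys, xs], ys, xs] = depth[ys, xs]

    # Blind update: occasionally refresh a random sample everywhere so the
    # model can absorb sudden scene changes.
    if rng.random() < 0.05:
        k = rng.integers(N_MODELS)
        model_c[k], model_d[k] = color.copy(), depth.copy()
    return ~background
```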


Author(s): Yan Wu, Jiqian Li, Jing Bai

RGB-D-based object recognition has been enthusiastically investigated in the past few years. RGB and depth images provide useful and complementary information, and fusing RGB and depth features can significantly increase the accuracy of object recognition. However, previous works simply take the depth image as a fourth channel of the RGB image and concatenate the RGB and depth features, ignoring the fact that RGB and depth information have different discriminative power for different objects. In this paper, a new method containing three different classifiers is proposed to fuse features extracted from the RGB image and the depth image for RGB-D-based object recognition. First, an RGB classifier and a depth classifier are trained by cross-validation to obtain the accuracy difference between RGB and depth features for each object class. Then a variant RGB-D classifier is trained with different initialization parameters for each class according to this accuracy difference. The variant RGB-D classifier yields more robust classification performance. The proposed method is evaluated on two benchmark RGB-D datasets and achieves performance comparable with the state-of-the-art method.
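
The per-class fusion idea can be sketched as follows. This hedged example uses scikit-learn logistic regression as a stand-in classifier and turns the cross-validated per-class accuracy difference into per-class fusion weights, a simplification of the paper's per-class initialization of a third classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def per_class_accuracy(y_true, y_pred, n_classes):
    """Fraction of correctly classified samples for each class."""
    return np.array([np.mean(y_pred[y_true == c] == c) for c in range(n_classes)])

def train_fused(rgb_X, depth_X, y, n_classes):
    rgb_clf = LogisticRegression(max_iter=1000)
    depth_clf = LogisticRegression(max_iter=1000)

    # Cross-validated predictions reveal which modality is stronger per class.
    rgb_cv = cross_val_predict(rgb_clf, rgb_X, y, cv=5)
    depth_cv = cross_val_predict(depth_clf, depth_X, y, cv=5)
    acc_rgb = per_class_accuracy(y, rgb_cv, n_classes)
    acc_depth = per_class_accuracy(y, depth_cv, n_classes)
    w_rgb = acc_rgb / (acc_rgb + acc_depth)   # per-class modality weight

    rgb_clf.fit(rgb_X, y)
    depth_clf.fit(depth_X, y)

    def predict(rgb_x, depth_x):
        p = (w_rgb * rgb_clf.predict_proba(rgb_x)
             + (1 - w_rgb) * depth_clf.predict_proba(depth_x))
        return np.argmax(p, axis=1)
    return predict
```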


Author(s): M. Bleier, A. Nüchter

In-situ calibration of structured light scanners in underwater environments is time-consuming and complicated. This paper presents a self-calibrating line laser scanning system, which enables the creation of dense 3D models with a single fixed camera and a freely moving, hand-held cross line laser projector. The proposed approach exploits geometric constraints, such as coplanarities, to recover the depth information and is applicable without any prior knowledge of the position and orientation of the laser projector. By employing an off-the-shelf underwater camera and a waterproof housing with high-power line lasers, an affordable 3D scanning solution can be built. In experiments, the performance of the proposed technique is studied and compared with 3D reconstruction using explicit calibration. We demonstrate that the scanning system can be applied to above-water as well as underwater scenes.
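
The triangulation at the heart of any line-laser scanner is the intersection of a camera viewing ray with the laser plane. The sketch below shows this step only; recovering the plane itself from coplanarity constraints, which is the paper's actual contribution, is not reproduced. Intrinsics and plane values are illustrative:

```python
import numpy as np

def pixel_ray(u, v, K):
    """Unit viewing ray through pixel (u, v) for intrinsics matrix K."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

def triangulate(u, v, K, plane):
    """Intersect the camera ray with the laser plane n . x = d.
    plane = (n, d) with unit normal n; camera centre at the origin."""
    n, d = plane
    ray = pixel_ray(u, v, K)
    t = d / (n @ ray)      # distance along the ray to the plane
    return t * ray         # 3D point in camera coordinates

# Example with made-up numbers: a laser plane roughly one metre in front
# of the camera, slightly tilted about the vertical axis.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
plane = (np.array([0.1, 0.0, 0.995]), 1.0)
print(triangulate(350, 250, K, plane))
```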


2018 · Vol 50 (3) · pp. 310-322
Author(s): Xiping Wang, Ed Thomas, Feng Xu, Yunfei Liu, Brian K Brashaw, ...

Sensors · 2021 · Vol 21 (6) · pp. 1962
Author(s): Enrico Buratto, Adriano Simonetto, Gianluca Agresti, Henrik Schäfer, Pietro Zanuttigh

In this work, we propose a novel approach for correcting multi-path interference (MPI) in Time-of-Flight (ToF) cameras by estimating the direct and global components of the incoming light. MPI is an error source linked to multiple reflections of light inside a scene; each sensor pixel receives information coming from different light paths, which generally leads to an overestimation of the depth. We introduce a novel deep learning approach, which estimates the structure of the time-dependent scene impulse response and from it recovers a depth image with a reduced amount of MPI. The model consists of two main blocks: a predictive model that learns a compact encoded representation of the backscattering vector from the noisy input data, and a fixed backscattering model that translates the encoded representation into the high-dimensional light response. Experimental results on real data show the effectiveness of the proposed approach, which achieves state-of-the-art performance.
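
The two-block structure can be sketched as follows in PyTorch. The encoder architecture, input dimensionality, bin width, and the analytic form of the fixed decoder (a narrow direct peak plus a decaying global tail) are all assumptions for illustration, not the paper's actual design:

```python
import torch
import torch.nn as nn

TIME_BINS = 128            # discretisation of the impulse response (assumed)
C = 299_792_458.0          # speed of light, m/s
BIN_S = 1e-10              # temporal width of one bin, seconds (assumed)

class Encoder(nn.Module):
    """Predicts a compact code (direct arrival time/amplitude, global decay)
    per pixel from a few ToF measurements (here: 4 values, an assumption)."""
    def __init__(self, in_dim=4, code_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, code_dim), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

def backscattering(code):
    """Fixed decoder: compact code -> high-dimensional time response.
    The direct return is a narrow peak; global (MPI) light is a tail."""
    t0 = code[:, 0:1] * TIME_BINS          # direct-return arrival bin
    amp = code[:, 1:2]                     # direct amplitude
    decay = code[:, 2:3] * 0.2 + 1e-3      # global tail decay rate
    bins = torch.arange(TIME_BINS, dtype=torch.float32)
    direct = amp * torch.exp(-0.5 * ((bins - t0) / 0.5) ** 2)
    tail = torch.where(bins > t0, torch.exp(-decay * (bins - t0)),
                       torch.zeros(()))
    return direct + 0.3 * (1 - amp) * tail

def depth_from_code(code):
    """MPI-reduced depth comes from the direct peak alone: d = c * t / 2."""
    return code[:, 0] * TIME_BINS * BIN_S * C / 2
```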


Sensors · 2021 · Vol 21 (4) · pp. 1299
Author(s): Honglin Yuan, Tim Hoogenkamp, Remco C. Veltkamp

Deep learning has achieved great success on robotic vision tasks. However, compared with other vision-based tasks, it is difficult to collect a representative and sufficiently large training set for six-dimensional (6D) object pose estimation, due to the inherent difficulty of data collection. In this paper, we propose the RobotP dataset, consisting of commonly used objects, for benchmarking 6D object pose estimation. To create the dataset, we apply a 3D reconstruction pipeline to produce high-quality depth images, ground-truth poses, and 3D models for well-selected objects. Based on the generated data, we then produce object segmentation masks and two-dimensional (2D) bounding boxes automatically. To further enrich the data, we synthesize a large number of photo-realistic color-and-depth image pairs with ground-truth 6D poses. Our dataset is freely distributed to research groups through the Shape Retrieval Challenge benchmark on 6D pose estimation. Based on our benchmark, different learning-based approaches are trained and tested on the unified dataset. The evaluation results indicate that there is considerable room for improvement in 6D object pose estimation, particularly for objects with dark colors, and that photo-realistic images are helpful in increasing the performance of pose estimation algorithms.
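
For readers reproducing such an evaluation, a widely used 6D pose metric is ADD, the average distance between model points under the ground-truth and estimated poses; the abstract does not state which metrics the benchmark uses, so treating ADD as the metric is an assumption here:

```python
import numpy as np

def add_metric(model_pts, R_gt, t_gt, R_est, t_est):
    """Average Distance of model points (ADD): mean distance between model
    points transformed by the ground-truth and the estimated pose.
    model_pts: (N, 3) vertices of the object's 3D model."""
    gt = model_pts @ R_gt.T + t_gt
    est = model_pts @ R_est.T + t_est
    return np.linalg.norm(gt - est, axis=1).mean()

def is_correct(add, model_diameter, threshold=0.1):
    """A pose is commonly counted correct if ADD < 10% of the diameter."""
    return add < threshold * model_diameter
```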


Sensors · 2021 · Vol 21 (4) · pp. 1356
Author(s): Linda Christin Büker, Finnja Zuber, Andreas Hein, Sebastian Fudickar

While approaches for the detection of joint positions in color images, such as HRNet and OpenPose, are readily available, corresponding approaches for depth images have received limited consideration, even though depth images have several advantages over color images, such as robustness to lighting variation and invariance to color and texture. Correspondingly, we introduce High-Resolution Depth Net (HRDepthNet), a machine-learning-driven approach to detect human joints (body, head, and upper and lower extremities) purely in depth images. HRDepthNet retrains the original HRNet for depth images. To this end, a dataset was created holding depth (and RGB) images recorded of subjects conducting the timed up and go test, an established geriatric assessment. Joint positions were manually annotated in the RGB images, and training and evaluation were conducted with this dataset. For accuracy evaluation, detection of body joints was assessed via COCO's evaluation metrics, indicating that the resulting depth-image-based model achieved better results than HRNet trained on and applied to the corresponding RGB images. An additional evaluation of the position errors showed a median deviation of 1.619 cm (x-axis), 2.342 cm (y-axis) and 2.4 cm (z-axis).
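
The reported per-axis deviations can be reproduced from matched predictions and annotations with a few lines of NumPy. The sketch below assumes joint positions are already matched one-to-one and expressed in centimetres:

```python
import numpy as np

def median_axis_deviation(pred_cm, gt_cm):
    """pred_cm, gt_cm: (n_joints, 3) arrays of 3D joint positions in cm.
    Returns the median absolute deviation per axis (x, y, z)."""
    return np.median(np.abs(pred_cm - gt_cm), axis=0)

# Illustrative call on random data (17 joints, as in COCO):
rng = np.random.default_rng(0)
pred = rng.normal(size=(17, 3)) * 2
gt = np.zeros((17, 3))
print(median_axis_deviation(pred, gt))   # roughly 1.3 cm on each axis
```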


Coatings · 2021 · Vol 11 (7) · pp. 758
Author(s): Cibi Pranav, Minh-Tan Do, Yi-Chang Tsai

High Friction Surfaces (HFS) are applied to increase friction capacity on critical roadway sections, such as horizontal curves, and the deterioration of HFS friction on these sections is a safety concern. This study characterizes aggregate loss, one of the main failure mechanisms of HFS, using texture parameters, and studies its relationship with friction. Tests were conducted on selected HFS spots with different aggregate loss severity levels at the National Center for Asphalt Technology (NCAT) Test Track. Friction tests were performed using a Dynamic Friction Tester (DFT), and the surface texture was measured by means of a high-resolution 3D pavement scanning system (0.025 mm vertical resolution). Texture data were processed and analyzed with the MountainsMap software. The correlations between the DFT friction coefficient and the texture parameters confirm the impact of changes in aggregate characteristics (including height, shape, and material volume) on friction. A novel approach to detecting the HFS friction coefficient transition based on aggregate loss, inspired by previous work on the tribology of coatings, is proposed. Preliminary outcomes show that the proposed approach makes it possible to observe the rapid friction coefficient transition, similar to observations at NCAT. Perspectives for future research are presented and discussed.
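
The correlation analysis can be sketched as a Pearson correlation between each texture parameter and the DFT friction coefficient across the measured spots. The parameter names and all values below are made-up placeholders, not NCAT data:

```python
import numpy as np
from scipy import stats

def texture_friction_correlations(texture_params, friction):
    """texture_params: dict mapping parameter name -> (n_spots,) values
    (e.g. height, shape, and volume parameters exported from MountainsMap).
    friction: (n_spots,) DFT friction coefficients at the same spots.
    Returns Pearson r and p-value per parameter."""
    return {name: stats.pearsonr(values, friction)
            for name, values in texture_params.items()}

# Illustrative call with invented numbers:
params = {"Sq": np.array([0.8, 0.7, 0.5, 0.3]),   # RMS height, made up
          "Vm": np.array([0.9, 0.8, 0.6, 0.4])}   # material volume, made up
dft = np.array([0.85, 0.80, 0.62, 0.45])
print(texture_friction_correlations(params, dft))
```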

