Automated Measurement of Heart Girth for Pigs Using Two Kinect Depth Sensors

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3848
Author(s):  
Xinyue Zhang ◽  
Gang Liu ◽  
Ling Jing ◽  
Siyao Chen

The heart girth parameter is an important indicator reflecting the growth and development of pigs and provides critical guidance for the optimization of healthy pig breeding. To overcome the heavy workloads and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth image acquisition platform is established for data collection; after preprocessing, the two-view point clouds are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, this point is taken as the starting point to intercept, from the pig point cloud, the circumference perpendicular to the ground, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest path method. Using the proposed method, experiments were conducted on two-view data from 26 live pigs. The results showed that the absolute errors of the heart girth measurements were all less than 4.19 cm and the average relative error was 2.14%, indicating the high accuracy and efficiency of this method.
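The final measurement along the extracted girth cross-section can be sketched in a few lines. This is a simplified stand-in that orders the ring of points by polar angle and sums segment lengths, not the authors' shortest-path implementation; all names are illustrative.

```python
import numpy as np

def girth_from_cross_section(points):
    """Estimate girth length from a cross-section point cloud (N x 2).

    Orders the points by polar angle around their centroid, then sums the
    segment lengths along the closed loop -- a simple stand-in for the
    shortest-path measurement (valid for roughly convex cross-sections).
    """
    centroid = points.mean(axis=0)
    angles = np.arctan2(points[:, 1] - centroid[1], points[:, 0] - centroid[0])
    ordered = points[np.argsort(angles)]
    # close the loop by appending the first point at the end
    diffs = np.diff(np.vstack([ordered, ordered[:1]]), axis=0)
    return float(np.linalg.norm(diffs, axis=1).sum())

# Sanity check on a densely sampled unit circle
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
```

On the unit circle the sum approaches 2π, the analytic circumference.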

Author(s):  
Ghazanfar Ali Shah ◽  
Jean-Philippe Pernot ◽  
Arnaud Polette ◽  
Franca Giannini ◽  
Marina Monti

This paper introduces a novel reverse engineering technique for the reconstruction of editable CAD models of mechanical part assemblies. The input is a point cloud of a mechanical part assembly that has been acquired as a whole, i.e., without disassembling it prior to its digitization. The proposed framework allows for the reconstruction of the parametric CAD assembly model through a multi-step reconstruction and fitting approach. It is modular and supports various exploitation scenarios depending on the available data and starting point. It also handles incomplete datasets. The reconstruction process starts from roughly sketched and parameterized geometries (i.e., 2D sketches, 3D parts or assemblies) that are then used as input to a simulated annealing-based fitting algorithm, which minimizes the deviation between the point cloud and the reconstructed geometries. The coherence of the CAD models is maintained by a CAD modeler that performs the updates and satisfies the geometric constraints as the fitting process goes on. The optimization process leverages a two-level filtering technique able to capture and manage the boundaries of the geometries inside the overall point cloud, in order to allow for local fitting and interface detection. It is a user-driven approach in which the user decides which steps to apply and in what sequence. It has been tested and validated on both real scanned point clouds and as-scanned virtually generated point clouds incorporating several artifacts that would appear with real acquisition devices.
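The simulated annealing-based fitting loop can be illustrated on a toy problem: fitting a parameterized circle to 2D points stands in for fitting sketched CAD geometries to the cloud. The cooling schedule and step size below are assumptions for illustration, not the paper's values.

```python
import math
import random

def fit_circle_sa(points, init, iters=5000, seed=0):
    """Fit circle parameters (cx, cy, r) to 2D points by simulated annealing.

    Toy stand-in for an SA-based fitting loop: propose a Gaussian perturbation
    of the parameters, always accept improvements, and accept worsenings with
    a probability that decays as the temperature cools.
    """
    rng = random.Random(seed)

    def cost(p):
        cx, cy, r = p
        return sum((math.hypot(x - cx, y - cy) - r) ** 2 for x, y in points)

    cur, cur_cost = list(init), cost(init)
    best, best_cost = cur[:], cur_cost
    for k in range(iters):
        temp = 1.0 * (1 - k / iters) + 1e-6          # linear cooling schedule
        cand = [v + rng.gauss(0, 0.1) for v in cur]  # Gaussian proposal step
        c = cost(cand)
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / temp):
            cur, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best
```

A real CAD-fitting loop would perturb sketch parameters and query the modeler for the updated geometry at each step; only the accept/reject skeleton carries over.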


2013 ◽  
Vol 760-762 ◽  
pp. 1556-1561
Author(s):  
Ting Wei Du ◽  
Bo Liu

Indoor scene understanding based on depth image data is a cutting-edge issue in the field of three-dimensional computer vision. Taking into account the layout characteristics of indoor scenes and the abundance of planar features in them, this paper presents a depth image segmentation method based on Gaussian Mixture Model clustering. First, the Kinect depth image data are transformed into a point cloud of discrete three-dimensional points, which is then denoised and down-sampled; second, the normals of all points in the point cloud are computed and clustered using a Gaussian Mixture Model; finally, the segmentation of the entire point cloud is carried out with the RANSAC algorithm. Experimental results show that the segmented regions have clear boundaries and above-average segmentation quality, laying a good foundation for object recognition.
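The RANSAC plane extraction used in the final step can be sketched as follows; this is a minimal single-plane version (sample three points, count inliers, keep the best), with iteration count and distance threshold chosen for illustration only.

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.02, seed=0):
    """Find the dominant plane (n . x + d = 0) in an N x 3 cloud via RANSAC.

    Returns a boolean inlier mask. Repeatedly samples 3 points, builds the
    plane through them, and keeps the plane supported by the most points.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

A full segmentation would run this repeatedly, removing each detected plane's inliers before searching for the next one.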


Author(s):  
Jinglu Wang ◽  
Bo Sun ◽  
Yan Lu

In this paper, we address the problem of reconstructing an object’s surface from a single image using generative networks. First, we represent a 3D surface with an aggregation of dense point clouds from multiple views. Each point cloud is embedded in a regular 2D grid aligned on the image plane of a viewpoint, making the point cloud convolution-favored and ordered so as to fit into deep network architectures. The point clouds can be easily triangulated by exploiting the connectivity of the 2D grids to form mesh-based surfaces. Second, we propose an encoder-decoder network that generates such multiple view-dependent point clouds from a single image by regressing their 3D coordinates and visibilities. We also introduce a novel geometric loss that is able to interpret discrepancy over 3D surfaces as opposed to 2D projective planes, resorting to the surface discretization on the constructed meshes. We demonstrate that the multi-view point regression network outperforms state-of-the-art methods with a significant improvement on challenging datasets.
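The grid-connectivity triangulation mentioned above is straightforward to sketch: each cell of an h x w grid of view-aligned points yields two triangles, with vertices indexed row-major. This is a generic grid mesher, not the paper's code.

```python
def grid_triangles(h, w):
    """Triangulate an h x w grid of points into mesh faces.

    Each grid cell contributes two triangles; vertex indices are row-major,
    so the point at row r, column c has index r * w + c.
    """
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return tris
```

Because the grid is regular, no nearest-neighbor search or Delaunay step is needed; connectivity comes for free from the 2D embedding.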


2020 ◽  
Vol 12 (6) ◽  
pp. 942 ◽  
Author(s):  
Maria Rosaria De Blasiis ◽  
Alessandro Di Benedetto ◽  
Margherita Fiani

The surface conditions of road pavements, including the occurrence and severity of distresses present on the surface, are an important indicator of pavement performance. Periodic monitoring and condition assessment are essential for the safety of vehicles moving on the road and the wellbeing of people. The traditional characterization of the different types of distress often involves complex activities that are sometimes inefficient and risky, as they interfere with road traffic. Mobile laser systems (MLS) are now widely used to acquire detailed information about the road surface in the form of a three-dimensional point cloud. Despite their increasing use, there are still no standards for the acquisition and processing of the collected data. The aim of our work was to develop a procedure for processing the data acquired by MLS in order to identify the localized degradations that most affect safety. We have studied the data flow and implemented several processing algorithms to identify and quantify a few types of distress, namely potholes and swells/shoves, starting from very dense point clouds. We have implemented data processing in four steps: (i) editing of the point cloud to extract only the points belonging to the road surface, (ii) determination of the road roughness as the deviation in height of every single point of the cloud with respect to the modeled road surface, (iii) segmentation of the distress, and (iv) computation of the main geometric parameters of the distress in order to classify it by severity level. The results obtained by the proposed methodology are promising. The procedures implemented have made it possible to correctly segment and identify the types of distress to be analyzed, in accordance with the on-site inspections.
The tests carried out have shown that the choice of the values of some input parameters is not trivial: for some of them the choice is based on considerations related to the nature of the data, while for others it derives from the distress to be segmented. Owing to the different possible configurations of the various distresses, it is better to choose these parameters according to the boundary conditions rather than to impose default values. The test involved a 100-m long urban road segment, the surface of which was measured with an MLS installed on a vehicle that traveled the road at 10 km/h.


Author(s):  
D. Tosic ◽  
S. Tuttas ◽  
L. Hoegner ◽  
U. Stilla

<p><strong>Abstract.</strong> This work proposes an approach for semantic classification of an outdoor-scene point cloud acquired with a high-precision Mobile Mapping System (MMS), with the major goal of contributing to the automatic creation of High Definition (HD) Maps. The automatic point labeling is achieved by combining a feature-based approach for semantic classification of point clouds with a deep learning approach for semantic segmentation of images. Both the point cloud data and the data from a multi-camera system are used for gaining spatial information in an urban scene. Two types of classification are applied for this task: 1) A feature-based approach, in which the point cloud is organized into a supervoxel structure for capturing the geometric characteristics of points. Several geometric features are then extracted for an appropriate representation of the local geometry, followed by removing the effect of local tendency for each supervoxel to enhance the distinction between similar structures. Lastly, the Random Forests (RF) algorithm is applied in the classification phase, assigning labels to supervoxels and therefore to the points within them. 2) A deep learning approach, employed for semantic segmentation of MMS images of the same scene. To achieve this, an implementation of the Pyramid Scene Parsing Network is used. The resulting segmented images, with each pixel carrying a class label, are then projected onto the point cloud, enabling label assignment for each point. Finally, experimental results are presented for a complex urban scene, and the performance of the method is evaluated on a manually labeled dataset, for the deep learning and feature-based classification individually as well as for the fusion of the labels. The overall accuracy achieved with the fused output is 0.87 on the final test set, which significantly outperforms the results of the individual methods on the same point cloud.
The labeled data is published on the TUM-PF Semantic-Labeling-Benchmark.</p>
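The projection of per-pixel labels onto the point cloud can be sketched with a pinhole camera model. The intrinsics matrix `K` and the label image are assumed inputs; this omits the lens distortion and camera-to-scanner extrinsics a real MMS pipeline would have to handle.

```python
import numpy as np

def labels_from_image(points_cam, K, label_img):
    """Assign each 3D point (camera frame, N x 3) the semantic label of the
    pixel it projects to under pinhole intrinsics K.

    Points behind the camera or projecting outside the image get label -1.
    """
    h, w = label_img.shape
    labels = np.full(len(points_cam), -1, dtype=int)
    z = points_cam[:, 2]
    valid = z > 1e-6                      # in front of the camera
    uv = points_cam[valid] @ K.T          # homogeneous pixel coordinates
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(valid)[inside]
    labels[idx] = label_img[v[inside], u[inside]]
    return labels
```

With labels from several cameras, a per-point vote (or the RF/DL fusion described above) resolves conflicting assignments.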


2020 ◽  
Vol 12 (7) ◽  
pp. 1142
Author(s):  
Jeonghoon Kwak ◽  
Yunsick Sung

To provide a realistic environment for remote sensing applications, point clouds are used to realize a three-dimensional (3D) digital world for the user. Motion recognition of objects, e.g., humans, is required to provide realistic experiences in the 3D digital world. To recognize a user’s motions, 3D landmarks are provided by analyzing a 3D point cloud collected through a light detection and ranging (LiDAR) system or a red green blue (RGB) image collected visually. However, manual supervision is required to extract 3D landmarks, whether they originate from the RGB image or from the 3D point cloud. Thus, there is a need for a method of extracting 3D landmarks without manual supervision. Herein, an RGB image and a 3D point cloud are used together to extract 3D landmarks. The 3D point cloud provides the relative distance between the LiDAR and the user. Because it cannot capture the user’s entire body due to disparities, it cannot by itself yield a dense depth image that provides the boundary of the user’s body. Therefore, up-sampling is performed to increase the density of the depth image generated from the 3D point cloud; the resulting density depends on the 3D point cloud. This paper proposes a system for extracting 3D landmarks from 3D point clouds and RGB images without manual supervision. A depth image that provides the boundary of a user’s motion is generated from the 3D point cloud and the RGB image, collected by a LiDAR and an RGB camera, respectively. To extract 3D landmarks automatically, an encoder-decoder model is trained with the generated depth images and the RGB images, and 3D landmarks are extracted from these images with the trained encoder model. The method of extracting 3D landmarks using RGB-depth (RGBD) images was verified experimentally, and 3D landmarks were extracted to evaluate the user’s motions with RGBD images. In this manner, landmarks could be extracted according to the user’s motions, rather than from the RGB images alone.
The depth images generated by the proposed method were 1.832 times denser than the up-sampling-based depth images generated with bilateral filtering.
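The up-sampling of a sparse LiDAR depth image can be illustrated with a crude neighbor-filling pass; this is not the bilateral-filtering baseline mentioned above, just a minimal densification sketch in which zero-valued pixels are treated as missing.

```python
import numpy as np

def densify_depth(depth, passes=1):
    """Fill zero (missing) pixels of a sparse depth image with the mean of
    their valid 4-neighbours, repeated `passes` times.

    A crude densification sketch: each pass grows the valid region by one
    pixel; more passes fill larger holes.
    """
    d = depth.astype(float).copy()
    for _ in range(passes):
        out = d.copy()
        h, w = d.shape
        for r in range(h):
            for c in range(w):
                if d[r, c] == 0:
                    nbrs = [d[rr, cc]
                            for rr, cc in ((r - 1, c), (r + 1, c),
                                           (r, c - 1), (r, c + 1))
                            if 0 <= rr < h and 0 <= cc < w and d[rr, cc] > 0]
                    if nbrs:
                        out[r, c] = sum(nbrs) / len(nbrs)
        d = out
    return d
```

An edge-aware variant would weight the neighbours by RGB similarity, which is the role the RGB image plays in the system described above.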


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4395
Author(s):  
Miloš Antić ◽  
Andrej Zdešar ◽  
Igor Škrjanc

This paper presents an approach to depth image segmentation based on the Evolving Principal Component Clustering (EPCC) method, which exploits data locality in an ordered data stream. The parameters of the linear prototypes used to describe the different clusters are estimated recursively. The main contribution of this work is the extension and application of EPCC to 3D space for the recursive, real-time detection of flat connected surfaces based on linear segments, all detected in an evolving way. To obtain optimal results when processing homogeneous surfaces, we introduced two-step filtering for outlier detection within the clustering framework and considered a noise model, which allowed for the compensation of the characteristic uncertainties introduced into the measurements of depth sensors. The developed algorithm was compared with well-known methods for point cloud segmentation. The proposed approach achieves better segmentation results over longer distances, where the signal-to-noise ratio is low, without prior filtering of the data. On the given database, an average rate higher than 90% was obtained for successfully detected flat surfaces, which indicates high performance when processing huge point clouds in a non-iterative manner.
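The recursive estimation of a cluster prototype can be sketched with a Welford-style mean/covariance update, with planarity scored from the principal components. This is in the spirit of EPCC but greatly simplified: there is no evolving creation or merging of clusters, and the class name is illustrative.

```python
import numpy as np

class RecursivePlanePrototype:
    """Recursively updated mean and covariance of a streamed point cluster.

    The eigenvalues of the covariance give the principal components; a small
    smallest eigenvalue relative to the total variance indicates a planar
    cluster. Updates are O(1) per point, as required for stream processing.
    """
    def __init__(self, dim=3):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))  # sum of outer products of residuals

    def update(self, x):
        """Welford-style recursive update with one new point x."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    def planarity(self):
        """1 minus the share of variance along the thinnest axis (near 1 for
        flat clusters)."""
        if self.n < 3:
            return 0.0
        eig = np.sort(np.linalg.eigvalsh(self.M2 / self.n))
        return 1.0 - eig[0] / (eig.sum() + 1e-12)
```

The eigenvector of the smallest eigenvalue doubles as the surface normal estimate of the detected flat segment.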


2011 ◽  
Vol 101-102 ◽  
pp. 232-235
Author(s):  
Xue Chang Zhang ◽  
Xue Jun Gao

A method for the accurate registration of point clouds is presented in this paper. Manual alignment and the use of landmarks are avoided in the processing of multi-view point clouds. Firstly, differential geometric information is extracted from the point clouds. The extended Gaussian sphere and combined features are used to define the corresponding points for coarse alignment. Secondly, the point-to-point Iterative Closest Point (ICP) algorithm is applied for the accurate registration of the point clouds. Thus, the complete point cloud can be obtained with this method.
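The accurate-registration step can be sketched as a plain point-to-point ICP loop, with the optimal rotation per iteration solved in closed form by SVD (the Kabsch solution). Brute-force nearest neighbours stand in for whatever acceleration structure a real implementation would use; this is a generic ICP, not the paper's code.

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Point-to-point ICP: match each source point to its nearest target
    point, then solve the best rigid transform by SVD (Kabsch), repeatedly.

    Returns the accumulated rotation, translation, and the aligned source.
    """
    src = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, O(N*M))
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Kabsch: optimal rotation between the centred point sets
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```

ICP converges only locally, which is why a coarse alignment (here, the extended-Gaussian-sphere features) must precede it.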


Author(s):  
Sara Greenberg ◽  
John McPhee ◽  
Alexander Wong

Fitting a kinematic model of the human body to an image without the use of markers is a method of pose estimation that is useful for tracking and posture evaluation. This model-fitting is challenging due to the variation in human physique and the large number of possible poses. One type of modeling is to represent the human body as a set of rigid body volumes. These volumes can be registered to a target point cloud acquired from a depth camera using the Iterative Closest Point (ICP) algorithm. The speed of ICP registration is inversely proportional to the number of points in the model and the target point clouds, and using the entire target point cloud in this registration is too slow for real-time applications. This work proposes the use of data-driven Monte Carlo methods to select a subset of points from the target point cloud that maintains or improves the accuracy of the point cloud registration for joint localization in real time. For this application, we investigate curvature of the depth image as the driving variable to guide the sampling, and compare it with benchmark random sampling techniques.
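The data-driven selection can be sketched as weighted sampling without replacement, with per-point weights (e.g. a local curvature estimate, assumed precomputed here) driving the draw. The weighting scheme is illustrative, not the authors' exact sampler.

```python
import numpy as np

def curvature_weighted_sample(points, weights, k, seed=0):
    """Draw k point indices with probability proportional to a per-point
    weight (e.g. local curvature of the depth image).

    High-curvature regions, which carry more shape information, are
    sampled more densely; zero-weight points are never selected.
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    return rng.choice(len(points), size=k, replace=False, p=p)
```

Uniform weights recover the benchmark random-sampling baseline the abstract compares against.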


2017 ◽  
Vol 2017 ◽  
pp. 1-11 ◽  
Author(s):  
Zhiying Song ◽  
Huiyan Jiang ◽  
Qiyao Yang ◽  
Zhiguo Wang ◽  
Guoxu Zhang

The PET and CT fusion image, combining anatomical and functional information, has important clinical value. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithreaded registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk-slice extraction method is presented for extracting feature point clouds. Finally, multithreaded Iterative Closest Point (ICP) is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with a lower negative normalization correlation (NC = −0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
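The NC metric reported above can be computed as a negative normalized cross-correlation; this sketch assumes two equally sized arrays and is not the authors' exact evaluation code.

```python
import numpy as np

def negative_ncc(a, b):
    """Negative normalized cross-correlation between two images.

    Values lie in [-1, 1]; values closer to -1 indicate better alignment,
    matching the convention of the NC scores reported above.
    """
    a = a - a.mean()
    b = b - b.mean()
    return float(-(a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Any affine intensity change of one image leaves the score unchanged, which is why NC is a reasonable similarity measure across PET/CT contrasts.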

