Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs

Sensors ◽  
2019 ◽  
Vol 19 (17) ◽  
pp. 3738 ◽  
Author(s):  
Abozar Nasirahmadi ◽  
Barbara Sturm ◽  
Sandra Edwards ◽  
Knut-Håkan Jeppsson ◽  
Anne-Charlotte Olsson ◽  
...  

Posture detection targeted towards providing assessments for the monitoring of health and welfare of pigs has been of great interest to researchers from different disciplines. Existing studies applying machine vision techniques are mostly based on methods using three-dimensional imaging systems, or two-dimensional systems with the limitation of monitoring under controlled conditions. Thus, the main goal of this study was to determine whether a two-dimensional imaging system, along with deep learning approaches, could be utilized to detect the standing and lying (belly and side) postures of pigs under commercial farm conditions. Three deep learning-based detector methods, including faster regions with convolutional neural network features (Faster R-CNN), single shot multibox detector (SSD) and region-based fully convolutional network (R-FCN), combined with Inception V2, Residual Network (ResNet) and Inception ResNet V2 feature extraction of RGB images, were proposed. Data from different commercial farms were used for training and validation of the proposed models. The experimental results demonstrated that the R-FCN ResNet101 method was able to detect lying and standing postures with the highest average precision (AP): 0.93, 0.95 and 0.92 for the standing, lying on side and lying on belly postures, respectively, and a mean average precision (mAP) of more than 0.93.
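The AP and mAP figures above rest on matching predicted bounding boxes to ground-truth boxes by intersection-over-union (IoU). A minimal sketch of that matching criterion (the function name `iou` and the corner-format boxes are our illustrative choices, not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold (commonly 0.5).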

Author(s):  
Jun-Li Xu ◽  
Cecilia Riccioli ◽  
Ana Herrero-Langreo ◽  
Aoife Gowen

Deep learning (DL) has recently achieved considerable successes in a wide range of applications, such as speech recognition, machine translation and visual recognition. This tutorial provides guidelines and useful strategies to apply DL techniques to address pixel-wise classification of spectral images. A one-dimensional convolutional neural network (1-D CNN) is used to extract features from the spectral domain, which are subsequently used for classification. In contrast to conventional classification methods for spectral images that examine primarily the spectral context, a three-dimensional (3-D) CNN is applied to simultaneously extract spatial and spectral features to enhance classification accuracy. This tutorial paper explains, in a stepwise manner, how to develop 1-D CNN and 3-D CNN models to discriminate spectral imaging data in a food authenticity context. The example image data provided consist of three varieties of puffed cereals imaged in the NIR range (943–1643 nm). The tutorial is presented in the MATLAB environment, and the scripts and dataset used are provided. Starting from spectral image pre-processing (background removal and spectral pre-treatment), the typical steps encountered in the development of CNN models are presented. The example dataset provided demonstrates that deep learning approaches can increase classification accuracy compared to conventional approaches, increasing the accuracy of the model tested on an independent image from 92.33% using partial least squares-discriminant analysis to 99.4% using the 3-D CNN model at pixel level. The paper concludes with a discussion on the challenges and suggestions in the application of DL techniques for spectral image classification.
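The tutorial's 1-D CNN slides learnable kernels along each pixel's spectrum to extract spectral features. The core operation can be sketched in plain NumPy (the tutorial itself uses MATLAB; the function name, kernel values and ReLU choice here are our illustrative assumptions):

```python
import numpy as np

def conv1d_valid(spectrum, kernels, bias):
    """One 'valid' 1-D convolution layer over a single pixel's spectrum,
    followed by a ReLU activation, as in the first layer of a 1-D CNN.

    spectrum: (n_bands,) reflectance values for one pixel
    kernels:  (n_filters, kernel_size) learnable filters
    bias:     (n_filters,) per-filter bias
    """
    k = kernels.shape[1]
    n_out = spectrum.size - k + 1
    out = np.empty((kernels.shape[0], n_out))
    for f, w in enumerate(kernels):
        for i in range(n_out):
            # Dot product of the filter with a sliding spectral window.
            out[f, i] = spectrum[i:i + k] @ w + bias[f]
    return np.maximum(out, 0.0)  # ReLU
```

A first-difference kernel like `[-1, 1]` responds to the slope of the spectrum, which is one reason small learned kernels can pick up absorption-band edges.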


2021 ◽  
Author(s):  
Wing Keung Cheung ◽  
Robert Bell ◽  
Arjun Nair ◽  
Leon Menezies ◽  
Riyaz Patel ◽  
...  

Abstract A fully automatic two-dimensional U-Net model is proposed to segment the aorta and coronary arteries in computed tomography images. Two models are trained to segment two regions of interest, (1) the aorta and the coronary arteries or (2) the coronary arteries alone. Our method achieves 91.20% and 88.80% dice similarity coefficient accuracy on regions of interest 1 and 2, respectively. Compared with a semi-automatic segmentation method, our model performs better when segmenting the coronary arteries alone. The performance of the proposed method is comparable to existing published two-dimensional or three-dimensional deep learning models. Furthermore, the algorithmic and graphical processing unit memory efficiencies are maintained such that the model can be deployed within hospital computer networks where graphical processing units are typically not available.
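The segmentation accuracy above is reported as the Dice similarity coefficient, twice the overlap of the predicted and reference masks divided by their total size. A minimal sketch (the function name and the both-empty convention are our choices):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return 2.0 * inter / denom
```

Dice weights the overlap against the sizes of both masks, so it is less dominated by the (large) background class than plain pixel accuracy, which is why it is the standard metric for medical segmentation.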


2011 ◽  
Vol 15 (6) ◽  
pp. 648-653 ◽  
Author(s):  
Takaaki Urakawa ◽  
Hitoshi Matsuzawa ◽  
Yuji Suzuki ◽  
Naoto Endo ◽  
Ingrid L. Kwee ◽  
...  

Object The authors assessed the role of 3D anisotropy contrast (3DAC) in evaluating specific ascending tract degeneration in patients with cervical spondylotic myelopathy (CSM). Methods The authors studied 10 patients (2 women, 8 men; mean age 59.8 ± 14.6 years) with CSM and spinal cord compression below the C2–3 disc level, as well as 10 healthy control individuals (3 women, 7 men; mean age 42.0 ± 24.1 years). Images of the cervical cord at the C2–3 level were obtained using a 3.0-T MR imaging system. Results Three-dimensional anisotropy contrast imaging clearly made possible tract-by-tract analysis of the fasciculus cuneatus, fasciculus gracilis, and spinocerebellar tract. Tract degeneration identified using 3DAC showed good correlation with a decline in fractional anisotropy. Degeneration of the fasciculus gracilis detected by “vector contrast” demonstrated a good correlation with Nurick grades. Conclusions The study unambiguously demonstrated that 3DAC imaging is capable of assessing ascending tract degeneration in patients with CSM. Degeneration of an individual tract can be easily identified as a vector contrast change on the 3DAC image, a reflection of quantitative changes in anisotropism, similar to fractional anisotropy. Excellent correlation between Nurick grades and fasciculus gracilis degeneration suggests potential application of 3DAC imaging for tract-by-tract clinical correlation.


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1651 ◽  
Author(s):  
Suk-Ju Hong ◽  
Yunhyeok Han ◽  
Sang-Yeon Kim ◽  
Ah-Yeong Lee ◽  
Ghiseok Kim

Wild birds are monitored with the important objectives of identifying their habitats and estimating the size of their populations. Migratory birds in particular are recorded during specific periods of time to forecast any possible spread of animal diseases such as avian influenza. This study led to the construction of deep-learning-based object-detection models with the aid of aerial photographs collected by an unmanned aerial vehicle (UAV). The dataset containing the aerial photographs includes diverse images of birds in various bird habitats, in the vicinity of lakes and on farmland. In addition, aerial images of bird decoys were captured to achieve various bird patterns and more accurate bird information. Bird detection models such as Faster Region-based Convolutional Neural Network (R-CNN), Region-based Fully Convolutional Network (R-FCN), Single Shot MultiBox Detector (SSD), RetinaNet, and You Only Look Once (YOLO) were created, and the performance of all models was estimated by comparing their computing speed and average precision. The test results show Faster R-CNN to be the most accurate and YOLO to be the fastest among the models. The combined results demonstrate that the use of deep-learning-based detection methods in combination with UAV aerial imagery is fairly suitable for bird detection in various environments.
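The models above are ranked by average precision, the area under the precision-recall curve built from score-sorted detections. A sketch of the all-point-interpolation computation common in detection benchmarks (function and argument names are ours):

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """Area under the precision-recall curve for one class.

    scores: confidence score of each detection
    is_tp:  1 if that detection matched a ground-truth box, else 0
    n_gt:   total number of ground-truth boxes (for recall)
    """
    order = np.argsort(scores)[::-1]           # highest confidence first
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(tp.size) + 1)
    recall = cum_tp / n_gt
    # Make precision monotonically non-increasing (all-point interpolation).
    for i in range(precision.size - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Sum precision * change-in-recall.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

Mean average precision (mAP) is then simply this quantity averaged over all classes.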


Author(s):  
Tianyi Zhao ◽  
Yang Hu ◽  
Liang Cheng

Abstract Motivation: Functional changes in genes, RNAs and proteins are eventually reflected at the metabolic level. An increasing number of researchers have studied disease mechanisms, biomarkers and targeted drugs through metabolites. However, compared with our knowledge about genes, RNAs and proteins, we still know little about disease-related metabolites. The few existing methods for identifying disease-related metabolites ignore the chemical structure of metabolites, fail to recognize the association pattern between metabolites and diseases, and cannot be applied to isolated diseases and metabolites. Results: In this study, we present a graph deep-learning-based method, named Deep-DRM, for identifying disease-related metabolites. First, the chemical structures of metabolites were used to calculate metabolite similarities. Disease similarities were obtained based on their functional gene networks and semantic associations. From these, both a metabolite network and a disease network could be built. Next, a Graph Convolutional Network (GCN) was applied to encode the features of metabolites and diseases, respectively. Then, the dimension of these features was reduced by principal component analysis (PCA) while retaining 99% of the information. Finally, a deep neural network was built to identify true metabolite-disease pairs (MDPs) based on these features. Ten-fold cross-validation on three testing setups showed an outstanding AUC (0.952) and AUPR (0.939) for Deep-DRM compared with previous methods and similar approaches. Ten of the top 15 predicted associations between diseases and metabolites are supported by other studies, which suggests that Deep-DRM is an efficient method to identify MDPs. Contact: [email protected]. Availability and implementation: https://github.com/zty2009/GPDNN-for-Identify-ing-Disease-related-Metabolites.
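The pipeline's PCA step keeps only as many principal components as are needed to retain 99% of the variance. A NumPy sketch of that step (the function name and the SVD-based implementation are our choices, not code from the paper):

```python
import numpy as np

def pca_99(X, var_keep=0.99):
    """Project the rows of X onto the fewest principal components
    whose cumulative explained variance reaches var_keep.

    Returns the projected data and the number of components kept.
    """
    Xc = X - X.mean(axis=0)                      # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (X.shape[0] - 1)              # per-component variance
    ratio = np.cumsum(var) / var.sum()           # cumulative explained variance
    k = int(np.searchsorted(ratio, var_keep) + 1)
    return Xc @ Vt[:k].T, k
```

Reducing the GCN embeddings this way shrinks the input to the final deep neural network while discarding almost none of the signal.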


2005 ◽  
Vol 13 (3) ◽  
pp. 36-39 ◽  
Author(s):  
Jerry Sedgewick

In order to achieve a three-dimensional appearance from a pair of two-dimensional images, two off-axis images can be produced and colorized. These can be overlaid slightly apart and then viewed through glasses with two differently colored sides, one color for the left eye and another for the right eye, in combinations containing red, green or blue. These off-axis, colorized images are referred to as anaglyphs. Off-axis images can be achieved through the use of a tilting stage on a microscope, by physically changing the position of a camera in relation to a still object, or through changing the axis of an optical stack of sections, such as that created by confocal/CT scans. Some images lend themselves more readily to a 3D look by virtue of inherent three-dimensionality, limited by the resolution of the imaging system.
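The colorize-and-overlay step described above can be sketched for the common red-cyan scheme: the left view drives the red channel and the right view drives the green and blue (cyan) channels. A minimal NumPy sketch for two grayscale off-axis views (the function name and channel assignment convention are our illustrative assumptions):

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine two grayscale off-axis views into one red-cyan anaglyph.

    left  -> red channel   (seen through the red lens by the left eye)
    right -> green + blue  (seen through the cyan lens by the right eye)
    """
    left = np.asarray(left, dtype=np.uint8)
    right = np.asarray(right, dtype=np.uint8)
    rgb = np.zeros(left.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = left    # red
    rgb[..., 1] = right   # green
    rgb[..., 2] = right   # blue
    return rgb
```

Viewed through red-cyan glasses, each eye receives only its own off-axis view, and the brain fuses the slight parallax between the two into depth.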

