Root Anatomy based on Root Cross-Section Image Analysis with Deep Learning

2018 ◽  
Author(s):  
Chaoxin Wang ◽  
Xukun Li ◽  
Doina Caragea ◽  
Raju Bheemanahalli ◽  
S.V. Krishna Jagadish

Aboveground plant efficiency has improved significantly in recent years, and the improvement has led to a steady increase in global food production. Improving belowground plant efficiency has the potential to further increase food production. However, roots are harder to study, due to the inherent challenges of belowground phenotyping. Several tools for identifying root anatomical features in root cross-section images have been proposed. However, the existing tools are not fully automated and require significant human effort to produce accurate results. To address this limitation, we propose a fully automated approach, called Deep Learning for Root Anatomy (DL-RootAnatomy), for identifying anatomical traits in root cross-section images. Using the Faster Region-based Convolutional Neural Network (Faster R-CNN), the DL-RootAnatomy models detect objects such as the root, stele and late metaxylem, and predict rectangular bounding boxes around them. The bounding boxes are then used to estimate the root diameter, stele diameter, and late metaxylem number and average diameter. Experimental evaluation using standard object detection metrics, such as intersection-over-union and mean average precision, has shown that our models can accurately detect the root, stele and late metaxylem objects. Furthermore, the measurements estimated from the predicted bounding boxes have very small root mean square error when compared with the corresponding ground truth values, suggesting that DL-RootAnatomy can be used to accurately measure anatomical features. Finally, a comparison with existing approaches, which involve some degree of human interaction, has shown that the proposed approach is more accurate on a subset of our data. A webserver for performing root anatomy analysis with our pre-trained deep learning models is available at https://rootanatomy.org, together with a link to a GitHub repository containing code that can be used to re-train or fine-tune our network on other types of root cross-section images. The labeled images used for training and evaluating our models are also available from the GitHub repository.
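As an illustration of the measurement step, the sketch below shows how diameters might be derived from the predicted boxes: each roughly circular object is approximated by the mean side length of its bounding box. The function names, the dictionary layout of the detections, and the pixel-to-micron scale are assumptions made for illustration, not the authors' code.

```python
import numpy as np

def box_diameter(box, microns_per_pixel=1.0):
    """Approximate the diameter of a roughly circular object (root, stele,
    or late metaxylem vessel) as the mean side length of its bounding box.

    box: (x_min, y_min, x_max, y_max) in pixels, as produced by a detector
    such as Faster R-CNN.
    """
    x_min, y_min, x_max, y_max = box
    return 0.5 * ((x_max - x_min) + (y_max - y_min)) * microns_per_pixel

def summarize_cross_section(detections, microns_per_pixel=1.0):
    """Turn per-class detections into the four anatomical measurements
    described in the abstract: root diameter, stele diameter, late
    metaxylem count, and average late metaxylem diameter."""
    root_d = box_diameter(detections["root"][0], microns_per_pixel)
    stele_d = box_diameter(detections["stele"][0], microns_per_pixel)
    lmx_boxes = detections["late_metaxylem"]
    lmx_d = np.mean([box_diameter(b, microns_per_pixel) for b in lmx_boxes])
    return {"root_diameter": root_d,
            "stele_diameter": stele_d,
            "lmx_count": len(lmx_boxes),
            "lmx_avg_diameter": float(lmx_d)}
```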

2020 ◽  
Vol 175 ◽  
pp. 105549 ◽  
Author(s):  
Chaoxin Wang ◽  
Xukun Li ◽  
Doina Caragea ◽  
Raju Bheemanahalli ◽  
S.V. Krishna Jagadish

2020 ◽  
Author(s):  
Adrien Heymans ◽  
Valentin Couvreur ◽  
Guillaume Lobet

Root hydraulic properties play a central role in the global water cycle, agricultural productivity, and ecosystem survival, as they impact the global canopy water supply. However, the available experimental methods to quantify root hydraulic conductivities, such as root pressure probing, are particularly challenging, and their applicability to thin roots and small root segments is limited. There is a gap in methods enabling easy estimation of root hydraulic conductivities across a diversity of root types and at high resolution along root axes. In this case study, we analysed Zea mays (maize) plants of the variety B73 grown in pots for 14 days. Root cross-section data were used to extract anatomical measurements. We used the Generator of Root Anatomy in R (GRANAR) model to generate root anatomical networks from these anatomical features. We then used the Model of Explicit Cross-section Hydraulic Anatomy (MECHA) to estimate the root axial and radial hydraulic conductivities (kx and kr, respectively), based on the generated anatomical networks and cell hydraulic properties from the literature. The root hydraulic conductivity maps obtained from the root cross-sections suggest significant functional variations along and between root types. Predicted variations of kr along the root axis were strongly dependent on the maturation stage of the hydrophobic barriers, and the same was true for the maturation rate of the metaxylem. The different anatomical features, as well as their evolution along the root, add significant variation to the kr estimates between root types and along the root axis. Viewed through the prism of root types, anatomy, and hydrophobic barriers, our results highlight the diversity of root radial and axial hydraulic conductivities, which may be veiled by low-resolution measurements of root system hydraulic conductivity. While the predictions of our root hydraulic maps match the range and trend of measurements reported in the literature, future studies could focus on the quantitative validation of hydraulic maps. This novel method, which turns root cross-section images into hydraulic maps, offers an inexpensive and easily applicable investigation tool for root hydraulics, complementary to root pressure probing experiments.
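MECHA solves water flow on the explicit cell-scale hydraulic network, which cannot be condensed into a few lines. The sketch below is only a back-of-the-envelope illustration, under the simplifying assumption that radial tissue layers act as hydraulic resistances in series, of why a maturing hydrophobic barrier lowers kr; the layer conductivity values are made up and are not MECHA parameters.

```python
def radial_conductivity(layer_conductivities):
    """Series composition of layer conductivities: layer resistances (1/k)
    add, so the least conductive layer dominates the overall kr. This mimics
    how a suberised endodermis or exodermis lowers the radial conductivity
    of a root segment."""
    return 1.0 / sum(1.0 / k for k in layer_conductivities)

# Illustrative (made-up) layer conductivities before and after
# suberisation of the endodermis:
young = {"epidermis": 3e-7, "cortex": 5e-7, "endodermis": 4e-7}
mature = {**young, "endodermis": 4e-8}  # hydrophobic barrier in place

print(radial_conductivity(young.values()))   # higher kr near the root tip
print(radial_conductivity(mature.values()))  # lower kr in mature zones
```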


Author(s):  
Alex Deakyne ◽  
Erik Gaasedelen ◽  
Paul A. Iaizzo

Recent advancements in deep learning have led to the possibility of increased performance in computer vision tools. A major development has been the use of Convolutional Neural Networks (CNNs) for automatically detecting features within a given image. Architectures such as YOLO have achieved remarkably high performance for real-time detection of everyday objects within images. To date, however, there have been few reports of deep learning applied to detect anatomical features within CT scans, especially those within the cardiovascular space. We propose here an automatic anatomical feature detection pipeline for identifying the features of the left atrium using a CNN. Slices of CT scans were fed into a single neural network that predicted the four bounding box coordinates encapsulating the left atrium. The network can be optimized end-to-end and generates predictions at great speed, achieving a validation smooth L1 loss of 11.95 when predicting the left atrial bounding boxes.
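The network architecture is not detailed in this abstract; the snippet below only illustrates the smooth L1 (Huber-like) objective reported for the four box coordinates, implemented in plain NumPy on toy boxes.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss, summed over the 4 box coordinates
    (x_min, y_min, x_max, y_max) and averaged over the batch.
    Quadratic for small errors, linear for large ones."""
    diff = np.abs(pred - target)
    per_coord = np.where(diff < beta,
                         0.5 * diff ** 2 / beta,
                         diff - 0.5 * beta)
    return per_coord.sum(axis=1).mean()

# Toy example: two predicted left-atrium boxes vs. ground truth (pixels)
pred   = np.array([[120., 80., 260., 210.], [118., 85., 255., 215.]])
target = np.array([[118., 82., 258., 212.], [118., 82., 258., 212.]])
print(smooth_l1(pred, target))
```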


2020 ◽  
Vol 71 (7) ◽  
pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become increasingly popular and are applied in various practical settings. In this paper, we focus on the person re-identification (person ReID) task, a crucial step of video analysis systems. The purpose of person ReID is to associate multiple images of a given person as they move through a network of non-overlapping cameras. Much effort has been devoted to person ReID. However, most studies deal only with well-aligned bounding boxes that are annotated manually and treated as ideal inputs for person ReID. In fact, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, can strongly affect ReID performance. The contributions of this paper are twofold. First, a unified framework for person ReID based on deep learning models is proposed. In this framework, a deep neural network for person detection is coupled with a deep-learning-based tracking method. In addition, features extracted from an improved ResNet architecture are proposed for person representation to achieve higher ReID accuracy. Second, our self-built dataset is introduced and employed for evaluating all three steps of the fully automated person ReID framework.
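As a sketch of the matching stage of such a framework, the snippet below ranks gallery identities by cosine similarity of embedding vectors (e.g. extracted from a ResNet backbone); it is a generic illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_similarity_matrix(query_feats, gallery_feats):
    """Cosine similarity between L2-normalised query and gallery embeddings
    (rows are per-image feature vectors from the ReID backbone)."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return q @ g.T

def rank_gallery(query_feats, gallery_feats, gallery_ids):
    """For each query, rank gallery identities by similarity; the top-1
    identity is the re-identification decision."""
    sims = cosine_similarity_matrix(query_feats, gallery_feats)
    order = np.argsort(-sims, axis=1)
    return [[gallery_ids[j] for j in row] for row in order]
```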


2020 ◽  
Vol 8 ◽  
Author(s):  
Sohaib Younis ◽  
Marco Schmidt ◽  
Claus Weiland ◽  
Stefan Dressler ◽  
Bernhard Seeger ◽  
...  

As herbarium specimens are increasingly digitised and made accessible in online repositories, advanced computer vision techniques are being used to extract information from them. The presence of certain plant organs on herbarium sheets is useful information in various scientific contexts, and automatic recognition of these organs will help mobilise it. In our study, we use deep learning to detect plant organs on digitised herbarium specimens with Faster R-CNN. For our experiment, we manually annotated hundreds of herbarium scans with thousands of bounding boxes for six types of plant organs and used them for training and evaluating the plant organ detection model. The model worked particularly well on leaves and stems; flowers, although also present in large numbers on the sheets, were not recognised as well.
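The training code itself is not reproduced in this listing; the sketch below shows a common torchvision fine-tuning recipe for a Faster R-CNN detector with six organ classes plus background, offered as an illustrative approximation of the described setup. The specific organ labels, optimizer settings, and data layout are assumptions.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# 6 assumed organ classes (e.g. leaf, stem, flower, fruit, seed, root) + background
NUM_CLASSES = 6 + 1

# Start from a COCO-pretrained detector and replace the box head
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # older torchvision: pretrained=True
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9, weight_decay=0.0005)

def train_step(images, targets):
    """One training step: images is a list of CHW tensors, targets a list of
    dicts with 'boxes' (N x 4) and 'labels' (N,) from the annotated scans."""
    model.train()
    loss_dict = model(images, targets)
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```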


2021 ◽  
Vol 23 (06) ◽  
pp. 47-57
Author(s):  
Aditya Kulkarni ◽  
Manali Munot ◽  
Sai Salunkhe ◽  
Shubham Mhaske ◽  
...  

With developments in technology, from serial to parallel computing, GPUs, AI, and deep learning models, a series of tools for processing complex images has been developed. The main focus of this research is to compare various algorithms (pre-trained models) and their contributions to processing complex images in terms of performance, accuracy, time, and limitations. The pre-trained models we use are CNN, R-CNN, R-FCN, and YOLO. These models are Python-based and use libraries such as TensorFlow and OpenCV, together with free image databases (Microsoft COCO and PASCAL VOC 2007/2012). They aim not only at object detection but also at building bounding boxes around the appropriate locations. This review thus gives a better view of these models and their performance, and a good idea of which models are best suited to various situations.
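None of the compared models' code is included here; a small helper like the one below is a common way to score any detector's bounding boxes against ground truth when comparing accuracy (IoU >= 0.5 is a typical match threshold).

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max).
    A detection is typically counted as correct when IoU >= 0.5 against a
    ground-truth box of the same class."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 50, 50), (30, 30, 70, 70)))  # ~0.14
```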


2021 ◽  
Author(s):  
Benjamin Kellenberger ◽  
Devis Tuia ◽  
Dan Morris

Ecological research like wildlife censuses increasingly relies on data at the scale of terabytes. For example, modern camera trap datasets contain millions of images that require prohibitive amounts of manual labour to be annotated with species, bounding boxes, and the like. Machine learning, especially deep learning [3], could greatly accelerate this task through automated predictions, but involves extensive coding and expert knowledge.

In this abstract we present AIDE, the Annotation Interface for Data-driven Ecology [2]. In a first instance, AIDE is a web-based annotation suite for image labelling with support for concurrent access and scalability, up to the cloud. In a second instance, it tightly integrates deep learning models into the annotation process through active learning [7], where models learn from user-provided labels and in turn select the most relevant images for review from the large pool of unlabelled ones (Fig. 1). The result is a system where users only need to label what is required, which saves time and decreases errors due to fatigue.

Fig. 1: AIDE offers concurrent web image labelling support and uses annotations and deep learning models in an active learning loop.

AIDE includes a comprehensive set of built-in models, such as ResNet [1] for image classification, Faster R-CNN [5] and RetinaNet [4] for object detection, and U-Net [6] for semantic segmentation. All models can be customised and used without having to write a single line of code. Furthermore, AIDE accepts any third-party model with minimal implementation requirements. To complete the package, AIDE offers user annotation and model prediction evaluation, access control, customisable model training, and more, all through the web browser.

AIDE is fully open source and available at https://github.com/microsoft/aerial_wildlife_detection.
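AIDE's training backends are configurable; the sketch below only illustrates the selection step of such an active learning loop, using a simple least-confidence criterion. The function and its inputs are hypothetical placeholders, not AIDE's API.

```python
import numpy as np

def select_for_review(image_ids, class_probs, budget=100):
    """Pick the images whose current model predictions are least confident
    (lowest maximum class probability), a simple active learning criterion.

    class_probs: dict mapping image_id -> array of per-class probabilities
    from the current model.
    """
    confidence = {i: float(np.max(class_probs[i])) for i in image_ids}
    return sorted(image_ids, key=lambda i: confidence[i])[:budget]

# One loop iteration: the model scores the unlabelled pool, annotators label
# only the selected images, and the model is retrained on the grown label set.
```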


Author(s):  
Titus Issac ◽  
Salaja Silas ◽  
Elijah Blessing Rajsingh

The 21st century is witnessing the emergence of a wide variety of wireless sensor network (WSN) applications, ranging from simple environmental monitoring to complex satellite monitoring. The advent of complex WSN applications has led to a massive transition in the development, functioning, and capabilities of wireless sensor nodes. Contemporary nodes have multi-functional capabilities that enable heterogeneous WSN applications. The future of WSN task assignment envisions WSNs as heterogeneous networks with minimal human interaction. This motivates the investigation of a deep learning-based task assignment algorithm. The algorithm employs a multilayer feed-forward neural network (MLFFNN) trained by particle swarm optimization (PSO) to solve the task assignment problem in a dynamic, centralized, heterogeneous WSN. The analyses include a study of the hidden layers and the effectiveness of the task assignment algorithms. The chapter will be highly beneficial to a wide range of audiences employing machine learning and deep learning in WSNs.
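The chapter's exact network and fitness function are not given in this abstract; the sketch below is a compact, generic illustration of PSO optimising the weights of a small feed-forward network without gradients. The network size, mean-squared-error objective, and toy data are placeholders, not the authors' task-assignment formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlffnn(weights, x, n_in=4, n_hidden=8, n_out=3):
    """Single-hidden-layer feed-forward network; `weights` is a flat vector
    split into the two weight matrices (biases omitted for brevity)."""
    w1 = weights[:n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = weights[n_in * n_hidden:].reshape(n_hidden, n_out)
    return np.tanh(x @ w1) @ w2

def fitness(weights, x, y):
    """Mean squared error between network output and target assignment
    scores (placeholder objective for the task assignment problem)."""
    return float(np.mean((mlffnn(weights, x) - y) ** 2))

def pso_train(x, y, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO: each particle is a candidate weight vector; velocities
    are pulled toward the particle's best and the swarm's best positions."""
    pos = rng.normal(size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, x, y) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p, x, y) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy data: 4 node/task features mapped to 3 assignment scores
x = rng.normal(size=(64, 4))
y = rng.normal(size=(64, 3))
best_w, best_err = pso_train(x, y, dim=4 * 8 + 8 * 3)
```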

