A Framework for Evaluating Field-Based, High-Throughput Phenotyping Systems: A Meta-Analysis

Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3582
Author(s):  
Sierra N. Young

This paper presents a framework for evaluating system complexity and utility and for identifying bottlenecks in the deployment of field-based, high-throughput phenotyping (FB-HTP) systems. Although the capabilities of the technology used for high-throughput phenotyping have improved and costs have decreased, there have been few, if any, successful attempts at developing turnkey field-based phenotyping systems. To identify areas for future improvement in developing turnkey FB-HTP solutions, a framework for evaluating their complexity and utility was developed and applied to a total of 10 case studies to highlight potential barriers to their development and adoption. The framework performs system factorization, rates the complexity and utility of subsystem factors as well as of each FB-HTP system as a whole, and provides data on the trends and relationships among the complexity and utility factors. This work suggests that additional research and development are needed in the following areas: (i) data handling and management, specifically data transfer from the field to the data processing pipeline; (ii) improved human-machine interaction to facilitate usability across multiple users; and (iii) design standardization of the factors common across all FB-HTP systems to limit the competing drivers of system complexity and utility. This framework can be used to evaluate both previously developed and proposed future systems to approximate overall system complexity and identify areas for improvement prior to implementation.
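The factorization-and-rating idea can be made concrete with a small sketch. This is an illustrative re-creation, not the paper's rubric: the subsystem names and the 1-5 rating scale are assumptions chosen for demonstration.

```python
# Hedged sketch: factorize an FB-HTP system into subsystem factors, rate each
# factor's complexity and utility, and aggregate to system-level scores.
# Factor names and the 1-5 scale are illustrative assumptions.

def score_system(factors):
    """Average complexity and utility over subsystem factor ratings."""
    n = len(factors)
    complexity = sum(f["complexity"] for f in factors) / n
    utility = sum(f["utility"] for f in factors) / n
    return {"complexity": complexity, "utility": utility}

fb_htp = [
    {"name": "sensing",        "complexity": 4, "utility": 5},
    {"name": "data transfer",  "complexity": 5, "utility": 3},
    {"name": "human-machine",  "complexity": 3, "utility": 4},
]
overall = score_system(fb_htp)
print(overall)  # system-level complexity and utility scores
```

Comparing the per-factor ratings against the aggregate is what exposes bottlenecks: a factor whose complexity far exceeds its utility (here, data transfer) is a candidate for standardization.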

2020 ◽  
Vol 63 (4) ◽  
pp. 1133-1146
Author(s):  
Beichen Lyu ◽  
Stuart D. Smith ◽  
Yexiang Xue ◽  
Katy M. Rainey ◽  
Keith Cherkauer

Highlights:
- This study addresses two computational challenges in high-throughput phenotyping: scalability and efficiency.
- Specifically, we focus on extracting crop images and deriving vegetation indices using unmanned aerial systems.
- To this end, we outline a data processing pipeline featuring a crop localization algorithm and a trie data structure.
- We demonstrate the efficacy of our approach by computing large-scale, high-precision vegetation indices in a soybean breeding experiment, where we evaluate soybean growth under water inundation and temporal change.

Abstract. In agronomy, high-throughput phenotyping (HTP) can provide key information for agronomists in genomic selection as well as for farmers in yield prediction. Recently, HTP using unmanned aerial systems (UAS) has shown advantages in both cost and efficiency. However, scalability and efficiency have not been well studied when processing images in complex contexts, such as with multispectral cameras or when images are collected during early and late growth stages. These challenges hamper further analysis to quantify phenotypic traits for large-scale, high-precision applications in plant breeding. To address these challenges, our research team previously built a highly modular three-step data processing pipeline. In this project, we present improvements to that pipeline for canopy segmentation and crop plot localization, leading to improved accuracy in crop image extraction. Furthermore, we propose a novel workflow based on a trie data structure to compute vegetation indices efficiently and with greater flexibility. For each of our proposed changes, we evaluate the advantages by comparison with previous models in the literature or by comparing processing results from the original and improved pipelines. The improved pipeline is implemented as two MATLAB programs: Crop Image Extraction version 2 (CIE 2.0) and Vegetation Index Derivation version 1 (VID 1.0). Using CIE 2.0 and VID 1.0, we compute canopy coverage and normalized difference vegetation indices (NDVIs) for a soybean phenotyping experiment. We use canopy coverage to investigate excess water stress and NDVIs to evaluate temporal patterns across the soybean growth stages. Both experimental results compare favorably with previous studies, especially for approximation of the soybean reproductive stage. Overall, the proposed methodology and implemented experiments provide a scalable and efficient paradigm for applying HTP with UAS to general plant breeding.

Keywords: Data processing pipeline, High-throughput phenotyping, Image processing, Soybean breeding, Unmanned aerial systems, Vegetation indices.
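The trie-based workflow can be sketched in miniature. The authors' implementation is in MATLAB; this Python re-creation is an assumption for illustration only, using a nested-dict trie keyed by plot, date, and band, with the standard NDVI formula (NIR − Red) / (NIR + Red).

```python
# Illustrative sketch (not the authors' MATLAB code): group band images in a
# trie-like nested dict by plot -> date -> band, then derive NDVI per plot.
# Key names and the tiny 2-pixel "images" are assumptions for demonstration.

def insert(trie, keys, value):
    """Insert a value under a path of keys, creating nodes as needed."""
    node = trie
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    node[keys[-1]] = value

def ndvi(nir, red):
    """Per-pixel normalized difference vegetation index."""
    return [(n - r) / (n + r) for n, r in zip(nir, red)]

trie = {}
insert(trie, ("plot01", "2019-07-01", "red"), [0.1, 0.2])
insert(trie, ("plot01", "2019-07-01", "nir"), [0.5, 0.6])

bands = trie["plot01"]["2019-07-01"]
print(ndvi(bands["nir"], bands["red"]))
```

Sharing prefixes (plot, date) in the trie means each band image is located once per plot-date node rather than re-matched per index, which is the efficiency argument the abstract makes.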


2017 ◽  
Vol 7 (1.1) ◽  
pp. 252
Author(s):  
M. S. R. Prasad ◽  
Amrutha Lingampalli ◽  
Kushal Kumar C. Verapalle ◽  
Eshwar N. Malraju

While it is possible to save potential victims during emergency situations (such as cardiac arrests) when the required healthcare infrastructure is nearby, the greatest challenge lies in setting it up. People are not always at places with ready access to qualified doctors, so there should be a provision to remain in touch with a specialist medical practitioner. This paper presents a methodology for medical content management through the Internet of Things. Transmitting victims' sensory data to a hospital in time can help save numerous lives across the planet. This mode of human-machine interaction constitutes telemedicine. By embracing information technology and telecommunications, telemedicine provides remote emergency healthcare services outside regular medical establishments. These systems generate and process an increasing amount of sensory data, supporting real-time processing with the help of a content management system that averts impending life-threatening dangers. Using suitable sensory data inputs, medical practitioners can analyze the situation and convey the appropriate measures to be taken to save the victim. The underlying methodology extends to various scenarios such as aircraft, high-rise buildings, and remote villages. Finally, the potential for future improvement and the challenges currently faced are presented.
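The core data flow described above can be sketched as a minimal, hypothetical example: package a victim's sensor readings and flag values outside safe limits for the remote practitioner. The field names and thresholds are illustrative assumptions, not clinical guidance or the paper's actual protocol.

```python
# Hypothetical sketch: serialize sensor readings and mark life-threatening
# values so the remote practitioner sees alerts first. Thresholds are
# illustrative only, not clinical guidance.

import json

ALERT_LIMITS = {"heart_rate": (40, 140), "spo2": (90, 100)}  # assumed safe ranges

def triage(readings):
    """Return a JSON message listing which readings breach safe limits."""
    alerts = [k for k, v in readings.items()
              if k in ALERT_LIMITS
              and not (ALERT_LIMITS[k][0] <= v <= ALERT_LIMITS[k][1])]
    return json.dumps({"readings": readings, "alerts": alerts})

print(triage({"heart_rate": 32, "spo2": 95}))  # bradycardia flagged
```

A content management system on the hospital side would then prioritize messages with a non-empty `alerts` list for real-time review.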


Author(s):  
Zhixu Ni ◽  
Maria Fedorova

Abstract. Modern high-throughput lipidomics provides large-scale datasets reporting hundreds of lipid molecular species. However, cross-laboratory comparison, meta-analysis, and systems biology integration of in-house generated and published datasets remain challenging due to the high diversity of lipid annotation systems in use, different levels of reported structural information, and a shortage of links to data integration resources. To support lipidomics data integration and the interoperability of experimental lipidomics with data integration tools, we developed LipidLynxX, a hub facilitating data flow from high-throughput lipidomics analysis to systems biology data integration. LipidLynxX makes it possible to convert, cross-match, and link various lipid annotations to tools supporting lipid ontology, pathway, and network analysis, aiming at systems-wide integration and functional annotation of lipidome dynamics in health and disease. LipidLynxX is a flexible, customizable, open-access tool freely available for download at https://github.com/SysMedOs/LipidLynxX.
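The annotation-conversion problem can be illustrated with a toy normalizer. This is not LipidLynxX's API or grammar; it is a hypothetical sketch showing why diverse shorthand styles for the same species hinder cross-matching, using the common phosphatidylcholine shorthand "PC 34:1" as an example.

```python
# Hypothetical sketch (not LipidLynxX's actual implementation): collapse a
# few common lipid shorthand variants into one reference style so datasets
# from different labs can be cross-matched on a shared key.

import re

def normalize(annotation):
    """Map variants like 'PC 34:1', 'PC(34:1)', 'PC-34:1' to 'PC(34:1)'."""
    m = re.match(r"^([A-Za-z]+)[ (\-]?(\d+:\d+)\)?$", annotation.strip())
    if not m:
        return None  # unrecognized annotation style
    lipid_class, chains = m.groups()
    return f"{lipid_class}({chains})"

print(normalize("PC 34:1"))  # the space-separated variant
```

A real converter must also handle differing levels of structural detail (e.g. summed composition vs. specified fatty acyl chains), which is exactly the cross-matching problem LipidLynxX addresses.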


2019 ◽  
Author(s):  
Cedar Warman ◽  
John E Fowler

Abstract. High-throughput phenotyping systems are becoming increasingly powerful, dramatically changing our ability to document, measure, and detect phenomena. Unfortunately, taking advantage of these trends can be difficult for scientists with few resources, particularly when studying nonstandard biological systems. Here, we describe a powerful, cost-effective combination of a custom-built imaging platform and an open-source image processing pipeline. Our maize ear scanner was built with off-the-shelf parts for <$80. When combined with a cellphone or digital camera, videos of rotating maize ears were captured and digitally flattened into projections covering the entire surface of the ear. Segregating GFP and anthocyanin seed markers were clearly distinguishable in ear projections, allowing manual annotation using ImageJ. Using this method, statistically powerful transmission data can be collected for hundreds of maize ears, accelerating the phenotyping process.
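The flattening step can be sketched conceptually: as the ear rotates at a constant rate, the center pixel column of each video frame sees a new sliver of the surface, and stacking those columns side by side yields a cylindrical projection. This Python re-creation is an assumption for illustration; the published pipeline is an open-source ImageJ-based toolchain.

```python
# Conceptual sketch (not the authors' pipeline): flatten a rotating-ear video
# by taking the center pixel column of each frame and stacking the columns
# into a projection of the ear surface. Frames are plain nested lists here.

def flatten_projection(frames):
    """frames: list of 2D grayscale frames (rows x cols).
    Returns a rows x n_frames projection of the rotating surface."""
    mid = len(frames[0][0]) // 2                     # center column index
    columns = [[row[mid] for row in f] for f in frames]
    # transpose so each frame's column becomes one vertical strip
    return [list(strip) for strip in zip(*columns)]

frames = [
    [[0, 1, 0], [0, 2, 0]],   # frame at rotation step 1 (2x3 pixels)
    [[0, 3, 0], [0, 4, 0]],   # frame at rotation step 2
]
print(flatten_projection(frames))  # 2x2 projection of the center columns
```

One full rotation's worth of frames then covers the entire surface once, which is what makes seed markers countable in a single flat image.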


Author(s):  
Vadym Bilous ◽  
J. Philipp Städter ◽  
Marc Gebauer ◽  
Ulrich Berger

Abstract. For future innovations, complex Industry 4.0 technologies need to improve the interaction of humans and technology. Augmented reality (AR) has significant potential for this task by introducing more interactivity into modern technical assistance systems. However, AR systems are usually very expensive and thus unsuitable for small and medium-sized enterprises (SMEs). Furthermore, reliable data transfer from the machine to the AR applications and the indication of user activity appear to be problematic. This work proposes a solution to these problems: a simple and scalable data transfer from industrial systems to Android applications has been developed. The suggested prototype demonstrates an AR application for troubleshooting and error correction in real time, even on mobile or wearable devices, while working in a laboratory unit used to simulate and solve various errors. The unit components (small garage doors) are equipped with sensors. Information about the state of the system is available in real time at any given moment and is transmitted to a mobile or wearable device (tablet or smart glasses) running the AR application. The operator can preview the required information in graphical form (marks and cursors). Potential errors are shown and solved with an interactive manual. The system can also be used for training purposes to achieve more efficient error correction and faster repairs.
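The "simple and scalable data transfer" idea can be sketched as a compact state message the machine publishes and the Android AR client consumes. The message schema and field names here are illustrative guesses, not the prototype's actual protocol.

```python
# Hedged sketch (schema is an assumption, not the prototype's protocol):
# the machine serializes its component states as JSON; the AR client reads
# the "errors" list to decide which marks and cursors to overlay.

import json

def encode_state(components):
    """Serialize component states (e.g. garage-door sensors) for the AR app."""
    return json.dumps({
        "components": components,
        "errors": [c["id"] for c in components if c["state"] == "error"],
    })

msg = encode_state([
    {"id": "door1", "state": "open"},
    {"id": "door2", "state": "error"},
])
print(msg)  # the AR overlay highlights door2 and opens its interactive manual
```

Keeping the message flat and self-describing is what makes the transfer scalable: adding a sensor means appending one entry, with no change on the client side.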


2011 ◽  
Author(s):  
E. Kyzar ◽  
S. Gaikwad ◽  
M. Pham ◽  
J. Green ◽  
A. Roth ◽  
...  

2021 ◽  
Author(s):  
Peng Song ◽  
Jinglu Wang ◽  
Xinyu Guo ◽  
Wanneng Yang ◽  
Chunjiang Zhao

2021 ◽  
pp. 1-9
Author(s):  
Harshadkumar B. Prajapati ◽  
Ankit S. Vyas ◽  
Vipul K. Dabhi

Facial expression recognition (FER) has attracted considerable attention from researchers in the field of computer vision because of its usefulness in security, robotics, and HMI (human-machine interaction) systems. We propose a CNN (convolutional neural network) architecture to address FER. To show the effectiveness of the proposed model, we evaluate its performance on the JAFFE dataset. We derive a concise CNN architecture to address the issue of expression classification; the objective of our experiments is to achieve convincing performance while reducing computational overhead. The proposed CNN model is very compact compared to other state-of-the-art models. We achieved a highest accuracy of 97.10% and an average accuracy of 90.43% over the top 10 best runs without applying any pre-processing methods, which justifies the effectiveness of our model. Furthermore, we include visualizations of the CNN layers to observe what the CNN learns.
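Why a compact CNN keeps computational overhead low comes down to feature-map arithmetic. The layer sizes below are illustrative assumptions, not the authors' exact architecture; the sketch just applies the standard conv/pool output-size formula (size − k + 2p) / s + 1 to show how spatial dimensions shrink.

```python
# Back-of-the-envelope sketch of feature-map sizes in a compact CNN.
# The 48x48 input and the layer choices are illustrative assumptions.

def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a conv or pool layer: (size - k + 2p)//s + 1."""
    return (size - kernel + 2 * pad) // stride + 1

size = 48                           # assume 48x48 grayscale face crops
size = conv_out(size, 3, pad=1)     # 3x3 conv, 'same' padding -> 48
size = conv_out(size, 2, stride=2)  # 2x2 max-pool -> 24
size = conv_out(size, 3, pad=1)     # 3x3 conv -> 24
size = conv_out(size, 2, stride=2)  # 2x2 max-pool -> 12
print(size)  # -> 12
```

Each pooling stage quarters the pixel count, so the fully connected head (usually the parameter-heavy part) sees a 12x12 map instead of 48x48, which is the main source of the model's compactness.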

